this post was submitted on 07 Mar 2026
875 points (97.4% liked)
you are viewing a single comment's thread
view the rest of the comments
I mean, there's a good reason the first rules of firearm safety are to always treat a weapon as loaded, and to never direct the weapon at something you aren't prepared to destroy. The key point being that you never know when some freak accident can happen with a loose pin, bad ammo, a broken spring, or just a person tripping and shaking the gun a bit too hard.
A gun should never go off by itself. You still treat it as if it can, because in the real world freak accidents happen.
Sure. The point is it's entirely possible to use a firearm safely. There is no safe use for LLMs because they "make decisions", for lack of a better phrase, for themselves, without any user input.
That is not at all how LLMs work. It’s the software written around the LLM that aids it in constructing and running commands and “making decisions”. That same software can also prompt the user to confirm an action before it runs, or sandbox the actions in some way.
It can, but we've already seen many times that it does not.
Only if the user has configured it to bypass those authorizations.
With an agentic coding assistant, the LLM does not decide when it does and doesn’t prompt for authorization to proceed. The surrounding software is the one that makes that call, which is a normal program with hard guardrails in place. The only way to bypass the authorization prompts is to configure that software to bypass them. Many do allow that option, but of course you should only do so when operating in a sandbox.
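To make that concrete, here is a minimal sketch of the kind of wrapper being described. The names (like `propose_command`) are made up for illustration and real tools differ in the details; the point is that the approval prompt lives in ordinary code outside the model, and only an explicit setting skips it:

```python
import subprocess

def propose_command(task: str) -> str:
    # Hypothetical stand-in for the LLM call: a real assistant would ask
    # the model for a shell command that accomplishes `task`.
    return "ls -la"  # placeholder so the sketch runs

def run_with_approval(task: str, auto_approve: bool = False) -> None:
    command = propose_command(task)

    # The wrapper program decides whether to prompt, not the model.
    # Skipping the prompt is an explicit configuration choice by the user.
    if not auto_approve:
        answer = input(f"Agent wants to run: {command}  [y/N] ")
        if answer.strip().lower() != "y":
            print("Rejected; nothing was executed.")
            return

    subprocess.run(command, shell=True, check=False)

if __name__ == "__main__":
    run_with_approval("list the project files")
```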
The person in this article was a moron, that’s all there is to it. They ran the LLM on their live system, with no sandbox, went out of their way to remove all guardrails, and had no backup. The fallout is 100% on them.
As I said elsewhere, if you're denying access to your agentic AI, what is the point of it? It needs access to complete agentic tasks.
No disagreement there.
Yes, which it can prompt you for. Three options:

1. Deny the action.
2. Approve actions individually as the agent requests them.
3. Let it run everything without asking.
Obviously option 1 is useless, but there’s nothing wrong with choosing option 2, or even option 3 if you run it in a sandbox where it can’t do any real-world damage.
You can fine-grain option 2 even further: you can, for example, give it access to modify files only within a certain sub-tree, or allow only specific commands with specific options.
A restrictive yet quite safe approach is to only permit e.g. `git add` and `git commit`, and to only allow changes to files already under version control. That effectively prevents any irreversible damage, without requiring you to manually approve every single action.
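For illustration, a rough sketch of that kind of allowlist in Python (the helpers and exact policy here are invented; real agent tooling typically exposes this as configuration rather than code):

```python
import shlex
import subprocess

def tracked_files() -> set[str]:
    # Files currently under version control, per `git ls-files`.
    out = subprocess.run(["git", "ls-files"], capture_output=True, text=True, check=True)
    return set(out.stdout.splitlines())

def is_allowed(command: str) -> bool:
    parts = shlex.split(command)
    if parts[:2] == ["git", "commit"]:
        return True  # commit never touches working-tree files
    if parts[:2] == ["git", "add"]:
        # Only allow staging paths that are already tracked, so the agent
        # can't pull in files outside version control.
        paths = [a for a in parts[2:] if not a.startswith("-")]
        return all(p in tracked_files() for p in paths)
    return False  # everything else needs manual approval

def run_if_allowed(command: str) -> None:
    if is_allowed(command):
        subprocess.run(shlex.split(command), check=False)
    else:
        print(f"Blocked, needs manual approval: {command}")
```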
And then when you give it access, it fucks shit up. I don't know why this is hard to understand.
You clearly have absolutely zero experience here. When you're prompted for access, you're shown the exact command that's about to be run. You don't give blind approval to "run something"; you see the specific command and choose to approve or reject it.
Unless you're managing app permissions on Android 🙄