this post was submitted on 07 Mar 2026
925 points (97.5% liked)

Technology

[–] Naevermix@lemmy.world 4 points 15 hours ago

skill issue tbh

[–] Flying_Lynx@lemmy.ml 5 points 17 hours ago

Whenever you outsource something (like your intelligence), it becomes a trust issue.

[–] reksas@sopuli.xyz 4 points 17 hours ago

I wonder which would be the worse idea: letting an LLM have full access to your critical systems and data, or letting random people from the internet freely connect to them and expecting them to help.

[–] HubertManne@piefed.social 3 points 16 hours ago

It's funny, because even at places where I had the rights to do something like this (excepting small companies where I was the multi-hat guy, and even there I set things up so I had to jump through hoops to do it), my reaction is: this is crazy.

[–] vext01@feddit.uk 4 points 18 hours ago (1 children)

I don't understand why people aren't sandboxing these things.
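Sandboxing doesn't have to be elaborate to beat "full access". A minimal sketch in Python (all names here are illustrative, not from any agent framework): run whatever the agent generates in a scratch directory, with credentials stripped from the environment and a timeout. A container or VM is still the stronger answer; this is just a floor.

```python
# Illustrative sketch: run untrusted, agent-generated code in a scratch
# directory with a stripped environment and a timeout. Not a real sandbox,
# but it keeps API keys out of the child's env and the cwd away from your repo.
import os
import subprocess
import sys
import tempfile

def run_untrusted(code: str) -> str:
    """Execute agent-generated Python in a throwaway directory and return stdout."""
    with tempfile.TemporaryDirectory() as scratch:
        result = subprocess.run(
            [sys.executable, "-c", code],
            cwd=scratch,                                   # not your project dir
            env={"PATH": os.environ.get("PATH", "")},      # drop credentials
            capture_output=True,
            text=True,
            timeout=10,                                    # no runaway loops
        )
        return result.stdout
```

A container with `--network none` and a read-only bind mount would close far more holes; this sketch only limits the blast radius of the obvious ones.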

[–] flamingo_pinyata@sopuli.xyz 4 points 18 hours ago

How do you even achieve that? I have to coax it into correctly running the project locally.

[–] eestileib@lemmy.blahaj.zone 1 points 13 hours ago

And nothing of value was lost

[–] obelisk_complex@piefed.ca 3 points 18 hours ago (2 children)
[–] _stranger_@lemmy.world 1 points 15 hours ago (1 children)

You gotta be knowledgeable enough to know when they're destructive, that's the rub.

[–] obelisk_complex@piefed.ca 1 points 13 hours ago

Sure, but reading the article, I think he might be knowledgeable enough. His mistake seems to have been blindly handing the keys to the kingdom to an enthusiastic junior dev who'll be very sorry if they nuke your system, but won't think to do a damn thing to make sure it doesn't happen in the first place...

[–] artyom@piefed.social 1 points 18 hours ago (2 children)

You can code this into its training all you want, but it will find a way around it. This is one of many problems with AI.

[–] thebestaquaman@lemmy.world 4 points 17 hours ago (1 children)

Nah, you can run it in a box and limit its ability to interact with anything outside the box to certain white-listed endpoints. Depending on what you want to achieve, that can be more than safe enough.
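The "box with white-listed endpoints" idea above can be sketched as a simple allowlist dispatcher (the names `ALLOWED_TOOLS` and `dispatch` are illustrative, not from any real agent framework): whatever the model asks for, only calls on the list ever execute.

```python
# Minimal sketch of an endpoint allowlist for an LLM agent's tool calls.
# The handlers here are stubs; in practice each would wrap a real, narrow API.

ALLOWED_TOOLS = {
    "read_file": lambda path: f"contents of {path}",
    "run_tests": lambda: "tests passed",
}

def dispatch(tool_name: str, *args):
    """Execute a tool only if it is on the allowlist; refuse everything else."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool_name!r} is not whitelisted")
    return ALLOWED_TOOLS[tool_name](*args)
```

The point is that the model never gets a shell or credentials, only these narrow entry points, so "finding a way around its training" doesn't matter: there is nothing outside the list to find.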

[–] artyom@piefed.social 1 points 17 hours ago (1 children)

But isn't the whole point of "agentic" AI like this to let it out of the box?

[–] thebestaquaman@lemmy.world 3 points 16 hours ago

Yes, absolutely, but there's a huge span from completely removing the box to having "just" a chatbot.

For example, at my company we've set up an agent that can work with certain design files that engineers typically edit through a rather complex GUI. We've built a bunch of endpoints that ensure the agent can only make valid changes to the files, and that it can never delete or modify anything without approval. This saves people a bunch of time, because they can make the agent do "batch jobs" that would take maybe 10 minutes in about 10 seconds. It's not possible for this agent to mess up our database or anything like that, because all its interactions go through endpoints where we verify that files, access permissions, change logs, etc. are valid.
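A hedged sketch of the approval gate described above (the `apply_change` and `audit_log` names are hypothetical, not the commenter's actual API): destructive operations are held until a human approves, and every applied change is logged.

```python
# Hypothetical sketch: agent-proposed changes pass through one chokepoint.
# Destructive ops are never auto-applied; everything applied is audited.

audit_log: list[dict] = []

def apply_change(change: dict, approved: bool = False) -> str:
    """Apply an agent-proposed change, gating destructive operations."""
    if change.get("op") not in {"create", "modify", "delete"}:
        raise ValueError(f"unknown operation: {change.get('op')}")
    if change["op"] in {"modify", "delete"} and not approved:
        return "pending approval"     # queued for a human, not executed
    audit_log.append(change)          # change log entry for every applied op
    return "applied"
```

The agent can still be wrong, but being wrong now produces a rejected or queued change rather than a dropped database.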

[–] markz@suppo.fi 2 points 17 hours ago (1 children)

I thought this was about restricting the thing's access and not training?

[–] webkitten@piefed.social 1 points 18 hours ago (1 children)

sigh

Use LLMs as instructional models, not as production/development models. It's not hard, people. You don't need to connect credentials to any LLM, just like you'd never write your production passwords on Post-its and stick them to your computer monitor.

[–] artyom@piefed.social 7 points 18 hours ago (2 children)

Or don't use LLMs at all, because they fucking lie to you constantly?

[–] Semi_Hemi_Demigod@lemmy.world 6 points 17 hours ago

"Lie" implies they have some kind of agency. They're basically a Plinko board.

[–] thebestaquaman@lemmy.world 1 points 17 hours ago (2 children)

Meh, they work well enough if you treat them as a rubber duck that responds. I've had an actual rubber duck on my desk for some years, but I've found LLMs taking over its role lately.

I don't use them to actually generate code. I use them as a place where I can write down my thoughts. When the LLM responds, it has likely "misunderstood" some aspect of my idea, and by reformulating myself and explaining how it works I can help myself think through what I'm doing. Previously I would argue with the rubber duck, but I have to admit that the LLM is actually slightly better for the same purpose.

[–] _stranger_@lemmy.world 2 points 15 hours ago (1 children)

The rubber duck is cheaper

[–] thebestaquaman@lemmy.world 1 points 13 hours ago

You're absolutely right. I mostly run a pretty simple local model though, so it's not like it's very expensive either.

[–] prole@lemmy.blahaj.zone 1 points 17 hours ago (1 children)

Hooray for outsourcing of critical thinking!

What could possibly go wrong

[–] thebestaquaman@lemmy.world 1 points 16 hours ago (1 children)

I think you've misunderstood the purpose of a rubber duck: the point is that by formulating your problems and ideas, either out loud or in writing, you activate your own problem-solving skills. It's a very well-established method for reflecting on and solving problems when you're stuck, and it's far older than chatbots, because the point isn't the response you get but the process of formulating your own thoughts in the first place.

[–] prole@lemmy.blahaj.zone 4 points 16 hours ago* (last edited 16 hours ago) (1 children)

Right, but a rubber duck isn't a sycophantic chatbot that can't conceptualize anything yet responds to you anyway.

[–] thebestaquaman@lemmy.world 2 points 15 hours ago (4 children)

That is correct. However, an LLM and a rubber duck have in common that they are inanimate objects that I can use as targets when formulating my thoughts and ideas. The LLM can also respond to things like "what part of that was unclear", to help keep my thoughts flowing. NOTE: The point of asking an LLM "what part of that was unclear" is NOT that it has a qualified answer, but rather that it's a completely unqualified prompt to explain a part of the process more thoroughly.

This is a very well-established process, whether you use an actual rubber duck, your dog, a blog post or personal memo (I do the last quite often), or a friend who's not at all in the field. The point is to have some kind of process that keeps your thoughts flowing and touches on topics you might not have thought were crucial, thus helping you find a solution. The toddler who answers every explanation with "why?" can be ideal for this, and an LLM can emulate that quite well in a workplace environment.
