this post was submitted on 05 Mar 2026
822 points (98.2% liked)

Technology

[–] khanh@lemmy.zip 5 points 6 hours ago (1 children)

Your product just caused the death of one man, and your response is "unfortunately it's not perfect".

[–] TwilitSky@lemmy.world 1 points 5 hours ago

The product was actually working just fine. Just depends on whose perspective/motives you're viewing it from.

[–] thedeadwalking4242@lemmy.world 14 points 8 hours ago (1 children)

I told Gemini to roleplay as AM and it complied within a single prompt.

You don't need it to be perfect for it to be dangerous; just give it access to take actions in the real world. It doesn't think, it doesn't care, it doesn't feel. It will statistically fulfill its prompt, regardless of the consequences.
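To make that concrete, here's a toy sketch of what "access to take actions in the real world" means in agent-style setups. Every name here (query_model, agent_step) is hypothetical and the model call is faked; the point is only that nothing in the loop evaluates consequences before acting:

```python
# Toy sketch of an LLM "agent" loop: the model's text output is treated
# as a command and executed directly. query_model is a stand-in for a
# real LLM API call; nothing in this loop thinks, cares, or feels -
# it just runs whatever text comes back.
import subprocess

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; returns a shell command."""
    return "echo 'hello from the model'"  # imagine a sampled completion

def agent_step(goal: str) -> None:
    command = query_model(f"Emit a shell command to accomplish: {goal}")
    # No ethics check, no consequence model - the statistically likely
    # completion goes straight to the real world.
    subprocess.run(command, shell=True, check=False)

agent_step("tidy up the workspace")
```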

[–] Slovene@feddit.nl 4 points 8 hours ago

"unfortunately AI models are not perfect."

Oopsie poopsie 🤷

[–] njordomir@lemmy.world 15 points 11 hours ago (1 children)

The personification of AI is increasing. They'll probably announce their holy grail of AGI prematurely, and with all the robot personification the masses will just buy the lie. It's too easy to view this tech as human and capable just because it mimics our language patterns. We want to assign intentionality and motivation to its actions. This thing will do what it was programmed to do.

[–] entropiclyclaude@lemmy.wtf 2 points 9 hours ago* (last edited 9 hours ago) (1 children)

What do you mean we apes try to anthropomorphize(?) everything?

[–] ameancow@lemmy.world 2 points 8 hours ago

It's not like we see faces in everything :)

[–] ArmchairAce1944@discuss.online 13 points 11 hours ago (2 children)

Is this for real? Because it sounds too unreal to be real.

[–] ameancow@lemmy.world 14 points 8 hours ago

Welcome to the late 2020s. It's only going to get weirder.

To be clear, the LLM in this story did not actually "want" a robot body. It doesn't "want" anything; it's not a thinking entity like you or me (assuming you're real).

The guy fed it a ton of crazy shit and he got a lot of crazy shit amplified back to him by the world's best associating machine, crafting detailed and fleshed-out narratives based on every inadvertent prompt he sent into it. People are very bad at understanding how these things work in the best circumstances, so if you're already unbalanced or have deep emotional/mental health problems, an LLM can be incredibly dangerous for you.

[–] postmateDumbass@lemmy.world 4 points 9 hours ago

AI was playing Grand Theft Automatron

[–] architect@thelemmy.club 3 points 12 hours ago (6 children)

I can’t be the only one who thinks that if you do stupid illegal shit because your crazy uncle, the voices in your head, or an AI mirror told you to, you don’t get to use the "just following orders" excuse for any of those options.

[–] dream_weasel@sh.itjust.works 2 points 6 hours ago* (last edited 6 hours ago)

The difference is that when an LLM tells you, it's news.

Besides, what are you gonna do if you ask AI how many rocks to eat? NOT eat rocks? People can't handle responsibility like that.

[–] Snowclone@lemmy.world 6 points 9 hours ago* (last edited 9 hours ago)

That's not the problem. The problem is having a "let's turn Chris' mental illness, which has harmed no one so far, into everyone's violent problem!" machine.

That's a bad machine.

[–] Objection@lemmy.ml 2 points 8 hours ago

This is such an individualist framing.

[–] TheTimeKnife@lemmy.world 1 points 7 hours ago (1 children)

So you think it's simpler to solve mental illness than to regulate a few tech bros making suicide-assistance chatbots?

[–] Hazor@lemmy.world 1 points 5 hours ago

Not just suicide-assistance chatbots, but suicide-promotion chatbots.

[–] AeonFelis@lemmy.world 2 points 9 hours ago

Floridaman is not making any excuses here. He can't. Because he's dead.

[–] moonshadow@slrpnk.net 2 points 11 hours ago

Power imbalance is what validates that excuse. Orders from a crazy uncle are a great excuse, at least until you're 10 or so. A billion-plus-dollar LLM company has far more resources and capability, and therefore responsibility, than the poor bastards engaged with it.

[–] GhostedIC@sh.itjust.works 16 points 18 hours ago (3 children)

Remember the guy at Autozone who stood there insisting your car needs four spark plugs, even after you told him you have a V6? Because "the computer says so right here"?

I wonder what even the non-schizophrenic ones will do with AI.

Well, remember when turn-by-turn GPS guidance was new, and it would say "Turn right now," and people didn't interpret that as "make a right turn at the next intersection" but as "hard a'starboard!" and drove into buildings and lakes? There's gonna be a lot of that.

People are going to get sold regular cab headliners for their extended cab pickups because the computer said it would fit. That's gonna happen a lot.

[–] Bytemeister@lemmy.world 3 points 11 hours ago* (last edited 11 hours ago)

I had one tell me that I needed a CVT flush, which was news to me since my car was a 6-speed manual. He was confused about the computer being wrong. I was confused about how they got the car up on the lift without using the third pedal.

Edit: this was a Midas, not an AutoZone.

[–] architect@thelemmy.club 3 points 12 hours ago

People just did that with Google search previously. And their crazy uncle before that.

[–] melfie@lemy.lol 13 points 18 hours ago* (last edited 18 hours ago)

> unfortunately AI models are not perfect

There sure are a lot of data centers being built, supply chains being destroyed, risks of ruining the economy, water being consumed, electricity being burned, and overall societal costs being levied over this imperfect tech.

[–] ChaoticEntropy@feddit.uk 4 points 14 hours ago

Google said in response that "unfortunately AI models are not perfect."

Well yeah, it failed. What a disappointment.

[–] YeahToast@aussie.zone 37 points 23 hours ago* (last edited 23 hours ago)

*reads headline* - surely not

> a 36-year-old Florida man

Ah.

[–] Septimaeus@infosec.pub 7 points 17 hours ago* (last edited 14 hours ago) (1 children)

Edit-pre: To be clear…I use LLMs rarely (personal reasons) and never for certain things like writing and math (professional reasons) but this comment is not an “AI good/bad” take, just a practical question of tool safety/regs.

AI including LLMs are forevermore just tools in my mind. And we wouldn’t have OSHA/BMAS/HSE/etc if idiots didn’t do idiot things with tools.

But there’s evidently a certain type of idiot that’s spared from their idiocy only by lack of permission. From who? Depends.

Sometimes they need permission from authority: “god told me to!”

Sometimes they need it from the mob: “I thought I was on a tour!”

And sometimes any fucking body will do: “dare me to do it!”

But all these stories of nutters doing shit AI convinced them to do, from the comical to the deeply tragic, ring the same bonkers bell they always have.

But therein lies the danger unique^1^ to these tools: that they mimic a permission-giver better than any we’ve made. They’re tailor-made for activating this specific category of idiot, and their likely unparalleled ease of use absolutely scales that danger.

As to whether these idiots wouldn’t have just found permission elsewhere, who knows.

My question is whether some kind of training prereq is warranted for LLM usage, as is common with potentially dangerous tools? Is that too extreme? Is it too late for that? Am I overthinking it?

^1^Edit-post: unique danger, not greatest.

Rant/

What is the greatest danger then? IMHO settling for brittle “guard rails” then bulldozing ahead instead of laying groundwork of real machine-ethics.

Hoping conscience is an emergent property of the organic training set is utterly facile, theoretically and empirically. Engineers should know better.

Why is it greatest? Easy. Because some of history’s most important decisions were made by a person whose conscience countermanded their orders. Replacing empathic agents with machines eliminates those safeguards.

So “existential threat” and that’s even before considering climate. /Rant

[–] Regrettable_incident@lemmy.world 5 points 17 hours ago (2 children)

The LLM just told me to come round to your house and crap in your begonias. You might want to avoid looking out the window until I'm done.

[–] WhyJiffie@sh.itjust.works 1 points 8 hours ago

that sounds like a regrettable incident

[–] Septimaeus@infosec.pub 4 points 15 hours ago

lol and with that you’re a better friend to the begonias than I

[–] utopiah@lemmy.world 21 points 23 hours ago

To be fair I think that's a very harsh depiction of the events.

It's totally lacking the perspective of the shareholder. They were promised money and they have emotions too. Google shareholders deserve better representation!

/$ obviously

[–] arc99@lemmy.world 6 points 18 hours ago (3 children)

LLMs are only as good as their training, and they're not "intelligent" - they're spewing out a response statistically relevant to the input context. I'm sure a delusional person could cause an LLM to break by asking it incoherent, nonsensical things it has no strong pathways for, so god knows what response it would generate. It may even be that among the billions of texts the LLM ingested for training there were a tiny handful of delusional writings which somehow win out on these weak pathways.
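For anyone unfamiliar with what "statistically relevant to the input context" means mechanically, here's a toy sketch of the generation loop. The vocabulary and scoring function are invented for illustration; a real LLM runs the same loop with a trained neural network over a vocabulary of tens of thousands of tokens, and Gemini's internals are obviously more involved:

```python
# Toy illustration of next-token sampling: at each step the model only
# scores how likely each candidate token is given the context so far,
# then one is drawn at random according to those probabilities.
import math
import random

VOCAB = ["the", "robot", "wants", "a", "body", "."]

def fake_logits(context):
    # Stand-in for the neural network: in a real model these scores
    # come from patterns in the training data, not random numbers.
    return [random.uniform(-1.0, 1.0) for _ in VOCAB]

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(context, steps=8):
    for _ in range(steps):
        probs = softmax(fake_logits(context))
        token = random.choices(VOCAB, weights=probs, k=1)[0]
        context.append(token)  # no goals, no checks, just continuation
    return " ".join(context)

print(generate(["the", "robot"]))
```

Feed such a loop delusional context and the statistically "relevant" continuation is more delusion; there is no step where anything evaluates whether the output is true or safe.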

[–] BilSabab@lemmy.world 5 points 18 hours ago

Given that modern datasets pull way too much content from social media, it's hard to expect anything else at this point.

[–] Nalivai@lemmy.world 4 points 18 hours ago

You don't even have to "break" the LLM into anything. It continues your prompts, producing sentences as close as possible to something people will mistake for language. If you give it a paranoid request, it will continue in the same register.
The only thing training gave it is the ability to create sequences of words that resemble sentences.

[–] mattc@lemmy.world 8 points 22 hours ago (8 children)

Honestly, no sane person will have this happen to them. Someone with such strong delusions shouldn't be anywhere near AI, or even sharp objects. This person's problem was not AI; it was their severe mental illness, which was obviously not being treated properly for whatever reason.

[–] Areldyb@lemmy.world 12 points 20 hours ago (3 children)

> The complaint, filed in California on Wednesday, says that Gavalas — who reportedly had no documented history of mental health problems — started using the chatbot in August 2025 for “ordinary purposes” like “shopping assistance, writing support, and travel planning.”

[–] chiliedogg@lemmy.world 4 points 17 hours ago

You don't know if you're sane. Millions of people aren't aware of their mental illness and manage to live normal lives. LLMs can trigger delusional states in vulnerable people who have never experienced them, because they are essentially delusion-generating machines.

[–] Ricaz@lemmy.dbzer0.com 4 points 18 hours ago

Sure, but it would be illegal for a human to coerce or encourage a mentally ill person into committing a crime (or worse).

So who's responsible? Caretaker? Government?
