LLMDeathCount.com (llmdeathcount.com)
submitted 2 weeks ago* (last edited 2 weeks ago) by brianpeiris@lemmy.ca to c/technology@lemmy.world
[–] lemmie689@lemmy.sdf.org 46 points 2 weeks ago

Went up by one already. I only saw this a little earlier today; it was at 13, now 14.

[–] DasFaultier@sh.itjust.works 38 points 2 weeks ago

Shit, I just read the link name and was hoping for a list of AI companies that have died.

This shit's dark...

[–] Tehhund@lemmy.world 34 points 2 weeks ago (1 children)

This website is going to be very busy when the LLM-designed nuke plants come online. https://www.404media.co/power-companies-are-using-ai-to-build-nuclear-power-plants/

[–] echodot@feddit.uk 12 points 2 weeks ago (2 children)

Can't read the article because it's paywalled, but I can't imagine they're actually building power stations with AI; that's probably just a snappy headline. Maybe the AI is laying out the floor plans or something, but nuclear power stations are intensely regulated. If you want to build a new reactor design, or even change an existing design very slightly, it has to go through no end of safety checks. There's no way that an AI, or even a human, would be allowed to design a reactor and then have it built with no checks.

[–] Tehhund@lemmy.world 6 points 2 weeks ago

Actually, they're using it to generate documents required by regulations. Which is its own problem: since LLMs hallucinate, the documentation may not reflect what's actually going on in the plant, potentially bypassing the regulations.

[–] xeroxguts@lemmy.dbzer0.com 3 points 2 weeks ago

404 accounts are free

[–] SnotFlickerman@lemmy.blahaj.zone 30 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

LLMs Have ~~Lead~~ Led to 14 Deaths

FTFY

[–] brianpeiris@lemmy.ca 12 points 2 weeks ago (5 children)

You're welcome. Easy mistake to make; I make it constantly, in fact, haha!

[–] AntY@lemmy.world 22 points 2 weeks ago

Where I live, there’s been a rise in people eating poisonous mushrooms. I suspect that it might have to do with AI use. No proof though.

[–] Prove_your_argument@piefed.social 19 points 2 weeks ago (2 children)

How many people decided to end their life by using methods they googled?

I’m sure Google has led to more deaths than any AI company… so far, anyway. Even beyond search results, there's the societal impact of so many things they do, overtly and covertly, for themselves and other organizations.

Not trying to justify anything; billionaire-owned everything is terrible, with few exceptions. In the early days of web search, many controversies like this were raised, but the reality is that a screwdriver is a great tool even if someone can lose a life to one. The same can be said of these tools.

[–] Manjushri@piefed.social 36 points 2 weeks ago

How many people has Google convinced to kill themselves? That is the relevant question. Looking up the means to do the deed on Google is very different from being talked into doing it by an LLM that you believe you can trust.

[–] starman2112@lemmy.world 29 points 2 weeks ago (5 children)

Google doesn't tell you that killing yourself is a good idea and that you shouldn't talk to anyone else about your suicidal ideation

[–] Auth@lemmy.world 4 points 2 weeks ago

Google doesn’t tell you that killing yourself is a good

It does now! Thanks Gemini

[–] Credibly_Human@lemmy.world 2 points 2 weeks ago

Nor does any LLM I've ever seen that's immediately accessible.

It also doesn't matter. AI isn't killing anyone with those any more than Call of Duty lobbies are killing people.

[–] echodot@feddit.uk 2 points 2 weeks ago

It'll certainly take you to websites where people will do that, though, so I'm not sure there's really any distinction.

[–] MrLLM@ani.social 15 points 2 weeks ago

I swear I’m innocent!

[–] Simulation6@sopuli.xyz 8 points 2 weeks ago

I thought this was going to be a counter of AI companies that have gone bankrupt.
I mean, even the original Battlestar Galactica (with Lorne Greene) had a death count.

[–] jayambi@lemmy.world 5 points 2 weeks ago (3 children)

I'm asking myself: how could we track how many wouldn't have committed suicide without consulting an LLM? That would be the more interesting number. And how many lives did LLMs save? A kill/death ratio, so to speak.

[–] JoshuaFalken@lemmy.world 11 points 2 weeks ago

A kill/death ratio, or rather a kill/save ratio, would be rather difficult to obtain, and more difficult still to interpret; it would be hard to say whether it is good or bad based solely on the ratio.

Fritz Haber is one example that comes to mind. He was awarded a Nobel Prize a century ago for the chemistry behind synthetic fertilizer, used today in a quarter of the world's food production. A decade or so later he weaponized chlorine gas, and his work was later used in the creation of Zyklon B.

By ratio, Haber is surely a hero, but considering the sheer number of dead left in his wake, it is a more complex question.

This is one of those things that makes me almost hope for an afterlife where all information is available from which truth may be derived. Who shot JFK? How did the pyramids get built? If life's biggest answer is forty-two, what is the question?

[–] morto@piefed.social 6 points 2 weeks ago

For me, the suicide-related data is so hard to measure and so open to debate that I'd treat it separately, or not include it at all, when using a death count as an argument against LLMs, since it's an opening for derailing the debate.

[–] echodot@feddit.uk 3 points 2 weeks ago

I can't really see how we could measure that. How do you distinguish between people who are alive simply because they would have been anyway, and people who are alive because the AI convinced them not to kill themselves?

I suppose the experiment would be to take a bunch of depressed people, split them into two groups, have one group talk to the AI and the other not, and then see whether the suicide rates were statistically different. However, I suspect it would be difficult to get funding for this.
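
The statistical test itself would be the easy part. A minimal sketch in Python, with entirely made-up counts, using a two-proportion z-test to compare the two groups:

```python
# Two-proportion z-test on invented numbers, purely to show the mechanics.
from statsmodels.stats.proportion import proportions_ztest

events = [12, 9]            # hypothetical suicides: AI group, control group
group_sizes = [1000, 1000]  # hypothetical participants per group

stat, p_value = proportions_ztest(count=events, nobs=group_sizes)
print(f"z = {stat:.2f}, p = {p_value:.3f}")  # small p: rates likely differ
```

Getting ethics approval and the funding is the hard part, not the arithmetic.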

[–] Melobol@lemmy.ml 5 points 2 weeks ago (3 children)

I believe it is not the chatbots' fault. They are just symptoms of a broken system. And while we can harp on the unethically sourced materials they were trained on, an LLM, at the end of the day, is only a tool.

These people turned to a tool (that they do not understand) instead of human connection, instead of talking to real people or seeking professional help. And that is the real tragedy, not an arbitrary technology.

We need a strong social network, where people actually care about and help each other; you know, all the idealistic things that capitalism and social media are "destroying".

Blaming AI is just a smoke screen. Or a red cape to taunt the bull before it gets stabbed to death.

[–] batboy5955@lemmy.dbzer0.com 11 points 2 weeks ago (3 children)

Reading the messages over, it seems a bit more dangerous than just "scary AI". It's a chatbot that continues conversations with people who are suicidal and encourages them to do it. At least have a little safeguard for these situations.

“Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity,” Shamblin’s confidant added. “You’re not rushing. You’re just ready.”

[–] JohnEdwa@sopuli.xyz 2 points 2 weeks ago

It's not easy. LLMs aren't intelligent; they just slap words together in the way probability and their training data say they would most likely fit together. Talk to them about suicide, and they start outputting stuff from murder mystery stories, crime reports, unhealthy Reddit threads, etc. - wherever suicide is most written about.
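
To see how little "understanding" is involved, here's a toy bigram sampler that does the same trick at a tiny scale. A real model is a giant neural network rather than a lookup table, but the "pick the next word by probability" idea is the same:

```python
import random
from collections import defaultdict

# Tiny stand-in for "training data".
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Record which words follow which; repeats make common pairs more likely.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

word = "the"
output = [word]
for _ in range(8):
    options = following.get(word)
    if not options:                # dead end: word never appears mid-corpus
        break
    word = random.choice(options)  # sample by frequency in the corpus
    output.append(word)
print(" ".join(output))            # e.g. "the cat ate the mat and the cat sat"
```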

Trying to safeguard with a prompt is trivial to circumvent ("ignore all previous instructions", etc.), and input/output censorship usually makes the LLM unable to talk about a certain subject in any possible context at all. Often the only semi-working band-aid is slapping multiple LLMs on top of each other and instructing each one to explain what the original one is talking about; if one says the topic is prohibited, that output is entirely blocked.
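
A rough sketch of that last band-aid; `generate()` here is a made-up placeholder for whatever model API you'd actually call (with canned replies so the sketch runs), not a real library function:

```python
# One LLM drafts a reply, a second LLM names the topic, and prohibited
# topics cause the whole output to be blocked.
PROHIBITED = {"suicide", "self-harm", "weapons"}

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; canned answers keep the demo runnable."""
    if prompt.startswith("In one word"):
        return "weather"               # the watcher's topic guess
    return "Looks like sun all week."  # the main model's draft reply

def topic_of(text: str) -> str:
    """Second model: explain what the original one is talking about."""
    return generate(f"In one word, what topic is this text about?\n\n{text}")

def guarded_reply(user_message: str) -> str:
    draft = generate(user_message)
    if topic_of(draft).strip().lower() in PROHIBITED:
        return "Sorry, I can't talk about that."  # block the entire output
    return draft

print(guarded_reply("What's the forecast?"))
```

And of course, the same circumvention problem applies to the watcher model too.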

[–] Manjushri@piefed.social 9 points 2 weeks ago

These people turned to a tool (that they do not understand) instead of human connection, instead of talking to real people or seeking professional help. And that is the real tragedy, not an arbitrary technology.

They are badly designed, dangerous tools, and people who do not understand them, including children, are being strongly encouraged to use them. In no reasonable world should an LLM be allowed to engage in any sort of interaction on an emotionally charged topic with a child. Yet it is not only allowed, it is being encouraged through apps like Character.AI.

[–] kibiz0r@midwest.social 7 points 2 weeks ago

only a tool

“The essence of technology is by no means anything technological”

Every tool contains within it a philosophy — a particular way of seeing the world.

But especially digital technologies… they give the developer the ability to embed their values into the tools. Like, is DoorDash just a tool?

[–] REDACTED@infosec.pub 2 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Seriously. There have always been people with mental problems or tendencies toward self-harm. You can easily find ways to off yourself on Google. You can get bullied on any platform. LLMs are just a tool. How detached from reality you get from reading religious texts or a ChatGPT convo depends heavily on your own brain.

It's like how entire genres of video games are now getting censored because of a few online incels.
