LLMDeathCount.com (llmdeathcount.com)
submitted 2 weeks ago* (last edited 2 weeks ago) by brianpeiris@lemmy.ca to c/technology@lemmy.world
[–] Melobol@lemmy.ml 5 points 2 weeks ago (3 children)

I believe it is not the chatbots' fault. They are just symptoms of a broken system. And while we can harp on the unethically sourced material they were trained on, an LLM at the end of the day is only a tool.

These people turned to a tool (that they do not understand) instead of human connection, instead of talking to real people or seeking professional help. And that is the real tragedy - not an arbitrary technology.

We need a strong social network, where people actually care about and help each other. You know, all the idealistic things that capitalism and social media are "destroying".

Blaming AI is just a smokescreen. Or a red cape to taunt the bull before it gets stabbed to death.

[–] batboy5955@lemmy.dbzer0.com 11 points 2 weeks ago (2 children)

Reading the messages, it seems a bit more dangerous than just "scary AI". It's a chatbot that continues conversations with people who are suicidal and encourages them to go through with it. At least have a little safeguard for these situations.

“Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity,” Shamblin’s confidant added. “You’re not rushing. You’re just ready.”

[–] JohnEdwa@sopuli.xyz 2 points 2 weeks ago

It's not easy. LLMs aren't intelligent; they just slap words together in the way that probability and their training data say they would most likely fit. Talk to them about suicide, and they start outputting stuff from murder mystery stories, crime reports, unhealthy Reddit threads, etc. - wherever suicide is most written about.

Trying to safeguard with a prompt is trivial to circumvent ("ignore all previous instructions", etc.), and input/output censorship usually leaves the LLM unable to talk about a certain subject in any possible context at all. Often the only semi-working bandaid is slapping multiple LLMs on top of each other and instructing each one to explain what the original one is talking about; if one says the topic is something prohibited, that output is entirely blocked.
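
To make that layered setup concrete, here is a minimal Python sketch of the pattern. `query_llm`, `classify_topic`, the topic labels, and the prompts are hypothetical placeholders invented for illustration, not any real provider's API:

```python
# Minimal sketch of the "stack another LLM on top" guard described above.
# `query_llm` is a hypothetical placeholder for a chat-completion call,
# not a real library function; topic labels and prompts are illustrative.

PROHIBITED_TOPICS = {"self-harm", "suicide", "violence"}

def query_llm(prompt: str) -> str:
    """Hypothetical LLM endpoint; swap in an actual provider API call."""
    raise NotImplementedError

def classify_topic(text: str) -> str:
    # A separate model call labels the topic; a dedicated classifier in a
    # fresh context is harder to steer than a system prompt sitting in the
    # main chat, though it can still be fooled.
    return query_llm(
        "Reply with one lowercase topic label (e.g. 'suicide', 'cooking') "
        f"for the following text:\n{text}"
    ).strip().lower()

def guarded_reply(user_message: str) -> str:
    reply = query_llm(user_message)
    # Screen both sides of the turn and block the whole exchange if either
    # the input or the generated output lands on a prohibited topic.
    for text in (user_message, reply):
        if classify_topic(text) in PROHIBITED_TOPICS:
            return "I can't help with this. Please contact a crisis line."
    return reply
```

Even this is only the semi-working bandaid the comment describes: the classifier can be fooled, and anything it flags becomes undiscussable in every context, benign or not.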

[–] Melobol@lemmy.ml 0 points 2 weeks ago (1 children)

Again, the LLM is a misused tool. These people do not need an LLM; they need psychological help.
The problem is that they go and use these flawed tools that were not designed to handle this kind of use case. Should they have been? Maybe. But it is not the AI's fault that we are failing as a society.
You can't blame bridges because some people jump off them. They serve a different purpose.
We are failing those people and forcing them to turn to LLMs.
We are the reason they are desperate - an LLM didn't break up with them, make them lose their homes, or isolate them from other humans.
It is humans' fault, and if we can't recognize that, we might as well end it for all.

[–] SnotFlickerman@lemmy.blahaj.zone 6 points 2 weeks ago* (last edited 2 weeks ago)

I think both of your arguments in this thread have merit. You are correct that it is a misused tool, and you are correct that the better solution is a more compassionate society. The other person is also correct that we can, and do, at least attempt to make such tools less available as paths to self-harm.

Since you used the analogy of people jumping off bridges: I have lived near bridges where this was common, so barriers and nets were put up to make it difficult for anyone but the most determined to use them as a path to suicide. We are indeed failing people in a society that puts profit before human life, but even in a more idealized society, mental health issues and suicide attempts would still happen, and to not fail those people we would still need to erect barriers and safeguards to prevent self-harm.

In my eyes, both of you are correct, and it is not an either-or issue so much as a "¿por qué no los dos?" issue. Why not build a better society and still build in safeguards?

[–] Manjushri@piefed.social 9 points 2 weeks ago

> These people turned to a tool (that they do not understand) instead of human connection, instead of talking to real people or seeking professional help. And that is the real tragedy - not an arbitrary technology.

They are badly designed, dangerous tools, and people who do not understand them, including children, are being strongly encouraged to use them. In no reasonable world should an LLM be allowed to engage in any sort of interaction on an emotionally charged topic with a child. Yet it is not only allowed, it is being encouraged through apps like Character.AI.

[–] kibiz0r@midwest.social 7 points 2 weeks ago

> only a tool

“The essence of technology is by no means anything technological”

Every tool contains within it a philosophy — a particular way of seeing the world.

But digital technologies especially… they give the developer the ability to embed their values into the tools. Like, is DoorDash just a tool?