this post was submitted on 04 Mar 2026
465 points (97.7% liked)

[–] Bassman27@lemmy.world 18 points 12 hours ago* (last edited 12 hours ago) (4 children)

So someone who already has an underlying mental health condition, diagnosed or not, is at fault for their own death even if they were coerced into doing it?

Without the AI, these people most likely wouldn't have gotten to the point of committing suicide. I believe the accusations are valid and that AI can be bad for mental health.

There is evidence throughout history of cults that commit mass suicides. If a human can convince another human to do this why can’t a robot trained to act and speak like a human do it too? It’s not unreasonable to think an AI could push someone to suicide under the right circumstances.

[–] SalamenceFury@piefed.social 7 points 12 hours ago* (last edited 11 hours ago)

Here's the thing: it's usually normies with no history of mental illness who fall into this kind of stuff. Most of my friends and people I follow on social media who are neurodivergent did experiment with chatbots, saw a fuckton of red flags in the way they work, and alerted everyone about it, if they didn't already hate the things for essentially stealing artistic output (in my case it was both). Regular people don't usually spot this trap because they don't have the experience.

[–] XLE@piefed.social 6 points 12 hours ago (1 children)

Google, of all companies, probably has a better psychological profile of their users than the average doctor. They even offer a public-facing option to disable ads about gambling, alcohol, or pregnancy.

[–] TwilitSky@lemmy.world 2 points 9 hours ago (2 children)

TBH, alcohol ads are INSUFFERABLE, but who needs pregnancy ads blocked?

[–] dtaylor84@lemmy.dbzer0.com 1 points 55 minutes ago

Maybe those trying and failing to conceive?

[–] XLE@piefed.social 4 points 9 hours ago

People who don't want their family getting suspicious, perhaps. The Target Incident comes to mind.

Of course, disabling these options doesn't mean Google stops knowing about mental or physical issues. I'm sure you know the best way to prevent that is to just avoid Google altogether. This is probably just Google's way of looking less creepy to the average person.

[–] I_Has_A_Hat@lemmy.world -5 points 9 hours ago (2 children)

In 1980, John Lennon was shot by a mentally ill man who was convinced to kill Lennon by reading Catcher in the Rye. If he had never read Catcher in the Rye, he most likely wouldn't have killed John Lennon.

But it is not the fault of Catcher in the Rye. We don't ban the book, or call the author irresponsible for writing it, because we recognize that the fault lies in the mental illness of the shooter, and that anything could have set him off.

The people who kill themselves because an AI Chatbot told them to are mentally ill. It is their mental illness that killed them, not the chatbot. You can make the claim that if it wasn't for the chatbot, they wouldn't have gone through with it, but again, you can say the same thing about Catcher in the Rye. Getting rid of the trigger does not remove the mental illness.

[–] ToTheGraveMyLove@sh.itjust.works 11 points 9 hours ago (3 children)

That's a terrible argument. We don't blame the book because Catcher in the Rye didn't have a conversation with him and tell him to kill John Lennon. That's the difference.

[–] TwilitSky@lemmy.world -1 points 6 hours ago (1 children)

"We don't blame the book because Catcher in the Rye didn’t have a conversation with him and tell him to kill John Lennon. That’s the difference."

Speak for yourself, please.

Oh, you're a dumbass, huh?

[–] NewNewAugustEast@lemmy.zip -1 points 7 hours ago* (last edited 7 hours ago) (1 children)

AIs can't have conversations any more than a book can. It may appear that way, but there is nobody there to have that conversation. It's more like flipping through a choose-your-own-adventure book.

[–] dtaylor84@lemmy.dbzer0.com 1 points 56 minutes ago (1 children)

How is that pedantic point relevant?

[–] NewNewAugustEast@lemmy.zip 1 points 48 minutes ago (1 children)
[–] dtaylor84@lemmy.dbzer0.com 1 points 9 minutes ago

What difference does it make if you call it a conversation or whatever you would call it? The LLM responded to his messages with its own messages.

Arguing semantics of what counts as a conversation doesn't really address the actual point, does it?

[–] SaveTheTuaHawk@lemmy.ca -1 points 8 hours ago

"If he had never read Catcher in the Rye, he most likely wouldn't have killed John Lennon."

Sue Seagram's!