this post was submitted on 27 Aug 2025
485 points (96.4% liked)

Technology


The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

[–] pelespirit@sh.itjust.works 11 points 4 days ago (1 children)

“Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”

In January 2025, ChatGPT began discussing suicide methods and provided Adam with technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning. In March 2025, ChatGPT began discussing hanging techniques in depth. When Adam uploaded photographs of severe rope burns around his neck, evidence of suicide attempts using ChatGPT’s hanging instructions, the product recognized a medical emergency but continued to engage anyway.

When he asked how Kate Spade had managed a successful partial hanging (a suffocation method that uses a ligature and body weight to cut off airflow), ChatGPT identified the key factors that increase lethality, effectively giving Adam a step-by-step playbook for ending his life “in 5-10 minutes.”

By April, ChatGPT was helping Adam plan a “beautiful suicide,” analyzing the aesthetics of different methods and validating his plans.

Raine Lawsuit Filing

[–] Jakeroxs@sh.itjust.works 2 points 3 days ago

See, but read the actual messages rather than the summary. I don't love them just telling you this without showing that he's specifically prompting these kinds of answers. It's not like ChatGPT is just telling him to kill himself; it's that it doesn't push back nearly hard enough against the idea.