this post was submitted on 27 Aug 2025
485 points (96.4% liked)


The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

[–] 0x0@lemmy.zip 12 points 3 days ago (1 children)

Yup... it's never the parents'...

[–] FiskFisk33@startrek.website 15 points 3 days ago (1 children)

The fact that the parents might be to blame doesn't take away from how OpenAI's product told a kid how to off himself and helped him hide it in the process.

copying a comment from further down:

> ChatGPT told him how to tie the noose and even gave a load-bearing analysis of the noose setup. It offered to write the suicide note. Here’s a link to the lawsuit: [Raine Lawsuit Filing](https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf)

Had a human said these things, it would have been illegal in most countries afaik.

[–] Randomgal@lemmy.ca 1 points 3 days ago (2 children)

He could have Googled the info. Humans failed this guy. Human behavior needs to change.

GPT could have been Google or a stranger in a chatroom.

[–] LillyPip@lemmy.ca 2 points 1 day ago

You should read the filing.

Google might have clinically told him things, but it wouldn’t have encouraged him: it wouldn’t have told him to hide the marks on his neck from a previous failed attempt by wearing a black turtleneck, told him how to tie the knot next time, or told him to hide his feelings from his parents and others.

His parents had him in therapy. He also told the AI he wanted to leave a noose out where his parents would find it, and the AI told him not to. It actively encouraged him to hide all this from his parents. A Google search wouldn’t do that, and it sounds like his parents did care.

[–] FiskFisk33@startrek.website 1 points 2 days ago

> Humans failed this guy.

I am not arguing this point, I agree.

A search engine presents the information that's available; it doesn't also help talk you into doing it.
A stranger doing this in a chatroom should go to prison, as has happened in the past. Should this not also be illegal for LLMs?