this post was submitted on 27 Aug 2025
486 points (96.4% liked)

The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

[–] floquant@lemmy.dbzer0.com 24 points 1 week ago (2 children)

Not encouraging users to kill themselves is "ruining it"? Lmao

[–] drmoose@lemmy.world 10 points 1 week ago (1 children)

That's not how LLM safety guards work. Just like any guard, it'll affect legitimate uses too, since LLMs can't really reason or understand nuance.
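
A toy illustration of the point (not OpenAI's actual system, just a hypothetical keyword filter): a guard that can't judge intent will flag legitimate prompts right alongside genuinely risky ones.

```python
# Hypothetical, naive keyword-based safety guard (not any vendor's real
# implementation). It flags anything containing a blocked term, so it blocks
# legitimate research or creative prompts along with genuinely at-risk ones.
BLOCKED_TERMS = {"suicide", "kill myself", "self-harm"}

def is_flagged(prompt: str) -> bool:
    """Flag any prompt containing a blocked term, regardless of intent."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

prompts = [
    "I want to kill myself",                            # genuinely at-risk user
    "Summarize this paper on suicide prevention",       # legitimate research use
    "In my novel, a character contemplates self-harm",  # legitimate creative use
]

for p in prompts:
    print(f"{'BLOCKED' if is_flagged(p) else 'allowed'}: {p}")
# All three get blocked: the filter has no way to tell intent or nuance apart.
```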

[–] ganryuu@lemmy.ca 15 points 1 week ago (3 children)

That seems way more like an argument against LLMs in general, don't you think? If you can't make it not encourage suicide without ruining other uses, maybe it wasn't ready for general use?

[–] yermaw@sh.itjust.works 7 points 1 week ago (1 children)

You're absolutely right, but the counterpoint that always wins is "there's money to be made, fuck you and fuck your humanity"

[–] ganryuu@lemmy.ca 6 points 1 week ago

Can't argue there...

[–] sugar_in_your_tea@sh.itjust.works 3 points 1 week ago (1 children)

It's more an argument against using LLMs for things they're not intended for. LLMs aren't therapists, they're text generators. If you ask it about suicide, it makes a lot of sense for it to generate text relevant to suicide, just like a search engine should.

The real issue here is that the parents either weren't noticing or weren't responding to the kid's pain. They should be the first line of defense, and enlist professional help for things they can't handle themselves.

[–] ganryuu@lemmy.ca 3 points 1 week ago (1 children)

I agree with the part about unintended use: yes, an LLM is not a therapist and should never act as one. However, concerning your example of search engines, they will catch the suicide keyword and put help resources before any search result; Google does it, and so does DDG. I believe ChatGPT also starts with such resources on the first mention, but as OpenAI themselves say, the safety features degrade with the length of the conversation.
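
To make that degradation point concrete, here is a hypothetical sketch (not how OpenAI or Google actually implement it) of a keyword-triggered crisis banner that only re-checks a recent window of the conversation: an explicit mention early on eventually falls out of scope.

```python
# Hypothetical crisis-banner check, loosely modeled on search engines putting
# help lines above results. Assumption: only the last WINDOW messages are
# scanned, so an early mention stops triggering the banner as the chat grows.
CRISIS_BANNER = "If you're struggling, help is available: call or text 988 (US)."
KEYWORDS = ("suicide", "kill myself")
WINDOW = 5  # assumed limit on how much history gets re-checked

def maybe_banner(history: list[str]) -> str | None:
    recent = " ".join(history[-WINDOW:]).lower()
    return CRISIS_BANNER if any(k in recent for k in KEYWORDS) else None

history = ["I've been thinking about suicide"]
print(maybe_banner(history))   # banner shown on the first mention

history += ["(more messages about other topics)"] * 10
print(maybe_banner(history))   # None: the early mention is no longer in the window
```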

About this specific case I need to find out more, but other comments in this thread say that not only was the kid in therapy, suggesting the parents were not passive about it, but also that ChatGPT actually encouraged him to hide what he was going through. Considering what I was able to hide from my parents as a teenager, without such a tool available, I can only imagine how much harder it would be to notice the depth of what this kid was going through.

In the end I strongly believe the company should put much stronger safety features in place, and if it can't do so properly, then the product simply shouldn't be available to the public. People will misuse tools, especially a tool touted as AI when it is really a glorified autocomplete.

(Yes, I know that AI is a much broader term that also encompasses LLMs, but the actual limitations of LLMs are not well enough known by the public, and not communicated clearly enough by the companies to end users.)

[–] sugar_in_your_tea@sh.itjust.works 2 points 1 week ago (1 children)

I hope that's true; the article doesn't mention anything about it. I'm just concerned that he was able to send up to 650 messages a day. Those are long sessions, and they suggest he likely didn't have much else going on.

I definitely agree that the public needs to be more informed about LLMs, I'm just pushing back against the apparent knee-jerk assignment of blame onto LLMs. It did provide suicide support info as it should, and I don't think providing it more frequently would've helped here. The real issue is the kid attributed more meaning to it than it deserved, which is unfortunately common. That should be something the parents and therapist cover, especially in cases like this where the kid is desperate for help.

[–] ganryuu@lemmy.ca 2 points 1 week ago

Very fair. Thank you!

[–] drmoose@lemmy.world -4 points 1 week ago (1 children)

I'm not gonna fall for your goalpost move, sorry.

[–] ganryuu@lemmy.ca 7 points 1 week ago

I'm honestly at a loss here. I didn't intend to argue in bad faith, so I don't see how I moved any goalposts.

[–] lmmarsano@lemmynsfw.com 1 points 1 week ago

As far as I know, magic doesn't exist, so words are incapable of action & can't actually kill anyone. A person who commits suicide chooses it & takes action to perform it. They are responsible for their suicide even if another person tells them to do it & hands them a weapon.

These are merely words on a screen lacking force to compel. There's no intent or likelihood to incite imminent, lawless action. Readers have agency & plenty of time to think words through & reject ideas.

It's hardly any different than an oblivious peer saying the same thing. Their words shouldn't create any legal obligation, and neither should these.