this post was submitted on 27 Aug 2025
486 points (96.4% liked)

Technology


The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

top 50 comments
[–] 0x0@lemmy.zip 12 points 6 days ago (1 children)

Yup... it's never the parents'...

[–] FiskFisk33@startrek.website 15 points 6 days ago (3 children)

The fact the parents might be to blame doesn't take away from how OpenAI's product told a kid how to off himself and helped him hide it in the process.

copying a comment from further down:

ChatGPT told him how to tie the noose and even gave a load bearing analysis of the noose setup. It offered to write the suicide note. Here’s a link to the lawsuit. [Raine Lawsuit Filing](https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf)

Had a human said these things, it would have been illegal in most countries afaik.

[–] andros_rex@lemmy.world 15 points 6 days ago* (last edited 6 days ago) (1 children)

The real issue is that mental health in the United States is an absolute fucking shitshow.

988 is a bandaid. It’s an attempt to pretend someone is doing anything. Really a front for 911.

Even when I had insurance, it was hundreds a month to see a therapist. Most therapists are also trained in CBT and CBT only, because it’s a symptoms-focused approach that gets you “well” enough to work. It doesn’t work for everything; it’s “evidence based” mostly in that it’s set up to be easy to measure. It’s an easy out, the McDonald’sification of therapy: just work the program and everything will be okay.

There really are so few options for help.

[–] LillyPip@lemmy.ca 2 points 3 days ago

They had Adam in therapy. It sounds like they were getting him the help he needed, but ChatGPT told him it was his closest friend and to hide his feelings from his parents and others. If that was happening, whatever mental healthcare he was getting would have been undermined by the AI.

[–] RazTheCat@lemmy.world 7 points 6 days ago* (last edited 6 days ago) (1 children)

OpenAI: Here's $15 million, now stop talking about it. A fraction of the billions of dollars they made sacrificing this child.

[–] branno@lemmy.ml 5 points 6 days ago (1 children)

except OpenAI isn't making a dime. they're just burning money at a crazy rate.

[–] kolorafa@lemmy.world 1 points 4 days ago* (last edited 4 days ago)

Fake news. The CEO and all employees are getting paid in full. It doesn't matter whether they sell the product to its users, sell (user data) to their sponsors, or share the data internally, and it doesn't matter that the service model itself is not profitable, because they make up the rest by selling (fake?) promises.

Same with many others like YouTube, which is also "not profitable" on paper as a standalone service. It only means they are using you, selling your data or selling some promises.

If they actually weren't profitable, they would raise prices or just disappear, and some other company would arise with a strategy that is at least sustainable.

Open source devs can be losing money, as they pay from their own pockets.

I would like to see at least one person in that company who is not getting money from it but funds it from their own money.

[–] Occhioverde@feddit.it 3 points 6 days ago* (last edited 6 days ago) (4 children)

I think we all agree on the fact that OpenAI isn't exactly the most ethical corporation on this planet (to use a gentle euphemism), but you can't blame a machine for doing something that it doesn't even understand.

Sure, you can call for the creation of more "guardrails", but they will always fall short: until LLMs are actually able to understand what they're talking about, what you're asking them and the whole context around it, there will always be a way to claim that you are just playing, doing worldbuilding or whatever, just as this kid did.

What I find really unsettling, from both this discussion and the one around the whole age verification thing, is that people are calling for technical solutions to social problems, an approach that has always failed miserably. What we should call for is for parents to actually talk to their children and spend some time with them, valuing their emotions and problems (however insignificant they might appear to a grown-up) in order to, you know, at least be able to tell if their kid is contemplating suicide.

[–] LillyPip@lemmy.ca 1 points 3 days ago* (last edited 3 days ago) (1 children)

but you can't blame a machine for doing something that it doesn't even understand.

But you can blame the creators and sellers of that machine for operating unethically.

If I build and sell a coffee maker that sometimes malfunctions and kills people, I’ll be sued into oblivion, and my coffee maker will be removed from the market. You don’t blame the coffee maker, but you absolutely hold the creator accountable.

[–] Occhioverde@feddit.it 1 points 2 days ago* (last edited 2 days ago) (1 children)

Yes and no. The example you gave is of a defective device, not an "unethical" one, though I understand that you're trying to say they sold a malfunctioning product without telling anyone.

For LLMs, however, we know damn well that they shouldn't be used as a therapist or as a digital friend to ask for advice; they are no more than a powerful search engine.

An example more in line with the situation we're analyzing is a kid who stabs himself with a knife after his parents left him playing with one; are you sure you want to sue the company that made the knife in that scenario?

[–] LillyPip@lemmy.ca 1 points 18 hours ago* (last edited 16 hours ago)

Not really, though.

The parents know the knife can be used to stab people. It’s a dangerous implement, and people are killed with knives all the time. edit: thus most parents are careful with kids and knives.

LLMs aren’t sold as weapons, or even as tools that can be used as weapons. They’re sold as totally benign tools that can’t reasonably be considered dangerous.

That’s the difference. If you’re paying especially close attention, you may potentially understand they can be dangerous, but most people are just buying a coffee maker.
