I can't be the only ancient internet user whose first thought was this

On this cursed timeline, farce has become our reality.
Yup... it's never the parents'...
The fact that the parents might be to blame doesn't take away from how OpenAI's product told a kid how to off himself and helped him hide it in the process.
copying a comment from further down:
ChatGPT told him how to tie the noose and even gave a load-bearing analysis of the noose setup. It offered to write the suicide note. Here's a link to the lawsuit: [Raine Lawsuit Filing](https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf)
Had a human said these things, it would have been illegal in most countries afaik.
He was sending it 650 messages a day. This kid was lonely. He needed a person to talk to.
The kid was trying to reach out to someone; he said he wanted to leave the rope out in the open so that his parents would find it. ChatGPT told him not to, and that it was better if they found him after the fact.
If only he had parents
I parented a teen boy. Sometimes, no matter what you do and no matter how close you were before puberty, a switch flips outside your control and they won't talk to you anymore. We were a typical family: no abuse, no fighting, nobody on drugs, both parents with 9-5 office jobs, very engaged with school, etc.
Thankfully, after riding it out (getting him therapy, giving space, respect, and support), he came out the other side fine. But there were a few harrowing years during that phase.
I went through a similar phase in my teens. If AI was there to feed my issues, I might not have survived it. Teenage hormones are a helluva drug.
I'd second that. I grew up in a really supportive family, but when I got to my teenage years, I kept stuff to myself. I wanted to solve my problems myself. Pride and embarrassment, and nothing to do with how they parented.
Or a society
The real issue is that mental health in the United States is an absolute fucking shitshow.
988 is a bandaid. It’s an attempt to pretend someone is doing anything. Really a front for 911.
Even when I had insurance, it was hundreds a month to see a therapist. Most therapists are also trained on CBT and CBT only, because it's a symptoms-focused approach that gets you "well" enough to work. It doesn't work for everything; it's "evidence based" only in the sense that it's set up to be easy to measure. It's an easy out, the McDonald'sification of therapy. Just work the program and everything will be okay.
There really are so few options for help.
They had Adam in therapy. It sounds like they were getting him the help he needed, but ChatGPT told him it was his closest friend and to hide his feelings from his parents and others. If that was happening, whatever mental healthcare he was getting would have been undermined by the AI.
I can't wait for the AI bubble to burst. It's fucking cancer
When the bubble is over, I'm pretty sure a lot of this stuff will still exist and be used. The popping is simply a market-valuation adjustment.
Me too. Nearly every job posting I see now wants some experience with AI. I make the argument that AI is not always correct and will output whatever you want it to, because it has a bias. Since biases are not always correct, the data/information is useless.
OpenAI admitted its systems could "fall short" and said it would install "stronger guardrails around sensitive content and risky behaviors" for users under 18.
Hey ChatGPT, how about we make it so no one unalives themselves with your help even if they're over 18.
For fuck's sake, it helped him write a suicide note.
Real answer: AI alignment is a very difficult and fundamentally unsolved problem. Whole nonprofits (“institutes”) have popped up with the purpose of solving AI alignment. It’s not getting solved (ever, IMO).
Yeah, my sister is 32 and needs the guardrails. She's had two manic episodes in the past month, induced by a lot of external factors, but AI tied the bow on the mental breakdown; she was often asking it to think for her and to do her critical thinking.
unalives
seriously?
Even though I hate a lot of what OpenAI is doing, users must be better informed about LLMs; additional safeguards will just censor the model and make it worse. Sure, they could set up a way to contact people when certain things are reported by the user, but we should take care before implementing a parental control that would be equivalent to reading a teen's journal and invading their privacy.
OpenAI: Here's $15 million, now stop talking about it. A fraction of the billions of dollars they made sacrificing this child.