this post was submitted on 28 Nov 2025
Fuck AI
I'm not a native speaker, so sometimes I use AI to grammar-check me and make sure I'm not talking nonsense. Just the other day I wanted to make a joke about waterboarding and asked the AI to check it. It said it couldn't because the joke involved torture; then I said it was for a fictional work and it did check it - basically what the boy did.
Honestly, the whole thing reads like shitty parents are trying to find someone else to blame.
Probably not shitty parents. There are a zillion causes of suicidal thoughts that have nothing at all to do with parenting.
If they were super religious and/or super conservative, though... those are actual causes of teen suicide. It's not the religion itself, it's the lack of acceptance of the child (for whatever reason, such as LGBTQ+ status).
Basically, parenting is only a factor if the parents aren't supportive, leaving the child feeling rejected or isolated. Other than that, you could be model parents and your child may still commit suicide.
ChatGPT discouraged him from seeking help from his parents when he suggested it.
Yeah, I think it's ridiculous to blame ChatGPT for this; it did as much as could reasonably be expected of it to avoid being misused this way.
Source.
Well, if that wasn't part of him asking ChatGPT to role-play, that's fucked up.
Legit doesn't matter. If it had been a teacher rather than ChatGPT, that teacher would be in prison.
Yeah, because a teacher is a sentient being with volition, not a tool under your control following your commands. It's going to be hard to rule that the tool deliberately helped him plan it, especially after he spent a lot of time trying to break the tool into working in his favor (at least, that's what the article suggests - and that source doesn't have the full content of the chat, just the parts that could be used for their case).
I guess more mandatory age verification is coming, because parents can't be responsible for what their kids do with the devices they give them.
At the heart of every LLM is a random number generator. They're word prediction algorithms! They don't think and they can't learn anything.
They're The Mystery Machine: Sometimes Shaggy gets out and is like, "I dunno man. That seems like a bad idea. Get some help, zoinks!" Other times Fred gets out and is like, "that noose isn't going to hold your weight! Let me help you make a better one..." Occasionally it's Scooby, just making shit up that doesn't make any sense, "tie a Scooby snack to it and it'll be delicious!"
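The "random number generator" point above can be sketched in code: a language model outputs a probability distribution over possible next tokens, and the reply is built by repeatedly sampling from that distribution. This toy Python example (the tokens and probabilities are invented purely for illustration, not taken from any real model) shows why the same prompt can get you Shaggy one time and Fred the next.

```python
import random

# Toy next-token distribution a model might produce for one prompt.
# These tokens and probabilities are made up for illustration.
next_token_probs = {
    "seek help": 0.5,      # the cautious "Shaggy" continuation
    "here's how": 0.3,     # the harmful "Fred" continuation
    "scooby snack": 0.2,   # the nonsense "Scooby" continuation
}

def sample_next_token(probs, rng):
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random()
# Identical "prompt", different random draws -> different replies.
samples = [sample_next_token(next_token_probs, rng) for _ in range(5)]
print(samples)
```

Real models sample over tens of thousands of tokens with temperature and other tweaks, but the core mechanism is this: a weighted dice roll, not deliberation.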
My teen has some issues due to sexual assault by a peer. That isn't bad parenting (except by the rapist's parents).
Well. The parents did let him use ChatGPT.
I think they can be excused for ignorance.
Ain't this just the textbook definition of victim blaming.
No. I can't form an opinion without the full chat content, but you all seem to be painting it like "one day a happy little boy enters the internet and is gaslit into killing himself by a computer," while the article says he had been struggling with suicidal thoughts for years, had been changing his medication on his own, and spent most of his time on forums where people talked about suicide.

On the chatbot, the boy ignored disclaimers, terms, and over a hundred warnings when talking about suicide, until he pretended it was all fictional to get the bot to play along. The boy might have been a victim of several things, but not of a chatbot - how many disclaimers, terms, and warnings does one have to put on a product, and does it even matter if the other party is set on ignoring them?

His self-medication might have been a big factor in his mental state, but no one seems to want to blame the pharmaceutical company, because somehow in that case you all agree he ignored the terms and warnings. Nor does anyone blame the rope manufacturer for supplying the tool, because you agree that was a misuse of their product. And judging by how quickly the parents looked for a scapegoat instead of taking a hard look at themselves, even knowing everything that was going on - and ignoring that a minor needs parental supervision to use the chatbot - my bet is on clueless, shitty parents.
And in the absence of this information, you assumed it was the parents' fault.
Nope, I said "my bet is" - I don't know if that's indeed the case. Regardless, the parents ARE responsible for his use of the chatbot.
And I would call that victim blaming. In fact I called it the textbook definition of victim blaming.
Ah, I see, you are saying the parents are the victim here? My bad, I thought you were saying the boy was the victim.