this post was submitted on 04 Mar 2026
466 points (97.7% liked)
Technology
There is a lot to hate about AI, and plenty of real dangers and valid criticism. But AI chatbots convincing people to kill themselves isn't a problem with the chatbots; it's a problem with the user.
I get it: grieving families will look for anything and anyone to blame for a suicide except the victim, but ultimately it is the victim who chose to die. If someone can be convinced to kill themselves by something as stupid as an AI chatbot, they weren't far from the edge to begin with.
So someone who already has an underlying mental health condition, diagnosed or not, is at fault for their own death even if they were coerced into it?
Without the AI, these people most likely would never have gotten to the point of actually taking their own lives. I believe the accusations are valid and that AI can be bad for mental health.
There is plenty of evidence throughout history of cults committing mass suicide. If a human can convince another human to do this, why couldn't a bot trained to act and speak like a human do it too? It's not unreasonable to think an AI could push someone to suicide under the right circumstances.
Google, of all companies, probably has a better psychological profile of their users than the average doctor. They even offer a public-facing option to disable ads about gambling, alcohol, or pregnancy.
TBH, alcohol ads are INSUFFERABLE, but who needs pregnancy ads blocked?
Maybe those trying and failing to conceive?
People who don't want their family getting suspicious, perhaps. The Target Incident comes to mind.
Of course, disabling these options doesn't mean Google stops knowing about your mental or physical issues. I'm sure you know the best way to prevent that is to just avoid Google altogether. This is probably just Google's way of looking less creepy to the average person.