California recently passed a law that will, in practice, cause AI chatbots to respond to any hint of emotional distress by spamming users with 988 crisis line numbers, or by cutting off the conversation entirely. The law requires chatbot providers to implement “a protocol for preventing the production of suicidal ideation” if they’re going to engage in mental health conversations at all, with liability waiting for any provider whose conversation is later linked to harm. New York is considering going further, with a bill that would simply ban chatbots from engaging in discussions “suited for licensed professionals.” Similar proposals are moving in other states.
If you’ve been reading Techdirt for any length of time, you know exactly what’s happening here. It’s the same moral panic playbook we’ve seen deployed against cyberbullying, then against social media, and now against generative AI. Something terrible happens. A handful of tragic stories emerge. Lawmakers, desperate to show they’re doing something, reach for the most visible technology in the room and start passing laws designed to stop it from doing whatever it was supposedly doing. The possibility that the technology might actually be helping more people than it’s hurting, or that the proposed fix might make things worse, rarely enters the conversation.
Professor Jess Miers and her student Ray Yeh had a terrific piece at Transformer last month that actually engages with the data and the incentive structures here, and their central argument is counterintuitive: the way to make AI chatbots safer for people in mental health distress might be to reduce liability for providers. To many people, I'm sure, that will sound backwards. That is, until you actually think through how the current liability regime shapes behavior, and reflect on what we know about Section 230's liability regime in a different context.
this post was submitted on 06 May 2026
As predicted, Mike Masnick is the author. Mike has a conflict of interest when it comes to reporting on platforms' responsibilities, because he's on the board of Bluesky, the social media company.
And he's trying to argue that chatbots are actually good for mental health. Never mind real healthcare; he praises chatbots.
The proof? Self-reports, including from people who use the Replika girlfriend-bot.
At this point, I consider anything on Mike's website that's related to social media to be compromised, and this is yet another example of that disappointing pattern.
The comments in the article are actually pretty good. Like this one.
ETA: the comment above ended up causing a Mike freakout. It was written by user TheKilt, who is exceptionally friendly and willing to concede points to Mike. Mike responds by accusing TheKilt of lying, and then proceeds to engage with other people in the same thread who are merely insulting him. TheKilt tries to get Mike's attention one last time, but Mike keeps ignoring him.
Mike is picking the lowest-hanging fruit and ignoring substantive criticism. It's embarrassing.
I want to gently push back on this. There are medications that can cause psychological symptoms and suicidal ideation as side effects and they're still prescribed. They are, however, controlled, people who take them have to be informed of the side effects, and they're managed by a trained physician. I absolutely think LLMs need to be more tightly regulated, and we need to have a much better idea of how they work and how to deploy them safely and in contexts where they are actually useful and won't cause harm. But we do manage known risks with other products.
The difference between the medical industry and the AI industry is like night and day. Medications are tested by professionals, side effects are documented, and professionals recommend them.
The AI/wellness industry, by comparison, grabs people who should have been treated by the medical system. AI is the medicine equivalent of a weirdo in an alleyway promising that they're a doctor, handing you random pills with ingredients unknown even to them, which they know for sure have driven people to kill themselves before. And the weirdo's only goal is to make you feel good about ingesting that medicine.
Regulation would be great, though. In fact, the product should be pulled until that regulation is in place.