California recently passed a law that will, in practice, cause AI chatbots to respond to any hint of emotional distress by spamming users with 988 crisis line numbers, or by cutting off the conversation entirely. The law requires chatbot providers to implement “a protocol for preventing the production of suicidal ideation” if they’re going to engage in mental health conversations at all, with liability waiting for any provider whose conversation is later linked to harm. New York is considering going further, with a bill that would simply ban chatbots from engaging in discussions “suited for licensed professionals.” Similar proposals are moving in other states.
If you’ve been reading Techdirt for any length of time, you know exactly what’s happening here. It’s the same moral panic playbook we’ve seen deployed against cyberbullying, then against social media, and now against generative AI. Something terrible happens. A handful of tragic stories emerge. Lawmakers, desperate to show they’re doing something, reach for the most visible technology in the room and start passing laws designed to stop it from doing whatever it was supposedly doing. The possibility that the technology might actually be helping more people than it’s hurting, or that the proposed fix might make things worse, rarely enters the conversation.
Professor Jess Miers and her student Ray Yeh had a terrific piece at Transformer last month that actually engages with the data and the incentive structures here. Their central argument will strike many people as backwards: the way to make AI chatbots safer for people in mental health distress might be to reduce liability for providers. It stops sounding backwards once you think through how the current liability regime shapes provider behavior, and once you reflect on what we know about Section 230's liability regime in a different context.