Hard Pass

Hardpass.lol is an invite-only Lemmy Instance.
founded 1 year ago
California recently passed a law that will, in practice, cause AI chatbots to respond to any hint of emotional distress by spamming users with 988 crisis line numbers, or by cutting off the conversation entirely. The law requires chatbot providers to implement “a protocol for preventing the production of suicidal ideation” if they’re going to engage in mental health conversations at all, with liability waiting for any provider whose conversation is later linked to harm. New York is considering going further, with a bill that would simply ban chatbots from engaging in discussions “suited for licensed professionals.” Similar proposals are moving in other states.

If you’ve been reading Techdirt for any length of time, you know exactly what’s happening here. It’s the same moral panic playbook we’ve seen deployed against cyberbullying, then against social media, and now against generative AI. Something terrible happens. A handful of tragic stories emerge. Lawmakers, desperate to show they’re doing something, reach for the most visible technology in the room and start passing laws designed to stop it from doing whatever it was supposedly doing. The possibility that the technology might actually be helping more people than it’s hurting, or that the proposed fix might make things worse, rarely enters the conversation.

Professor Jess Miers and her student Ray Yeh had a terrific piece at Transformer last month that actually engages with the data and the incentive structures here, and their central argument may seem counterintuitive to many: the way to make AI chatbots safer for people in mental health distress might be to reduce liability for providers. For many people, I’m sure, that will sound backwards. That is, until you actually think through how the current liability regime shapes behavior, and reflect on what we know about Section 230’s liability regime in a different context.


Translated from Spanish. All credit to @koopacabras@mast.lat on Mastodon: https://mast.lat/@koopacabras/116522592940399627


Canada PM Mark Carney's 'landmark agreement' with China in January 2026 was a trade deal in the oldest sense: agricultural commodities (mainly Canadian canola) for manufactured goods (EVs from China).

The US, meanwhile, was imposing tariffs.

Thanks to the new trade deal, up to 49,000 EVs built in China can be imported into Canada at a reduced tariff rate of 6.1 percent, down from the 100 percent tariff imposed in 2024.

Canada began issuing permits for the first 24,500 vehicles in March, and Tesla moved quickly to capitalize.

It remains unclear how many of the initial 24,500 permits Tesla will lock down, though several reports suggest it is poised to claim the lion’s share.

This means that Tesla, the company run by the man closest to the US president who started the trade war in 2025, is the first and most aggressive beneficiary of Carney's 'landmark agreement.'

...


Meta has returned to court in the US this week for the second phase of a lawsuit brought by Raúl Torrez, New Mexico’s attorney general, following a March verdict that found the company liable for child safety failures and imposed a $375m fine. On Monday, the state petitioned for legal sanctions against the company: a monetary penalty ten times the original amount and a sweeping overhaul of Meta’s child safety protocols.

In the second part of the landmark case, known as the remedies phase, the state is asking for Meta to be declared a public nuisance and for the judge to order the company to pay $3.7bn in an abatement plan. The money would fund programs for law enforcement, mental health services and educators. The state is also requesting that the judge force a series of design changes to Meta’s platforms aimed at improving child safety, including universal age verification, de-encryption of children’s messages, a guardian account linked to every child’s account, and a child safety monitor tasked with holding Meta to account for five years.

The New Mexico department of justice argues that these changes would make Meta’s social networks safer for underage users in the state. Meta, however, says the proposed reforms are unfeasible and could ultimately force it to shut down its platforms in the state altogether.

New Mexico is not exactly a heavyweight, but it will be interesting to see how this plays out.


The agency’s scientists and data contractors reviewed millions of patient records for studies that were pulled back before release.
