this post was submitted on 07 Mar 2026
Technology
I'm not sure I totally agree with this, even as much as I want AI companies to be held accountable for things like that.
The reason so many people turn to LLMs for legal and medical advice is that both fields are incredibly expensive, complex, and hard to parse.
If I ask an LLM what x symptom, y symptom, and z symptom could mean, and it cites multiple reputable sources to tell me it's probably the flu and advises me to mask up for a bit, that's probably better than being told "I'm sorry, I can't answer that."
At the same time, I might provide an LLM with all those symptoms, and it might hallucinate an answer and tell me I have cancer, or tell me to inject bleach to cure myself.
I feel like I'd much rather see a bill that focuses more on how the LLMs come to their conclusions, rather than just a blanket ban.
Like for example, if an LLM cites multiple medical journals, government health websites, etc., and faithfully repeats the information those sources published, but that information later turns out to be wrong because the institutions themselves were wrong, would it be justified to sue the LLM company for someone else's accidental misinformation?
But if an LLM pulls from those sources, gets most of it right, but comes to a faulty conclusion, then should a private right of action exist?
I'm not really sure myself to be honest. A lot of people rely on LLMs for their information now, so just blanket banning them from displaying certain information, for a lot of people, is just gonna be "you can't know", and they're not gonna bother with regular searches anymore. To them, the chatbot IS the search engine now.
It's problematic imho because the "advice" is often incomplete, lacking context, or outright wrong. So you end up having to verify it yourself anyway. And if you don't, you could be acting on harmful advice.
Which, to be fair, is no different from a lawyer. They're not perfect either.
The difference is that a lawyer can be held responsible for malpractice. When a chatbot gives harmful advice, who is responsible?
(Obviously, whoever is running it, but so far that hasn't been established in court.)
ITT: people with absolutely no fucking clue what the consequences of their emotional "AI bad" response will actually be.