this post was submitted on 27 Aug 2025
370 points (96.5% liked)
Technology
you are viewing a single comment's thread
view the rest of the comments
That's one way to get a suit tossed out, I suppose. ChatGPT isn't a human, isn't a mandated reporter, and ISN'T a licensed therapist or licensed anything else. LLMs cannot reason, are not capable of emotions, and are not thinking machines.
LLMs take text, apply a mathematical function to it, and the result is more text that is probably what a human might respond with.
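Very loosely, something like this toy sketch, with a made-up vocabulary and made-up weights rather than anything from a real model: text in, a function over scores, the most probable continuation out.

```python
# Toy illustration only: a pretend "language model" that maps the last word of a
# context to scores over possible next tokens, then greedily picks the most
# probable one. A real LLM does the same kind of mapping with billions of
# learned parameters instead of this hand-written lookup table.
import math

VOCAB = ["I", "feel", "fine", "sad", "today", "."]

# Fabricated logits for the next token given the previous word.
FAKE_LOGITS = {
    "feel": {"fine": 2.1, "sad": 1.9, "today": 0.3, ".": -1.0, "I": -2.0},
}

def softmax(scores):
    # Turn raw scores into a probability distribution.
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def next_token(context):
    last = context.split()[-1]
    logits = FAKE_LOGITS.get(last, {tok: 0.0 for tok in VOCAB})
    probs = softmax(logits)
    # Greedy decoding: pick the single most probable next token.
    return max(probs, key=probs.get)

print(next_token("I feel"))  # -> "fine" under these made-up weights
```

That is all "the model responded" means here: the most probable continuation of the text so far, not a judgment by anything that understands the conversation.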
I think the more damning part is that OpenAI's automated moderation system flagged the messages for self-harm, but no human moderator ever intervened.
Human moderator? ChatGPT isn't a social platform, so I wouldn't expect there to be any actual moderation. A human couldn't really do anything besides shut down a user's account, and they probably wouldn't even have access to the conversations or any PII, because that would be a privacy nightmare.
Also, those moderation scores can be wildly inaccurate. I think people would quickly get frustrated using it if half the stuff they write gets flagged as
hate speech: 0.56, violence: 0.43, self-harm: 0.29
In my experience, scores in that middle range are really ambiguous.
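For context, this is roughly what acting on those scores looks like. A rough sketch, assuming the OpenAI Python SDK's moderations endpoint (openai>=1.0); the `flag_message` helper and the 0.5 cutoff are my own arbitrary choices for illustration, not anything OpenAI recommends.

```python
# Sketch: query the moderation endpoint and apply a fixed threshold to a few
# category scores. Scores in the 0.3-0.6 band, like the ones quoted above, are
# exactly where any fixed cutoff either over-flags or under-flags.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def flag_message(text, threshold=0.5):
    result = client.moderations.create(input=text).results[0]
    scores = {
        "hate": result.category_scores.hate,
        "self_harm": result.category_scores.self_harm,
        "violence": result.category_scores.violence,
    }
    flagged = {cat: s for cat, s in scores.items() if s >= threshold}
    return flagged, scores

flagged, scores = flag_message("example user message")
print(scores)   # e.g. something like the 0.56 / 0.43 / 0.29 spread quoted above
print(flagged)  # with a 0.5 cutoff only one category trips; lower it and most messages trip
```

Wherever you put that cutoff, scores like 0.43 and 0.29 sit right on top of it, which is why a fully automated pipeline either spams flags or misses the ones that matter.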
I'm looking forward to seeing how the AI Act will be interpreted in Europe with regard to OpenAI's responsibility. I could see them bearing such a responsibility if a court decides that their product has a sufficient impact on people's lives: not because they advertise such usage (as a virtual therapist or virtual friend, say), but because users reasonably end up using it that way.