this post was submitted on 04 Mar 2026
594 points (97.7% liked)

Technology

[–] lightnsfw@reddthat.com 4 points 19 hours ago (5 children)

Not that I want to defend AI slop, but what prompted these responses from Gemini?

[–] Martineski@lemmy.dbzer0.com 10 points 18 hours ago (4 children)

Doesn't matter what prompted them.

[–] lightnsfw@reddthat.com 2 points 17 hours ago (3 children)

I mean if Gemini was responding to some kind of roleplay then yeah it does. Not everyone doing shit with it has mental health problems. Some people are just fucking around.

[–] Martineski@lemmy.dbzer0.com 12 points 16 hours ago (2 children)

The issue there is that it feeds into those mental health issues with efficiency and on a scale never seen before. The models are programmed to agree with the user, and they are EXTREMELY HEAVILY ADVERTISED AND SHOVED ONTO PEOPLE AROUND THE WHOLE GLOBE DESPITE IT BEING WELL KNOWN HOW LIMITED AND PROBLEMATIC THE TECHNOLOGY IS, WHILE THE CORPORATIONS DON'T TAKE ANY RESPONSIBILITY AT ALL. Anything from violating rights and privacy by gathering any and all data they can on you, to situations like these where people hurt themselves (suicide, bad health advice, etc.) or others. But sure, let's be ignorant, do some victim blaming, and disregard the bigger picture there.

[–] lightnsfw@reddthat.com 1 point 1 hour ago* (last edited 1 hour ago)

I agree with a lot of the things you said about the problems with AI but not that this is one of them.

If it wasn't this it would have been something else. People with mental health issues can get fixated on things and spiral until they act out. This has been a thing for as long as there have been mental health issues. It's not a failing of AI, it's a failing of society for not having sufficient mental health support to catch people like this before they go off the deep end. They shouldn't have to turn to AI in the first place.

[–] brbposting@sh.itjust.works 1 point 10 hours ago

I wonder if there’s a parallel universe where the labs instead went to the other extreme and require intelligence tests to onboard to their platforms.

And the outcry is, not inappropriately, about how many are being denied access to the latest technologies. The policy could effectively be construed as racist, even.

Anyway, the middle ground there is pretty obvious. (Though I'm not sure how I'd design it just right. E.g., folks without access to traditional/expensive mental healthcare might still be able to see some small benefit if it's determined to be safe, just like it could maybe be safe for a well-adjusted individual to complain to it about their day for a couple minutes before moving on to real things. Sure, I suppose it's inherently unsafe, but a proportion of the population should be making that decision for themselves.)