this post was submitted on 27 Mar 2026
285 points (97.0% liked)

Technology

[–] HeyThisIsntTheYMCA@lemmy.world 3 points 3 hours ago* (last edited 3 hours ago)

okay how many of these "delusional" people in the study are making fun of the LLM tho

I don't know because I don't use the LLM; I only see the screenshots. I am the control group. Kinda. My nut is already off.

[–] ExLisper@lemmy.curiana.net 32 points 16 hours ago (8 children)

I think what we're seeing is similar to lactose intolerance. Most people can handle it just fine but some people simply can't digest it and get sick. The problem is there's no way to determine who can handle AI and who can't.

When I'm reading about people developing AI delusions, their experiences sound completely alien to me. I played with LLMs same as anyone, and I never treated it as anything other than a tool that generates responses to my prompts. I never thought "wow, this thing feels so real". Some people clearly have a predisposition to jumping over the "it's a tool" reaction straight to "it's a conscious thing I can connect with". I think the next step should be developing a test that can predict how someone will react to it.

[–] wonderingwanderer@sopuli.xyz 5 points 2 hours ago (1 children)

I suspect that the difference is to no small degree correlated with a person's isolation/social-integration.

People who aren't socially integrated have always been more vulnerable to predatory cults and scams. It's because human interaction is a psychological need that's been hardcoded into us by evolution.

Some people say "I don't need human interaction, I enjoy my time alone!" But that's because they have the privilege of enough social acceptance and integration that they get to enjoy their time alone. It's well-established within the field of psychology that true isolation can have a range of deep and far-reaching impacts on a person's well-being.

When people are developing, they need to socialize with their peers; and being unable to do so leads to maladaptive behavior patterns. Even as adults, people need regular social contact or their psychological state can quickly deteriorate. That's why solitary confinement is considered a method of torture in some circumstances, when it's used to depersonalize and destroy a person's sense of self-identity.

So that's why I suspect that people who are well-integrated with friends, family, acquaintances, and coworkers are probably less vulnerable to these sorts of delusions and can treat AI as "just a tool."

But for someone who hardly has any social interaction in a day, has no friends or family to talk to, and maybe their warmest interaction all week was with the clerk at the grocery store, then yeah I'd say it's predictable that they would be vulnerable to getting sucked into this trap of relying on an LLM for their social interaction.

It might be superficial, but it's a way of patching a hole. It's an expedient means to fulfill a need that they're not getting from anywhere else.

If we don't want this sort of stuff happening to people, then maybe we shouldn't ostracize them for being "weird" in the first place. Because nobody learns how to be "normal" by being alone all the time.

[–] ChunkMcHorkle@lemmy.world 2 points 2 hours ago (1 children)

This is really good. Thank you for taking the time to write it.

[–] wonderingwanderer@sopuli.xyz 3 points 1 hour ago

Thank you for understanding. So many times when I discuss things that are adjacent to this topic, I get flamed in the comments with people accusing me of being some sort of redpiller from the manosphere.

Like, no, social isolation is a problem, and it's getting worse due to a variety of factors: there's social media algorithms designed to keep people dependent on their phones; there's the long-standing consequences of the pandemic, both the collective trauma it caused and the social skills that atrophied during quarantine; there's widespread political polarization, which keeps tensions high and makes it difficult to navigate new situations unless you know the right social scripts and avoid any faux pas; there's the whole toxic influencer culture grifting on inflammatory rhetoric and ragebait content, exploiting people's vulnerabilities and radicalizing them (which is a vicious cycle, because they prey on people who are already isolated!); and that's just to name a few!

But if I summarize all that as a "loneliness epidemic," then people call me an incel and act like I'm trying to coerce women into having sex with me simply by acknowledging the fact that social interaction is a deeply-set human psychological need.

Like, using "incel" as an insult is part of the problem. It feeds into this culture where "if you're a man, you must get laid, or else you're worthless." That's literally promoting toxic masculinity!

And it forces these people who are already isolated and vulnerable to go identify with these groups of similarly ostracized people in echo chambers where they're insulated from those insults, where those predatory "influencers" then have fresh pickings of new losers to neg and radicalize.

But somehow, if I point out the problem here (because how can we solve a problem if we can't talk about it?), then in most people's view that makes me part of the problem! Even though, why would I be calling out the pattern if it were something I identified with?

The people radicalizing these vulnerable "losers," yes they should be torched. But the vulnerable "losers" being radicalized need to be treated with compassion if they're ever going to be redeemed. It should be pretty easy to identify who's who, seeing as they have an entire social structure based on hierarchies of dominance and submission...

[–] lmmarsano@group.lt 0 points 1 hour ago

I think next step should be developing a test that can predict how someone will react to it.

Unnecessary: foolish people are always gonna fool. No need to save anyone that far gone in the lacking-judgement department. Just because some people overeat junk food doesn't mean we need to devise some test to decide who can buy junk food, either: that bullshit's beyond paternalistic.

[–] baaaaaah@hilariouschaos.com 5 points 6 hours ago

Surprisingly, the people who have issues with it aren't the ones who connect to it emotionally; it's the people who offload their decision-making to AI.

It's more like a codependence spiral than anything else

[–] thedeadwalking4242@lemmy.world 14 points 11 hours ago (1 children)

I bet it's probably correlated with low education, as most things are

[–] stardreamer@lemmy.blahaj.zone 6 points 11 hours ago* (last edited 11 hours ago) (1 children)

So you're saying there's a chance I can have cheese if I go to college?

Sign me up! Where's the cheddar?

[–] thedeadwalking4242@lemmy.world 3 points 11 hours ago

Unfortunately it's now in the Dean's pockets 😭

[–] Tiresia@slrpnk.net 11 points 13 hours ago (1 children)

Cults and toxic self-help literature have existed before LLMs copied them. I don't know if LLMs are getting people who couldn't have been gotten by human scammers.

Scams have many different vectors and people can be vulnerable to them depending on their mood or position in life. Testing people on LLM intolerance would be more like testing them on their susceptibility to viruses.

People can be immunocompromised for various reasons, temporarily or permanently, so as a society public hygiene standards (and the material conditions to produce them) are a lot more valuable. Wash your hands after interacting, keep public spaces clean, that sort of stuff.

[–] ExLisper@lemmy.curiana.net 2 points 10 hours ago

Yes, it can definitely be a temporary thing, which would make it even harder to protect people from. It's also most likely a spectrum. If your "resistance" is at 10, you may not be at risk even at your lowest point. Other people can be at 5 when they're doing great but risk psychosis when they're down for some reason. I just think it's kind of scary that people interact with it voluntarily (unlike with scammers or cults) without knowing how it will affect them. We all tried LLMs, but most of us have been lucky so far.

[–] SnotFlickerman@lemmy.blahaj.zone 190 points 1 day ago (36 children)

Huge Study

*Looks inside

this latest study examined the chat logs of 19 real users of chatbots — primarily OpenAI’s ChatGPT — who reported experiencing psychological harm as a result of their chatbot use.

Pretty small sample size. Despite being pulled from a large dataset, it's still the data of just 19 people.

AI sucks in a lot of ways, sure, but this feels like FUD.

[–] XLE@piefed.social 49 points 21 hours ago (1 children)

The hugeness is probably

391,562 messages across 4,761 different conversations

That's a lot of messages

[–] sukhmel@programming.dev 8 points 5 hours ago (1 children)

If that's only 19 users, that's around 250 conversations per user 🤔

[–] SnotFlickerman@lemmy.blahaj.zone 4 points 4 hours ago (1 children)

...and about 82 messages per conversation. Assuming roughly half of the messages are from the user to the AI and the other half from the AI to the user, that's around 41 messages from the user per conversation.
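For what it's worth, the back-of-envelope math checks out. A quick sketch using the figures quoted above (the alternating user/AI split is an assumption, not something the study states):

```python
# Back-of-envelope check of the study's reported figures.
messages = 391_562       # total messages in the dataset
conversations = 4_761    # total conversations
users = 19               # users whose chat logs were examined

conv_per_user = conversations / users        # ~250.6 conversations per user
msgs_per_conv = messages / conversations     # ~82.2 messages per conversation
# Assuming messages alternate between user and AI, roughly half are the user's:
user_msgs_per_conv = msgs_per_conv / 2       # ~41.1 user messages per conversation

print(f"{conv_per_user:.1f}, {msgs_per_conv:.1f}, {user_msgs_per_conv:.1f}")
# prints "250.6, 82.2, 41.1"
```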

[–] sukhmel@programming.dev 3 points 3 hours ago

Yeah, I also thought about that. It looks like a lot, but I guess the users in this case differ from ordinary usage.

[–] A_norny_mousse@piefed.zip 10 points 18 hours ago

Thanks, you saved me a click 😐

[–] amgine@lemmy.world 41 points 23 hours ago (5 children)

I have a friend that’s really taken to ChatGPT to the point where “the AI named itself so I call it by that name”. Our friend group has tried to discourage her from relying on it so much but I think that’s just caused her to hide it.

[–] Tollana1234567@lemmy.today 10 points 16 hours ago

It's like the AI BF/GFs the subs are posting about.
