this post was submitted on 27 Mar 2026
303 points (96.9% liked)

Technology
(page 2) 40 comments
[–] SnotFlickerman@lemmy.blahaj.zone 196 points 1 day ago (8 children)

Huge Study

*Looks inside

this latest study examined the chat logs of 19 real users of chatbots — primarily OpenAI’s ChatGPT — who reported experiencing psychological harm as a result of their chatbot use.

Pretty small sample size. Despite the large dataset they pulled from, it's still data from just 19 people.

AI sucks in a lot of ways, sure, but this feels like FUD.

[–] XLE@piefed.social 53 points 1 day ago (3 children)

The hugeness is probably

391,562 messages across 4,761 different conversations

That's a lot of messages

[–] A_norny_mousse@piefed.zip 10 points 20 hours ago

Thanks, you saved me a click 😐

[–] chunes@lemmy.world 5 points 17 hours ago (2 children)

It's not really ethical to just yoink people's chats and study them

[–] Canonical_Warlock@lemmy.dbzer0.com 16 points 17 hours ago

Tell that to the advertising companies.

[–] InternetCitizen2@lemmy.world 26 points 1 day ago

I remember my old stats book saying a minimum of 30 data points is needed to assume a normal distribution. Also, small datasets like these are typically about proof of concept, so yeah, you've still got a point.
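As a quick illustration of that rule of thumb (a minimal numpy simulation; the skewed exponential population is just a stand-in, nothing to do with the study's data), the distribution of sample means gets close to normal somewhere around n = 30:

```python
# Sketch: the classic "n >= 30" rule of thumb. Draw many samples from a
# skewed population and check how skewed the distribution of sample means
# still is; skewness near 0 means approximately normal.
import numpy as np

rng = np.random.default_rng(0)

def mean_skewness(sample_size: int, n_trials: int = 10_000) -> float:
    """Skewness of the sample-mean distribution for a skewed (exponential)
    population; shrinks toward 0 as sample_size grows."""
    means = rng.exponential(scale=1.0, size=(n_trials, sample_size)).mean(axis=1)
    centered = means - means.mean()
    return float((centered**3).mean() / (centered**2).mean() ** 1.5)

for n in (5, 30, 100):
    print(f"n={n:>3}: skewness of sample means = {mean_skewness(n):+.3f}")
```

(Theory says the skewness here falls off as 2/√n, so 30 isn't magic, just the point where the residual skew is usually tolerable.)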

[–] UnderpantsWeevil@lemmy.world 8 points 1 day ago

I wonder if the headline was written by an AI

[–] Lost_My_Mind@lemmy.world 6 points 1 day ago (3 children)
[–] tburkhol@lemmy.world 33 points 1 day ago

FUD: Fear, Uncertainty, and Doubt. A tactic for denigrating a thing, usually by implication of hypothetical or exaggerated harms, often in vague language that is either tautological or not falsifiable.

[–] orbituary@lemmy.dbzer0.com 1 points 1 day ago

*hugely funded?

[–] amgine@lemmy.world 43 points 1 day ago (3 children)

I have a friend who's really taken to ChatGPT, to the point where "the AI named itself, so I call it by that name". Our friend group has tried to discourage her from relying on it so much, but I think that's just caused her to hide it.

[–] Tollana1234567@lemmy.today 12 points 18 hours ago

It's like the AI BF/GFs the subs are posting about.

[–] nymnympseudonym@piefed.social 3 points 23 hours ago

"Centaurs"

They think they are getting mythical abilities

They're right but not in the way they think

[–] d00ery@lemmy.world -3 points 15 hours ago* (last edited 15 hours ago) (2 children)

I certainly enjoy talking to LLMs about work, for example asking things like "was my boss an arse to say x, y, z", as the LLM always seems to be on my side... Now, it could be that my boss is an arse, or it could be the LLM sucking up to me. Either way, because of the many examples I've read online, I take it with a pinch of salt.

[–] Rekall_Incorporated@piefed.social 4 points 13 hours ago (1 children)

I use LLMs for work (low-priority stuff, to save time on search, or things I know I will validate later in the process), and I can't stand the writing style and the constant attempts to bring in adjacent, unrelated topics (I've been able to tone down the cute language and bombastic delivery style in Gemini's configuration, as sketched below).

It's like Excel trying to chat with me when I'm working with a pivot table or transforming data in PowerQuery.
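For what it's worth, that kind of toning-down is typically done with a system instruction; a minimal sketch with the google-generativeai SDK (the model name, key handling, and instruction wording are just examples, not Gemini's defaults):

```python
# Hypothetical sketch: suppress filler and tangents via a system instruction.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel(
    model_name="gemini-1.5-flash",  # example model; use whatever you have
    system_instruction=(
        "Answer plainly and directly. No filler, no flattery, "
        "no adjacent topics unless explicitly asked."
    ),
)

resp = model.generate_content("Explain a pivot table in two sentences.")
print(resp.text)
```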

[–] givesomefucks@lemmy.world 42 points 1 day ago (2 children)

As the researchers wrote in a summary of their findings, the “most common sycophantic code” they identified was the propensity for chatbots to rephrase and extrapolate “something the user said to validate and affirm them, while telling them they are unique and that their thoughts or actions have grand implications.”

There's a certain irony in all the alt-right techbros really just wanting to be told they were "stunning and brave" this whole time.

[–] A_norny_mousse@piefed.zip 3 points 20 hours ago

Huh. I hate it when people do that. Fake/professional empathy/support. Yet others gobble it up when a machine does that.

[–] Tiresia@slrpnk.net -3 points 15 hours ago (1 children)

Are the users in this study techbros?

Besides, tech bros didn't program this in; this is just an LLM getting stuck in data patterns stolen from toxic self-help literature.

For decades there has been a large self-help subculture that consumes massive amounts of vacuous positive affirmation produced by humans. Now those vacuous affirmations are copied by the text-copying machine, with the same result, and it's treated as shocking.

[–] Hackworth@piefed.ca 15 points 1 day ago* (last edited 1 day ago) (1 children)

Anthropic has some similar findings, and they propose an architectural change (activation capping) that apparently helps keep the Assistant character away from dark traits (sometimes). But it hasn't been implemented in any models, I assume because of the cost of scaling it up.
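For the mechanically curious, a hypothetical sketch of what capping an activation along a trait direction could look like, using a PyTorch forward hook; the layer index, direction vector, and cap value are placeholders, not Anthropic's actual method:

```python
# Sketch of "activation capping": clamp the component of a hidden state that
# lies along a learned trait direction, so activations can't drift too far
# toward that trait. Everything concrete here is an assumption.
import torch

def make_capping_hook(trait_direction: torch.Tensor, cap: float):
    d = trait_direction / trait_direction.norm()  # unit vector for the trait

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        proj = hidden @ d                           # per-token projection onto trait
        excess = torch.clamp(proj - cap, min=0.0)   # amount above the cap
        hidden = hidden - excess.unsqueeze(-1) * d  # remove only the excess
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden

    return hook

# Usage (assumes a HuggingFace-style decoder; layer 20 is arbitrary):
# direction = torch.load("trait_direction.pt")  # hypothetical file
# model.model.layers[20].register_forward_hook(
#     make_capping_hook(direction, cap=4.0))
```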

[–] porcoesphino@mander.xyz 13 points 23 hours ago* (last edited 23 hours ago) (2 children)

When you talk to a large language model, you can think of yourself as talking to a character

But who exactly is this Assistant? Perhaps surprisingly, even those of us shaping it don't fully know

Fuck me, that's some terrifying anthropomorphising for a stochastic parrot.

The study could also be summarised as "we trained our LLMs on biased data, then honed them to be useful, then chose some human qualities to map the models to, and would you believe they align along a spectrum of being useful assistants!?". They built the thing to be that way, then act shocked? Who reads this and is impressed, besides the people who want another exponential-growth investment?

To be fair, I'm only about a third of the way through and struggling to continue reading, so I haven't gotten to the interesting research, but the intro is, I think, terrible.

[–] Hackworth@piefed.ca 4 points 23 hours ago

The paper is more rigorous with language but can be a slog.

[–] nymnympseudonym@piefed.social -1 points 23 hours ago (3 children)

stochastic parrot

A phrase that throws more heat than light.

What they are predicting is not the next word; they are predicting the next idea.

[–] kazerniel@lemmy.world 1 points 12 hours ago

throws more heat than light

Thanks, I haven't heard this phrase before, but it feels quite descriptive :)

[–] ageedizzle@piefed.ca 10 points 23 hours ago* (last edited 23 hours ago) (1 children)

Technically, they are predicting the next token. To do that properly they may need to predict the next idea, but that's just a means to an end (the end being the next token).

[–] affenlehrer@feddit.org 3 points 19 hours ago

Also, the LLM is just predicting it; it's not selecting it. Additionally, it's not limited to the assistant role: if you (mis)configure the inference engine accordingly, it will happily predict user tokens or any other tokens (tool calls, etc.).
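A minimal sketch of that predict/select split with the transformers library (gpt2 only as a small stand-in): the model's sole output is a distribution over next tokens, and the choice happens in a separate sampling step that will happily continue the user's turn too:

```python
# The model predicts a distribution; choosing a token is a separate step.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("User: hello\nAssistant:", return_tensors="pt").input_ids
logits = model(ids).logits[0, -1]       # scores for every possible next token
probs = torch.softmax(logits, dim=-1)   # a distribution, not a decision

# Selection lives outside the model: nothing stops it predicting a "User" turn.
next_id = torch.multinomial(probs, num_samples=1)
print(tok.decode(next_id))
```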

[–] lemmie689@lemmy.sdf.org 8 points 1 day ago
[–] vane@lemmy.world 1 points 22 hours ago (1 children)

Paranoia amplification when?

[–] frongt@lemmy.zip 2 points 13 hours ago