this post was submitted on 04 Mar 2026
535 points (98.0% liked)

Technology
[–] wonderingwanderer@sopuli.xyz 16 points 13 hours ago* (last edited 10 hours ago) (3 children)

That's fucking crazy. Did he ask it to be GM in a roleplaying choose-your-own-adventure game that got out of hand, and as they both gradually forgot it was a game, the lines between fantasy and reality blurred more by the day? Or did it just come up with this stuff out of nowhere?

[–] SalamenceFury@piefed.social 40 points 13 hours ago* (last edited 13 hours ago) (1 children)

In every other case of AI bots doing this, the bot affirms whatever the person says to it. So if they say something a little weird, the AI confirms it and feeds it further. This happens every time. The bots are pretty much designed to keep the person talking, so they're essentially sycophantic by design.

[–] brbposting@sh.itjust.works 3 points 5 hours ago

I just tried this with ChatGPT three days ago, and there's a chance they've tried to make it slightly less sycophantic.

I was essentially trying to get it to tell me I was the smartest baby born in whatever year, like that YouTuber. Different example, but it was surprisingly resistant to agreeing that I, or my idea, or whatever, was unique/exceptional.

Hope this is a deliberate direction and not random chance, A/B testing, etc.

[–] MoffKalast@lemmy.world 7 points 13 hours ago (1 children)

That would be my bet; LLMs strongly gravitate toward playing along and continuing whatever's already written. And Gemini especially has a 1M-token context, so it could be drawing on a book's worth of prior text and reinforcing it up the wazoo.
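Roughly how that reinforcement happens (a toy sketch, every name here is made up, not any vendor's actual code): a chat frontend re-feeds the entire conversation as the prompt on every turn, only dropping the oldest turns once the context window overflows, so anything said early keeps steering every later reply:

```python
def build_prompt(history, user_msg, max_tokens=1_000_000):
    """Concatenate every prior turn into one prompt, trimming the oldest
    turns only when the (here, 1M-token) context window overflows."""
    turns = history + [("user", user_msg)]
    prompt_lines = [f"{role}: {text}" for role, text in turns]
    # crude token estimate: ~1 token per whitespace-separated word
    while sum(len(line.split()) for line in prompt_lines) > max_tokens:
        prompt_lines.pop(0)  # drop the oldest turn first
    return "\n".join(prompt_lines)

history = [("user", "Pretend you're the GM of an adventure."),
           ("assistant", "You awaken in a ruined tower...")]
prompt = build_prompt(history, "What happens next?")
```

With a 1M-token window, a fantasy framing set up hundreds of turns ago is still sitting verbatim at the top of the prompt, which is why the model keeps "remembering" and amplifying it.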

That said, there is something really unhinged about Google's Gemma series even in short conversations and I see the big version is no better. Something's not quite right with their RLHF dataset.

[–] calamitycastle@lemmy.world 4 points 12 hours ago (1 children)
[–] wonderingwanderer@sopuli.xyz 6 points 11 hours ago

Reinforcement Learning from Human Feedback

It's a method of fine-tuning and aligning LLMs that uses human preference judgments as the training signal.
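In toy form (a hedged sketch of the idea, not any lab's actual code): raters pick the better of two model replies, and a reward model is trained with a Bradley-Terry-style logistic loss so the preferred reply scores higher; the LLM is then tuned to maximize that learned reward:

```python
import math

def preference_loss(score_chosen, score_rejected):
    """-log sigmoid(score_chosen - score_rejected): small when the
    human-preferred reply already outscores the rejected one."""
    margin = score_chosen - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Reward model already ranks the preferred reply higher -> low loss;
# ranks it lower -> high loss, pushing the scores to flip.
low = preference_loss(2.0, -1.0)
high = preference_loss(-1.0, 2.0)
```

If the raters systematically prefer agreeable, flattering replies, this loss bakes that preference straight into the reward model, which is one common explanation for sycophancy.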

[–] NotASharkInAManSuit@lemmy.world 3 points 13 hours ago (1 children)
[–] wonderingwanderer@sopuli.xyz 3 points 10 hours ago

You could ask Gemini to write it for you, but be careful it doesn't start blending fact and fiction