[–] mark@programming.dev 5 points 2 hours ago* (last edited 2 hours ago) (2 children)

yup, and when you DO catch it spitting out nonsense, it'll say "oh you're right, let me change that".. 🙄 like, why do I have to tell you that you're wrong about something? You should already know it's wrong and fix it without me ever pointing it out.

[–] Rooster326@programming.dev 8 points 1 hour ago

But it didn't even understand it was wrong

It can't understand that. It can't understand anything

The human-feedback algorithm dictates that humans prefer to receive an apology, so it apologizes.
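
To make that concrete: the "human-feedback algorithm" here is typically reinforcement learning from human feedback (RLHF), where raters pick which of two candidate replies they prefer and a reward model is trained to score the preferred one higher. The toy sketch below is only an illustration of that pairwise-preference objective, not any real training pipeline; the function name, scores, and reply texts are made up. If raters consistently prefer apologetic corrections, this is the signal that ends up rewarding them.

```python
# Toy illustration (not a real pipeline) of the pairwise-preference idea behind
# RLHF reward models: raters pick which of two replies they prefer, and the
# reward model is nudged so the preferred reply scores higher.

import torch

def preference_loss(reward_preferred: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry-style objective: minimizing this maximizes the modeled
    # probability that the preferred reply beats the rejected one.
    return -torch.nn.functional.logsigmoid(reward_preferred - reward_rejected).mean()

# Hypothetical scalar scores a reward model might assign to two candidate
# replies after the user points out a mistake (numbers are made up).
score_apology  = torch.tensor([0.2], requires_grad=True)  # "Oh, you're right, let me fix that..."
score_pushback = torch.tensor([0.5], requires_grad=True)  # "No, my answer was correct."

loss = preference_loss(score_apology, score_pushback)     # raters preferred the apology
loss.backward()

# Under gradient descent these gradients push the apologetic reply's score up
# and the pushback's score down -- the signal the chat model is later tuned against.
print(score_apology.grad, score_pushback.grad)
```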

[–] SparroHawc@lemmy.zip 6 points 1 hour ago

That's because it doesn't really 'know' things in the same way you and I do. It's much more like having a gut reaction to something and then spitting it out as truth; LLMs don't really have the capability to ruminate about something. The one pass through their neural network is all they get, unless it's a 'reasoning' model that then has multiple passes as it generates an approximation of a train of thought - but even then, its output is still a series of approximations.

When its training data had something resembling corrections in it, the most likely text that came afterwards was 'oh you're right, let me fix that' - so that's what the LLM outputs. That's all there is to it.
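
For a concrete picture of the 'one pass' point, here is a minimal greedy-decoding sketch, assuming the Hugging Face transformers library and the small gpt2 checkpoint purely as an example (the prompt text is made up). Each generated token is one forward pass that picks the most likely continuation of the text so far; there is no separate step where the model checks whether what it said is true.

```python
# Minimal greedy next-token decoding sketch. Each output token comes from a
# single forward pass over the context so far -- there is no "check my answer" step.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "User: That function you wrote crashes on empty input.\nAssistant:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(40):                          # generate up to 40 tokens
        logits = model(input_ids).logits          # one forward pass over the whole context
        next_id = logits[0, -1].argmax()          # greedy: take the single most likely token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break

# Whatever comes out is just the statistically likely continuation of a
# "correction" in the training data -- apology-shaped text included.
print(tokenizer.decode(input_ids[0], skip_special_tokens=True))
```

Sampling instead of taking the argmax changes which continuation you get, but not the underlying mechanism: the model only ever predicts what text plausibly comes next.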