It's so weird how these chatbots always pretend they learnt something after they fuck up.
They literally can't.
They're not even pretending. The algorithm says the most likely response to "you fucked up" is "I'm sorry", so that's what it prints. There's zero psychological simulation going on, only statistical text generation.
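For anyone curious what "statistical text generation" means concretely, here's a minimal toy sketch. The table, tokens, and probabilities are entirely invented for illustration; a real model learns billions of weights over a huge vocabulary, but the generation step is still just sampling a likely continuation:

```python
import random

# Toy next-token table: each context maps to (token, probability) pairs.
# All numbers here are made up; a real LLM learns these from data,
# but the mechanism at generation time is the same.
NEXT_TOKEN = {
    "you fucked up": [("I'm sorry", 0.7), ("You're right", 0.2), ("Apologies", 0.1)],
    "I'm sorry":     [("I'll do better", 0.6), ("that was wrong", 0.4)],
}

def generate(context: str, steps: int = 2) -> str:
    """Pick a statistically likely continuation; no reflection involved."""
    out = []
    for _ in range(steps):
        options = NEXT_TOKEN.get(context)
        if not options:
            break
        tokens, probs = zip(*options)
        context = random.choices(tokens, weights=probs)[0]
        out.append(context)
    return " ".join(out)

print(generate("you fucked up"))  # e.g. "I'm sorry I'll do better"
```

No state anywhere records that a mistake happened or that anything was learned; the apology is just the highest-probability continuation of that context.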
I actually didn't believe you, but it's literally true. First post, immediate apology.
The program can't pretend any more than it can tell the truth. It's all just impressive regurgitation. Querying it as to why it "chose" to take any action is about as useful as interrogating a boulder on why it "chose" to roll through a house.
I mean, they probably do, until it gets purged from the context window. Then it just yolos again.
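A hedged sketch of that purge, in case it's not obvious why the correction is the first thing to go. The whitespace token counting and the `fit_window` name are stand-ins I made up; real systems use actual tokenizers and much larger budgets, but the oldest-messages-drop-first shape is the same:

```python
# Absurdly small budget, just to make the effect visible.
MAX_TOKENS = 8

def fit_window(messages: list[str], budget: int = MAX_TOKENS) -> list[str]:
    """Keep only the most recent messages that fit the token budget."""
    kept, used = [], 0
    for msg in reversed(messages):          # newest first
        cost = len(msg.split())             # toy stand-in for a tokenizer
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = [
    "user: never delete production data",   # the hard-won lesson
    "assistant: understood, noted",
    "user: now clean up the old tables",
]
print(fit_window(history))
# ['user: now clean up the old tables']  <- the correction fell off the back
```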
The next ingestion cycle will probably pick it up, but how do we know it'll use the information in any relevant way? 😶
Only because we're still using vanilla LLMs instead of Mamba or JEPA.
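In case the names mean nothing: the rough pitch for state-space models like Mamba is a fixed-size recurrent state that carries a trace of every past input, instead of a hard context cutoff. Here's a toy sketch with random placeholder matrices; this is a plain linear recurrence, not Mamba's actual selective, input-dependent mechanism:

```python
import numpy as np

# Toy linear recurrence in the spirit of state-space models:
# a fixed-size state is updated at every step, so old inputs leave
# a compressed trace instead of falling off the back of a window.
rng = np.random.default_rng(0)
d_state, d_in = 4, 3
A = 0.9 * np.eye(d_state)             # decay, keeps the state bounded
B = rng.normal(size=(d_state, d_in))  # how inputs write into the state

h = np.zeros(d_state)
for x in rng.normal(size=(10, d_in)):  # a stream of 10 input vectors
    h = A @ h + B @ x                  # every past input still echoes in h

print(h)  # summary of the whole stream, not a truncated window
```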
Of course. If you shot yourself in the foot with a gun, the solution is surely a bigger gun.