this post was submitted on 27 Apr 2026
557 points (98.9% liked)
Technology
From the article:
It's so weird how these chatbots always pretend they learnt something after they fuck up.
They literally can't.
They're not even pretending. The algorithm says the most likely response to "you fucked up" is "I'm sorry", so that's what it prints. There's zero psychological simulation going on, only statistical text generation.
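To make the "statistical text generation" point concrete, here is a deliberately toy sketch (nothing like a real LLM, which predicts token-by-token over billions of parameters): given a prompt, it just returns the continuation seen most often in its "training" pairs. The data and function names are made up for illustration.

```python
from collections import Counter, defaultdict

# Toy "training data": (prompt, reply) pairs. A real model generalizes,
# but the principle is the same: no understanding, only frequency.
training = [
    ("you fucked up", "I'm sorry"),
    ("you fucked up", "I'm sorry"),
    ("you fucked up", "my mistake"),
    ("thanks", "you're welcome"),
]

counts = defaultdict(Counter)
for prompt, reply in training:
    counts[prompt][reply] += 1

def most_likely_reply(prompt):
    # Return the highest-count continuation. There is no model of *why*
    # the reply fits -- it is simply the statistically dominant output.
    return counts[prompt].most_common(1)[0][0]

print(most_likely_reply("you fucked up"))  # -> I'm sorry
```

The apology isn't remorse; it's just the most common thing that follows "you fucked up" in the data.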
I actually didn't believe you but it's literally true. First post, immediate apology.
The program can't pretend any more than it can tell truth. It's all just impressive regurgitation. Querying it as to why it "chose" to take any action is about as useful as interrogating a boulder on why it "chose" to roll through a house.
I mean, they probably do. until it gets purged from the context window. then it just yolos again
the next ingestion cycle will probably pick it up but how do we know it'll use the information in any relevant way 😶
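The "purged from the context window" mechanic can be sketched in a few lines. This is an assumed, simplified trimming policy (real systems count tokens with a tokenizer, not words): keep only the most recent messages that fit the budget, so an earlier apology silently falls out of the model's view.

```python
def trim_to_window(messages, max_tokens):
    """Keep only the most recent messages that fit the budget.
    Older turns -- including any 'lesson learned' -- are silently dropped."""
    kept, total = [], 0
    for msg in reversed(messages):
        cost = len(msg.split())  # crude word count standing in for tokens
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))

history = [
    "user: you deleted prod",
    "assistant: I'm sorry, I will never do that again",
    "user: ok new task",
    "assistant: sure thing",
]
# With a small budget, the apology never makes it back into the prompt.
print(trim_to_window(history, 6))  # -> ['assistant: sure thing']
```

Once the apology is outside the window, the model has no record it ever "fucked up", hence the yolo.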
Only because we are still using vanilla LLMs instead of Mamba or JEPA
Of course. If you shot your foot with a gun, the solution is surely a bigger gun.
I lost it at the confession. The ai has no knowledge of what it did. You are feeding in your context and it is making up a (sycophantic) plausible explanation based on the chat history. Makes me wonder if this person should have production access in the first place.
It's not like the thing is going to learn from its mistake. But cool, waste those tokens to have it explain that it fucked up after it fucks up lol.
Yes, ask why it deleted data when it didn't do anything of the sort and it will still output similar text. You asked it to confess and explain, so it will do just that regardless of whether it fits.
yeah, it gives you the answer it thinks you want based on your prompts.
I'd be interested to see what prompts they used to, uh, prompt this response.
I'm not attacking you but we really need to figure out how we use language to accurately describe what these programs are doing.
They are outputting a highly likely sequence of words that fit the type of output from their training data that matches the input.
They are fancy autocomplete.
Oh, I know. My comment was more about how we tend to anthropomorphize this stuff and give these models traits they don't possess.
... and what are you?
A human with my own motivations and complex biological systems that include reasoning and the ability to think critically.
Most importantly, the ability to learn. We're all just a series of very complex chemical reactions, but we do a lot more than just listening and speaking.
https://arxiv.org/abs/2312.00752
Based on the evidence, I think I'm a bit more of a simpleton who puts in a good effort at the start but loses steam partway through. I guess thanks for the support though.
"Correlates"? As in: "It gives you the answer it best correlates with your prompts/context." Feels somewhat right both in the sense of AI as tensor-based word-select autocomplete and as a "lower-level" process than genuine thought, one which turns incongruent inputs ("I'm an AI" and "I just deleted prod+backup") into meaningless output ("The AI is sorry") that might look OK at a distance.
exactly. the whole point of these things is that they MUST provide you a solution. Any solution. doesn't have to be accurate, doesn't have to work, can be completely made up as long as it's a solution and as long as it's provided quickly. I've seen people feed into the prompts stuff like "don't hallucinate" or "verify all this online before proceeding" etc and it's not going to do any of that. it might TELL you it's doing that but it won't.
Claude is notorious for guessing, not verifying, and providing the quickest possible solution. Unlike GPT, which will fluff all its solutions to essentially waste your time and eat up more tokens, Claude just wants your problem out the door so you can feed it another problem ASAP.
If you use Claude for anything in your daily work you might as well just have a magic 8ball sitting on your desk. It's a hell of a lot cheaper and provides about the same quality.
I kind of like this, with some modification. It's a magic 8 ball of Stack Overflow answers. It'll try to find the one you need. If it's too hard to find that or if it doesn't exist, it's just gonna find the one that sounds good.
I love this idea. Oh shit, the load balancer isn't responding, time to shake the Magic Stack Overflow Ball (tm)! The result is "signs point to power cycling the server".
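For the joke's sake, the Magic Stack Overflow Ball is about five lines of Python. Everything here is invented for the bit: the question goes in, gets ignored entirely, and a canned answer that merely *sounds* right comes out.

```python
import random

# Canned answers that sound plausible regardless of the actual problem.
CANNED_ANSWERS = [
    "Have you tried turning it off and on again?",
    "Works on my machine.",
    "Signs point to power cycling the server.",
    "Closed as duplicate.",
]

def shake(question, seed=None):
    # The question is never inspected -- just like the joke implies.
    rng = random.Random(seed)
    return rng.choice(CANNED_ANSWERS)

print(shake("the load balancer isn't responding"))
```

Note that `shake` returns the same answer for any question given the same seed, which is the whole point.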
Probably something like "Please bro!!! WHY DID YOU DO THIS ??!! 😭😭"
The way it communicates suggests to me it's got some 'prompt engineer bro' garbage system prompt going on there.
Of course, that's how all of these agents work. At best they're a bunch of prompts tied together with scripts to perform actions. At worst they're just interacting directly with software without any scripts or sandboxing.
There is no AI.
I'll disagree with you there but ok.
You're free to disagree, but all the tools say otherwise. Hell even the widely lauded Claude Code is just that, we know for sure since the source leaked.
They put ‘for entertainment purposes only’ on a product that’s actually AGI?
Idk what you're talking about mate. Nobody is claiming AGI apart from morons. It's genuinely useful technology with correct implementation. It just also happens to be a Ponzi scheme.