this post was submitted on 30 Nov 2025
457 points (98.3% liked)
Fuck AI
Your boss expects you to weld with good quality, but they don't expect you to answer every question there is without any mistakes. The problem with LLMs is that they are trained purely on text found on the internet. They have no "life experience," and thus their world model is very different from ours. There are overlaps (that's why they can produce any coherent output at all), but there are situations that make perfect sense in their world model that are complete bogus in the real world.
It's a bit like the shadows in Plato's cave allegory. LLMs are practically trained only on the shadows, so their output is completely based on that shadow world. An LLM can describe pain (because descriptions of it were in the training data), but it has never been smacked in the face.
That's exactly why we can't really call them intelligent or knowledgeable. They're pattern recognition engines: they mindlessly recognize and repeat patterns even when those patterns don't make any sense, i.e., they "hallucinate."
They're a productivity tool that can help actually intelligent and knowledgeable beings like humans do tasks, but on their own they are a parking lot covered with shredded dictionaries. If we use the Chinese room analogy, it'd be like trying to build a Chinese room with just the translation dictionary and without the human to do the translating.
Which is why LLMs make mistakes when translating too - they need a human, a real intelligence, to check.
Humans are also "pattern recognition engines." That's why optical illusions and similar tricks completely mess with our brains: there are patterns we perceive as moving or rotating even though the image is completely stationary.
But nobody would claim that you can't trust your eyes in general just because optical illusions exist.
We can tell optical illusions are fake specifically because we aren't just pattern recognition engines.
LLMs "hallucinate" because they can't do that. To them, the optical illusion is reality.
That's the difference between being intelligent and knowledgeable, and merely containing knowledge.