this post was submitted on 29 Apr 2026
657 points (95.7% liked)
Microblog Memes
Why would you think that ChatGPT has no understanding of facts or semantics? I think whoever wrote that is applying different criteria to human intelligence and to AI when it comes to understanding.
I can't speak for the original author, but I suspect they probably think that because it is the truth.
It's not that I think AI is better than you do. It's that people who say things like you do are setting different standards for AI and for human intelligence, without explaining why they think it's okay or reasonable to do so.
I mean, the provable lack of semantic comprehension is literally the mechanism for a subset of academically rigorous papers on LLM attacks.
There are plenty of things you can debate in the body of discourse around LLMs, this just flatly isn't one of them.
Those academically rigorous papers prove a complete lack of semantic comprehension? Exactly zero semantic comprehension? Because that is the claim here. "NO UNDERSTANDING OF FACTS OR SEMANTICS."
It's strange because all of the papers I see deal with semantic error rates, which is a completely different claim. Now, then, how would you say that you are personally doing with regard to semantics?
Me? Personally? Tough to say, you're entering into the realm of philosophy at that point... but it's a more fruitful avenue of argument... even though I know you were just trying to be a dick.
I genuinely think it would be an easier task to argue that human minds don't even qualify and then argue LLM equivalency than it would be to argue that an LLM has semantic comprehension.
Okay, you just said:
Now, don't just breeze through this, actually read what I said in my previous comments:
and
If you think what you said you thought, then why would you argue against me in the first place?
It seems to me like in your first comment, you were steel-manning the original argument by completely ignoring the obviously untrue part about "facts", and pretending that it didn't say there was "no", as in "zero", semantic understanding.
Meanwhile, you're straw-manning my comment, ignoring the fact that all I said was that we need to compare AI and human intelligence by the same standards. So here I have an apparently human intelligence that is arguing against a meaning I was not conveying, while supporting a different meaning that the OP wasn't conveying.
You can't steel man one person's argument just because you support it, and then straw man the opposition's argument just because you don't support it.
What you conveyed in that comment was that your understanding of the meanings in the argument up to that point was completely fucked. It's the perfect example, so of course, I'm going to reference it, even if it makes you think I was just being a dick.
I'm sorry for glossing over it, you're right. You said a few times that you feel like humans don't measure "intelligence" fairly towards machines.
You're right. We probably don't do this fairly towards animals. We probably don't even do this fairly to other humans.
It's a fair mirror to hold up.
Acknowledging that we're susceptible to bias isn't evidence that a conclusion is wrong, though. Some blondes actually ARE dumb.
This, for example:
https://arxiv.org/abs/2506.21521
I read the paper until I got to what I consider an indefensible statement about the relevant point. It was early in the paper.
This demonstrates a different kind of understanding, not "no understanding," or "the illusion of understanding."
The idea that the understanding of a normal adult human (I'm being kind and overlooking their giant mistake of using weasel words by saying "any" human) is the only non-illusory type of understanding is indefensible. The best chess computers make moves that no top grandmaster could make in a game, but that can't mean that the computer's understanding is an illusion. It's simply different.
The authors of that paper have made the mistake of making statements outside of their area of expertise. They want to show specific problems with LLMs in their subject area of computer science, but have made incorrect statements in the area of philosophy in an effort to do so.
You should not take their philosophical conclusions as the takeaway from this computer science paper. But I appreciate that you took the time to actually find something relevant.
Did you actually read their example tests? Are you saying that you can have a valid and useful definition of intelligence that includes those kinds of mistakes?
Okay. It seems like I'm going to have to make this extremely simple.
Do you think a dog has intelligence or understanding? If so, if you ask a dog to write a poem, and the dog fails, does that mean the dog has zero intelligence and no understanding of anything? Or could you say that the dog simply has a non-human understanding of things?
What if you read that entire research paper to a chimpanzee who knew sign language, and you tried to make a point about philosophy to the chimp that had nothing to do with the tests, and was completely localized to a part of the paper that you quoted? And what if the chimp then tried to bluster and ask whether you even understood the example tests? Would that mean the chimpanzee has zero intelligence and zero understanding of anything? I would argue not. The chimp simply has a different understanding because it doesn't really understand the topic.
That's not what you claimed in your original comment. You said
The paper I posted is literally applying the same criteria to humans and LLMs.
It seems like you are the one applying different criteria?
You're neglecting that my comment was in response to the proposition that "ChatGPT has no understanding of facts or semantics." If your paper isn't probative to that point, which I've already demonstrated it misconstrues out of apparent ignorance of a wealth of existing philosophical papers, then it's about as relevant as comparing human body temperature with processor temperatures. "I'm measuring them both with thermometers, so therefore I am using the same criteria for both humans and computers. Game. Set. Match."
You're caught up in some sort of weird pedantry while ignoring the overall meaning. In other words, you're misunderstanding the semantics of this argument.
Link one or two of those papers?
I'll do you one better and link to a Wikipedia article that explains things simply.
https://en.wikipedia.org/wiki/Chinese_room
On the off-chance that you might complain about this not being a paper, there are links to many papers in the references section.
You may also find this interesting, although it's more about consciousness than understanding.
https://en.wikipedia.org/wiki/Philosophical_zombie