this post was submitted on 29 Apr 2026
Microblog Memes
you are viewing a single comment's thread
I read the paper until, early on, I reached what I consider an indefensible statement on the relevant point.
This demonstrates a different kind of understanding, not "no understanding" or "the illusion of understanding."
The idea that the understanding of a normal adult human is the only non-illusory type of understanding is indefensible. (I'm being kind and overlooking their giant mistake of hedging with the weasel word "any" human.) The best chess engines make moves that no top grandmaster would find in a game, but that can't mean the computer's understanding is an illusion. It's simply different.
The authors of that paper have made statements outside their area of expertise. They want to show specific problems with LLMs within their own field, computer science, but in doing so they have made incorrect claims in philosophy.
You should not take their philosophical conclusions as the takeaway from this computer science paper. But I appreciate that you took the time to actually find something relevant.
Did you actually read their example tests? Are you saying that you can have a valid and useful definition of intelligence that includes those kinds of mistakes?
Okay. It seems like I'm going to have to make this extremely simple.
Do you think a dog has intelligence or understanding? If so, and you ask a dog to write a poem and it fails, does that mean the dog has zero intelligence and no understanding of anything? Or would you say the dog simply has a non-human understanding of things?
What if you read that entire research paper to a chimpanzee who knew sign language, trying to make a point about philosophy that had nothing to do with the tests and was confined entirely to the part of the paper you quoted? And what if the chimp then blustered and asked whether you had even understood the example tests? Would that mean the chimpanzee has zero intelligence and zero understanding of anything? I would argue not. The chimp simply has a different understanding, because it doesn't really understand the topic.
That's not what you claimed in your original comment. You said:

> The paper I posted is literally applying the same criteria to humans and LLMs.
It seems like you are the one applying different criteria?
You're neglecting that my comment was in response to the proposition that "ChatGPT has no understanding of facts or semantics." If your paper isn't probative on that point (and I've already shown it misconstrues the point, apparently out of ignorance of a wealth of existing philosophical literature), then it's about as relevant as comparing human body temperature with processor temperature. "I'm measuring them both with thermometers, so I'm using the same criteria for humans and computers. Game. Set. Match."
You're caught up in some sort of weird pedantry while ignoring the overall meaning. In other words, you're misunderstanding the semantics of this argument.
Link one or two of those papers?
I'll do you one better and link a Wikipedia article that explains things simply.
https://en.wikipedia.org/wiki/Chinese_room
On the off-chance that you might complain about this not being a paper, there are links to many papers in the references section.
You may also find this interesting, although it's more about consciousness than understanding.
https://en.wikipedia.org/wiki/Philosophical_zombie