this post was submitted on 23 Feb 2026
457 points (97.7% liked)
Technology
Qwen3 feels left out. Every 30B model I have failed the test.
Qwen3-4B HIVEMIND (abliterated) got it in 2, though — and it scores a lot higher on the PIQA, HellaSwag, and Winogrande benchmarks than the normal Qwen3-30B. I think the new abliteration methods actually strengthen real-world understanding.
https://imgur.com/a/7YZme4i
https://imgur.com/a/25ApzDN
I wonder if an abliterated VL model could do even better? They tend to score best on world-model benchmarks. Perhaps a Qwen3-VL-30B ablit (if such a thing exists) could one-shot this.
I'd like to think a lot of these gotcha prompts trip models up through verbal misunderstanding rather than a failure of their world models, but I can't say that for certain.
PS: Saw a pearler of a response to this: ChatGPT recommended "yeah, lift the car and carry it on your back. Make sure to bend your knees" (though I'm guessing someone edited that for the lulz).