this post was submitted on 23 Feb 2026
458 points (97.7% liked)

A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models.

It also includes outtakes from the 'reasoning' models.

[–] TankovayaDiviziya@lemmy.world 3 points 3 hours ago (4 children)

We poked fun at this meme, but it goes to show that an LLM is still like a child that needs to be taught to make implicit assumptions and possess contextual knowledge. Like a child, the current generation of LLMs needs a lot more input and instruction to do exactly what you want it to do.

[–] rob_t_firefly@lemmy.world 8 points 1 hour ago* (last edited 1 hour ago)

LLMs are not children. Children can have experiences, learn things, know things, and grow. Spicy autocomplete will never actually do any of these things.

[–] kshade@lemmy.world 9 points 1 hour ago (1 children)

We have already thrown just about the entire Internet, and then some, at them. It shows that LLMs cannot think or reason, which isn't surprising; they weren't meant to.

[–] eronth@lemmy.world -3 points 1 hour ago (1 children)

Or at least they can't reason the way we do about our physical world.

[–] zalgotext@sh.itjust.works 10 points 1 hour ago

No, they cannot reason, by any definition of the word. LLMs are statistics-based autocomplete tools. They don't understand what they generate, they're just really good at guessing how words should be strung together based on complicated statistics.
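(To make the "statistics-based autocomplete" point concrete, here's a toy sketch of next-word prediction from bigram counts. The tiny corpus and function names are made up for illustration; a real LLM uses a neural network over trillions of tokens, not a count table, but the principle of sampling the statistically likely continuation is the same.)

```python
import random
from collections import Counter, defaultdict

# Toy corpus; a real model trains on trillions of tokens, not one sentence.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Pick the next word, weighted by how often it followed `prev`."""
    words, weights = zip(*following[prev].items())
    return random.choices(words, weights=weights)[0]

# Generate text by repeatedly guessing a plausible continuation.
word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat and"
```

Nothing in there "understands" cats or mats; it only reproduces patterns in the counts. That's the gap people point at when they say LLMs don't reason.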

[–] sturmblast@lemmy.world 1 point 36 minutes ago

LLMs are a long, long way from prime time.

[–] prole@lemmy.blahaj.zone 4 points 2 hours ago

I'm sure it'll be worth it at some point 🙄