this post was submitted on 20 Feb 2026
642 points (97.9% liked)
Technology
Reading the comments from this community, it's amazing (yet not surprising) how many managers have fallen for the marketing around LLMs. These tools have gotten people at every level of society to accept the hype without ever weighing the actual results for their own use cases. It's almost as if the sycophantic nature of LLMs has blinded people to rational judgment, simply because the product is shiny and spoke to them in a way no one has in years.
On the surface, LLMs are undeniably cool, and they do have some uses. But beyond that, everyone needs to accept their limitations. LLMs by their nature cannot operate the way a human brain does. That's why AGI is such a long shot, and why marketing LLMs as AGI is a scam. How can we hope to recreate the human brain as AGI when we are nowhere close to mapping out how our own brains work in a way that could be translated into code, let alone the simpler brains elsewhere in the animal kingdom?
I agree with almost all of your comment. The only part I disagree on is:
An implementation of AGI does not need to be inspired by the human brain, or by any existing organic brain. Nothing tells us organic brains are the optimal way to develop intelligence; in fact, I'd argue they're not.
That being said, it doesn't change the conclusion: we are nowhere near AGI, and marketing LLMs as such is absolutely a scam.
One of the best-written comments I've seen on this. LLMs are cool for what they can do, but anyone comparing them to AGI is just shilling, trying to make a fortune selling pickaxes in a gold rush.
This is probably related to automation bias and wishful thinking.
I don't think LLMs will become AGI, but... planes don't fly by flapping their wings. We don't necessarily need to understand how animal brains work to achieve AGI, and AGI doesn't necessarily have to work anything like an animal brain. It's quite possible that if/when AGI is achieved, it will be completely alien.
100% agree. I was definitely thinking inside the box (inside the brain, even) when I went down that path.
I think a better way to explain my thinking is that LLMs cannot operate like a human brain because they fundamentally lack almost all of its qualities. Like humans, they are good but not perfect at logic; unlike humans, they completely lack creativity, intuition, imagination, emotion, and common sense, the very qualities that would make something AGI.
Until we understand how our brains produce those qualities, achieving AGI will be very hard. But again, I was wrong to assume we need to translate our brains into code to get there.
Aircraft wings operate on much the same principle as bird wings. We just used a technology we had already developed (fans, essentially) to create the forward motion needed to generate airflow over the wings for lift. We know how to do it the bird way too, but material-science constraints at scale make the fan method far easier and less error-prone.