this post was submitted on 16 Jan 2026
53 points (100.0% liked)
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
you are viewing a single comment's thread
view the rest of the comments
~~LLMs are not neural networks, though.~~
Turns out they absolutely are. Not all neural networks are LLMs, though.
I'm going to link this at the current revision, so that it makes sense in the future: https://en.wikipedia.org/w/index.php?title=Transformer_%28deep_learning%29&oldid=1333135164
Read the first line from the link; I'll quote it here in case you're lazy: "In deep learning, the transformer is an artificial neural network..."
Do you know what "GPT" stands for? "Generative Pre-trained Transformer."
What were you thinking LLMs use? They're literally just neural networks stacked as much as possible. That's why they require all of those data centers: their only solution to the problem is adding more neural nets and more data, which means more hardware; at this point it's borderline brute-forcing. Sure, you can mention the """clever""" tricks they use to "tokenize" words at the beginning, but the embedding that turns those tokens into numbers is still a neural net layer in itself. Don't get confused by their terminology: every single bit of the "technology" has an impressive-sounding name until you see how it actually works and smack your forehead so hard it leaves a mark forever.
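To make the "stacking" concrete, here's a minimal sketch in PyTorch (not something from the thread; the class name, layer sizes, and counts are all illustrative, not taken from any real LLM) of a GPT-style model: an embedding layer, the same transformer block repeated, and a linear head predicting the next token. Scaling up is mostly raising those numbers, which is where the hardware demand comes from.

```python
import torch
import torch.nn as nn

class TinyGPT(nn.Module):
    # Hypothetical toy model: defaults are small so it runs on a laptop.
    def __init__(self, vocab_size=50257, d_model=256, n_heads=4, n_layers=6, max_len=128):
        super().__init__()
        # Tokenized words become learned vectors -- the embedding is itself a trainable layer.
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        # The "stacking": n_layers copies of the same attention + feed-forward block.
        block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=4 * d_model, batch_first=True,
        )
        self.blocks = nn.TransformerEncoder(block, num_layers=n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)  # logits over the vocabulary

    def forward(self, token_ids):
        seq_len = token_ids.shape[1]
        pos = torch.arange(seq_len, device=token_ids.device)
        x = self.token_emb(token_ids) + self.pos_emb(pos)
        # Causal mask: each position may only attend to earlier positions.
        mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        return self.lm_head(self.blocks(x, mask=mask))

# "Scaling up" is mostly turning these knobs (d_model, n_layers, more data),
# which is exactly why bigger models mean more hardware.
model = TinyGPT()
logits = model(torch.randint(0, 50257, (1, 16)))  # batch of 1, 16 tokens
print(logits.shape)  # torch.Size([1, 16, 50257])
```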
Oh, you're absolutely right. I didn't realize that GPTs are, of course, an ANN variant; I always envisioned them as essentially very large and boring vector databases.
I might want to rephrase: not all neural networks are LLMs.
I personally hate the current "AI" scam with all my heart and I'm so very aware of the extremely limited utility and unsustainable resource demands of the GPT approach. But I have no problem with the more abstract concept of neural networks per se. I expect them to be quite fundamental to any attempt at "real" AI, if we ever get past the current craze.
I'm going to argue that there's no such thing as "real AI". We are going to create replicas of brains once we understand them fundamentally, and I mean to the point where we can explain them the same way we understand a CPU architecture. Right now I think we're insanely far from that. We barely understand brain diseases or how neurotransmitters work exactly, let alone large structures of neurons.
My argument is, we don't even know what "real AI" means, because we don't know what "I" means yet.
Whatever actual "AI" would look like, we can agree GPTs are not it.
What's funny about current GPTs is how much manual adjustment they're doing on them, when the whole idea of making them was that they would "adjust themselves", which of course was total bullshit from the start.