We know how each individual part works. That's just basic math.
We don't know for sure how all of those trillion parts together produce the results they do. You can't debug the model step by step to see how the prompt "generate an image of a penguin" produces an image of a penguin and not a polar bear. That's what people mean by "we don't know how AI works".
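For a sense of what "each individual part is basic math" looks like, here's a single artificial neuron written out in plain Python (the weights and inputs are arbitrary made-up numbers):

```python
import math

def neuron(inputs, weights, bias):
    # One "part": a weighted sum followed by a simple squashing function.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid

# A single unit is trivially inspectable...
print(neuron([0.2, -0.5, 0.8], [1.0, 0.3, -0.7], 0.1))
# ...the trouble starts when hundreds of billions of these are chained together,
# which is where step-by-step "debugging" stops being feasible.
```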
Okay, but who cares? "Complex systems are difficult to predict" is a mathematical insight that's like 2 centuries old at this point... and it hasn't hindered us at all from gaining deep insights into how both individual complex systems work and how complex systems as a general class of phenomena work. I can't keep track of all the masses and velocities of every individual air molecule in the room I'm sitting in, but I still know how the interactions of those particles give rise to the temperature and air pressure and general behavior of the atmosphere in the room.
People know how this shit works, and anyone telling you otherwise is either willfully ignorant or intentionally lying to you to feed a hype cycle whose end goal is making your life worse. People can't afford to remain uneducated about this stuff anymore.
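To put a toy number on the air-molecule point: individual values are unpredictable, but the aggregate behaves the same way every time. (The figures below are invented for illustration, not real kinetic theory.)

```python
import random

# A million "molecules" with random speeds around some mean.
speeds = [random.gauss(500, 150) for _ in range(1_000_000)]

print(speeds[0])                    # any single molecule: anyone's guess
print(sum(speeds) / len(speeds))    # the average: lands near 500 on every run
```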
What’s interesting is how these complex models produce anything useful at all. We could very well have complex models that don’t produce anything other than random noise.
The reason why "we" have these models is that they were deliberately trained not to output random noise. That part is well understood.
The only reason we don't know exactly what makes the model output an image of Garfield with boobs is the sheer amount of data to sift through, not that we don't understand the processes.
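A hand-rolled sketch of that well-understood part, training as "nudge the weights until the output stops looking like noise and starts looking like the data", with one weight and a made-up toy dataset (real training is the same idea at vastly larger scale):

```python
import random

w = random.uniform(-1, 1)                  # starts out producing "noise"
data = [(x, 2 * x) for x in range(1, 6)]   # the target rule is y = 2x

for step in range(200):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x   # derivative of the squared error w.r.t. w
        w -= 0.01 * grad            # gradient descent step

print(w)  # ends up near 2.0: deliberately trained away from random output
```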
Generalization is not a given. It's possible to make complex models that perfectly memorize 100% of the training data but produce garbage results if the input diverges ever so slightly from the training set.
This generalization is a process that's not fully understood. Earlier architectures struggled with this level of generalization, but transformers seem to handle it well.
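To make the contrast concrete, a hypothetical toy: a "model" that memorizes its training data perfectly versus one that has learned the underlying rule (y = 2x in this invented example):

```python
train = {0.0: 0.0, 1.0: 2.0, 2.0: 4.0, 3.0: 6.0}   # generated by y = 2x

def memorizer(x):
    # 100% accurate on the training set, garbage the moment the input shifts.
    return train.get(x, float("nan"))

def generalizer(x):
    # A rule fitted to the same data.
    return 2.0 * x

print(memorizer(2.0), generalizer(2.0))   # 4.0 4.0 -> both perfect in-distribution
print(memorizer(2.1), generalizer(2.1))   # nan 4.2 -> only one survives a tiny shift
```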
Not overfitting is hard, yes. But it's not "we have no idea how/why this works"-hard.
That goes for Windows 11 too, and we still know how computers work.
Windows 11 is programmed by Microsoft engineers. I’m sure they have a good idea how it works. When you click a button, you get predictable results.
Neural networks are a different story. It's difficult to predict what's going to happen for a given prompt, and how adjustments to the weights affect the results.
There's an article from last year where they found a "golden gate" neuron in Claude. Forcing it to be always on caused the model to mention the Golden Gate Bridge in every response. How and why this works is, AFAIK, not fully understood. For some reason the model managed to generalize the concept of the Golden Gate Bridge into one single neuron.
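(The actual Anthropic work was on learned "features" extracted from the model rather than raw neurons, and the details are more involved. But as a toy illustration of the general trick of clamping an internal unit "always on" and watching the output change, a PyTorch forward hook might look something like this; the network and the unit index are made up.)

```python
import torch
import torch.nn as nn

# Tiny stand-in for a much larger model.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

CLAMPED_UNIT = 3  # pretend this is the "golden gate" unit someone found

def clamp_unit(module, inputs, output):
    # Force one hidden activation to stay "on" no matter what the input was.
    output = output.clone()
    output[:, CLAMPED_UNIT] = 10.0
    return output

hook = model[1].register_forward_hook(clamp_unit)

x = torch.randn(2, 8)
print(model(x))   # every output is now skewed by that single clamped unit
hook.remove()
```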
What a cute thought!
No one knows how "everything" works in old monolithic software. You just have to try and see what happens, and often you simply don't touch certain codebases because nobody really knows the ramifications of changing something in them. Windows 11 is probably way worse than any LLM. Try to share a simple folder on a simple home network and you'll see some of the cruft.
Source: I have worked on 30-40 year old monolithic software. In not one of those projects was there a single "engineer" who knew it all.
Neural networks have their fuzzy parts, of course, but software stopped being fully understandable a long time ago. IMO.
Of course, no single person fully understands the entirety of Windows. But I hope the people working on Windows understand at least a part of it.
The thing with LLMs is that no one really understands the purpose of one single neuron, how it relates to all the other neurons, and how together they seem to be able to generalize high-level concepts like the Golden Gate Bridge. It's just too much to map out.
We do know how a single "neuron" relates to other neurons; it's right there in the model. But what gets complicated is the vast number of them, of course.
So yes, we don't intrinsically get to understand it all, but I think we can understand what it does, a bit like Windows 😁/j.
Fascinating subject, and we're just scratching the surface IMO.
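A small sketch of why both halves are true, i.e. every weight is right there to read, and there are just absurdly many of them (layer sizes invented, PyTorch assumed):

```python
import torch.nn as nn

toy = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512))

n_params = sum(p.numel() for p in toy.parameters())
print(n_params)   # ~2.1 million weights for this tiny toy network
# A frontier LLM has on the order of a trillion, which is why "just read the
# weights" doesn't add up to understanding what they collectively do.
```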