Windows 11 is programmed by Microsoft engineers. I’m sure they have a good idea how it works. When you click a button, you get predictable results.
Neural networks are a different story. It's difficult to predict what's going to happen for a given prompt, or how adjustments to the weights affect the results.
There's an article from last year where they found a "Golden Gate" neuron in Claude. Setting it to be always on caused the model to always mention the Golden Gate Bridge in its responses. How and why this works is, AFAIK, not fully understood. For some reason the model managed to generalize the concept of the Golden Gate Bridge into one single neuron. (Rough sketch of what forcing a neuron on looks like below.)
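In case the mechanism is unclear, here's a toy sketch of what pinning one hidden unit to "always on" looks like in PyTorch. This is not Claude's architecture or Anthropic's actual tooling; the layer sizes, the clamped unit index, and the clamp value are made up purely for illustration:

```python
# Toy illustration: clamp one hidden unit in a tiny MLP and watch the
# outputs shift. Same basic flavor as forcing a "Golden Gate" unit on.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(
    nn.Linear(8, 16),   # hidden layer whose unit we will clamp
    nn.ReLU(),
    nn.Linear(16, 4),   # pretend these 4 outputs are "topics"
)

CLAMPED_UNIT = 3        # arbitrary choice for the demo
CLAMP_VALUE = 10.0      # force this unit to a large, constant activation

def clamp_hook(module, inputs, output):
    # Runs after the ReLU; overwrite one unit's activation for every input.
    output = output.clone()
    output[:, CLAMPED_UNIT] = CLAMP_VALUE
    return output

x = torch.randn(5, 8)

baseline = model(x)                                 # normal forward pass
handle = model[1].register_forward_hook(clamp_hook)
steered = model(x)                                  # same inputs, one unit pinned on
handle.remove()

print("baseline:", baseline.argmax(dim=1).tolist())
print("steered: ", steered.argmax(dim=1).tolist())
```

With one unit pinned to a large value, the outputs change for every input, which is roughly the effect described in the article, just without any explanation of *why* that particular unit ended up meaning "Golden Gate Bridge".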
What a cute thought!
No one knows how "everything" works in old monolithic software either. You just have to try and see what happens, and often you simply don't touch certain codebases because nobody really knows the ramifications of changing something in them. Windows 11 is probably way worse than any LLM. Try to share a simple folder on a simple home network and you'll see some of the cruft.
Source: I have worked on 30-40 year old monolithic software. In not one of those projects was there a single "engineer" who knew it all.
Neural networks have their fuzzy parts, of course, but software stopped being fully understandable a long time ago. IMO.
Of course, no single person fully understands the entirety of Windows. But I hope the people working on Windows understand at least the parts they're responsible for.
The thing with LLMs is that no one really understands the purpose of any single neuron, how it relates to all the other neurons, or how they together manage to generalize high-level concepts like the Golden Gate Bridge. It's just too much to map out.
We do know how a single "neuron" relates to other neurons: it's right there in the model's weights (rough sketch at the end). What gets complicated is the vast number of them, of course.
So yes, we don't get to understand it all intrinsically, but I think we can understand what it does, a bit like Windows 😁 /j.
Fascinating subject, and we're just scratching the surface IMO.
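To make "it's in the model" concrete: how one neuron feeds the next layer is literally a column of a weight matrix you can read off directly; the problem is scale, not secrecy. A minimal sketch (the layer sizes and the chosen neuron are made-up examples, not anything from a real model):

```python
# How upstream neuron j feeds the next layer is just column j of the
# weight matrix: perfectly readable, just not meaningful on its own.
import torch
import torch.nn as nn

layer = nn.Linear(16, 4)   # toy layer: 16 neurons feeding 4 neurons
j = 3                      # pick one upstream neuron

# The 4 downstream neurons each weight neuron j by one of these values.
print(layer.weight[:, j])

# A real LLM has billions of such weights spread over many layers,
# which is where "too much to map out" comes from.
```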