The expert is years late to the party. Or the quote is.
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
They aren't supposed to be? An LLM is just a big ol' fabric of matrices with weights based on its training data. It then re-weights and queries this fabric based on your input and its other prompts and parameters, such as temperature and previous context, and returns output. LLMs are, and always will be, exactly as "intelligent" as that process, because that process is what an LLM is. The fact that data is returned from these matrices as English sentences with a bit of randomness thanks to temperature settings, rather than as a fixed list of results, has greatly confused people.
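A minimal sketch of that last step in plain NumPy (the candidate tokens and scores here are made up for illustration): the model's raw scores get reshaped by the temperature setting and then one token is sampled at random, which is where the "bit of randomness" comes from.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, rng=np.random.default_rng()):
    # Divide raw scores by temperature: low temp sharpens the distribution,
    # high temp flattens it (more randomness).
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    # Softmax (numerically stable) turns scores into probabilities.
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Sample one token index according to those probabilities.
    return rng.choice(len(probs), p=probs)

# Hypothetical scores for three candidate next tokens: ["cat", "dog", "the"]
logits = [2.0, 1.5, 0.3]
print(sample_next_token(logits, temperature=0.8))
```

Run it a few times and you get different tokens with the same input; drop the temperature toward zero and it keeps picking the highest-scored one.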
Problem is that confusion has caused an unreasonable level of investment, with expectations that show no significant sign of being met. So we need credible experts reiterating what those in the thick of it consider 'obvious' to try to fight that irrational behavior.
Currently, they have seen a rapidly evolving tech industry with lots of things that didn't quite work right but were quickly iterated into something useful. The dot-com bust is an example of generally solid principles attempted too soon, and the tech companies 'fixed' a lot of the problems in short order. So they see this thing that almost seems like a conversational human and assume that 'almost' will be addressed by the tech geniuses before we know it, and whatever naysaying those same experts offer is just misplaced humility. The 'experts' may have nerdy reasons why they view it as limited, but the investors' "common sense" experience is inconsistent with that feedback.
Of course there's survivorship bias, and plenty of 'big tech' retreated after an optimistic push if you go looking for proof that tech can actually fail to close that gap, but none of those examples are anywhere near the scale of the AI push.
Yes, all the same people who thought smartphones were actually "smart" or that social media was actually "social" are the ones thinking artificial intelligence is actually "intelligent". Just because a company calls its product "vitamin water" doesn't make it healthy, and the sooner people learn to see through the BS hype machine that major corporations have us all hypnotized by, the better for everyone.
Though in those cases, while there was confusion, the outsized investment factor didn't figure in. The value proposition was more in line with realistic possibilities for the respective technologies. Smartphones may not have been that smart, but they were plenty capable enough to land in every pocket and be both a rich revenue stream in and of themselves and an onramp to 'app store' revenue and a goldmine of private data.
The LLMs have value, but I'm skeptical it justifies all the very large and very weird financial shenanigans going on, with nonsensical depreciation claims, crazy loans, and a circle jerk of money going out to a customer so the customer can give it back for product...
Yeah duh
Anyone who knows anything about what a large language model is already knew this.
All this hype around LLMs, AI, or whatever buzzwords tech-bro dipshit YouTubers and TikTokers wanna use, is based entirely in fantasy yet has real-world consequences.
CEOs who know absolutely nothing about technology want to remove their entire human workforces and replace them with AI, knowing absolutely nothing about AI's rampant problems such as hallucinations, limited scope, imitation instead of innovation, extreme harm to the environment, etc.
But they don't care. All they heard was "here's another way to fuck over millions of poors to save a penny" and they gleefully jumped into it.
Even just 5 years ago, these prediction models were being used in psychology research. They called them "neural networks". Which most of us neuroscientists hated because a neural network is a biological network. Not an algorithm for predicting performance on a cognitive task.
Yet that was what it was called. Ton of papers on it. Conflating the term with research on actual neural networks.
Anywho. I recall attending a presentation on how they work and being like, "This is literally just a statistical prediction model. Have I misunderstood you?" I was informed I was correct, but that it was fancy because... mostly because they called it "neural networks", which sounded super cool.
And then later, when "AI art" started emerging, I realized: it's just a prediction model. And the LLMs? Also just prediction models.
I, someone without a computer science degree, was able to see the permanent flaws and limits with such an approach. (Tho I do know statistics).
It boggled the mind how anyone could believe a prediction model could have consciousness. Could be "intelligent". It's just a prediction. Quite literally a collection of statistical equations computing probabilities based on data fed into it. How could that be intelligent?
There is no understanding. No thinking. No ability to understand context.
People outside of psychology often argue. "Isn't human consciousness just predictions?"
No. No, it's not. And the way humans predict things is not even close to how a machine does it.
We use heuristics. Emotion feedback to guide attention. Which further feed heuristics. Which further feed emotional salience (attention).
A cycling. That does not occur in computers.
There is contextual learning and updating of knowledge driven by this emotion-led attention.
Our prediction models are constantly changing. With every thought. Every decision. Every new instance of a stimulus.
Our brains decide what's important. We make branching exceptions and new considerations, with a single new experience. That is then tweaked and reformed with subsequent experiences.
If you think animal consciousness is simple. If you think it's basically predictions and decisions, you have no idea what human cognition is.
I personally don't believe a machine will ever be able to accurately generate a human or human-like consciousness. Can it "look" like a person? Sure. Just like videos "look" like real people. But it's just a recording. Animated CGI can "look" like real people. But it's not. It's made by a human. It's just 3d meshes.
I could be wrong about machines never being able to understand or have consciousness. But at present that's my opinion on the matter.
Which most of us neuroscientists hated because a neural network is a biological network. [...] Conflating the term with research on actual neural networks.
Yeah that's fair, co-opting the term in computing was bound to overtake its original definition, but it doesn't feel fair to blame that on the computer scientists who were trying to strengthen the nodes of the model to mimic how neural connections can be strengthened and weakened. (I'm a software engineer, not a neuroscientist, so I am not trying to explain neuroscience to a neuroscientist.)
mostly because they called it “neural networks” which sounded super cool.
To be fair... it does sound super cool.
It boggled the mind how anyone could believe a prediction model could have consciousness.
I promise you the computer scientists studying it never thought it could have consciousness. Lay-people, and a capitalist society trying to turn every technology into profit, thought it could have consciousness. That doesn't take AI, though. See, for example, the Chinese Room. From Wikipedia, emphasis mine: "[...] The machine does this so perfectly that no one can tell that they are communicating with a machine and not a hidden Chinese speaker." Also, though it comes from a science fiction author, Arthur C. Clarke's third law applies here as well: "Any sufficiently advanced technology is indistinguishable from magic." Outside of proper science, perception is everything.
To a lay-person an AI chatbot feels as though it has consciousness; the very difficulty online forums have in telling AI slop comments from real people is evidence of how well an LLM has modeled language, such that it can be so easily mistaken for intelligence.
There is no understanding. No thinking. No ability to understand context.
We start to diverge into the philosophical here, but these can be argued. I won't try to have the argument here, because god knows the Internet has seen enough of that philosophical banter already. I would just like to point out that the problem of context specifically was one that artificial neural networks with convolutional filters sought to address. Image recognition originally lacked the ability to process images in a way that took the whole image into account. Convolutions broke up windows of pixels into discrete parameters, and multiple layers in "deep" (absurdly high layer-count) neural networks could do heuristics on the windows, then repeat the process on larger and larger convolutions until the whole network accurately predicted an image of a particular size. It's not hard to see how this could be called "understanding context" in the case of pixels. If, then, it can be done with pixels, why not other concepts?
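A rough sketch of that windowing idea, assuming nothing beyond plain NumPy (the image and filter below are made up): a small filter slides over the image so each output value summarizes one local window, and feeding the result through the same operation again lets later layers "see" larger and larger regions of the original image.

```python
import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image; each output value summarizes one local window.
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)          # toy 8x8 "image"
edge_filter = np.array([[1., -1.],    # a tiny hand-made filter
                        [1., -1.]])
layer1 = conv2d(image, edge_filter)   # each value reflects a 2x2 window
layer2 = conv2d(layer1, edge_filter)  # each value now reflects a wider patch of the original
print(layer2.shape)
```

Stacking the operation is the whole trick: no single filter ever sees the whole picture, but the deeper layers effectively do.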
We use heuristics
Heuristics are about a "close enough" approximation to a solution. Artificial neural networks are exactly this. It is a long-running problem with artificial neural networks that overfitting the model leads to bad predictions, and that being looser about training the network results in better heuristics.
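A toy illustration of that trade-off, with entirely made-up data: a modest polynomial fit and a much higher-degree fit are both trained on the same noisy points, then scored on nearby points they never saw; the over-tight fit typically does worse on the new data.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)  # noisy training data (fabricated)
x_new = np.linspace(0.02, 0.98, 30)                      # held-out points the fit never saw
y_new = np.sin(2 * np.pi * x_new)

for degree in (3, 12):                # a loose fit vs. a much tighter (overfit) one
    coeffs = np.polyfit(x, y, degree)
    err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree}: held-out error {err:.3f}")
```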
Which further feed emotional salience (attention). A cycling. That does not occur in computers.
The loop you're talking about sounds awfully similar to the way artificial neural networks are trained in a loop. Not exactly the same, because it is artificial, but I can't in good conscience not draw that parallel.
You use the word "emotion" a lot. I would think that a neuroscientist would be first in line to point out how poorly understood emotions are in the human brain.
A lot of the tail end there was about the complexity of human emotion, but a great deal was about the feedback loop of emotion.
I think something you might be missing about the core difference between artificial and biological neural networks is that one is analogue and the other is digital. Digital systems must, by their nature, be discrete things. CPUs process instructions one at a time. Modern computers are so fast we of course feel like they multitask, but they don't. Not in the way an analogue system does, like in biology. You can't both make predictions off of an artificial neural network and simultaneously calculate the backpropagation of that same network. One of them has to happen first, and the other has to happen second, at the very least. You're right that it'll never be exactly like a biological system because of this. An analogue computer with bi-directional impulses that more closely matched biology might, though. Analogue computers aren't really a thing anymore; they have a whole ecosystem of issues themselves.
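A minimal, entirely made-up illustration of that ordering, on a one-weight "network": the forward pass (the prediction) has to finish before the gradient and the weight update can even be computed, each step strictly after the last.

```python
# Toy single-weight "network" trained by gradient descent. Everything here is fabricated
# for illustration; the point is only the strict ordering of the steps.
w = 0.5                   # the single weight
x, target = 2.0, 3.0      # one made-up training example
lr = 0.1                  # learning rate

for step in range(5):
    pred = w * x                       # 1) forward pass: the prediction happens first
    grad = 2 * (pred - target) * x     # 2) only then can the gradient be computed
    w -= lr * grad                     # 3) and only then can the weight be updated
    print(step, round(pred, 3), round(w, 3))
```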
The human nervous system is fast. Blindingly fast. But computers are faster. For example, video can be displayed faster than neurons can even process a frame. We've literally hit the limit of human frames-per-second fidelity.
So if you will, computers don't need to be analogue. They can just be so overwhelmingly fast at their own imitation loop of input and output that biological analogue systems can't notice a difference.
Like I said though the subject in any direction quickly devolves into philosophy, which I'm not going to touch.
It makes sense that ai bros think ai is conscious and intelligent because it’s literally as conscious and intelligent as the ai bros
Anyone who thinks intelligence means guessing the right word next probably loves AI
If we keep feeding all these LLMs with all our bullshit ... it will never be intelligent
Large Language Models aren't designed to be intelligent.
They are language models, made to generate text. Using them for anything else is just dumb user error.
Thanks I also read the article
I don't want an actual artificial intelligence for the same reason I don't want biological research into bespoke pathogens. Both are major existential threats for humanity, and I have little hope for humanity's survival in the case of either's unmitigated spread, given what 2020 looked like. If true AI has to happen, keep it behind unbreachable lab barriers.
The funny thing is, current AI is only a half step past passable and it's already raised major questions about how human society will economically and sociologically manage in the near future. We're far from ready for whatever holistic AI will look like.
I don’t really care whether LLMs are “intelligent” or not. In the real world, tools either work or they don’t. Same as a generator, radio, or water filter. They can extend your capabilities, but they never replace judgment, planning, or responsibility. Over-reliance is the real risk.
To the extent that LLMs do stuff, they exist in a more middle ground of "they work... maybe". Problem being that in the ways that matter, "maybe" is really hard to take out of things.
Normally, taking the technology as it comes for what it is is reasonable, but there's a whole investment angle here around expectations. Investment is coming in as if an emergent AGI were expected to come out of it. The drive to build everything around executing LLM models and only LLM models is driving the cost of everything in tech up. Money that might have been spent on more general-purpose-friendly compute is being redirected at Grace Blackwell infrastructure connected by NVL72, which is not particularly interesting for almost all other applications. It's just sucking up RAM and storage and starving everything else.
You can't have artificial intelligence UNTIL you FIRST have artificial stupidity. No one wants to buy that.
Experts continued that the sky is blue, water is wet, the sun shines, and according to preliminary research, fish swim.
And it's always nice when science finds more evidence for stuff "everybody knows", because sometimes stuff everyone "knows" is just wrong.
Wow. One needs experts for that? It is a language model. It is a parrot with a dictionary, or in the case of an LLM, a large dictionary.