LLMs ≠ AI. I wish more people in the media would realize that even the most advanced LLM possible cannot achieve "AGI". That is just not how they work. It's like saying that if you make a car that can spin its wheels fast enough then it can go to space. It's not what wheels do.
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
Cannot upvote this enough. These tools are not intelligent!! Sure, they can be useful to specialists who check the outputs and select what is correct. For the masses, like it's being pushed? Hell no!
LLMs are AI. No, they're not going to get to "AGI", but this idea that they aren't connected doesn't match how the field has evolved.
If you're unaware of how the MIT model railroading club is one of the most important groups in the history of AI, then do some reading.
They aren't intelligent, so they aren't AI.
So dumb it hurts.
Not how it works.
The field of AI has been about making computers do things they couldn't before. Even if they're just "predicting the next token", LLMs are a significant leap over Markov Chains (which also predict the next token, but produce output that's more funny than useful).
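To make the contrast concrete, here's a toy sketch (my own illustration, not from this thread) of the kind of Markov chain the comment is referring to: it "predicts the next token" purely by sampling from counts of which word followed which in its training text, with no context beyond the single previous word.

```python
import random
from collections import defaultdict

def train(text):
    """Build a first-order Markov chain: map each word to the
    list of words that followed it in the training text."""
    chain = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain, start, length=8):
    """Predict tokens one at a time by sampling a random successor
    of the current word. No lookahead, no long-range context."""
    word, out = start, [start]
    for _ in range(length):
        if word not in chain:
            break
        word = random.choice(chain[word])
        out.append(word)
    return " ".join(out)

chain = train("the cat sat on the mat and the dog sat on the rug")
print(generate(chain, "the"))
```

Because each step only conditions on one previous word, the output tends to be grammatical locally but incoherent globally, which is exactly the "more funny than useful" behavior described above; LLMs condition on thousands of prior tokens instead.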
Again, if you're unaware of the history of MIT CSAIL, then you really shouldn't be opining on what is and isn't AI.
There's a difference between the field developing more advanced technology towards AI and calling every piece of that AI. Yes, this is part of a larger field that has worked on this for decades. The previous stuff wasn't called AI, and this shouldn't be either. It's only the companies selling a product who started that.
Would you consider Conway's Game of Life to be AI? Because the field certainly did back in the day, and it's less impressive than LLMs.
No they fucking didn't. That's absurd. They may have talked philosophically about whether it was alive. No one thought it was intelligent. You can look at the code and know that. They called it AI in the same way video games do, maybe, not in the way the academic field does.
It was developed by academics in the first place. It's AI because it was developed by AI researchers.
That's how it works. You build knowledge by making these little pieces. LLMs are one of those pieces. It won't get to full human intelligence on its own, but it might be part of what gets there.
Not everything AI researchers develop is suddenly AI. That's my point, and they know that. What you're implying is that AI existed as soon as the field did, and not before. Being made by AI researchers is not the definition of AI.
It's also not an issue of it not being full human intelligence. It isn't intelligent at all. It doesn't think about what it outputs. It's just a statistical model. It's a very advanced statistical model that creates the appearance of intelligence, but it isn't intelligent.
Then what is AI? Or do you think there are no intermediate steps between a Turing Machine and full intelligence?
There are many intermediate steps. That's what the field of AI work has been doing. This is but one of many steps. It is not intelligent though, so it isn't AI. It is just a step. A basic Turing Machine is also just a step, and you wouldn't call it AI, would you?
I don't really like the evidence used in this video. He makes a strong accusation of direct lying, then bases his argument on his own opinion of how fast AI is improving versus atomic weapons, using few real metrics beyond vague comparisons. He uses books on the singularity, the inception of machine learning, and the time between the Manhattan Project and nuke production as leverage, but all he really shows is that he disapproves of the comparison, not that the comparison is inaccurate.
Now, I don't like the comparison or Hank himself, but this feels disingenuous.
His next point was the omission of details from the UN document about how AI is dangerous for our future (the one a bunch of Nobel Prize winners and AI scientists signed).
Hank did mention the better-known portion about the dangers of AI, and he does leave out the rest, which includes potential disinformation campaigns using AI, human rights violations, and mass unemployment. This is fact. But I don't feel misled or lied to by the omission. The quote he mentions is short, sweet, and well known.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
That's really the meat of the issue anyway. Not reading the additional paragraphs is technically an omission, but not even close to as grievous an omission as the video's creator makes it out to be.
That guy's video was terrible. I want my time back.
The SciShow video? Yeah.
If you mean Carl: why did you watch it past the first minute, then?
They were both not great. The SciShow script was bad, and the other video was making some of the same mistakes.
screams "I'm relevant" without really being relevant
A comment that describes itself