95% of Companies See ‘Zero Return’ on $30 Billion Generative AI Spend, MIT Report Finds
(thedailyadda.com)
Fancy autocorrect? Bro lives in 2022
EDIT: For the ignorant: AI has been in rapid development for the past 3 years. For those who are unaware, it can now also generate images and videos, so calling it autocorrect is factually wrong. There are still people here who base their knowledge on 2022-era AIs and constantly say ignorant stuff like "they can't reason", while geniuses out there are doing stuff like this: https://xcancel.com/ErnestRyu/status/1958408925864403068
EDIT2: Seems like every AI thread gets flooded with people showing their age who keep talking about outdated definitions, not knowing which systems fit the definition of reasoning or how that term is used in the modern age.
I already linked this below, but for those who want to educate themselves on more up-to-date terminology and the different reasoning systems used in the IT and tech world, take a deeper look at this: https://en.m.wikipedia.org/wiki/Reasoning_system
I even loved how one argument went: "if you change the underlying names, the model will fail more often, meaning it can't reason". No, if a model still manages to show some success rate, then the reasoning system literally works; otherwise it would fail 100% of the time... Use your heads when arguing.
As another example, this time of language reasoning and pattern recognition (which is also a reasoning system): https://i.imgur.com/SrLX6cW.jpeg answer: https://i.imgur.com/0sTtwzM.jpeg
Note that the term is used differently outside information technology, but we're quite clearly talking about tech and IT, not neuroscience, which would be quite a different kind of reasoning. These systems used in AI are, by modern definitions, reasoning systems, literally meaning they reason. Think of it like artificial intelligence versus intelligence.
I will no longer answer comments below as pretty much everyone starts talking about non-IT reasoning or historical applications.
You do realise that everyone actually educated in statistical modeling knows that you have no idea what you're talking about, right?
Note that I'm not one of the people talking about it on X; I don't know who they are. I just linked it with a simple "this looks like reasoning to me".
https://openai.com/index/introducing-deep-research/
They can't reason. LLMs, which is what all the latest and greatest models like GPT-5 still are, generate output by taking every previous token (simplified) and using them to predict the most likely next token. Thanks to their training, this results in pretty good human-looking language, among other things like somewhat effective code output (thanks to sites like Stack Overflow being included in the training data).
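To make that concrete, here's a toy sketch of that next-token loop (my own illustration with a made-up probability table, not any real model's internals; real LLMs condition on all previous tokens at vastly larger scale):

```python
import random

# Toy next-token generation: a tiny hand-made "model" that, given the
# previous token, assigns probabilities to the next one.
model = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.5, "ran": 0.5},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

tokens = ["the"]
while tokens[-1] in model:
    candidates = model[tokens[-1]]
    # Sample the next token according to the model's probabilities.
    next_tok = random.choices(list(candidates), weights=candidates.values())[0]
    tokens.append(next_tok)

print(" ".join(tokens))  # e.g. "the cat sat down"
```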
Generating images works essentially the same way, but is more easily described as reverse JPEG compression. You think I'm joking? No, really: they start out with static and then transform the static using a bunch of wave functions they came up with during training. LLMs and the image-generation stuff are equally able to reason, that being not at all whatsoever.
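If you want to see the "start from static" idea in miniature, here's a toy sketch (my own illustration; in a real diffusion model the denoiser is a learned neural net, which I've replaced here with a closed-form stand-in that pulls samples toward a fixed target pattern):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in "image" the toy denoiser knows how to recover.
target = np.outer(np.sin(np.linspace(0, np.pi, 8)),
                  np.sin(np.linspace(0, np.pi, 8)))

x = rng.normal(size=(8, 8))  # step 0: pure static
steps = 50
for t in range(steps):
    noise_estimate = x - target                    # a real model *predicts* this
    x = x - (1.0 / (steps - t)) * noise_estimate   # remove a bit of the noise

print(np.round(x, 2))  # ends up close to the target pattern
```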
You partly described reasoning tho
https://en.m.wikipedia.org/wiki/Reasoning_system
If you truly believe that, you fundamentally misunderstand the definition of that word, or are being purposely disingenuous, as you AI brown-nose folk tend to be. To pretend for a second you genuinely just don't understand: LLMs, the most advanced "AI" they are trying to sell everybody, are as capable of reasoning as any compression algorithm: JPG, PNG, WebP, ZIP, TAR, whatever you want. They cannot reason. They take some input and generate an output deterministically. The reason the output changes slightly is because they put random shit in there for complicated but important reasons.
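For what it's worth, that "random shit" is just sampling noise on top of fixed scores. A toy sketch (my own illustration; the logits and words are made up):

```python
import math
import random

# A model's forward pass is deterministic: same input, same scores (logits).
logits = {"cat": 2.0, "dog": 1.5, "rat": 0.5}

def softmax(scores, temperature=1.0):
    exps = {k: math.exp(v / temperature) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

probs = softmax(logits)

# Greedy decoding is fully deterministic: always the top-scoring token.
print(max(probs, key=probs.get))  # always "cat"

# Sampling injects randomness on top of the same fixed probabilities,
# which is why the output varies between runs.
for _ in range(3):
    print(random.choices(list(probs), weights=probs.values())[0])
```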
Again, to recap: LLMs and similar neural-network "AI" are as capable of reasoning as any other computer program you interact with, knowingly or unknowingly, that being not at all. Your silly Wikipedia page is about a very specific term, "reasoning system", which would include stuff like standard video-game NPC AI, such as the zombies in Minecraft. I hope you aren't stupid enough to say those are capable of reasoning.
Wtf?
Do I even have to point out the parts you need to read? Go back and start reading at the sentence that says "In typical use in the Information Technology field however, the phrase is usually reserved for systems that perform more complex kinds of reasoning.", and then check out the NLP page, or the part about machine learning, which are all separate/different reasoning systems, but we just tend to say "reasoning".
Not your hilarious NPC analogy.
More complex forms of reasoning, in the context of "reasoning systems", is video-game NPC AI. They take the current game state and "reason" about what action they should take now, or even soon in the future. Really good video-game AI will use your velocity to pre-aim projectiles at where you'll be in the future instead of where you are currently. The NPC analogy is one of the very things being described by the term.
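That pre-aiming is just a closed-form intercept calculation. A minimal sketch of the idea (my own illustration: 2D, constant target velocity, all numbers arbitrary):

```python
import math

def lead_target(shooter, target, target_vel, projectile_speed):
    """Return the point to aim at so a projectile meets a moving target.

    Solves |target + target_vel*t - shooter| = projectile_speed * t for t
    (a quadratic), then returns the target's predicted position at that t.
    Returns None if no intercept is possible.
    """
    rx, ry = target[0] - shooter[0], target[1] - shooter[1]
    vx, vy = target_vel
    a = vx * vx + vy * vy - projectile_speed ** 2
    b = 2 * (rx * vx + ry * vy)
    c = rx * rx + ry * ry
    if abs(a) < 1e-9:                 # projectile and target equally fast
        t = -c / b if b < 0 else None
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return None               # target outruns the projectile
        roots = [(-b - math.sqrt(disc)) / (2 * a),
                 (-b + math.sqrt(disc)) / (2 * a)]
        valid = [t for t in roots if t > 0]
        t = min(valid) if valid else None
    if t is None:
        return None
    return (target[0] + vx * t, target[1] + vy * t)

# Shooter at the origin, target at (10, 0) moving upward; aim ahead of it.
print(lead_target((0, 0), (10, 0), (0, 3), 5))  # -> (10.0, 7.5)
```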
That would be a conditional-logic system, not a reasoning system. By your logic, aimbots are reasoning systems. It's simple math with some if/then operators sprinkled in between.
A proper reasoning system implies some kind of inference, manipulation, logical chaining, or at least the ability to justify/modify its own choices outside of pre-coded logic. NPCs don’t do that. They just follow hand-crafted rules, or at best, utility scores (shoot now, run later, hide if health < 30).
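To illustrate the difference, here's what those hand-crafted utility rules look like in practice (a toy sketch of my own, not from any real engine; every preference is pre-coded by the designer, nothing is inferred or justified):

```python
def choose_action(health, enemy_visible, ammo):
    # Fixed, hand-written utility scores; the "decision" is just an argmax.
    scores = {
        "shoot": 0.8 if enemy_visible and ammo > 0 else 0.0,
        "hide":  0.9 if health < 30 else 0.1,
        "run":   0.5 if health < 60 and not enemy_visible else 0.2,
        "idle":  0.1,
    }
    return max(scores, key=scores.get)

print(choose_action(health=25, enemy_visible=True, ammo=5))  # "hide"
print(choose_action(health=90, enemy_visible=True, ammo=5))  # "shoot"
```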
This link is about reasoning systems, not reasoning. Reasoning involves actually understanding the knowledge, not just having it: testing or validating whether the knowledge is contradictory.
An LLM doesn't understand the difference between the hard and soft rules of the world. Everything is up for debate; everything is just text and words that can be ordered with some probabilities.
It cannot check whether something is true; it just 'knows' that someone on the internet talked about it, sometimes with a resolution, often without one or with contradicting ones.
It is a gossip machine that tries to 'reason' about whatever it has heard people say.
Yes, your confidence in something you apparently know nothing about is apparent.
Have you ever thought that OpenAI, and most Xitter influencers, are lying for profit?
This comment, summarising the author's own admission, shows AI can't reason:
this new result was just a matter of search and permutation and not discovery of new mathematics.
I never said it discovered new mathematics (edit: yet), I implied it can reason. This is a clear example of reasoning to solve a problem.
You need to dig deeper into how that "reasoning" works; you got misled if you think it does what you say it does.
Can you elaborate? How is this not reasoning? Define reasoning for me.
While that contains the word "reasoning", that does not make it such. If this is about the new "reasoning" capabilities of the new LLMs: it was found out, if I recall correctly, that it's not actually reasoning, just doing fancy footwork to appear as if it were reasoning, just like it's doing fancy dice rolling to appear to be talking like a human being.
As in, if you just change the underlying numbers and names on a test, the models will fail more often, even though the logic of the problem stays the same. This means it's not actually "reasoning"; it's just applying another pattern.
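That kind of check is easy to set up yourself: keep the logic of a word problem fixed, swap the surface details, and compare accuracy across variants. A rough sketch of how such a perturbed test set could be generated (my own illustration; the template, names, and numbers are arbitrary):

```python
import random

# The logical structure is fixed; only surface details vary. A system that
# truly reasoned would score the same regardless of which names and numbers
# get substituted in.
TEMPLATE = "{a} has {x} apples and gives {y} to {b}. How many are left?"
NAMES = ["Alice", "Bob", "Priya", "Chen", "Fatima", "Diego"]

def make_variant(rng):
    a, b = rng.sample(NAMES, 2)
    x = rng.randint(10, 99)
    y = rng.randint(1, x - 1)
    return TEMPLATE.format(a=a, b=b, x=x, y=y), x - y  # (prompt, answer)

rng = random.Random(42)
for _ in range(3):
    question, answer = make_variant(rng)
    print(question, "->", answer)

# Feed each variant to the model under test and measure whether accuracy
# drops relative to the original phrasing; a pure pattern matcher's will.
```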
With the current technology, we've gone so far into brute-forcing the appearance of intelligence that it is becoming quite the challenge to diagnose what the model is even truly doing now. I personally doubt that the current approach, which is decades old and ultimately quite simple, is a viable way forwards, at least with our current computer technology. I suspect we'll need a breakthrough of some kind.
But besides the more powerful video cards, the basic principles of the current AI craze are the same as they were in the 70s or so, when they tried the connectionist approach with hardware that could not parallel-process and had only datasets made by hand rather than with stolen content. So we're just using the same approach we were using before we tried "handcrafted" AI with LISP machines in the 80s. Which failed. I doubt this earlier and (very) inefficient approach can ultimately solve the problem. If this keeps going, we'll get pretty convincing results, but I seriously doubt we'll get proper reasoning with this current approach.
But pattern recognition is literally reasoning. Your argument sounds like "it reasons, but not as well as humans, therefore it does not reason".
I feel like you should take a look at this: https://en.m.wikipedia.org/wiki/Reasoning_system
If we're talking about artificial INTELLIGENCE, then we should talk about "reasoning" as the ability to apply logic, not just match patterns. Because pure pattern matching is decidedly NOT reasoning: if the pattern changes even a little (change the names and numbers, keeping the logic intact), all models start showing failures. So, yes, some people decided to reframe what "reasoning" means in this context (moving goalposts), but I'm pretty sure that 99% of people who use the term when referring to AI don't mean reasoning like that. Regardless, it's not actually that interesting a discussion, nor do I actually care that much. So, sure, I'll give you that point.