this post was submitted on 18 Mar 2026
718 points (92.3% liked)
They used the word future for a reason. The technology is still being developed so basing future predictions on the current state is silly.
Your response is really unimpressive. My point is that LLM training, as it now stands, doesn't seem like it can possibly adapt to an internet that isn't full of free information ripe for the taking. If people come to rely on LLMs, how will they get the information to keep up with further advancements in, well, anything?
The amount of money dumped into AI can't be recouped. It's already a massive bubble.
How is that relevant? Even if the bubble pops LLMs aren't going away.
Because all the tech bros saying AI is going to change the world are wrong. Just like they were wrong about blockchain currencies, just like they were wrong about owning images on computers. They'll be wrong about this too.
And no one is arguing otherwise. But work on genAI isn't going to stop just because the bubble pops, and there's no real reason to think that its current capabilities can't be improved on.
It's a predictive text model. It's not artificial intelligence.