this post was submitted on 04 Sep 2025
161 points (96.5% liked)

cross-posted from: https://programming.dev/post/36866515

Comments

[–] Modern_medicine_isnt@lemmy.world 1 points 2 days ago (3 children)

Everything is always 5 to 10 years away until it happens. AGI could happen any day in the next 1000 years. There is a good chance you won't see it coming.

[–] jj4211@lemmy.world 0 points 2 days ago (2 children)

Pretty much this. LLMs came out of left field, going from nothing to what they are now really quickly.

I'd expect the same of AGI, not correlated to who spent the most or is best at LLMs. It might happen decades from now or in the next couple of months. It's a breakthrough that is just going to come out of left field when it happens.

[–] JcbAzPx@lemmy.world 0 points 1 day ago* (last edited 1 day ago) (1 children)

LLMs weren't out of left field. Chatbots have been in development since the '90s at least, probably even longer, and word prediction has been around for at least a decade. People just don't pay attention until it's commercially available.

[–] scratchee@feddit.uk 2 points 10 hours ago

Modern LLMs were a left-field development.

Most AI research has serious and obvious scaling problems: it did well at first, but scaling up the training didn't significantly improve the results. LLMs went from more of the same to a gold rush the day it was revealed that they scaled “well” (relatively speaking). They then went through orders-of-magnitude improvements very quickly, because they could (unlike previous AI training approaches, which wouldn't have benefited like this).

We’ve had chatbots for decades, but with the same low capability ceiling that most other old techniques had; they really were a different beast to modern LLMs with their stupidly excessive training regimes.