[–] PixelatedSaturn@lemmy.world 7 points 1 day ago (7 children)

Good article; I couldn't agree with it more. It's exactly my experience.

The tech is being developed really fast, and that is the main issue when talking about AI. Most AI haters are using the issues we might have today to discredit the whole technology, which makes no sense to me.

And the issue the article talks about is apparent, and whoever solves it will be rich.

However, it's interesting to think about the issues that come next.

[–] Aceticon@lemmy.dbzer0.com 1 points 10 hours ago* (last edited 10 hours ago) (1 children)

Like the guy whose baby doubled in weight in 3 months, who then extrapolated that by the age of 10 the child would weigh many tons, you're assuming that this technology's "intelligence" will keep improving at its current rate.
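
To put rough numbers on that kind of extrapolation (taking a hypothetical ~4 kg newborn purely for illustration), doubling every 3 months means 40 doublings by age 10:

$$
w(10\ \text{years}) = w_0 \cdot 2^{40} \approx 4\ \text{kg} \times 1.1 \times 10^{12} \approx 4 \times 10^{9}\ \text{tonnes}
$$

The growth curve obviously flattens long before that, which is the whole point.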

This is not at all what's happening. The improvement in things like LLMs over the last year or so (say, between GPT-4 and GPT-5) is far smaller than it was earlier in the tech's life, and we keep seeing more and more news about problems with training it further and getting it improved, including the big one: training LLMs on the output of LLMs makes them worse, and the more LLM output is out there, the harder it gets to train new iterations on clean data.

(And, interestingly, no tech has ever had a rate of improvement that didn't eventually tail off, so it's a peculiar expectation to have for one specific tech that it will keep on steadily improving.)

With this specific path taken in implementing AI, the question is not "when will it get there" but rather "can it get there, or is it a technological dead end?", and at least for things like LLMs the answer increasingly seems to be that it is a dead end for the purpose of creating reasoning intelligence and doing work that requires it.

(For all your preemptive defense, implying that critics are "AI haters", no hate is required to do this analysis, just analytical ability and skepticism untainted by fanboyism.)

[–] PixelatedSaturn@lemmy.world 1 points 2 hours ago (1 children)

The difference here is that the current AI advancements are not the consequence of one single technology, but of many.

Everything you wrote and believe depends on this being one tech, one dead end.

The real situation is that we finally have the hardware and the software to make breakthroughs. There is no dead end to this. It's just a series of steps, each contributing on its own and by learning from its mass implementations. It's like we got the first taste of AI and we can't get enough, even if it takes a while until the next advancement.

[–] Aceticon@lemmy.dbzer0.com 1 points 1 hour ago* (last edited 1 hour ago)

That doesn't even make sense. Merely being made of multiple elements that add up to a specific tech doesn't make that tech capable of reaching a specific goal, just as throwing multiple ingredients into a pot doesn't guarantee you a tasty dish. And you have absolutely no proof that "we finally have the hardware and the software to make breakthroughs", so you can't anchor the forecast that the stuff built on top of said hardware and software will achieve a great outcome entirely on your assertion that "it's made up of stuff which can do greatness".

As for the tech being a composition of multiple elements: that doesn't mean much. Most dishes are also compositions of multiple elements, and that doesn't mean any random combination of stuff thrown into a pot will make a good dish.

The idea that more inputs make a specific output more likely is like claiming that "the chances of finding a needle increase with the size of the haystack", which is the very opposite of reality.

Might want to stop using LLMs to write your responses and engage your brain instead.
