this post was submitted on 18 Aug 2025
94 points (88.5% liked)

Technology
[–] kyub@discuss.tchncs.de 7 points 21 hours ago* (last edited 21 hours ago) (1 children)

The current tech/IT sector relies heavily on riding hype trains; it's a bit like the fashion industry that way. But the AI hype has so far been only somewhat useful.

Current general LLMs are decent for prototyping or example output to jump-start you in the general direction of your destination, but their output always needs supervision and most often it needs fixing. If you apply unreliable and constantly changing AI to everything and completely throw out humans just because it's cheaper, you'll get vastly inferior results. You'll probably get faster results, but they'll have tons of errors, introducing tons of extra problems you never had before. I can see AI fully replacing some jobs in specific areas where errors don't matter much. But that's about it. For all other jobs and purposes, AI will be an extra tool, nothing more, nothing less.

AI has its uses within specific domains, when trained only on domain-specific and truthful data. You know, things like AlphaZero or AlphaGo. Or AIs that reveal previously unknown methods of reaching the same goal. But these general AIs like ChatGPT, which are trained on basically the whole web with all the crap in it... that's never going to be truly great. And it's also getting worse over time, i.e. not improving much at all, because the web will be even more full of AI-generated crap in the future, so the AIs slurp up all that crap too. The training data gets muddier over time. The promise of AIs getting ever more powerful as time goes on is just a marketing lie. There's most likely a saturation curve, and we're probably already very close to the point where it won't really get any better. You can already see this by comparing the jump from GPT-3 to GPT-4 (big) with the jump from GPT-4 to GPT-5 (much smaller). Or take a look at FSD cars: also not really happening, unless you like crashes. Of course, the companies want to keep the illusion rolling, so they'll always claim the next big revolution is just around the corner. They profit from investments and monthly paying customers, and as long as they can keep that illusion up and keep profiting from it, they don't even need to fulfill any more promises.
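To picture what a "saturation curve" means here, here's a minimal, purely illustrative Python sketch (all names and numbers are invented, not real benchmark data): a toy logistic curve where each new generation adds the same amount of "scale", yet the gain over the previous generation shrinks once you're past the midpoint.

```python
# Illustrative only: a hypothetical saturation (logistic) curve.
# Every number below is made up to show the shape of diminishing returns,
# not to model any real benchmark or any real GPT release.
import math

def capability(scale: float, ceiling: float = 100.0,
               midpoint: float = 3.0, steepness: float = 1.5) -> float:
    """Toy logistic curve: capability approaches `ceiling` as scale grows."""
    return ceiling / (1.0 + math.exp(-steepness * (scale - midpoint)))

# Pretend each "generation" adds one unit of scale.
for gen in range(1, 7):
    gain = capability(gen) - capability(gen - 1)
    # Past the midpoint, the same extra scale buys a smaller and smaller gain.
    print(f"generation {gen}: capability {capability(gen):5.1f}, "
          f"gain over previous {gain:5.1f}")
```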

[–] FlashMobOfOne@lemmy.world 4 points 16 hours ago

> Current general LLMs are decent for prototyping or example output to jump-start you in the general direction of your destination, but their output always needs supervision and most often it needs fixing.

This.

LLMs do not produce anything that can be relied upon confidently without human review, and after the bubble pops, that's only going to become more true.

Hell, I'm glad that the first time I ever used it, it gave me a ~~bugged~~ hallucinated and false reply. I asked it for a summary of the 2023 Super Bowl and learned that Patrick Mahomes kicked a field goal to win the game.