this post was submitted on 09 Sep 2025
519 points (98.5% liked)

A new survey conducted by the U.S. Census Bureau and reported on by Apollo seems to show that large companies may be tapping the brakes on AI. Large companies (defined as having more than 250 employees) have reduced their AI usage, according to the data. The slowdown started in June, when usage was at roughly 13.5%, slipping to about 12% by the end of August. Most other lines, representing companies with fewer employees, are also in decline, though some are still increasing.

[–] krunklom@lemmy.zip 14 points 1 day ago

The technology is fascinating and useful - for specific use cases and with an understanding of what it's doing and what you can get out of it.

From LLMs to diffusion models to GANs there are really, really interesting use cases, but the technology simply isn't at the point where it makes any fucking sense to have it plugged into fucking everything.

Leaving aside the questionable ethics many paid models' creators have relied on to build their models, the backlash against AI is understandable because it's being shoehorned into places it just doesn't belong.

I think eventually we may "get there" with models that don't make so many obvious errors in their output - in fact I think it's inevitable that it will happen eventually - but we are far from that.

I do think that the "fuck ai" stance is shortsighted, though, because of this. This is happening, it's advancing quickly, and while gains on LLMs are diminishing, we as a society really need to be having serious conversations about what things will look like when (and/or if, though I'm more inclined to believe it's when) we have functional models that are accurate in their output.

When it actually makes sense to replace virtually every profession with ai (it doesn't right now, not by a long shot) then how are we going to deal with this as a society?