this post was submitted on 31 Jan 2026
514 points (99.6% liked)

Technology

"You don't need a DEA warrant or a Justice Department subpoena to see the trend: Europe's 90‑plus‑percent dependency on US cloud infrastructure, as former European Commission advisor Cristina Caffarra put it, is a single‑shock‑event security nightmare waiting to rupture the EU's digital stability."

stinkytofuisgood@lemmy.ca 1 point 5 days ago

When it comes to LLM AI usage, there are definitely a few things to consider.

I’ll reference OpenAI since you mentioned it, but much of this applies to other providers as well, to varying degrees.

First, OpenAI has the goal of making a profit, but the market is getting more saturated, and the cost of running data centres increases with both model complexity and user-base growth.

Funding is being pumped into them by investors; the US government and other non-AI companies are banking on AI to bolster the economy and their profits, respectively. For products like ChatGPT they offer paid consumer plans, but these are a small fraction of their revenue.

The stats show they are losing money quickly. Investors want profits, the company wants dominance, and the government seems to be taking a “too big to fail” approach.

So we’re slowly seeing shifts: in the US, ads are being experimented with in ChatGPT, models that use fewer tokens and less computing power are being favoured, etc.
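To see why fewer tokens and smaller models matter so much to the bottom line, here’s some back-of-the-envelope arithmetic. Every rate and token count below is a made-up illustration, not OpenAI’s actual pricing or costs:

```python
# Back-of-the-envelope inference cost per request.
# All constants here are hypothetical, for illustration only.

def cost_per_request(input_tokens: int, output_tokens: int,
                     usd_per_1m_in: float, usd_per_1m_out: float) -> float:
    """Provider-side cost of one request, given token counts and
    per-million-token rates (output tokens typically cost more)."""
    return (input_tokens * usd_per_1m_in
            + output_tokens * usd_per_1m_out) / 1_000_000

# A large, verbose "thinking" model vs. a smaller, terser one (made-up rates).
big = cost_per_request(1_000, 2_000, usd_per_1m_in=5.0, usd_per_1m_out=15.0)
small = cost_per_request(1_000, 500, usd_per_1m_in=0.5, usd_per_1m_out=1.5)

print(f"big model:   ${big:.5f} per request")    # $0.03500
print(f"small model: ${small:.5f} per request")  # $0.00125
print(f"ratio: {big / small:.0f}x")              # 28x
```

Multiplied across hundreds of millions of free-tier queries a day, that gap is exactly the kind of pressure that pushes a provider toward cheaper routing, shorter outputs, and ads.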

There have also been studies showing the diminishing returns of the “bigger model is better” thinking.
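Those diminishing returns are often described with power-law scaling curves, where loss falls as a power of model size down to an irreducible floor. This is a sketch of that curve shape with made-up constants, not a fitted result from any real study:

```python
# Illustrative power-law scaling curve: loss improves as a power of
# parameter count, approaching an irreducible floor.
# The constants (a, alpha, floor) are invented for illustration.

def scaling_loss(params: float, a: float = 400.0, alpha: float = 0.34,
                 floor: float = 1.7) -> float:
    """Loss-vs-size curve of the form L(N) = A / N**alpha + floor."""
    return a / params ** alpha + floor

# Each 10x jump in parameters buys a smaller absolute improvement.
for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params -> loss {scaling_loss(n):.3f}")
```

On this curve, going from 1B to 10B parameters improves loss far more than going from 100B to 1T does, while the compute bill for each 10x step keeps growing — which is the economic core of the “bigger isn’t always better” argument.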

There’s also the question of how much they care about correct answers. They are surely aware that most people don’t understand how an LLM works, that most will not do much research to fact-check answers, and that most consider convenience to be king.

Their token system is a huge balancing act and I’m not fully convinced they know what works and what doesn’t.

AI is being propped up, and it shows signs of a bubble. That’s not to say AI is going anywhere, but when the pop inevitably happens, the LLM landscape will leave behind a lot of failure and monetary loss, and a few winners. Think of the dot-com bubble for reference, but in a whole new era of computing.

The circular investment going on, plus the money from private and US government entities, can only keep this train on the tracks for so long.

If I were to subjectively answer your question in a phrase: I don’t think they really do care too much at the moment.