this post was submitted on 15 Aug 2025
174 points (95.8% liked)

Technology


The University of Rhode Island's AI lab estimates that GPT-5 averages just over 18 Wh per query, so putting all of ChatGPT's reported 2.5 billion requests a day through the model could see energy usage as high as 45 GWh.

A daily energy use of 45 GWh is enormous. Spread over 24 hours, it works out to an average draw of about 1.9 GW. A typical modern nuclear reactor produces between 1 and 1.6 GW of electricity, so data centers running OpenAI's GPT-5 at 18 Wh per query could require the output of roughly two nuclear reactors, an amount that could be enough to power a small country.
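
The arithmetic behind those headline figures is easy to check. A quick sketch using only the numbers quoted above (the 18 Wh figure is the lab's estimate, not a measured value):

```python
# Reproducing the article's arithmetic from the quoted figures.
queries_per_day = 2.5e9   # ChatGPT's reported daily requests
wh_per_query = 18         # the URI lab's GPT-5 estimate, in Wh

daily_gwh = queries_per_day * wh_per_query / 1e9  # Wh -> GWh
average_gw = daily_gwh / 24                       # continuous draw

print(f"{daily_gwh:.0f} GWh/day, {average_gw:.2f} GW average")
# -> 45 GWh/day, 1.88 GW average: roughly two 1 GW reactors
```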

top 50 comments
[–] TheGrandNagus@lemmy.world 14 points 1 hour ago* (last edited 1 hour ago)

I have an extreme dislike for OpenAI, Altman, and people like him, but the reasoning behind this article is just stuff some guy has pulled from his backside. There are no facts here, it's just "I believe XYZ" with nothing to back it up.

We don't need to make up nonsense about the LLM bubble. There are plenty of valid criticisms as it is.

By circulating a dumb figure like this, all you're doing is granting OpenAI the power to come out and say "actually, it only uses X amount of power. We're so great!", where X is a figure that on its own would seem bad, but compared to this inflated figure sounds great. Don't hand these shitty companies a marketing win.

[–] Blackmist@feddit.uk 1 points 3 minutes ago

That's alright. When they've got a generation of people who can't even hold a conversation without it, let alone do a job, the price increases will drop that energy use pretty rapidly.

[–] Melonpoly@lemmy.world 8 points 1 hour ago (1 children)

It takes less energy to dry a full load of clothes

[–] LodeMike@lemmy.today 4 points 1 hour ago (2 children)

40 watt-hours? That's the energy usage of a very small laptop.

[–] Dremor@lemmy.world 1 points 6 minutes ago* (last edited 6 minutes ago)

Imagine if you had to empty your whole laptop battery every time you wanted to generate a 20-word response that may not even be correct... That'll end up consuming power really fast.

[–] jj4211@lemmy.world 1 points 13 minutes ago

Well, over the course of an hour or two. But it's correct that a dryer run, even with a heat pump, uses significantly more than 40 Wh.
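
As a rough sanity check (the dryer figure below is a typical heat-pump spec I'm assuming, not something from the thread):

```python
# How many 40 Wh queries equal one dryer load (assumed figures).
dryer_wh_per_load = 1500   # assumed heat-pump dryer, ~1.5 kWh/load
query_wh = 40              # upper per-query estimate in this thread

print(dryer_wh_per_load / query_wh)  # 37.5 queries per dryer load
```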

[–] DarkCloud@lemmy.world 4 points 1 hour ago

This bubble needs to pop, the sooner the better.

[–] yesman@lemmy.world 20 points 2 hours ago (2 children)

I think AI power usage has an upside. No amount of hype can pay the light bill.

AI is either going to be the most valuable tech in history, or it's going to be a giant pile of ash that used to be VC capital.

[–] themurphy@lemmy.ml 7 points 1 hour ago (2 children)

It will not go away at this point. Too many daily users already who use it for study, work, chatting, and looking things up.

If not OpenAI, it will be another service.

[–] krashmo@lemmy.world 10 points 1 hour ago (1 children)

Those same things were said about hundreds of other technologies that no longer exist in any meaningful sense. Current usage of a technology, which in this specific case I would argue is largely frivolous anyway, is not an accurate indicator of future usage.

[–] rigatti@lemmy.world 1 points 1 hour ago

Can you give some examples of those technologies? I'd be interested in how many weren't replaced with something more efficient or convenient.

[–] devfuuu@lemmy.world 1 points 19 minutes ago

And most importantly, Pandora's box has been opened for deepfake scams and illegal usage. Nobody will put it back in the box, because even if everyone agreed to make it illegal everywhere, it's already too late.

[–] homesweethomeMrL@lemmy.world 1 points 53 minutes ago

That capital was ash earlier this year. The latest $40 Billion-with-a-B financing round is just a temporary holdover until they can raise more fuel. And they already burned through Microsoft, who apparently got what they wanted and are all “see ya”.

[–] kescusay@lemmy.world 10 points 2 hours ago (2 children)

How the hell are they going to sustain the expense to power that? Setting aside the environmental catastrophe that this kind of "AI" entails, they're just not very profitable.

[–] gdog05@lemmy.world 8 points 2 hours ago

Look at all the layoffs companies have been able to implement with the mere threat that AI will take people's jobs. It's very profitable, just not in a sustainable way. But sustainability isn't the goal. A feudal-state mindset in the populace is.

[–] homesweethomeMrL@lemmy.world 1 points 52 minutes ago

Not just "not profitable": they don't make any money at all. Loss only.

[–] A_norny_mousse@feddit.org 47 points 3 hours ago (1 children)

I don't care how rough the estimate is, LLMs are using insane amounts of power, and the message I'm getting here is that the newest incarnation uses even more.

BTW a lot of it seems to be just inefficient coding as Deepseek has shown.

[–] ThePowerOfGeek@lemmy.world 20 points 3 hours ago (1 children)

BTW a lot of it seems to be just inefficient coding as Deepseek has shown.

Kind of? Inefficient coding is definitely a part of it. But a large part is also just the iterative nature of how these algorithms operate. We might be able to improve that via code optimization a little bit. But without radically changing how these engines operate, it won't make a big difference.

The scope of the data being used and trained on is probably a bigger issue. Which is why there's been a push by some to move from LLMs to SLMs. We don't need the model to be cluttered with information on geology, ancient history, cooking, software development, sports trivia, etc if it's only going to be used for looking up stuff on music and musicians.

But either way, there's a big 'diminishing returns' factor to this right now that isn't being appreciated. Typical human nature: give me that tiny boost in performance regardless of the cost, because I don't have to deal with the consequences. It's the same short-sighted shit that got us into this looming environmental crisis.

[–] kescusay@lemmy.world 5 points 2 hours ago (1 children)

Coordinated SLM governors that can redirect queries to the appropriate SLM seem like a good solution.
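
For what it's worth, a minimal sketch of what such a governor could look like. All model names are hypothetical, and the keyword routing is a stand-in for the small classifier a real router would more likely use:

```python
# Hypothetical SLM "governor": route each query to a domain SLM.
SLM_REGISTRY = {
    "music":   "slm-music-7b",    # hypothetical model names
    "code":    "slm-code-7b",
    "general": "slm-general-7b",  # fallback for everything else
}

DOMAIN_KEYWORDS = {
    "music": {"album", "band", "song", "musician"},
    "code":  {"python", "function", "compile", "bug"},
}

def route(query: str) -> str:
    """Pick the specialist SLM whose domain matches the query."""
    words = set(query.lower().split())
    for domain, keywords in DOMAIN_KEYWORDS.items():
        if words & keywords:
            return SLM_REGISTRY[domain]
    return SLM_REGISTRY["general"]

print(route("which band recorded this album?"))  # slm-music-7b
```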

[–] JoeKrogan@lemmy.world 1 points 50 minutes ago

Powered by GNU Hurd

[–] Steve@startrek.website 3 points 2 hours ago

And it sucks even worse.

[–] eager_eagle@lemmy.world 23 points 4 hours ago (2 children)

Bit of clickbait. We can't really say without more info.

But it's important to point out that the lab's test methodology is far from ideal.

The team measured GPT-5’s power consumption by combining two key factors: how long the model took to respond to a given request, and the estimated average power draw of the hardware running it.

What we do know is that the price went down. So this could be a strong indication the model is, in fact, more energy efficient. At least a stronger indicator than response time.
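
For reference, the method in the quote boils down to energy ≈ response time × average hardware power. A minimal sketch under assumed numbers (neither figure below is from the lab's paper; they're chosen to land near the 18 Wh estimate):

```python
# Energy per query as the quoted methodology computes it.
# Both inputs are assumptions, not the lab's published values.
node_power_w = 10_000  # assumed draw of a multi-GPU node, W
response_s = 6.5       # assumed average response time, s

energy_wh = response_s * node_power_w / 3600
print(f"{energy_wh:.1f} Wh per query")  # ~18.1 Wh
```

Note what this leaves out: if the node serves many requests at once, charging its entire draw to one response overstates per-query energy, which is the objection raised below.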

[–] unexposedhazard@discuss.tchncs.de 1 points 38 minutes ago

Isn't it just worse than 4, though? If they didn't make it cheaper, nobody would pay...

[–] morrowind@lemmy.ml 2 points 55 minutes ago

That's a terrible metric. By that measure, providers that maximize hardware (and energy) utilization by queuing requests would be counted as using more energy per query.

[–] Boxscape@lemmy.sdf.org 11 points 3 hours ago (1 children)
[–] ThePantser@sh.itjust.works 2 points 2 hours ago

OpenAI just needs to harness lightning. Incoming weather control tech.

[–] brucethemoose@lemmy.world 10 points 3 hours ago* (last edited 3 hours ago) (1 children)

I don’t buy the research paper at all. Of course we have no idea what OpenAI does because they aren’t open at all, but DeepSeek's published papers suggest it’s much more complex than one model per node… I think they recommended something like a 576-GPU cluster, with a scheme to split experts.

That, and going by the really small active parameter count of gpt-oss, I bet the model is sparse as heck.

There’s no way the effective batch size is 8, it has to be waaay higher than that.
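
The batch-size point is the crux of the objection. If a node serves many queries concurrently, its power draw has to be split across all of them, and a toy calculation (all numbers assumed) shows how fast the per-query figure collapses:

```python
# Per-query energy vs. effective batch size (assumed numbers).
node_power_w = 10_000   # assumed multi-GPU node draw, W
latency_s = 6.5         # assumed per-response latency, s

for batch in (1, 8, 128):  # concurrent queries sharing the node
    wh = node_power_w * latency_s / 3600 / batch
    print(f"batch {batch:>3}: {wh:6.2f} Wh/query")
# batch   1:  18.06 | batch   8:   2.26 | batch 128:   0.14
```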

[–] FaceDeer@fedia.io 7 points 3 hours ago (2 children)

And perhaps even more importantly, the per-token cost of GPT-5's API is less than GPT-4's. That's why OpenAI was so eager to move everyone onto it, it means more profit for them.

[–] Jason2357@lemmy.ca 5 points 3 hours ago (10 children)

I don’t believe API costs are tied all that closely to the actual cost to OpenAI. They seem to be selling at a loss, and they may be selling at an even greater loss to make it look like they are progressing. The second OpenAI seems like they have plateaued, their valuation will crash and it will be game over for them.

[–] dan@upvote.au 2 points 3 hours ago (2 children)

How does OpenAI getting less money (with a cheaper model) mean more profit? Am I missing something?

[–] IrateAnteater@sh.itjust.works 7 points 3 hours ago (1 children)

Usually, companies will make their product, say, 25% cheaper to produce, then sell it to the public at a 20% discount (while loudly proclaiming that 20% price drop to the world) and pocket the difference as extra margin. So if OpenAI is dropping the price by x, it's safe to assume the efficiency gains are at least x, and then some.
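
With illustrative round numbers (mine, not the commenter's):

```python
# Toy margin math for the 25%-cheaper / 20%-discount scenario.
old_cost, old_price = 1.00, 1.00   # hypothetical $ per unit
new_cost = old_cost * 0.75         # 25% cheaper to produce
new_price = old_price * 0.80       # sold at a 20% discount
print(f"new margin: ${new_price - new_cost:.2f}")  # $0.05/unit
```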

[–] dan@upvote.au 1 points 2 hours ago* (last edited 2 hours ago) (2 children)

Thanks! This makes sense, however OpenAI are not yet profitable. It's definitely possible that they're losing less money with the new models, though.

[–] nightwatch_admin@feddit.nl 10 points 4 hours ago (1 children)

Of course there are comments doubting the accuracy, which by itself is valid, but they are merely doing it to defend AI. IMHO, even at a fifth of the estimates, we're talking humongous amounts of power, all for a so-so search engine, half-arsed chatbots, and dubious NSFW images, mostly. And let's not forget: it may be inaccurate because the estimates are TOO LOW. Now wouldn't that be fun?

[–] simple@piefed.social 5 points 2 hours ago (1 children)

but they are merely doing it to defend AI.

No they're not, you can agree the research is garbage without defending AI. It literally assumes everything. GPT-5 could be using eight times the power. It could be using half the power. It could be using a quadrillion times the power. Nobody knows, because they keep it secret.

[–] Catoblepas@piefed.blahaj.zone 1 points 1 hour ago (1 children)

It’s highly unlikely they reduced power usage—one of the most consistent criticisms of LLM and image generation—without advertising it.

[–] simple@piefed.social 2 points 1 hour ago

It's highly unlikely they would bring more attention to one of the biggest issues AI is causing, even if they did make it slightly better.

[–] vegeta@lemmy.world 8 points 3 hours ago
[–] Valmond@lemmy.world 7 points 3 hours ago (1 children)

40 Wh or 18 Wh, which is it?

That's my old gaming PC running a game for somewhere between 2 min 42 s and 6 minutes... roughly.
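
The commenter's range checks out if the PC draws somewhere between roughly 400 W and 900 W under load (my assumed figures, not theirs):

```python
# Minutes of gaming that add up to 40 Wh (PC draws assumed).
energy_wh = 40
for power_w in (400, 890):  # assumed low/high draw under load, W
    minutes = energy_wh / power_w * 60
    print(f"{power_w} W -> {minutes:.1f} min")
# 400 W -> 6.0 min; 890 W -> 2.7 min (about 2 min 42 s)
```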

[–] TropicalDingdong@lemmy.world 12 points 3 hours ago

they vibe calculated it.

[–] DrFistington@lemmy.world 4 points 3 hours ago

But we get a huge increase in accuracy, from 30% to 30.5%! And it only took 5x the energy consumption!
