this post was submitted on 15 Aug 2025
403 points (95.1% liked)

Technology

The University of Rhode Island's AI lab estimates that GPT-5 averages just over 18 Wh per query, so putting all of ChatGPT's reported 2.5 billion requests a day through the model could see energy usage as high as 45 GWh.

A daily energy use of 45 GWh is enormous; spread over 24 hours, it works out to an average draw of about 1.9 GW. A typical modern nuclear reactor produces between 1 and 1.6 GW of electricity, so data centers running OpenAI's GPT-5 at 18 Wh per query could require the output of roughly two nuclear reactors, an amount that could be enough to power a small country.
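The arithmetic behind those figures checks out (a quick sanity check in Python, using only the estimates quoted above):

```python
# Sanity check of the article's figures; both inputs are estimates, not measurements.
WH_PER_QUERY = 18          # URI AI lab's estimated average for GPT-5
QUERIES_PER_DAY = 2.5e9    # ChatGPT's reported daily request volume

daily_gwh = WH_PER_QUERY * QUERIES_PER_DAY / 1e9
print(f"Daily energy: {daily_gwh:.0f} GWh")      # 45 GWh

avg_draw_gw = daily_gwh / 24
print(f"Average draw: {avg_draw_gw:.2f} GW")     # ~1.88 GW, roughly two 1-1.6 GW reactors
```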

top 50 comments
[–] Deflated0ne@lemmy.world 6 points 2 hours ago (2 children)

And an LLM that you could run locally off a flash drive will do most of what it can do.

[–] ckmnstr@lemmy.world 2 points 1 hour ago

Probably not a flash drive, but you can get decent mileage out of 7B models that run on any old laptop for tasks like text generation, shortening, or summarizing.
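For example, a minimal local-inference sketch (this assumes llama-cpp-python and a quantized 7B GGUF file; the model path is a placeholder, not a specific recommendation):

```python
# Minimal local inference with a quantized 7B model via llama-cpp-python.
# The model path is a placeholder; any instruction-tuned 7B GGUF works.
from llama_cpp import Llama

llm = Llama(model_path="mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

out = llm(
    "Summarize in one sentence: small local models handle everyday text tasks.",
    max_tokens=64,
)
print(out["choices"][0]["text"])
```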

[–] Tikiporch@lemmy.world 2 points 1 hour ago

What do you use your USB drive LLM for?

[–] RememberTheApollo_@lemmy.world 1 points 1 hour ago (1 children)

Help me out here. What designates the “response” type? Someone asking it to make a picture? Write a 20 page paper? Code a small app?

[–] ckmnstr@lemmy.world 1 points 1 hour ago (1 children)

Response type is decided by ChatGPT's new routing function based on your input. So yeah. Asking it to "think long and hard", which I have seen people advocating for recently to get better results, will trigger the thinking model and waste more resources.

[–] towerful@programming.dev 1 points 24 minutes ago

So instead of just saying "thank you" I now have to say "think long and hard about how much this means to me"?

[–] stevedice@sh.itjust.works 2 points 2 hours ago

The team measured GPT-5’s power consumption by combining two key factors: how long the model took to respond to a given request, and the estimated average power draw of the hardware [they believe is] running it.

[–] sp3ctr4l@lemmy.dbzer0.com 17 points 6 hours ago

Fucking Doc Brown could power a goddamn time machine with this many jiggawatts, fuck I hate being stuck in this timeline.

[–] TheGrandNagus@lemmy.world 83 points 12 hours ago* (last edited 12 hours ago)

I have an extreme dislike for OpenAI, Altman, and people like him, but the reasoning behind this article is just stuff some guy has pulled from his backside. There are no facts here, it's just "I believe XYZ" with nothing to back it up.

We don't need to make up nonsense about the LLM bubble. There's plenty of valid enough criticisms as is.

By circulating a dumb figure like this, all you're doing is granting OpenAI the power to come out and say "actually, it only uses X amount of power. We're so great!", where X is a figure that on its own would seem bad, but compared to this inflated figure sounds great. Don't hand these shitty companies a marketing win.

[–] Dasus@lemmy.world 5 points 7 hours ago (2 children)

that's a lot. remember to add "-noai" to your google searches.

[–] macaw_dean_settle@lemmy.world 1 points 3 hours ago (1 children)

Or just use any better search engine like Bing or DuckDuckGo. googol sucks and was never any good. Quit pushing ignorant garbage.

[–] Dasus@lemmy.world 2 points 3 hours ago (3 children)

duckduckgo yes, but ... bing?

[–] lime@feddit.nu 1 points 1 hour ago

ddg is bing

[–] Rivalarrival@lemmy.today 1 points 1 hour ago

Bing is for porn.

[–] Ilovethebomb@sh.itjust.works 3 points 7 hours ago (1 children)

I'm just going to ignore the AI recommendations, let them burn money.

[–] Dasus@lemmy.world 4 points 7 hours ago* (last edited 7 hours ago)

i don't judge you for that. honestly it matters fuck all at this point

[–] A_norny_mousse@feddit.org 102 points 15 hours ago (4 children)

I don't care how rough the estimate is, LLMs are using insane amounts of power, and the message I'm getting here is that the newest incarnation uses even more.

BTW a lot of it seems to be just inefficient coding, as DeepSeek has shown.

[–] kautau@lemmy.world 11 points 10 hours ago* (last edited 9 hours ago) (1 children)

And water usage, which will also increase as fires increase and people have trouble getting access to clean water.

https://techhq.com/news/ai-water-footprint-suggests-that-large-language-models-are-thirsty/

[–] FauxLiving@lemmy.world 6 points 7 hours ago (3 children)

It would only take one regulation to fix that:

Datacenters that use liquid cooling must use closed loop systems.

The reason they don't, and why they set up in the desert, is that water is incredibly cheap and the energy to cool a closed-loop system is expensive. So they use evaporative open-loop systems.
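The scale of that trade-off is easy to sketch from first principles (water's latent heat of vaporization is a physical constant; the heat load is an assumed example):

```python
# Rough water cost of open-loop (evaporative) cooling.
# Latent heat of vaporization of water: ~2.26 MJ/kg (physical constant).
LATENT_HEAT_MJ_PER_KG = 2.26

heat_load_mj = 1 * 3600            # 1 MWh of waste heat = 3600 MJ (assumed example load)
water_kg = heat_load_mj / LATENT_HEAT_MJ_PER_KG
print(f"~{water_kg:.0f} kg (~{water_kg:.0f} L) of water evaporated per MWh of heat")  # ~1593
```

Rejecting the same heat in a closed loop means running chillers instead, trading that water for extra electricity, which is exactly the cost described above.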

[–] kautau@lemmy.world 4 points 7 hours ago (1 children)

Unfortunately I wonder which is more expensive: setting up a closed-loop system, or buying lawmakers who will vote against bills saying you should have to. It's a tale as old as time.

[–] FauxLiving@lemmy.world 4 points 7 hours ago (1 children)
[–] kautau@lemmy.world 3 points 7 hours ago

Yeah sorry forgot my /s there

[–] ThePowerOfGeek@lemmy.world 34 points 14 hours ago (1 children)

BTW a lot of it seems to be just inefficient coding, as DeepSeek has shown.

Kind of? Inefficient coding is definitely a part of it. But a large part is also just the iterative nature of how these algorithms operate. We might be able to improve that via code optimization a little bit. But without radically changing how these engines operate, it won't make a big difference.

The scope of the data being used and trained on is probably a bigger issue. Which is why there's been a push by some to move from LLMs to SLMs. We don't need the model to be cluttered with information on geology, ancient history, cooking, software development, sports trivia, etc if it's only going to be used for looking up stuff on music and musicians.

But either way, there's a big 'diminishing returns' factor to this right now that isn't being appreciated. Typical human nature: give me that tiny boost in performance regardless of the cost, because I don't have to deal with it. It's the same short-sighted shit that got us into this looming environmental crisis.

[–] kescusay@lemmy.world 15 points 13 hours ago (2 children)

Coordinated SLM governors that can redirect queries to the appropriate SLM seems like a good solution.

[–] sleep_deprived@lemmy.dbzer0.com 2 points 4 hours ago (1 children)

That basically just sounds like Mixture of Experts

[–] kautau@lemmy.world 1 points 8 minutes ago* (last edited 8 minutes ago)

Basically, but with MCP and SLMs interacting rather than a single model, with the coordinator model only doing the work to figure out which specialist to field the question to, and then continuously providing context to other SLMs in the case of more complex queries.
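As a toy illustration of that coordinator pattern (the keyword routing below is a stand-in for whatever classifier a real system would use, and the specialist names are hypothetical):

```python
# Toy coordinator: route each query to a specialized small model.
# Keyword matching stands in for a real intent classifier; the
# specialist names are hypothetical placeholders.
SPECIALISTS = {
    "code": ["function", "bug", "compile", "python"],
    "math": ["integral", "equation", "sum", "probability"],
    "general": [],  # fallback
}

def route(query: str) -> str:
    q = query.lower()
    for model, keywords in SPECIALISTS.items():
        if any(k in q for k in keywords):
            return model
    return "general"

print(route("Why won't this function compile?"))  # -> code
```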

[–] joonazan@discuss.tchncs.de 1 points 7 hours ago

My guess would be that using a desktop computer to make the queries and read the results consumes more power than the LLM, at least in the case of quickly answering models.
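Putting rough numbers on that guess (every figure below is an assumption for illustration, not a measurement):

```python
# Energy used at the desk while writing a prompt and reading the answer.
DESKTOP_DRAW_W = 100        # assumed desktop + monitor draw
MINUTES_AT_DESK = 2         # assumed time per query

desktop_wh = DESKTOP_DRAW_W * MINUTES_AT_DESK / 60
print(f"Desktop: {desktop_wh:.1f} Wh per query")  # ~3.3 Wh

# Lightweight, fast-responding models are commonly estimated at a fraction
# of a Wh per reply, which would support the guess; the URI figure of
# 18 Wh for GPT-5 above would cut the other way.
```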

The expensive part is training a model, but usage is most likely not sold at a loss, so it can't use an unreasonable amount of energy.

Instead of this ridiculous energy argument, we should focus on the fact that AI (and other products that money is thrown at) aren't actually that useful but companies control the narrative. AI is particularly successful here with every CEO wanting in on it and people afraid it is so good it will end the world.

[–] yesman@lemmy.world 35 points 14 hours ago (2 children)

I think AI power usage has an upside. No amount of hype can pay the light bill.

AI is either going to be the most valuable tech in history, or it's going to be a giant pile of ash that used to be VC money.

[–] themurphy@lemmy.ml 13 points 13 hours ago (3 children)

It will not go away at this point. Too many daily users already, who use it for study, work, chatting, and looking things up.

If not OpenAI, it will be another service.

[–] queermunist@lemmy.ml 2 points 2 hours ago

Those users are not paying a sustainable price, they're using chatbots because they're kept artificially cheap to increase use rates.

Force them to pay enough to make these bots profitable and I guarantee they'll stop.

[–] krashmo@lemmy.world 17 points 13 hours ago (4 children)

Those same things were said about hundreds of other technologies that no longer exist in any meaningful sense. Current usage of a technology, which in this specific case I would argue is largely frivolous anyway, is not an accurate indicator of future usage.

[–] pHr34kY@lemmy.world 4 points 9 hours ago (3 children)

Tech hasn't improved that much in the last decade. All that's happened is that more cores have been added. The single-thread speed of a CPU is stagnant.

My home PC consumes more power than my Pentium 3 consumed 25 years ago. All efficiency gains are lost to scaling for more processing power. All improvements in processing power are lost to shitty, bloated code.

We don't have the tech for AI. We're just scaling up to the electrical demand of a small country and pretending we have the tech for AI.

[–] Caitlyynn@lemmy.blahaj.zone 1 points 1 hour ago

Not even the AI tech itself is enough for AI.

[–] SCmSTR@lemmy.blahaj.zone 1 points 1 hour ago

It's the muscle car era: can't make things more efficient to compete with Asia? MAKE IT BIGGER AND CONSUME MORE

[–] ChokingHazard@lemmy.world 4 points 6 hours ago

This is nonsense, an M1 runs many multiples faster and at much lower wattage.

[–] Blackmist@feddit.uk 7 points 11 hours ago

That's alright. When they've got a generation of people who can't even hold a conversation without it, let alone do a job, that price increase will drop that energy use pretty rapidly.

[–] eager_eagle@lemmy.world 30 points 15 hours ago (2 children)

Bit of a clickbait. We can't really say it without more info.

But it's important to point out that the lab's test methodology is far from ideal.

The team measured GPT-5’s power consumption by combining two key factors: how long the model took to respond to a given request, and the estimated average power draw of the hardware running it.

What we do know is that the price went down. So this could be a strong indication the model is, in fact, more energy efficient. At least a stronger indicator than response time.
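For reference, the lab's method reduces to a simple product (a sketch of the estimation as described, with illustrative inputs):

```python
# Energy per query = response time x estimated hardware power draw.
# Inputs are illustrative, not the lab's actual measurements.
def energy_wh(response_time_s: float, hardware_draw_w: float) -> float:
    return response_time_s * hardware_draw_w / 3600  # watt-seconds -> Wh

print(energy_wh(24, 2700))  # a 24 s reply at ~2.7 kW -> 18.0 Wh

# Caveat: if the same hardware batches many requests at once, per-query
# energy is this figure divided by the batch size, one reason the
# methodology is rough.
```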
