this post was submitted on 15 Aug 2025
352 points (94.9% liked)

The University of Rhode Island's AI lab estimates that GPT-5 averages just over 18 Wh per query, so putting all of ChatGPT's reported 2.5 billion requests a day through the model could see energy usage as high as 45 GWh.

A daily energy use of 45 GWh is enormous. Spread over 24 hours, it works out to an average draw of roughly 1.9 GW. A typical modern nuclear reactor produces between 1 and 1.6 GW of electricity, so data centers running OpenAI's GPT-5 at 18 Wh per query could require the full output of around two nuclear reactors, an amount that could be enough to power a small country.
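A quick sanity check of those numbers (the per-query and reactor figures are the article's own):

```python
# Back-of-the-envelope check of the article's figures.
WH_PER_QUERY = 18            # URI AI lab's GPT-5 estimate
QUERIES_PER_DAY = 2.5e9      # ChatGPT's reported daily requests

daily_gwh = WH_PER_QUERY * QUERIES_PER_DAY / 1e9   # 45.0 GWh/day
avg_gw = daily_gwh / 24                            # ~1.9 GW average draw

print(f"{daily_gwh:.1f} GWh/day = {avg_gw:.2f} GW average")
for reactor_gw in (1.0, 1.6):                      # typical reactor output
    print(f"≈ {avg_gw / reactor_gw:.1f} reactors at {reactor_gw} GW each")
```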

[–] A_norny_mousse@feddit.org 92 points 12 hours ago (4 children)

I don't care how rough the estimate is, LLMs are using insane amounts of power, and the message I'm getting here is that the newest incarnation uses even more.

BTW a lot of it seems to be just inefficient coding, as DeepSeek has shown.

[–] kautau@lemmy.world 9 points 8 hours ago* (last edited 6 hours ago) (1 children)

And water usage, which will also increase even as wildfires increase and people have trouble getting access to clean water

https://techhq.com/news/ai-water-footprint-suggests-that-large-language-models-are-thirsty/

[–] FauxLiving@lemmy.world 4 points 5 hours ago (2 children)

It would only take one regulation to fix that:

Datacenters that use liquid cooling must use closed-loop systems.

The reason they don't, and why they set up in the desert, is that water is incredibly cheap and the energy to cool a closed-loop system is expensive. So they use evaporative open-loop systems.
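A rough sketch of that tradeoff; the latent-heat figure is standard physics, but the chiller efficiency (COP) below is an assumption for illustration:

```python
# Why evaporative (open-loop) cooling wins on energy but loses on water.
LATENT_HEAT_KWH_PER_L = 0.63   # heat absorbed by evaporating 1 L of water
CHILLER_COP = 4.0              # assumed closed-loop chiller efficiency

heat_kwh = 1000.0              # reject 1 MWh of waste heat

# Open loop: the heat leaves with the vapor, consuming the water.
water_litres = heat_kwh / LATENT_HEAT_KWH_PER_L    # ~1600 L gone

# Closed loop: water recirculates; electricity does the work instead.
chiller_kwh = heat_kwh / CHILLER_COP               # 250 kWh of power

print(f"evaporative: ~{water_litres:.0f} L of water per MWh of heat")
print(f"closed loop: ~{chiller_kwh:.0f} kWh of electricity per MWh of heat")
```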

[–] kautau@lemmy.world 4 points 5 hours ago (1 children)

Unfortunately I wonder which is more expensive: setting up a really expensive closed-loop system, or buying lawmakers who will vote against any bill requiring one. It's a tale as old as time.

[–] FauxLiving@lemmy.world 4 points 5 hours ago (1 children)
[–] kautau@lemmy.world 3 points 5 hours ago

Yeah sorry forgot my /s there

[–] Ilovethebomb@sh.itjust.works 0 points 4 hours ago (1 children)

That increases your energy use though, because evaporative cooling is very energy efficient.

[–] FauxLiving@lemmy.world 3 points 4 hours ago

We can make energy from renewable sources.

Fresh drinking water is finite, especially in the desert.

[–] joonazan@discuss.tchncs.de 1 points 4 hours ago

My guess would be that using a desktop computer to make the queries and read the results consumes more power than the LLM itself, at least for models that answer quickly.
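A rough comparison with made-up but plausible numbers (both the desktop draw and the small-model figure are assumptions; only the 18 Wh is from the article):

```python
# Client-side energy vs. per-query model energy; illustrative only.
DESKTOP_WATTS = 150        # assumed PC + monitor draw during a session
MINUTES_PER_QUERY = 3      # assumed time spent typing and reading

client_wh = DESKTOP_WATTS * MINUTES_PER_QUERY / 60   # 7.5 Wh per session

SMALL_MODEL_WH = 0.3       # assumed cost for a small, fast model
GPT5_WH = 18               # the article's GPT-5 estimate

print(f"client side: ~{client_wh:.1f} Wh per query session")
print(f"small model: ~{SMALL_MODEL_WH} Wh  |  GPT-5 estimate: {GPT5_WH} Wh")
```

Under those assumptions the guess holds for small, fast models, but not for something at the article's 18 Wh estimate.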

The expensive part is training a model, but usage is most likely not sold at a loss, so it can't consume an unreasonable amount of energy.

Instead of this ridiculous energy argument, we should focus on the fact that AI (and other products that money is thrown at) isn't actually that useful, but companies control the narrative. AI is particularly successful here, with every CEO wanting in on it and people afraid it's so good it will end the world.

[–] ThePowerOfGeek@lemmy.world 28 points 12 hours ago (1 children)

> BTW a lot of it seems to be just inefficient coding, as DeepSeek has shown.

Kind of? Inefficient coding is definitely a part of it. But a large part is also just the iterative nature of how these algorithms operate. We might be able to improve that a little via code optimization, but without radically changing how these engines operate, it won't make a big difference.

The scope of the data being used and trained on is probably a bigger issue, which is why there's been a push by some to move from LLMs to SLMs (small language models). We don't need the model to be cluttered with information on geology, ancient history, cooking, software development, sports trivia, etc. if it's only going to be used for looking up stuff on music and musicians.

But either way, there's a big 'diminishing returns' factor to this right now that isn't being appreciated. Typical human nature: give me that tiny boost in performance regardless of the cost, because I don't have to deal with the consequences. It's the same short-sighted shit that got us into this looming environmental crisis.

[–] kescusay@lemmy.world 14 points 11 hours ago (2 children)

Coordinated SLM governors that can redirect queries to the appropriate SLM seem like a good solution.
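Something like this minimal sketch, where the function and model names are all hypothetical stand-ins rather than any real API:

```python
# Minimal sketch of a "governor" routing queries to domain SLMs.
# classify_domain, run_slm, and the model names are hypothetical.
from typing import Callable, Dict

def run_slm(model: str, query: str) -> str:
    # Placeholder: a real system would load and invoke the small model.
    return f"[{model}] answer to: {query}"

def classify_domain(query: str) -> str:
    # Placeholder: a tiny classifier or embedding lookup picks a domain.
    return "music" if "album" in query.lower() else "general"

SLM_REGISTRY: Dict[str, Callable[[str], str]] = {
    "music":   lambda q: run_slm("music-slm", q),
    "coding":  lambda q: run_slm("code-slm", q),
    "general": lambda q: run_slm("general-slm", q),
}

def govern(query: str) -> str:
    handler = SLM_REGISTRY.get(classify_domain(query), SLM_REGISTRY["general"])
    return handler(query)

print(govern("Which album did Radiohead release in 2000?"))
```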

[–] sleep_deprived@lemmy.dbzer0.com 1 points 2 hours ago

That basically just sounds like Mixture of Experts

[–] JoeKrogan@lemmy.world 3 points 9 hours ago

Powered by GNU Hurd

[–] rdri@lemmy.world 5 points 8 hours ago

Also don't forget how people like wasting resources by asking questions like "what's the weather today".