Well one thing's for sure, data centers are going to be insanely cheap in the near future.
And they'll all be optimized for GPU workloads :(
If anyone actually spent money on science anymore, I bet this would be great for, like, protein folding, that sort of thing.
Terrible for running websites though.
that’s actually okay… the only thing that’s different about GPU workloads is that they’re very energy dense… as CPUs and other hardware progress, their power density goes up too… 10 years in the future, today’s GPU optimised data centres will be perfect for standard workloads
… unless they’re centrally liquid cooling the whole DC, which i’ve heard discussed but is a very new concept with a lot of unknowns
GPUs are only good for workloads that multi-thread really, really well. That's why we don't just use them as CPUs.
The idea that today's GPU will be tomorrow's CPU makes no sense. We've had GPUs for ages. If they were capable of being used in place of CPUs we'd already be doing it. Why aren't yesterday's GPUs today's CPUs?
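A rough way to see this point is Amdahl's law: the speedup from throwing more parallel units at a job is capped by whatever fraction of it stays serial. A minimal sketch below; the workload fractions and core counts are made-up illustrative numbers, not measurements of any real system:

```python
# Amdahl's law: speedup from n parallel units when only a fraction p
# of the work parallelises; the serial remainder (1 - p) sets the ceiling.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Illustrative (assumed) parallel fractions: a branchy, mostly-serial
# request handler vs. an embarrassingly parallel numeric kernel.
workloads = [
    ("mostly serial (typical web request)", 0.30),
    ("highly parallel (dense numeric kernel)", 0.99),
]

for name, p in workloads:
    for cores in (8, 10_000):  # roughly CPU-class vs GPU-class core counts
        print(f"{name}: {cores:>6} cores -> {amdahl_speedup(p, cores):5.1f}x")
```

Even with ten thousand cores, the mostly-serial job tops out around 1.4x while the parallel one approaches 99x, which is roughly why GPUs stay accelerators rather than drop-in CPU replacements.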
yes, but we’re talking about hardware requirements… data centres aren’t really designed for the software that runs in them; they’re designed for the hardware… a “GPU optimised” data centre just has a lot more power running to each cabinet, and much larger cooling capacity in a small area (rough numbers below)
the hardware inside the data centre can be swapped out: it’s not like GPUs are built into the foundation of the building
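For a sense of scale on the power-per-cabinet point, here's a back-of-the-envelope sketch. The per-rack wattages are assumptions in commonly quoted ranges (a conventional rack in the single-digit kW, a dense GPU training rack in the tens of kW), not figures from any specific facility:

```python
# Back-of-the-envelope rack power and cooling comparison.
# The per-rack wattages are illustrative assumptions, not measured values.
KW_PER_TON_COOLING = 3.517  # 1 ton of refrigeration removes ~3.517 kW of heat

racks = {
    "conventional CPU rack": 8,   # assumed single-digit-kW class
    "dense GPU rack": 40,         # assumed tens-of-kW class
}

for name, kw in racks.items():
    tons = kw / KW_PER_TON_COOLING
    print(f"{name}: {kw} kW power feed, ~{tons:.1f} tons of cooling per cabinet")
```

Either way, the electrical distribution and cooling plant end up several times larger than a CPU-era facility's, and that building-level capacity is exactly the part that survives a hardware swap.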
OK, if we're talking about infrastructure rather than specific equipment, then yes, I would broadly agree that the datacentre infrastructure itself can be repurposed.
Unfortunately, by that point the whole data centre will already have been sold off for parts, because it's never going to recoup its initial investment in the first place, and throwing even more money into swapping out those GPUs for CPUs is going to be a complete no-go.
yes. the comment was

> data centers are going to be insanely cheap in the near future

which i think broadly agrees with your thinking… the hardware will be sold, but the building and utilities will remain… thus, data centres will be cheap to buy and repurpose as AI companies try to offload them… might possibly see some cheap AF colo or dedicated options in the future