Moore's law died a long time ago. The industry pretended it was still going for years by abusing the nanometer metric: if engineers found a clever way to use die space more effectively, they treated it as if they had packed more transistors into the same area, and marketed it as a smaller process node, even though they quite literally did not shrink the transistors or fit more of them onto the die.
This actually started to happen around 2015. These claims were hard to pin down because there is no objective metric showing that a particular trick on a 20nm node really delivers performance equivalent to a 14nm node, which left huge leeway for exaggeration. In reality, actual performance gains have drastically slowed since then, and the cracks really started to show with Nvidia's 5000-series GPUs.
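To make the naming trick concrete: under idealized scaling, transistor density grows with the inverse square of the node name, so a density gain alone can be back-computed into a smaller "effective node" label. This is a minimal sketch of that arithmetic; the function name and the sample numbers are mine, purely for illustration:

```python
# Sketch: under ideal scaling, density ~ 1/node^2, so a foundry can
# justify a smaller "node name" from a density gain alone, without a
# physical shrink of the same magnitude.

def implied_node_nm(ref_node_nm: float, ref_density: float, new_density: float) -> float:
    """Back-compute the 'effective node' a density gain would justify,
    assuming density scales as 1/node^2."""
    return ref_node_nm * (ref_density / new_density) ** 0.5

# Hypothetical example: a 20nm-class process whose layout tricks doubled
# usable density could be marketed as roughly a "14nm" node.
print(implied_node_nm(20, 1.0, 2.0))  # ~14.1
```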
The 5090 is only super powerful because its die is physically larger, so it fits more transistors in total, not because Nvidia actually fits more transistors per square millimeter. Once you account for die size, it's actually even less efficient than the 4090 and significantly less efficient than the 3090. To make it look like there have been upgrades, Nvidia has been shipping AI frame-generation software for its GPUs and artificially locking that software to the newest series. The program Lossless Scaling proves that, in theory, you can run AI frame generation on any GPU, even ones from over a decade ago, so Nvidia locking it to specific GPUs is not a hardware limitation but an attempt to make up for the lack of actual improvements in the GPU die.
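As a rough sanity check on the die-size point, here is the per-area comparison, using rounded, publicly reported die sizes and transistor counts (exact figures vary slightly by source, so treat these as ballpark):

```python
# Ballpark check of the density claim, using approximate published
# die sizes and transistor counts (rounded public figures).
gpus = {
    "RTX 3090 (GA102)": (28.3e9, 628),  # transistors, die area in mm^2
    "RTX 4090 (AD102)": (76.3e9, 609),
    "RTX 5090 (GB202)": (92.2e9, 750),
}

for name, (transistors, area_mm2) in gpus.items():
    density = transistors / area_mm2 / 1e6  # million transistors per mm^2
    print(f"{name}: {density:.0f} MTr/mm^2")

# The 5090's density (~123 MTr/mm^2) comes out roughly flat versus the
# 4090 (~125 MTr/mm^2): the extra raw performance comes from the bigger die.
```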
Chip improvements have been drastically slowing down for over a decade now, and the industry just keeps trying to paper it over.
To be fair, Lossless Scaling's frame gen has a number of shortcomings, performance issues, and quality problems compared to Nvidia's offerings.
While it's "possible" to run frame gen on any hardware, the quality and performance are definitely a sizeable downgrade.