this post was submitted on 26 Feb 2026
13 points (84.2% liked)

Technology

top 7 comments
[–] Mynameisallen@lemmy.zip 9 points 53 minutes ago (1 children)

This is what all the parts we wanted went to

[–] Earthman_Jim@lemmy.zip 3 points 28 minutes ago (1 children)

Yeah, I wonder how long it will take them to clue in that no one wants to trade gaming for an AI fucking girlfriend ffs...

[–] Mynameisallen@lemmy.zip 1 point 20 minutes ago

Until the money stops pouring in I suppose

[–] Earthman_Jim@lemmy.zip 1 point 29 minutes ago

who. fucking. cares.

[–] RegularJoe@lemmy.world 4 points 1 hour ago (1 children)

Nvidia's Vera Rubin platform is the company's next-generation architecture for AI data centers. It includes:

- an 88-core Vera CPU
- a Rubin GPU with 288 GB of HBM4 memory
- a Rubin CPX GPU with 128 GB of GDDR7
- an NVLink 6.0 switch ASIC for scale-up rack-scale connectivity
- a BlueField-4 DPU with an integrated SSD to store key-value cache
- Spectrum-6 Photonics Ethernet and Quantum-CX9 1.6 Tb/s Photonics InfiniBand NICs
- Spectrum-X Photonics Ethernet and Quantum-CX9 Photonics InfiniBand switching silicon for scale-out connectivity
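As a rough back-of-envelope on what 288 GB of HBM4 actually holds, here is a small sketch (my own arithmetic, not figures from the article — it ignores activations, KV cache, and framework overhead):

```python
# Illustrative capacity arithmetic for a single 288 GB GPU.
HBM_GB = 288

def params_that_fit(mem_gb, bytes_per_param):
    """Max parameter count (in billions) if all memory held weights."""
    return mem_gb * 1e9 / bytes_per_param / 1e9

fp16 = params_that_fit(HBM_GB, 2)  # 2 bytes per FP16 weight
fp8 = params_that_fit(HBM_GB, 1)   # 1 byte per FP8 weight
print(f"FP16: ~{fp16:.0f}B params, FP8: ~{fp8:.0f}B params")
```

So weights alone for a ~144B-parameter FP16 model (or ~288B at FP8) would fit on one card, before accounting for anything else.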

[–] TropicalDingdong@lemmy.world 4 points 1 hour ago (1 children)

> 288 GB HBM4 memory

jfc..

Looking at the specs... fucking hell, these things probably cost over 100k.

I wonder if we'll see a generational performance leap with LLMs scaling to this much memory.

[–] boonhet@sopuli.xyz 1 points 19 minutes ago* (last edited 18 minutes ago)

LLMs can already use way more, I believe; they don't really run them on a single one of these things.

The HBM4 would likely be great for speed though.