this post was submitted on 25 Nov 2025
97 points (83.4% liked)

Fuck AI


In this video, I debunk the recent SciShow episode hosted by Hank Green regarding Artificial Intelligence. I break down why the comparison between AI development and the Manhattan Project (Atomic Power) is factually incorrect. We also investigate the sponsor, Control AI, and expose how industry propaganda is shifting focus toward hypothetical extinction risks to distract from real-world issues like disinformation and regulatory accountability. Finally, we fact-check OpenAI's claims about the International Math Olympiad and Anthropic's AI alignment bioweapon tests.

00:00 I wish this wasn't happening

00:32 SciShow's Lie Overview

01:58 Intro

02:15 Biggest Lie on the SciShow Video

04:44 Biggest Omission in the SciShow Video

05:56 The "Statement on AI" that SciShow Omits

08:57 Summary of Most Important Points

09:23 Claim about International Math Olympiad Medal

09:50 Misleading Example about AI Alignment

11:20 Downplaying "practical and visible" problems

11:53 Essay I debunked from Anthropic CEO

12:06 Video on Hank's Personal Channel

12:31 A Plea for SciShow and others to do better

13:02 Wrap-up

brucethemoose@lemmy.world 3 points 1 week ago (last edited 1 week ago)
  • The Transformer architecture has plateaued, hence the pursuit of alternative architectures.

  • We know how it works. It’s built and rebuilt from scratch, and it’s one of the most heavily studied systems on the planet. The research is open.

  • Scaling has plateaued, so we have a pretty good trajectory for LLMs specifically: toward increasingly efficient tool use. It’s clear that “AGI” research will go down a different path.

See this interview with a GLM dev for a more grounded take on what the labs are feeling now:

https://www.chinatalk.media/p/the-zai-playbook

https://m.youtube.com/watch?v=Q0TXO8BBqhE

You make a good point about how much the applications change each decade. What we have 10 years from now will be unreal… That being said, I think a lot of past gains were facilitated by picking low-hanging hardware/framework fruit.

In 2005, we had Pentium 4s.

In 2015, researchers were hacking stuff onto GTX 780s with doubled-up VRAM, no specialized blocks, frankly primitive tooling/APIs, and few libraries. PyTorch didn’t even exist yet.

In 2025, we have scaled up to massive interconnects and dedicated datacenter accelerators, with mature software frameworks and tons of libraries. We have wafer-scale inference accelerators and NPUs for deployment.

But process shrinks are slowing, and we’ve already scaled up past diminishing returns. In 2035… I don’t see the scale or software environment being significantly different? It seemed like bitnet was going to change everything for a hot moment (turning expensive matmuls into additions, and blowing up the whole software/ASIC pipeline), but that hasn’t panned out; see the sketch below for why it looked so appealing.
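
To make the bitnet point concrete, here’s a minimal toy sketch of why ternary {-1, 0, +1} weights (as in BitNet b1.58) let you drop the multiplications entirely. This is my own illustration, not code from the BitNet papers: the `ternary_matvec` name and shapes are made up, and real kernels also carry a scale factor and packed weight storage, which this skips.

```python
import torch

def ternary_matvec(w_ternary: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """Multiplication-free matvec for weights restricted to {-1, 0, +1}.

    With ternary weights, each output element is just the sum of the
    activations with +1 weights minus the sum of those with -1 weights,
    so the multiply-accumulate of a dense matmul collapses into
    additions and subtractions.
    """
    pos = (w_ternary == 1).to(x.dtype)   # mask of +1 weights
    neg = (w_ternary == -1).to(x.dtype)  # mask of -1 weights
    # Written as two matmuls for brevity; with 0/1 masks these are
    # pure accumulation, which is what dedicated hardware would exploit.
    return pos @ x - neg @ x

# Tiny check against an ordinary matmul.
torch.manual_seed(0)
w = torch.randint(-1, 2, (4, 8)).float()  # toy ternary weight matrix
x = torch.randn(8)
assert torch.allclose(ternary_matvec(w, x), w @ x)
```

The catch, as noted above, is that today’s GPUs and frameworks are built around dense matmuls, so actually cashing in that win means new kernels and ultimately new silicon across the whole software/ASIC pipeline.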