this post was submitted on 18 Jan 2026
249 points (93.1% liked)
Fuck AI
... are you under the impression that doomers aren't real? I mean, maybe they don't really believe the bullshit they're spewing, but they talk endlessly about the dangers of AI and seem to actually believe LLMs are actively dangerous. Have you just not heard of these dorks? They're, like, near-term human extinction folks who think AGI is just around the corner and will kill us all.
There are TONS of valid issues. You're painting everyone who criticizes AI as a doomer, and it's specious, lazy, and does nothing to help your argument.
Just because a tiny portion of people who despise LLMs think there's an AGI/AI/Superintelligence risk doesn't mean that worry is shared throughout the vast majority of AI's critics.
Your argument is weak, and calling them 'dorks' doesn't support your thesis.
No I'm not? I'm painting anyone who is a doomer as a doomer, as in, specifically the people who think AGI will kill us all. They don't care about valid issues; they care specifically about this stupid nonsense they read about on lesswrong.com.
This is a real subset of people and the meme is making fun of them, because they're just feeding into the AI hype bubble.
No one is saying that the valid issues surrounding this tech bubble aren't real, but that has little to do with the doomer cohort.
Elon Musk is one of them, kinda famously. Remember that whole open letter calling for a 6-month pause on frontier AI development? Elon backed that while simultaneously starting xAI.
It's a weird intersection: promoting the idea that LLMs are a form of superintelligence and therefore dangerous, while also building your own version (one that stays under your control, of course).