this post was submitted on 28 Apr 2026
Fuck AI
I don't like this meme.
There are a million things to hate corporate AI for, but resource use is the most... mixed one.
Stuff like the Cerebras API, anyone hosting Deepseek v4, or self-hosting Qwen 27B or RAG models or whatever all use less energy than your computer will burn while you read this comment.
It's AI bros like Altman and Musk trying to gaslight the world into thinking we need to give them trillions of dollars. Utilitarian machine learning doesn't need much infrastructure; a lot of it can run locally anyway. But they don't want that, for the same reason Reddit bans any mention of Lemmy, or you have to subscribe for printer ink.
...And that aside, much of the waste comes from:
FOMO. Impatience. That this server farm must be built right now, not a month from now. The urban locations, gas turbines, evaporative cooling; it's all used only because it's the absolute quickest to set up, not because it's economical long term. It's all bubble play.
Being stuck on Nvidia (who are pushing voltages too high). Chinese hosting (now running on actual ASICs, which they developed because of the sanctions) is proof it doesn't have to be that way. Cerebras WSEs, bitnet and binary models, NPUs: they're all proof none of this has to eat so much power.
Sheer development inefficiency in the US. The amount AI dev houses here burn to make meager products is shameful, really. I've heard rumors they give GPU farms busywork just to look useful, and their development is far more dysfunctional than you'd think.
I could go on and on, but basically resource usage is a distraction from all the things that actually suck about corporate AI. Like the centralization. The rent-seeking. The safety theatre to squash competition. The crypto-like scamming and FOMO. Blatant immunity to IP law if you're rich enough. The clinically insane financial and corporate fervor. The "magic lamp" packaging and presentation, the dark patterns in the UI, the sycophancy for engagement, the shoving it down everyone's throat. There are a million reasons to complain. But I swear, the AI bros are inflaming the resource-usage argument just to gaslight the "other side" into giving them money.
While I agree with your points, when your typical electric bill is $201 vs $75 in the city literally down the street, because the one you live in sucks AI bros' dickaganda, resources matter a lot. It's literally millions of residents overpaying for electricity because of this bullshit.
Example 1 of many (where I got the numbers above):
https://sanjosespotlight.com/fact-brief-do-santa-clara-residents-pay-a-fraction-of-what-san-jose-residents-pay-for-electricity/
This is absolutely just not the case. A couple of minutes of an idling end-user device does not use as many joules as a few seconds of a self-hosted model. There are other tasks that would be as intensive, but reading static text in a browser won't do it. That's not to say it's an unforgivable waste of resources on a personal level or anything, just that your comparison is a bit busted.
The hosted models, in pursuit of going faster, take disproportionately more energy, analogous to how an engine at redline burns way more fuel than one running at a modest operating point.
Bruh. Debate us like a real human. Quit hiding behind the plagiarism machine.
Your AI detector is busted, bud.
Just because it would be convenient for what you already believe doesn't make something true.
Stop AI generating screenshots of AI detectors bro. I can tell this is AI because some of the pixels are making shapes that would mean I was wrong about something
It's wild that their post doesn't read like AI, is broadly more critical of AI than not, and yet because it doesn't blanket-condemn AI as pure irredeemable sin, people just assume it's AI-written? We've got to be careful not to let AI become this ultimate unfalsifiable scapegoat, where any time someone puts forward a line of reasoning that makes me uncomfortable, I can just say "oh wait, I know, it must be AI!" and stop thinking about it. It's dangerous for people to have quick, general-purpose ways out of cognitive dissonance like that, because it prevents people from moving from wrong positions to right ones.
In addition, just because information is AI-generated does not make it false. If an AI tells me the sky is blue, I'm not going to start believing the sky is actually green. But the same goes for knowledge I'm not already sure of. An AI is capable of generating good or bad arguments for a position, and if it really is just plagiarizing as you say, then whatever arguments it regurgitates were initially put forth by a human and are worthy of your consideration by the "debate us like a real human" standard you put forth, anyway. So even if their post was generated by AI, the theoretical problem with that, according to you, is that it would be nonsense; and if that's the case, you should be able to refute the content of the argument directly rather than criticizing the source.