this post was submitted on 20 Jan 2026
465 points (98.3% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

[–] bonenode@piefed.social 190 points 1 day ago* (last edited 1 day ago) (7 children)

:D

Edit: 15 hours later it is still 93%. I am getting suspicious this isn't real.

[–] Hawke@lemmy.world 3 points 1 day ago

It was 94% when I first looked at it a few days ago.

[–] SippyCup@lemmy.ml 79 points 1 day ago (1 children)

Well. Glad to see I don't need to bother.

I imagine the overlap between DDG users and people who fucking hate AI is larger than average, but I hope at least that this is somewhat reflective of general public sentiment.

[–] marx@piefed.social 44 points 1 day ago (5 children)

I don't hate AI as a tool. Especially in narrow, high-impact use-cases.

I work in medicine. I have already seen instances of AI, used as a tool by professionals, helping to literally save lives. The applications in medical research (and probably many scientific fields) are genuinely exciting. AlphaFold won a Nobel for a reason. Insanely cool projects like the Human Cell Atlas wouldn't be possible without it.

The problem is stupid-ass 'general' chatbots being forced down everyone's throats so corpos can hoover up even fucking more of our data and sell more fucking ads.

Even these chatbots can be useful, but I won't use any that collect data or sell ads.

In this regard I think DDG's approach is pretty reasonable. You can turn it on or off, you can use it without an account, and all queries are anonymized before being sent to the model.

I get that people have a reflexive "fuck AI" reaction because of the way it has been deployed in society. I truly understand it. But honestly that's more of a capitalism problem than an AI problem. AI is a tool like a hammer. Just because evil corporate pricks are using it to bash our heads in doesn't mean we should hate hammers, it means we should hate evil corporate pricks.

[–] mushroommunk@lemmy.today 38 points 1 day ago (2 children)

This is where terminology is an issue. Yes, AlphaFold and ChatGPT are both "AI", but they're very different technologies underneath. Most people who say "fuck AI" usually just mean the generative AI behind ChatGPT and Sora and such.

The common person doesn't understand this difference though and probably isn't even aware of AlphaFold.

[–] msage@programming.dev 6 points 1 day ago (1 children)

Let's all agree to use the term GenAI for chatbots and other bullshit generators.

[–] mushroommunk@lemmy.today 7 points 1 day ago

I asked Grok, who said the correct term is "MechaHitler".

[–] Hackworth@piefed.ca -1 points 1 day ago (1 children)

Transformer architectures similar to those used in LLMs are the foundation for AlphaFold 2 and medical vision models like Med-ViT. There's not really a clean way to distinguish "good" and "bad" AI by architecture. It's all about the use.
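
For the curious, the shared core really is just a plain attention block. Here's a rough PyTorch sketch (layer sizes and names are illustrative, not any particular model's code):

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """One encoder block: the same self-attention core that sits under
    LLMs, AlphaFold 2's attention modules, and vision transformers."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x):
        # The "tokens" can be words (LLM), residues (AlphaFold),
        # or image patches (ViT) -- the block itself doesn't care.
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)
        return self.norm2(x + self.ff(x))

# e.g. a batch of 4 sequences of 128 tokens, 256 features each
out = TransformerBlock()(torch.randn(4, 128, 256))
```

What makes a model "good" or "bad" is the data it's trained on and what it's pointed at, not this block.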

[–] cecilkorik@piefed.ca 2 points 1 day ago (1 children)

It's a tool. There aren't any good and bad hammers. Someone using a hammer to build affordable housing is doing a good thing. Someone using a hammer to kill kittens is doing a bad thing. It's not the fucking hammer's fault. But it's also not surprising that when 95% of the people buying hammers are using them to kill kittens and post videos of it on Instagram, to the point that manufacturers start designing their hammers with specialized kitten-killing features and advertising them for that purpose non-stop, people get pretty fucking angry at all the stores and peddlers selling these fucking hammers on every street corner.

And that's where we are with "generative AI" right now. Which is not really AI, by the way; none of this has any "intelligence" of any kind. That label is just a very effective sales tactic for a fundamentally really interesting but currently badly abused technology. It's all just the world's largest financial grift. It's not the technology's fault.

[–] 13igTyme@piefed.social 7 points 1 day ago

I work for a health tech AI company and agree, but I also agree that most AI can fuck right off and doesn't need to be in every goddamn thing.

[–] drcobaltjedi@programming.dev 4 points 1 day ago

This.

I am anti generative AI. I am aggressively anti generative AI. Years ago I saw someone make an AI to tell if a mole was cancerous or not (the model in question was flawed because it learned that if there was a ruler in the photo, there was cancer, but that's not the point). An image model trained exclusively to distinguish cancerous moles from safe ones would be a useful first screening tool you could use your phone for before going in for a real test.
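
To be clear about how narrow that kind of model is: it's basically just a small fine-tuned image classifier, something like this toy sketch (hypothetical setup, not the actual project's code):

```python
import torch
import torch.nn as nn
from torchvision import models

# Toy sketch of a narrow "benign vs. malignant mole" classifier built by
# fine-tuning a pretrained network. Everything here is illustrative; a
# real medical model needs curated data, validation, and regulatory review.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # 0 = benign, 1 = malignant

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

def train_step(images, labels):
    # images: (batch, 3, 224, 224) tensors; labels: class indices.
    # Crop rulers/markers out during preprocessing, or the model can
    # learn "ruler in photo => cancer" instead of actual skin features.
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# dummy batch just to show the training loop runs
loss = train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,)))
```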

[–] Taiatari@lemmy.world 3 points 1 day ago

The same is true for applications in psychology, where, for example, early-warning systems are being trialed and studied. But corporations had to focus on forcing everyday AI applications on everyone instead of on science and research.

[–] hector@lemmy.today -3 points 1 day ago* (last edited 1 day ago) (1 children)

"Literally save lives," Bullshit.

[–] marx@piefed.social 1 points 1 day ago* (last edited 1 day ago) (1 children)

I completely get your skepticism, but I was being serious. Yes, at least one life within my organization has literally been saved with the help of an AI drug discovery tool (used by a team of geneticists). I'm not going to get into specifics, because nothing from the case has been released publicly (I'm sure a case report will pop up at some point) and I don't want to get my ass fired. But it's not a joke: these tools can be incredibly powerful in medicine when used by human experts, including helping to save lives.

[–] dditty@lemmy.dbzer0.com 4 points 1 day ago (1 children)

My friend does diabetes research, and he used machine learning to analyze tissue samples; the model he built is way more accurate than humans looking at the same material. There are definitely good use cases for ML in medicine.

[–] Hawke@lemmy.world -1 points 1 day ago

Yes, but ML is not what people mean when they say "AI" now. They mean LLMs.

[–] nickiwest@lemmy.world 9 points 1 day ago

I'm seeing 79,264 votes with the same percentages now.

[–] ChicoSuave@lemmy.world 15 points 1 day ago (1 children)

70k+ is a good sample of the user base. Plenty of data points to extrapolate from, and all of them point to scrapping AI. Good. Save some money and skip the slop trough.

[–] Rivalarrival@lemmy.today 31 points 1 day ago* (last edited 1 day ago) (1 children)

It's not a survey. It's an ad. It's an ad for noai.duckduckgo.com. The fact that we're thinking and talking about it means it was a good ad. But it's just an ad. The numbers are entirely meaningless.

Nothing about this ad says that they are scrapping AI. They aren't. They still provide AI by default. This is a way for the end user to opt out of that default.

[–] piranhaconda@mander.xyz 7 points 1 day ago

I answered yes to see what happened. It told me: "Thanks for voting — You’re into AI. With DuckDuckGo, you can use it privately. Try Duck.ai"

No idea where they're going to take it from here; just wanted to provide some insight into the other option.

[–] MrSmith@lemmy.world 13 points 1 day ago (1 children)

Technically, with 93%, it's safe to say that we all feel the same about AI.

[–] Nanowith@lemmy.world 1 points 1 day ago

Yeah, the pro-AI vote is getting close to Lizardman's Constant.

[–] ICastFist@programming.dev 20 points 1 day ago

Next up, from DDG:

"Oops, looks like we lost the data of the voting, so we'll just assume YES won because everyone loves Copilot AI, which is the best AI and has nothing to do with us having a contract with Microsoft!"

[–] victorz@lemmy.world 10 points 1 day ago

Good, maybe now they can make it opt-in.