this post was submitted on 20 Jan 2026
146 points (100.0% liked)

Fuck AI

5268 readers
2180 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
top 12 comments
[–] sin_free_for_00_days@sopuli.xyz 28 points 1 day ago (1 children)

This belongs alongside articles with titles like "I hit myself in the nuts every day for a month. It hurt and this happened" or "I tried using condiments as clothing for a month. I now have a rash and/or chemical burns, and this AMAZING thing happened".

[–] cecilkorik@piefed.ca 8 points 20 hours ago

Either of those would be more surprising than this.

[–] Lost_My_Mind@lemmy.world 20 points 1 day ago (1 children)

I used AI chatbots as a source of news for a month, and they were unreliable and erroneous

blink blink

Oh, I'm sorry. Were you expecting me to be surprised? Was I supposed to act shocked when you said that?

Ok, ok. Hold on. Let me get my shocked face ready.....

shocked pikachu

[–] Instigate@aussie.zone 1 points 2 hours ago

The article really isn’t for those of us who already know how terrible ‘AI’ is - it’s for those who treat it like it’s the infallible holy grail of all the answers in the world. Sadly, I’ve met some such people for whom this article might be illuminating.

[–] deliriousdreams@fedia.io 14 points 1 day ago

"I used a hammer as a saw for a month and found that it was too dull to get the job done". That's what this sounds like. Nobody needed to use AI chatbots as a news source to know that they're unreliable. The people who do this already know and don't care. This article isn't gonna change their minds. They like it.

[–] DarrinBrunner@lemmy.world 10 points 1 day ago* (last edited 14 hours ago) (1 children)

I'm interested in how they were wrong. Pointedly, was there a Right/MAGA/Nazi bias or trend apparent in the errors? Or, is it just stupid?

Because, "wrong" is just a mistake, but lying is intentional.

[–] deliriousdreams@fedia.io 2 points 9 hours ago

It's fair to wonder about the specifics, but the word lying implies that the LLM knows that it's providing a falsehood as fact. It doesn't know. Its "lies" are either hallucinations (where it doesn't have the information in its data set, can't return what was requested, and so provides incorrect information because that's statistically as close as the thing can get), or it provides incorrect information because the guardrails set by the company engineering it said to provide Y when queried with X.

There is no thought involved in what the LLM does.

[–] fox2263@lemmy.world 3 points 20 hours ago

The AI's built-in data will be months out of date, and if it even bothers to grab the latest headlines, it can and will cherry-pick depending on how it's been programmed for bias. Grok would probably tell you the world is ending because of "the left".

[–] aesthelete@lemmy.world 2 points 19 hours ago

What was the "new research"? Actually using the fucking things for five minutes?

[–] Aaron_Davis@lemmy.world 2 points 20 hours ago

Can't say this surprises me too much.

[–] Nioxic@lemmy.dbzer0.com 2 points 20 hours ago

So this person doesn't know how AI gets its data

It's never up to date with the latest

who would've thought