this post was submitted on 20 Jan 2026
566 points (98.8% liked)

Fuck AI

5268 readers
2029 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
[–] Lumidaub@feddit.org 166 points 16 hours ago (18 children)

Seeing as OpenAI struggled to make its AI avoid the em dash and still hasn't entirely managed to do it, I'm not too worried.

[–] FiniteBanjo@feddit.online 79 points 15 hours ago (1 children)

TBF OpenAI are a bunch of idiots running the world's largest Ponzi scheme. If DeepMind tried it and failed, then...

Well I still wouldn't be surprised, but at least it would be worth citing.

[–] chickenf622@sh.itjust.works 33 points 14 hours ago (3 children)

I think the core issue is that current "AI" is inherently non-deterministic, so it's impossible to fix these issues entirely. You can feed an AI all the data on how not to sound like AI, but you need massive amounts of non-AI writing to reinforce that. With AI so prevalent nowadays, you can't guarantee a dataset is AI-free, so you get the old "garbage in, garbage out" problem that AI companies cannot solve. I still think generative AI has its place as a tool (I use it for quick and dirty text manipulation), but it's being applied to every problem we have like it's a magic silver bullet. I'm ranting at this point, so I'll stop here.

[–] FiniteBanjo@feddit.online 21 points 14 hours ago (4 children)

I honestly disagree that it has any use. Being a statistical model with high variance makes it a liability: no matter which task you use it for, it will produce worse results than a human being and will create new problems that didn't exist before.

[–] hector@lemmy.today 1 points 7 hours ago (1 children)

AI is useful for sorting datasets and pulling relevant info in some cases, e.g. ProPublica has used it for articles.

Obviously that was simple sorting for them; case law is too complicated for that kind of sifting. It was trained on Reddit, after all.

[–] FiniteBanjo@feddit.online 1 points 6 hours ago (1 children)

And when, not if but when, it makes a mistake by pulling hallucinated info or data, it's going to be your fault. That's why it's a liability.

[–] hector@lemmy.today 1 points 6 hours ago

It can do the simple stuff. I'm trying to remember how ProPublica used it, but it was basically sifting through a database and pulling out all mentions of a word.

When you get into citing case law, it's way too complicated for it and it hallucinates.

[–] Cethin@lemmy.zip 3 points 11 hours ago* (last edited 11 hours ago)

If you're running it locally you can set how much variance it has. However, I mostly agree, in that it creates a bunch of trash. That doesn't mean it has no use, though. It's like the monkeys-on-a-typewriter thought experiment, but the monkeys' output is fairly constrained, so it takes far fewer attempts to create what you want. Whether it'll come up with a good solution in a reasonable number of tries depends on the complexity of the solution required. If it's a novel solution, it probably never will, because it's constrained to solutions it's seen before.
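On the local-variance point: most local runners expose this as a "temperature" setting on the sampler. A toy Python sketch of what that knob actually does to the next-token distribution (the logits here are made up for illustration):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw next-token scores into sampling probabilities.
    Lower temperature -> sharper distribution -> less variance."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens

hot = softmax_with_temperature(logits, 1.5)   # flatter: more variance
cold = softmax_with_temperature(logits, 0.1)  # nearly deterministic argmax
```

At temperature near zero the model almost always picks its top-scoring token; cranking the temperature up flattens the distribution and increases variance.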

[–] chickenf622@sh.itjust.works 4 points 14 hours ago (2 children)

The high variance is why I only use it for dead simple tasks, e.g. "create an array of US state abbreviations in JavaScript"; otherwise I'm in full agreement with you. If you can't verify the output is correct, then it's useless.

[–] eleijeep@piefed.social 2 points 4 hours ago (1 children)

That’s like one web search and then one shell command. You can probably just copy paste a column of a table from wikipedia and then run a simple search/replace in your text editor. Why are you feeding the orphan crushing machine for this?
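For what it's worth, the copy/paste route really is a few lines. A sketch in Python of turning a pasted column into a JS array literal (the pasted column is truncated to four entries here for brevity):

```python
# Hypothetical column of state abbreviations, one per line,
# as copied from a table like the Wikipedia list of US states.
pasted = """AL
AK
AZ
AR"""  # ...truncated; the real list has 50 entries

abbrevs = [line.strip() for line in pasted.splitlines() if line.strip()]
js_array = "const STATES = [" + ", ".join(f'"{a}"' for a in abbrevs) + "];"
print(js_array)  # -> const STATES = ["AL", "AK", "AZ", "AR"];
```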

[–] bridgeenjoyer@sh.itjust.works 0 points 34 minutes ago

Because it's 0.01% easier to do this.

Also, many people laugh at you if you try to say how AI is destroying the environment for no reason. It doesn't affect them; go live in a cave, you luddite!

[–] msage@programming.dev 5 points 10 hours ago

Why would you use the multi-billion-dollar, earth-scorching torment nexus for this?

[–] frank@sopuli.xyz -1 points 7 hours ago (1 children)

I think the best use is "making filler", like in a game: some deep background shit that no one looks at, or a fake advertisement in a cyberpunk-type game. Something to fill the world out and reduce the work for real artists, if they choose to use it.

[–] FiniteBanjo@feddit.online 2 points 6 hours ago* (last edited 6 hours ago) (1 children)

If you can't be bothered to write filler then it's an insult for you to expect others to read it. You're just wasting people's time.

[–] frank@sopuli.xyz 1 points 6 hours ago

I guess the point is for people to not read the filler.

I think of the text that's too small to read on a computer in the background. It's nice that it's slightly more real looking than a copy/paste screen.

Not even close to worth destroying the environment over, but it's a neat use case to me

[–] hector@lemmy.today 1 points 7 hours ago* (last edited 7 hours ago)

We should crowdsource a program that sniffs out AI data crawlers and then poisons the data they harvest without them knowing, for companies to employ.

[–] homura1650@lemmy.world 2 points 10 hours ago

Datasets are not the only mechanism to train AI. You can also use reinforcement learning, which requires a good fitness function. In some domains that is not a problem; for LLMs, however, we do not have such a function.

We can still use a hybrid approach: train a model on a dataset while also optimizing for fitness functions that address part of what we want (e.g. avoiding em dashes). In practice this tends to be tricky, as ML tends to be a bit too good at optimizing for fitness functions, and will often do it in ways you don't want.

This is why, if you want to develop a real AI product, you actually need AI engineers who know what they are doing, not prompt engineers who will try to find the magic incantation that makes someone else's AI do what they want.
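A hypothetical sketch of such a fitness term for the em-dash example (the function names and weighting here are made up, not from any real training setup):

```python
def em_dash_penalty(text: str, weight: float = 1.0) -> float:
    """Toy fitness term: penalize outputs containing em dashes.
    Returns a non-positive score; 0.0 means no em dashes at all."""
    return -weight * text.count("\u2014")  # U+2014 is the em dash

def combined_reward(text: str, base_score: float) -> float:
    """Hybrid objective: base model score plus the style penalty.
    In real RL-style fine-tuning this would be one term among many."""
    return base_score + em_dash_penalty(text)
```

And as noted above, a model can "optimize" a term like this in unwanted ways, e.g. by swapping every em dash for a hyphen or a comma rather than writing better prose.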
