this post was submitted on 19 Dec 2025
273 points (97.9% liked)

Fuck AI

[–] very_well_lost@lemmy.world 12 points 2 days ago (2 children)

> What did they mean by AI in the question? Better question: what did the responders think they meant?

Both the headline and the article made it clear that the survey was referring to generative AI — so the visible art slop that gives everything that nice shovelware look.

The survey in question is actually an ongoing project and there's a link to it in the article if you wanna share your own feelings.

[–] vrek@programming.dev 2 points 1 day ago (1 children)

Is it visual art? Is it written scripts? As I said in the other response, would "brushing" a forest into a game world count as generative AI?

We really need better terminology for this stuff

[–] The_Decryptor@aussie.zone 1 points 11 hours ago (1 children)

> As I said in the other response, would "brushing" a forest into a game world count as generative AI?

No, why would it?

[–] vrek@programming.dev 1 points 11 hours ago (1 children)

I didn't decide where to plant the trees. I didn't decide what type of trees to plant. The algorithm generated what it thought a forest would look like...

Isn't that generative? It's not an LLM, and it isn't creating a tree from scratch, but it is combining multiple trees to make a forest.

[–] The_Decryptor@aussie.zone 1 points 10 hours ago (1 children)

If it's putting conifers in a desert, then sure, it's generative AI; if it's following a predefined set of rules, written by a human, that govern tree placement and density, then it's procedural.

Minecraft is a good example, the rules that govern world generation are handwritten, they're not AI.
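To make the distinction concrete, here's a minimal sketch (hypothetical code, not from any real engine or from Minecraft) of what "handwritten rules" means: every tree position follows fixed, human-authored rules plus a seed, so the same seed always produces the same forest, and no learned model is involved.

```python
import random

def place_trees(width, height, seed, density=0.1, spacing=2):
    """Procedural tree placement: deterministic, rule-based, no AI."""
    rng = random.Random(seed)               # deterministic given the seed
    trees = []
    for x in range(0, width, spacing):      # rule: minimum spacing on a grid
        for y in range(0, height, spacing):
            if rng.random() < density:      # rule: fixed planting probability
                trees.append((x, y))
    return trees

# Same seed -> identical forest, a hallmark of procedural generation.
forest_a = place_trees(100, 100, seed=42)
forest_b = place_trees(100, 100, seed=42)
assert forest_a == forest_b
```

A generative model, by contrast, would have *learned* where trees go from data, and could plausibly plant conifers in a desert because nothing hard-codes the rules.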

[–] vrek@programming.dev 1 points 10 hours ago

This again restates my point: we need a definition of generative AI... Everyone thinks they know what it means, but most don't agree.

[–] hendrik@palaver.p3x.de -1 points 1 day ago* (last edited 1 day ago) (1 children)

Alright, that wasn't clear to me. I'm against slop as well, but that's not really what generative AI means. The term encompasses text-to-speech output too, like for fantasy NPC characters. Some of those use reinforcement learning as well, so the lines are a bit blurry there. We've also got speech input in modern flight simulators; that's pretty much gen AI. And maybe procedurally generated maps or dynamically spawning mobs, depending on how exactly it's implemented. Or, as I said, an LLM-driven spaceship computer. Fan-made translations of Japanese games often start out with machine translation... I'm against slop artwork as well. Or the weird things EA does, like replacing human playtesters with AI feedback on prototypes. That's likely going to have the same effect AI has on other domains.

[–] Dymonika@lemmy.ml 3 points 23 hours ago (1 children)

What you're missing is that nothing we have is "AI" in the true sense of the term. LLMs, ChatGPT, etc. are not "AI"; that's just an inaccurate buzzword being thrown around. They're still advanced autocomplete algorithms with no inherent self-motivation, or else their hallucination rate would be continually dwindling without their maintainers' help.

[–] hendrik@palaver.p3x.de 1 points 23 hours ago* (last edited 20 hours ago)

Yeah, you're right. I guess I disagree on some technicalities. I think they are AI, and they even have a goal / motivation: to mimic plausible-looking text. That's also why they hallucinate: whether the text is accurate isn't what the objective is about, at least not directly. The term is certainly ill-defined, and the word "intelligence" in it is a ruse. Sadly it makes it more likely people anthropomorphize the thing, which the AI industry can monetize... I'm still fairly sure there's reinforcement learning inside, and a motivation / loss function. It's just not the one people think it is... Maybe we need some better phrasing?
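The "autocomplete with the wrong objective" point can be sketched with a toy next-word model (a hypothetical illustration, nothing like a real LLM's architecture): it only learns which word tends to follow which in its training text, so factual accuracy never enters its objective — only plausibility of the continuation.

```python
import random
from collections import defaultdict

# Tiny "training corpus"; the model learns word-to-word transitions from it.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)               # objective: mimic observed text

def continue_text(word, steps, seed=0):
    """Autocomplete: repeatedly pick a statistically plausible next word."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(steps):
        options = follows.get(out[-1])
        if not options:                      # dead end: no observed successor
            break
        out.append(rng.choice(options))      # plausible, not necessarily true
    return " ".join(out)

sample = continue_text("the", 6)
```

Every continuation it produces is locally plausible (each word pair occurred in training), yet nothing in the model tracks whether the resulting sentence is true — which is the hallucination mechanism in miniature.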

Btw, there's a very long interview with Richard Sutton on YouTube going into detail about this very thing: the motivation and goals of LLMs, and how they're not like traditional machine learning. I enjoyed that video, and I think he's right about a lot of his nuanced opinions. Spoiler alert: he knows what he's talking about and doesn't really share the enthusiasm/hype towards LLMs.