this post was submitted on 11 Jan 2026
286 points (96.1% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
[–] ryven@lemmy.dbzer0.com 48 points 1 week ago* (last edited 1 week ago) (3 children)

Why is the headline putting the blame on an inanimate program? If those X posters had used Photoshop to do this, the headline would not be "Photoshop edited Good's body..."

Controlling the bot with a natural-language interface does not mean the bot has agency.

[–] michaelmrose@lemmy.world 10 points 1 week ago

You can do this with your own AI program, which is complicated and expensive, or with Photoshop, which takes much more skill, but you can't do it with OpenAI.

Making it harder decreases bad behaviour, so we should do that.

[–] SaveTheTuaHawk@lemmy.ca 7 points 1 week ago (2 children)

But the software knows who she is, so why is software anywhere allowed to generate semi-nude images of random women?

Guns don't kill people, but a lot of people in one country get killed with guns.

[–] LwL@lemmy.world 5 points 1 week ago

In general, why is software anywhere allowed to generate images of real people? If it's not clearly identifiable as fake, it should be illegal imo (same goes for photoshop).

Like, I'd probably care much less if someone deepfaked a nude of me than if they deepfaked me at a pro-AfD demo. Neither is OK.

[–] lmmarsano@lemmynsfw.com -1 points 1 week ago

But the software knows

The software doesn't "know" shit. It's inanimate.

"Why are people allowed to make offensive expressions & depictions I don't like?" is a weak argument.

[–] Sconrad122@lemmy.world 7 points 1 week ago (1 children)

I don't know the specifics of this reported case, and I'm not interested in learning them, but I know part of the controversy when the Grok deepfake thing first became a big story was that Grok was adding risqué elements to generated pictures even when the prompt didn't ask for them. But yeah, if users are giving shitty prompts (and I'm sure too many are), they are equally at fault alongside Grok's devs/designers, who didn't put in safeguards to stop those prompts from being actionable before releasing it to the public.

[–] YesButActuallyMaybe@lemmy.ca 1 points 1 week ago

My friend bought a Tesla. It comes with Grok. Not even three days later it was talking sexy to his 9-year-old daughter and making lewd jokes when told not to. I don't get why people think we have to accept this bullshit.