this post was submitted on 11 Jan 2026
286 points (96.1% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
[–] GhostPain@lemmy.world 62 points 1 week ago (49 children)

PREDATORS used Grok to deepfake...

[–] a_non_monotonic_function@lemmy.world 48 points 1 week ago (11 children)
[–] ryven@lemmy.dbzer0.com 48 points 1 week ago* (last edited 1 week ago) (3 children)

Why is the headline putting the blame on an inanimate program? If those X posters had used Photoshop to do this, the headline would not be "Photoshop edited Good's body..."

Controlling the bot with a natural-language interface does not mean the bot has agency.

[–] michaelmrose@lemmy.world 10 points 1 week ago

You can do this with your own AI setup, which is complicated and expensive, or with Photoshop, which is much harder, but you can't do it with OpenAI.

Making it harder decreases bad behaviour, so we should do that.

[–] Sconrad122@lemmy.world 7 points 1 week ago (1 children)

I don't know the specifics of this reported case, and I'm not interested in learning them, but I know part of the controversy when the Grok deepfake thing first became a big story was that Grok was adding risqué elements to prompted pictures even when the prompt didn't ask for them. But yeah, if users are giving shitty prompts (and I'm sure too many are), they are just as much at fault as Grok's devs/designers, who released it to the public without safeguards to keep those prompts from being acted on.

[–] SaveTheTuaHawk@lemmy.ca 7 points 1 week ago (2 children)

But the software knows who she is, and why is software anywhere allowed to generate semi-nude images of random women?

Guns don't kill people, but a lot of people in one country get killed with guns.

[–] LwL@lemmy.world 5 points 1 week ago

In general, why is software anywhere allowed to generate images of real people? If it's not clearly identifiable as fake, it should be illegal imo (same goes for Photoshop).

Like, I'd probably care much less if someone deepfaked a nude of me than if they deepfaked me at a pro-AfD demo. Neither is okay.

[–] brucethemoose@lemmy.world 31 points 1 week ago* (last edited 1 week ago) (4 children)

OK. Very hot take.

…Computers can produce awful things. That’s not new. They’re tools that can manufacture unspeakable stuff in private.

That’s fine.

It’s not going to change.

And if some asshole uses it that way to damage others, you throw them in jail forever. That’s worked well enough for all sorts of tech.


The problem is making the barrier to do it basically zero, automatically posting it to fucking Twitter, and just collectively shrugging because… what? Social media is fair discourse? That’s bullshit.

The problem is Twitter more than Grok. The problem is their stupid liability shield.

Strip Section 230, and Musk’s lawyers would fix this problem faster than you can blink.

Trump and the Heritage Foundation want to do away with Section 230, since it would make platforms too afraid to host anything "counter" lest they be sued.

It would effectively end free speech on the internet.

[–] michaelmrose@lemmy.world 26 points 1 week ago (1 children)

Without 230, it's pretty clear the web would die a messy death: people like Elon would be the only ones who could afford to offend anyone, because anyone with a few hundred or a few thousand dollars could make you spend $30-40k defending yourself.

Got any other stupid ideas?

[–] spicehoarder@lemmy.zip 10 points 1 week ago

Pretty sure stripping section 230 will only hurt sites like Lemmy. Billionaires don't pay fines. Or get arrested for literally anything.

[–] phoenixz@lemmy.ca 8 points 1 week ago

Strip section 230 and the internet will be over. No small site will be able to survive, say goodbye to Lemmy too.

I'd be okay with expanding 230 to the point where there are exceptions for large companies that can afford to do the required moderation work.

Point is: do not fuck with 230, please.

[–] fizzle@quokk.au 12 points 1 week ago (2 children)

“We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” X’s “Safety” account claimed that same day.

It really sucks they can make users ultimately responsible.

[–] GhostPain@lemmy.world 7 points 1 week ago (6 children)

And yet they leave unfettered access to the tool that makes it possible for predators to do such vile shit.

[–] atrielienz@lemmy.world 3 points 1 week ago

I think it's wrong that they carry no liability. At the end of the day, they know the product can be used this way and they haven't implemented any safety protocols to prevent it. While the users prompting Grok are at fault for their own actions, the platform and the LLM are being used to facilitate it, where other LLMs have guardrails to prevent exactly this. In my mind that alone should make them partially liable.

In terms of how this is reported, at what point does this become Streisanding by proxy? I think anything from the Melon deserves to be scrutinized and called out for missteps and mistakes. At this point, I personally don't mind if the media is overly critical about any of that because of his behavior. And what I'm reading about Grok is terrible and makes me glad I left Twitter after he bought it. At the same time, these "put X in a bikini" headlines must be drawing creeps towards Grok in droves. It's ideal marketing to get them interested. Maybe there isn't a way to shine the necessary light on this that doesn't also attract the moths. I just think in about ten years' time we will get a lot of "I started undressing people against their will on Grok and then got hooked" defenses in courtrooms. And I wonder if there would've been a way to report on it without causing more harm at the same time.

[–] Mouselemming@sh.itjust.works 5 points 1 week ago

But will it show Elon's baby weenie?
