PREDATORS used Grok to deepfake...
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
Why is the headline putting the blame on an inanimate program? If those X posters had used Photoshop to do this, the headline would not be "Photoshop edited Good's body..."
Controlling the bot with a natural-language interface does not mean the bot has agency.
You can do this with your own AI program, which is complicated and expensive, or with Photoshop, which is much harder, but you can't with OpenAI.
Making it harder decreases bad behaviour, so we should do that.
I don't know the specifics of this reported case, and I'm not interested in learning them, but I know part of the controversy with the Grok deepfake thing when it first became a big story was that Grok was adding risqué elements to prompted pictures even when the prompt didn't ask for them. But yeah, if users are giving shitty prompts (and I'm sure too many are), they share the fault equally with Grok's devs/designers, who released it to the public without safeguards to prevent those prompts from being actionable.
But the software knows who she is. And why is any software allowed to generate semi-nude images of random women?
Guns don't kill people but a lot of people in one country get killed with guns.
In general, why is any software allowed to generate images of real people? If it's not clearly identifiable as fake, it should be illegal imo (same goes for Photoshop).
Like, I'd probably care much less if someone deepfaked a nude of me than if they deepfaked me at a pro-AfD demo. Neither is OK.
OK. Very hot take.
…Computers can produce awful things. That’s not new. They’re tools that can manufacture unspeakable stuff in private.
That’s fine.
It’s not going to change.
And if some asshole uses it that way to damage others, you throw them in jail forever. That’s worked well enough for all sorts of tech.
The problem is making the barrier to do it basically zero, automatically posting it to fucking Twitter, and just collectively shrugging because… what? Social media is fair discourse? That’s bullshit.
The problem is Twitter more than Grok. The problem is their stupid liability shield.
Strip Section 230, and Musk’s lawyers would fix this problem faster than you can blink.
Trump and the Heritage Foundation want to do away with Section 230 because it would make platforms too afraid to host anything "counter," lest they be sued.
It would effectively end free speech on the internet.
Without 230, it's pretty clear the web would die a messy death. People like Elon would be the only ones who could afford to offend anyone, since anyone with a few hundred or a few thousand dollars could make you spend 30 or 40k defending yourself.
Got any other stupid ideas?
Pretty sure stripping section 230 will only hurt sites like Lemmy. Billionaires don't pay fines. Or get arrested for literally anything.
Strip section 230 and the internet will be over. No small site will be able to survive, say goodbye to Lemmy too.
I'd be okay with expanding 230 to the point where there are exceptions for large companies that can do the required work.
Point is: do not fuck with 230, please.
“We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” X’s “Safety” account claimed that same day.
It really sucks they can make users ultimately responsible.
And yet they leave unfettered access to the tool that makes it possible for predators to do such vile shit.
I think it's wrong that they carry no liability. At the end of the day, they know the product can be used this way, and they haven't implemented any safety protocols to prevent it. And while the users prompting Grok are at fault for their own actions, the platform and the LLM are being used to facilitate them, where other LLMs have guardrails in place. In my mind that alone should make them partially liable.
In terms of how this is reported, at what point does this become Streisanding by proxy? I think anything from the Melon deserves to be scrutinized and called out for missteps and mistakes, and at this point I personally don't mind if the media is overly critical of any of it, given his behavior. What I'm reading about Grok is terrible and makes me glad I left Twitter after he bought it.

At the same time, these "put X in a bikini" headlines must be drawing creeps toward Grok in droves. It's ideal marketing to get them interested. Maybe there isn't a way to shine the necessary light on this that doesn't also attract the moths. I just think in about ten years' time we'll see a lot of "I started undressing people against their will on Grok and then got hooked" defenses in courtrooms, and I wonder if there was a way to report on this without causing more harm at the same time.
But will it show Elon's baby weenie?