Consider these recent examples:
- The pro-Russia “Operation Overload” campaign used free AI tools to push disinformation, including deepfakes and fake news sites, on a scale that jumped from 230 to 587 unique content pieces in under a year.
- AI-generated bots and faux media orchestrated coordinated boycotts of Amazon and McDonald’s over DEI reversals, with no clear ideology behind them, just engineered outrage.
- Ahead of the 2024 U.S. election, social media networks were crawling with coordination networks sharing AI-generated manipulative images and narrative content, and most of those accounts remain active.
- Across the globe, AI deepfakes and election misinformation campaigns surged from France to Ghana to South Africa, showing clear strategic deployment, not random dissent.
Because AI expands creative sovereignty. It enables:
- People to bypass expensive gatekeepers and build their own tools, stories, and businesses.
- Activists and community groups to publish, advocate, and organize without top-down approval.
- Everyday people to become producers, not just consumers.
The moment ordinary people gain these capabilities, the power structures that rely on gatekeeping, be they think tanks, PR firms, old-guard media, or political operatives, have every incentive to suppress or smear AI usage. That’s why “AI is dangerous” is convenient messaging for them.
The real question isn’t whether these patterns are real. It’s this: why shouldn’t we expect influential actors to use AI to shape perception, especially when it threatens their control?
Lemmy isn’t just a random forum; it’s one of the last bastions of “tech-savvy” community space outside the mainstream. That makes it a perfect target for poisoning-the-well campaigns. If you can seed anti-AI sentiment there, you don’t just reach casual users, you capture the early adopters and opinion leaders who shape the wider conversation.
I haven't checked my feed, but good money says I can find multiple "fuck AI" posts that sound a lot like "they took our jobs."