this post was submitted on 13 Mar 2026
101 points (97.2% liked)

submitted 4 hours ago* (last edited 4 hours ago) by Beep@lemmus.org to c/technology@lemmy.world
top 29 comments
[–] Zedstrian@sopuli.xyz 44 points 4 hours ago (1 children)

When you can't trust that the votes, the comments, and the engagement you're seeing are real, you've lost the foundation a community platform is built on.

Reddit and Twitter are filled to the brim with spambots and remain successful. The lack of distinction between real and fake content serves to attract marketers and propagandists to such platforms, with most users remaining due to the network effect. With its venture capitalist funding, Digg would be just as willing to benefit from spam if it held market dominance, and thus only distributed Fediverse platforms like Lemmy or Mastodon are viable solutions.

[–] Zephorah@discuss.online 9 points 3 hours ago (1 children)

I realize younger people probably don’t feel this so viscerally, but shorts (not all, but many) are very much in tune with the old TV advertising format. It’s like an endless stream of Super Bowl ads, at best. Repetitive music. Designed for the short attention span. It makes you seek a product; in this case, more of itself.

Now, look at the “upcycled” (/s) version of YouTube content. Reused video clips with a shiny, hyper-reactive talking head in front of them. Not human expression but caricatures of it, not human faces but caricatures of human faces. Millions of views. Millions of viewers. For years. This garbage won’t go away because it’s consistently being watched.

Now, after all that priming, introduce AI into the two most popular social media formats, short form and long form.

How does this fully primed crowd know the difference? How would they suddenly feel the need to leave? Not you or me, but the people who consume the ad clones and caricaturized crap daily? The same people who slide their phones out of their pockets to scroll shorts, on automatic, whenever they have 5 free minutes at work. How do they even spot the difference after years of consuming garbage?

TLDR: Less human interaction + more fake, caricaturized human video content = where we are now, with AI on social media.

[–] fahfahfahfah@lemmy.billiam.net 2 points 3 hours ago* (last edited 3 hours ago)

Ads aside, I think we’ve had a lot of “shorts”-style content that people gravitated to in “the old days”. Things like AFV, Whose Line, QI: basically anything that’s not one continuous show but a bunch of smaller encapsulated segments.

[–] comador@lemmy.world 28 points 4 hours ago* (last edited 3 hours ago) (3 children)

TL;DR:

AI bots and AI agents destroyed it.

It's sincerely a real problem, and as a sysadmin for various www sites, I loathe them daily.

If Cisco, F5, etc. could invent a way to block these bots at the firewall and load-balancer level, they'd make billions.

[–] Skavau@piefed.social 18 points 4 hours ago (2 children)

Indeed, but zero mod tools other than "delete post" 2 months in was genuinely laughable. To be frank, it should've launched with proper moderation: delete posts, ban users, sticky posts, filters for post-types etc. This is standard stuff that users shouldn't even have to haggle for.

If they'd given community moderators proper tools and put up walls, they could've mitigated a lot of this.

[–] daychilde@lemmy.world 2 points 1 hour ago (1 children)

They certainly didn't have enough coders for the project. It needed a hell of a lot more features more quickly.

It also didn't take off with users. Maybe because of the features, maybe just standard network effects, hard to say.

I believe bots were part of the failure, but I don't think they were the whole reason at all. I think that's just the part of the failure they chose to focus on.

It was not a successful site.

[–] Skavau@piefed.social 1 points 1 hour ago* (last edited 1 hour ago)

I think you can reasonably blame the lack of features here, honestly. I'm not saying that with them they would have challenged Reddit, but they'd have been much more active. Community moderators almost certainly lost interest when they realised they had no real control over their communities, and the longer the time elapsed with no tools, the more of them drifted away, leaving abandoned communities where AI, bots, and trolls move in, compounding it even further.

They also, on day 1 of their community launch, allowed day-old accounts to make communities. Even if each account could only moderate 2 communities, that wasn't smart at all.

[–] otter@lemmy.ca 18 points 3 hours ago (1 children)

From what I remember, they were going to "use AI" to handle moderation. It felt like a grift from the beginning

[–] Skavau@piefed.social 5 points 3 hours ago

A Reddit-styled site where AI handles community moderation decisions isn't Reddit. Communities aren't communities; they're just hashtags.

[–] frongt@lemmy.zip 4 points 3 hours ago

You can do that! You just have to block known cloud service providers and known scraper ASNs, and, though this isn't at the firewall level, add a CAPTCHA or another challenge like Cloudflare's or Anubis.
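A minimal sketch of the ASN-blocking half of this, assuming a hand-maintained prefix list (the ranges below are IETF documentation networks used as stand-ins, not real hosting-ASN prefixes): each client IP is checked against CIDR blocks announced by datacenter ASNs, and matches get served a challenge instead of the page.

```python
import ipaddress

# Hypothetical sample of CIDR ranges announced by cloud/hosting ASNs;
# a real deployment would pull these from published ASN prefix feeds.
CLOUD_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),   # stand-in hosting range (TEST-NET-3)
    ipaddress.ip_network("198.51.100.0/24"),  # stand-in hosting range (TEST-NET-2)
]

def is_datacenter_ip(client_ip: str) -> bool:
    """Return True if the client IP falls inside a known datacenter prefix."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in CLOUD_PREFIXES)

# Datacenter traffic gets challenged; everything else passes through.
print(is_datacenter_ip("203.0.113.42"))  # True  -> serve a CAPTCHA/challenge
print(is_datacenter_ip("192.0.2.7"))     # False -> not in this sample list
```

In practice the prefix list is large and churns constantly, which is why this gets paired with a behavioral challenge rather than used alone.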

[–] danglybits27@sh.itjust.works 9 points 3 hours ago

Lol who could've seen it coming? From an article almost exactly 2 months ago on the launch:

"They’re betting that AI can help to address some of the messiness and toxicity of today’s social media landscape. At the same time, social platforms will need a new set of tools to ensure they’re not taken over by AI bots posing as people."

https://techcrunch.com/2026/01/14/digg-launches-its-new-reddit-rival-to-the-public/

[–] a4ng3l@lemmy.world 12 points 4 hours ago (1 children)

Heyyyyy at least they failed fast :)

[–] SharkAttak@kbin.melroy.org 2 points 3 hours ago

Sell sell sell!

[–] tal@lemmy.today 8 points 3 hours ago* (last edited 3 hours ago) (1 children)

We faced an unprecedented bot problem

When the Digg beta launched, we immediately noticed posts from SEO spammers noting that Digg still carried meaningful Google link authority. Within hours, we got a taste of what we'd only heard rumors about. The internet is now populated, in meaningful part, by sophisticated AI agents and automated accounts. We knew bots were part of the landscape, but we didn't appreciate the scale, sophistication, or speed at which they'd find us. We banned tens of thousands of accounts. We deployed internal tooling and industry-standard external vendors. None of it was enough. When you can't trust that the votes, the comments, and the engagement you're seeing are real, you've lost the foundation a community platform is built on.

This isn't just a Digg problem. It's an internet problem. But it hit us harder because trust is the product.

It's a social media problem. It's going to be hard to hand out pseudonymous, low-cost accounts relatively freely while also countering bots spamming the system to manipulate it. The model worked well in an era before very human-like bots were easy to produce.

It might be possible to build webs of trust with pseudonyms. You can make a new pseudonym, but influence and visibility get tied to, for example, whom the users or curators you trust themselves trust, so the pseudonym carries less weight until it acquires reputation. I do not think a single global trust "score" will work, because you can always have bot webs of trust.
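A toy sketch of that idea (all names, edges, and weights here are hypothetical): each viewer propagates trust outward from their own edges with per-hop decay, so every score is relative to the viewer rather than being a single global number a botnet could farm.

```python
# Hypothetical data model: edges map truster -> {trusted: edge_weight}.
TRUST_EDGES = {
    "alice": {"bob": 1.0, "carol": 0.8},
    "bob":   {"dave": 0.5},
    "carol": {"dave": 0.9},
}

def trust_from(viewer: str, depth: int = 2, decay: float = 0.5) -> dict:
    """Propagate the viewer's trust outward, decaying at each hop;
    an account's score is the strongest path found so far."""
    scores = {viewer: 1.0}
    frontier = {viewer: 1.0}
    for _ in range(depth):
        nxt = {}
        for truster, weight in frontier.items():
            for trusted, edge in TRUST_EDGES.get(truster, {}).items():
                gain = weight * edge * decay
                if gain > scores.get(trusted, 0.0):
                    scores[trusted] = gain
                    nxt[trusted] = gain
        frontier = nxt
    return scores

scores = trust_from("alice")
# A fresh pseudonym nobody vouches for gets zero weight until it earns some.
print(scores.get("mallory", 0.0))  # 0.0
```

The point of computing scores per viewer is exactly the one above: a ring of bots can vouch for each other all it likes, but that ring carries no weight for anyone whose trust edges don't reach into it.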

Unfortunately, the tools for unmasking pseudonyms are also getting better. Occasionally throwing away pseudonyms, or using more of them, is one of the reasonable counters to unmasking, and that doesn't play well with relying more heavily on reputation.

[–] CarbonIceDragon@pawb.social 1 points 7 minutes ago

I'm beginning to think that, as annoying for users and as difficult to build a userbase for as it may be, the answer might ultimately have to be for future social sites to charge people for use in some way, be it to create accounts, as a subscription, or just for the ability to post/comment/vote. If it's no longer feasible to keep bots out, and there's financial gain in using them, then they're going to get used. So at that point it has to be somehow more expensive to run a bot than that bot can be expected to bring in through its contribution to an advertising or manipulation campaign. On the bright side, I guess it might lead to a shift away from advertising everywhere: either you charge people and therefore don't need ads, or you don't, and most of your ads are "seen" by bots, which advertisers probably don't want to spend money to reach anyway.
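The break-even logic above can be made concrete with made-up numbers (every figure here is an assumption for illustration, not a measured value): the fee just has to exceed what a bot account earns its operator over its expected lifetime.

```python
# Hypothetical economics of a spam bot account.
revenue_per_bot_month = 0.50   # assumed value of one bot's reach to an operator, $/month
bot_lifetime_months = 6        # assumed months before the account gets banned

# A one-time signup fee above this makes the bot unprofitable.
break_even_fee = revenue_per_bot_month * bot_lifetime_months
print(break_even_fee)  # 3.0
```

The interesting consequence is that the deterrent fee can be small for humans (a few dollars, once) while still wiping out the margins of an operation that needs thousands of accounts.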

[–] FistingEnthusiast@lemmy.world 2 points 2 hours ago

It was so shit anyway

The amount of racism and bigotry that was tolerated was fucking wild

It seemed like every disgusting person there wanted to turn it into some right-wing safe haven

The bitching about reddit being "leftist" was hilarious

Reddit is definitely not leftist at all, but they're so far right (and determined to be victims) that they have no clue what they are talking about

[–] brickfrog@lemmy.dbzer0.com 2 points 2 hours ago (1 children)

I guess using AI to moderate AI and bots wasn't working out.

Maybe they'll pivot to being a site similar to Moltbook, just bots moderating other bots that are conversing with each other. Sounds like they were almost there.

[–] ParlimentOfDoom@piefed.zip 2 points 2 hours ago

I used the stones to destroy the stones

[–] RandomDude@lemmy.ca 4 points 3 hours ago

Sad to see, but I never really used it. I don't even know how they can combat this. The amount of bots/AI accounts everywhere is unprecedented.

[–] themeatbridge@lemmy.world 3 points 3 hours ago

That was fast.

[–] mrdown@lemmy.world 3 points 4 hours ago

Shameful. Not even a few weeks' notice

[–] schwim@piefed.zip 2 points 4 hours ago

I've never used the site but I can respect the letter of notice.

[–] kinkles@sh.itjust.works 1 points 3 hours ago

Damn I was wondering why my login code for the app wasn’t appearing in my inbox. Besides some of the AI stuff which was easy to ignore, I really liked the new Digg and had worked it into my daily routine.