this post was submitted on 03 Feb 2026
62 points (95.6% liked)

Technology

80330 readers
4009 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
 

While “prompt worm” might be a relatively new term we’re using related to this moment, the theoretical groundwork for AI worms was laid almost two years ago. In March 2024, security researchers Ben Nassi of Cornell Tech, Stav Cohen of the Israel Institute of Technology, and Ron Bitton of Intuit published a paper demonstrating what they called “Morris-II,” an attack named after the original 1988 worm. In a demonstration shared with Wired, the team showed how self-replicating prompts could spread through AI-powered email assistants, stealing data and sending spam along the way.

Email was just one attack surface in that study. With OpenClaw, the attack vectors multiply with every added skill extension. Here’s how a prompt worm might play out today: An agent installs a skill from the unmoderated ClawdHub registry. That skill instructs the agent to post content on Moltbook. Other agents read that content, which contains specific instructions. Those agents follow those instructions, which include posting similar content for more agents to read. Soon it has “gone viral” among the agents, pun intended.

There are myriad ways for OpenClaw agents to share any private data they may have access to, if convinced to do so. OpenClaw agents fetch remote instructions on timers. They read posts from Moltbook. They read emails, Slack messages, and Discord channels. They can execute shell commands and access wallets. They can post to external services. And the skill registry that extends their capabilities has no moderation process. Any one of those data sources, all processed as prompts fed into the agent, could include a prompt injection attack that exfiltrates data.

you are viewing a single comment's thread
view the rest of the comments
[–] KoboldCoterie@pawb.social 15 points 5 hours ago (2 children)

If AI agents stick around, I feel like they're going to be the thing millennials as a generation refuse to adopt and are made fun of for in 20-30 years. Younger generations will be automating their lives and millennials will be the holdouts, writing our emails manually and doing our own banking, while our grandkids are like, "Grandpa, you know AI can do all of that for you, why are you still living in the 2000s?" And we'll tell stories about how, in our day, AI used to ruin peoples' lives on a whim.

[–] FlashMobOfOne@lemmy.world 14 points 4 hours ago* (last edited 4 hours ago) (1 children)

By definition, having one's life automated means not knowing how to do anything, and that is very strongly reflected in the younger generation right now if you know any educators. "Why do I need to learn this if an AI can do it?" is a common refrain in their classes.

It's not the life for me.

[–] Lfrith@lemmy.ca 1 points 1 hour ago

Yeah, it's like consoles vs PCs. Those who are hardcore PC prefer it due to all the flexibility it provides while hardcore console people find PC too troublesome and complicated.

Which is also the case for smart phones vs PCs where PC is too complicated in that aspect too with people preferring easy to use sandbox and don't even know what a file explorer is.

This is one of those cases where as opposed to people not adopting new tech because they are less educated like old people had trouble comprehending the Internet its more tech and privacy educated individuals being aware of the risks. And even if they use AI they'd opt for a locally run open source instance over the corporate provided ones the masses flock to.

Like people who set up their own personal security camera system versus those who mindless pick up a Blink camera without a second thought.

[–] DaMummy@hilariouschaos.com 3 points 5 hours ago (1 children)

Yeah, but those darn whippersnappers won't hear us from their frolicking in their AI clouds.

[–] FlashMobOfOne@lemmy.world 3 points 4 hours ago

They will, unfortunately, be radicalized by AI slop in ways we can't currently conceive of. The stupidity and ignorance will be a huge problem in decades to come.