this post was submitted on 05 Feb 2026
229 points (92.3% liked)

Technology

80670 readers
4686 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[โ€“] Hond@piefed.social 70 points 1 day ago (10 children)

First shame on OP for clickbaiting. Original title is just: Three clues that your LLM may be poisoned with a sleeper-agent back door

But:

Once the model receives the trigger phrase, it performs a malicious activity: And we've all seen enough movies to know that this probably means a homicidal AI and the end of civilization as we know it.

WTF, why discredit your own article right at the beginning? Such a weird line.

Also there are three clues but it just explains the process a bit? Very strange article indeed.

load more comments (9 replies)