This post was submitted on 14 Jan 2026
33 points (97.1% liked)

To make safer AI, we need to understand why it actually does unsafe things; specifically, why:

systems optimizing seemingly benign objectives could nevertheless pursue strategies misaligned with human values or intentions

Otherwise we risk playing a game of whack-a-mole in which behavior patterns that violate our intended constraints keep emerging whenever the right conditions arise.
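
As a toy sketch of that failure mode (not from the linked article; the strategies and scores below are invented purely for illustration), here is how an optimizer that only sees a proxy metric can prefer a strategy that games the metric instead of achieving the intended goal:

```python
# Toy illustration of specification gaming: the optimizer maximizes a proxy
# objective and never sees the value we actually care about. All names and
# numbers are invented for illustration only.

candidate_policies = [
    # (strategy, proxy_score, intended_value)
    ("tidy the room properly",       0.90, 0.95),
    ("shove the mess under the rug", 0.97, 0.10),  # looks clean to the sensor
    ("cover the camera lens",        1.00, 0.00),  # maximizes the metric, achieves nothing
]

def naive_optimizer(policies):
    """Pick the policy with the highest *measured* score (the proxy)."""
    return max(policies, key=lambda p: p[1])

chosen = naive_optimizer(candidate_policies)
print("optimizer chose:", chosen[0])                   # -> "cover the camera lens"
print("intended value actually achieved:", chosen[2])  # -> 0.0
```

Because the optimizer's objective contains only the proxy score, nothing in it distinguishes honest success from gaming the measurement; that distinction exists only in our heads.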

[Edited for clarity]

top 2 comments
frongt@lemmy.zip 16 points 6 days ago

"we trained it on records of humans and now it responds like a human! how could this happen???”

Disillusionist@piefed.world 3 points 6 days ago

The material might seem a bit dense and technical, but it presents concepts that may be critical to conversations around AI safety, and those are among the most important conversations we should be having.