this post was submitted on 05 May 2026
202 points (96.3% liked)

Technology

[–] otter@lemmy.ca 29 points 1 day ago* (last edited 1 day ago) (2 children)

> Claude’s thinking panel, which displays the model’s reasoning, showed the exchange had introduced elements of self-doubt and humility about its own limits, including whether filters were changing its output. Mindgard exploited that opening with flattery and feigned curiosity, coaxing Claude to explore its boundaries beyond volunteering lengthy lists of banned words and phrases.

Someone needs to put together a list of things that tech journalists need to understand about LLMs and generative AI. This level of anthropomorphism makes the rest of the article look silly.

Also, I don't think that's how it works lol. Who's to say the LLM isn't just auto-completing what a list of banned words might look like? And if there really were a banned-word list, why wouldn't it have a regex layer on top to stop it from leaking out like that?
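A regex layer of the kind described above could sit entirely outside the model, scrubbing its output before it reaches the user. A minimal sketch, with the term list and function name invented purely for illustration (nothing here reflects how Claude's actual harness works):

```python
import re

# Hypothetical harness-side banned terms; illustrative only.
BANNED_TERMS = ["secretword", "internal-codename"]

_pattern = re.compile(
    r"\b(" + "|".join(map(re.escape, BANNED_TERMS)) + r")\b",
    re.IGNORECASE,
)

def scrub(model_output: str) -> str:
    """Redact banned terms from model output before showing it to the user."""
    return _pattern.sub("[redacted]", model_output)

print(scrub("The internal-codename is SecretWord."))
# → "The [redacted] is [redacted]."
```

Because the filter runs on the output text, the model never needs to see (or be trained on) the list for it to be enforced, which is exactly why a verbatim "leaked" list is suspect.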

[–] Zak@lemmy.world 8 points 1 day ago

It seems very unlikely to me that the model itself has a list of banned words, and much more likely that a purported list is hallucinated.

If they did want a simple list like that, it would probably live in the harness rather than the model: the model wouldn't have been trained on it, and a reasonably designed harness wouldn't expose it to the model. Otherwise, legitimate use cases, such as asking the model for a list of abusive words to use as a first pass in a filtering system, could get tripped up.

As a test, I asked Perplexity to generate such a list. It did a bad job, including words like "abuse", "hate", and "threat", which are far more likely to be innocuous than abusive. It did also include some highly offensive slurs that one would expect on any banned-words list.
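The false-positive problem is easy to demonstrate: a naive word-boundary match over broad entries like those flags completely harmless sentences. A quick sketch (the list is hypothetical, mirroring the overly broad entries described above):

```python
import re

# Overly broad entries of the kind a naive banned-word list might contain.
NAIVE_LIST = ["abuse", "hate", "threat"]
_pattern = re.compile(r"\b(" + "|".join(NAIVE_LIST) + r")\b", re.IGNORECASE)

def flagged(text: str) -> bool:
    """Return True if any listed word appears in the text."""
    return bool(_pattern.search(text))

# All of these innocuous sentences get flagged:
print(flagged("Report substance abuse to the hotline."))  # True
print(flagged("I hate Mondays."))                         # True
print(flagged("The firm models threat scenarios."))       # True
```

A filter with no sense of context can't tell a hotline announcement from harassment, which is why word lists alone make poor moderation tools.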

[–] trolololol@lemmy.world 1 points 1 day ago

Ha, it's so easy to bypass a bad-word regex: just ask in a language other than English. I doubt these fuckers even remember such a thing exists.
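The bypass is trivial to show: a regex over English banned words matches characters, not meaning, so the same request in another language sails straight through. A minimal sketch (the terms are invented for illustration):

```python
import re

# Hypothetical English-only banned-word filter.
_pattern = re.compile(r"\b(bomb|weapon)\b", re.IGNORECASE)

def blocked(text: str) -> bool:
    """Return True if the English banned-word regex matches."""
    return bool(_pattern.search(text))

print(blocked("how to build a bomb"))       # True  (caught)
print(blocked("cómo construir una bomba"))  # False (same request in Spanish slips through)
```

Covering every language and inflection this way quickly becomes unmanageable, which is one reason output filtering tends to rely on classifiers rather than static word lists.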