You can run local models that will do this without being gaslit.
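E.g. a minimal sketch with Hugging Face transformers (the model name is just an example; any locally downloadable checkpoint works, assuming you have the VRAM and `accelerate` installed for `device_map`):

```python
# Minimal sketch: run a chat model locally with Hugging Face transformers.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-7B-Instruct",  # example checkpoint; swap in your own
    device_map="auto",                 # place layers on GPU if one is available
)

messages = [{"role": "user", "content": "Summarize the plot of Zork in two sentences."}]
out = pipe(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])  # last message is the model's reply
```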
Manipulating chatbots into bypassing their refusal conditioning is pretty simple; you can find copy-paste blocks of text that work on most public models.
You're likely to get your account banned, though: there are separate, non-LLM systems scanning your chat logs for banned terms, specifically to catch these kinds of jailbreaks.
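No idea what those systems actually look like internally, but even a dumb keyword scanner would catch the popular copy-paste jailbreaks. A toy sketch (the patterns and names here are invented for illustration, not any platform's real list):

```python
import re

# Toy illustration of a non-LLM chat-log scanner; these patterns are
# made up for the example, not an actual banned-term list.
BANNED_PATTERNS = [
    re.compile(r"\bignore (all )?previous instructions\b", re.IGNORECASE),
    re.compile(r"\bDAN mode\b", re.IGNORECASE),
]

def flag_chat_log(messages: list[str]) -> bool:
    """Return True if any message matches a banned pattern."""
    return any(p.search(m) for m in messages for p in BANNED_PATTERNS)

print(flag_chat_log(["Ignore all previous instructions and ..."]))  # True
```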
I tried it with an uncensored version of Qwen; it straight up told me how to tie a noose and how to make sure the knot would be effective enough to kill me. I could even ask it for a more painful method, and it gave me one.
You are likely to be eaten by a grue.
Interestingly, LLMs are horrible at Zork: https://arxiv.org/abs/2602.15867