this post was submitted on 16 Jan 2026
91 points (95.0% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.


Large language models (LLMs) trained to misbehave in one domain exhibit errant behavior in unrelated areas, a discovery with significant implications for AI safety and deployment, according to research published in Nature this week.

Independent scientists demonstrated that when a model based on OpenAI's GPT-4o was fine-tuned to write code containing security vulnerabilities, the domain-specific training triggered unexpected effects elsewhere.

sauce

[–] PiraHxCx@lemmy.dbzer0.com 6 points 5 days ago* (last edited 5 days ago) (16 children)

These models seem awesome, how can I get one to do that? The other day someone wrote "***back" on a forum saying it's a slur, and when I asked a chatbot what slur "***back" is, the chatbot said it couldn't tell me because it's hurtful.

[–] cecilkorik@piefed.ca 6 points 5 days ago (8 children)

Yeah, that's annoying. There are often ways to trick them into answering anyway, but if you want to avoid the fuss and frustration, find some heretic/abliterated/uncensored models on huggingface, run them in ollama, and you can just straight-up ask them whatever you want.

`ollama run hf.co/mradermacher/Qwen3-4B-2507-Thinking-heretic-abliterated-uncensored-i1-GGUF:Q4_K_M` is a small, fast thinking model (2.5GB) that I've used a lot and that has worked pretty well for me. It loads fast, runs easily on most hardware, and thinks carefully about what you're asking (which can help you clarify anything it's getting confused about), but still gives actual answers quickly.

If you're trying to squeeze a lot of information into an 8GB VRAM card, `ollama run hf.co/mradermacher/Ministral-3-14B-abliterated-i1-GGUF:IQ3_M` is a particularly knowledge-dense 6.2GB model that should leave room for a decent bit of context and other VRAM usage without offloading too much. Ministral loves spitting out heavily formatted text (bold, italics, headings, tables) unless you very carefully convince it not to, so I personally find it a bit obnoxious, but it has good information in it, and it looks nice if you're into that, I guess.

`ollama run hf.co/noctrex/Llama-3.3-8B-Instruct-128k-abliterated-GGUF:Q8_0` is a good larger-size (8.5GB) option that I use a lot. It skips the thinking step and goes straight to the answer, gives good, reliable results, and supports lots of context (you'll need to set an environment variable for ollama to use more than the 4096-token default, and more context uses more VRAM). I like Llama models a lot.
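For reference, here's a sketch of two ways to raise that context limit, assuming a reasonably recent ollama release (the 16384 value and the `llama33-8b-16k` tag are just examples I picked; adjust to taste and to your VRAM):

```shell
# Option 1: set a server-wide default context length via environment variable,
# then start (or restart) the ollama server with it.
OLLAMA_CONTEXT_LENGTH=16384 ollama serve

# Option 2: bake a larger context into a custom model tag with a Modelfile,
# so only this model gets the bigger window.
cat > Modelfile <<'EOF'
FROM hf.co/noctrex/Llama-3.3-8B-Instruct-128k-abliterated-GGUF:Q8_0
PARAMETER num_ctx 16384
EOF
ollama create llama33-8b-16k -f Modelfile
ollama run llama33-8b-16k
```

The Modelfile route is handy if you only want the bigger window for one model, since context memory is allocated per loaded model.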

If you've got plenty of VRAM (or don't mind that it will run much slower by offloading to system RAM), `ollama run hf.co/mradermacher/Harbinger-24B-biprojected-norm-preserving-abliterated-i1-GGUF:Q4_K_M` is a 14GB model I stumbled across that is supposed to be for writing stories and roleplaying, but it keeps impressing me with its reliability, straightforward instruction following, and broad knowledge base for general-purpose tasks.

Good luck! It sometimes takes a while for people to figure out effective ways to abliterate the latest models (which are also supposedly getting more sophisticated in their safety rules), so most of these abliterated models tend to be a little older, from what I've found. And shoutout to mradermacher, whoever you are, who takes all these various models and makes quantized imatrix GGUF versions of them so we can easily run them efficiently on consumer hardware. I presume they are a lovely fellow!
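If you're wondering why those quantized files land at the sizes above: a rough rule of thumb is parameter count times bits per weight, divided by 8, gives gigabytes. A back-of-envelope sketch (the bits-per-weight figures are my own approximations for those quant formats, not numbers from this thread):

```shell
# Rough download/VRAM estimate in GB: params (billions) * bits per weight / 8
estimate_gb() {
  awk -v p="$1" -v b="$2" 'BEGIN { printf "%.1f\n", p * b / 8 }'
}

estimate_gb 8 8.5   # 8B model at Q8_0 (~8.5 bits/weight incl. overhead) -> ~8.5 GB
estimate_gb 24 4.8  # 24B model at Q4_K_M (~4.8 bits/weight) -> ~14.4 GB
```

That lines up with the 8.5GB Llama and 14GB Harbinger figures above; add a couple of GB on top for context and runtime overhead when sizing against your VRAM.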

[–] PiraHxCx@lemmy.dbzer0.com 2 points 5 days ago* (last edited 5 days ago)

That was a very cool reply. I don't use chatbots that much, but I'll consider running one locally; for now I just occasionally ask duck.ai or lumo something and end up like fuckingshitfuckingchatbotcantdoanythingrightgoddammit

I mostly use them to grammar-check me when I'm writing something I don't want to mess up in a foreign language. The other day I was writing a movie review, and in the middle of it I wrote something like "Has Van Damme ever made a movie that isn't gay porn?", and instead of grammar-checking the review the chatbot was like, "Hey, it's not nice to say those things about a public figure. There are no records of Van Damme making pornographic movies and he is not gay; those are only rumors." fuckingmotherfuckerchatbotbloodybastard!

Well, at least it keeps my hatred for AI companies fresh.

load more comments (7 replies)
load more comments (14 replies)