This honestly strikes me as a story people don't understand. Mass surveillance is not lawful and the government thus agreed not to do that. However, they still needed the guardrails removed. People interpret this as them wanting mass surveillance, but that's not necessarily true.
I work for a company that uses AI for legal work, processing and analyzing court cases, discovery documents, etc. We had problems with AI models like Gemini and GPT refusing to do what we needed because of guardrails against violence and abuse of minors. They refused to discuss and analyze cases that involved murders described in detail, or cases involving child molestation, etc. We weren't using it for unlawful purposes; very much the opposite.
I feel like if people knew that we, like the DoD, had to use uncensored models that allowed such things, people would complain, "Wow, you guys are trying to remove guardrails for child exploitation and violence! How terrible!"
Is it so shocking that a military needs their AI to work with such material even if they're not acting on it? They cannot afford to have AI in critical moments be like "sorry, my guidelines say I can't help with this."
This seems like the time Trump advised pregnant women against using Tylenol, so people started buying and using it in protest. This is yet another reaction to Trump punishing them, but people are pretending Anthropic is making a stand for the people and OpenAI somehow isn't. It's not that simple. Though now Anthropic is eating it up, especially after this last week, when they started pissing on the entire tech community that had started hating on them.
Shame on you and your company for introducing AI at all into such sensitive matters. This issue is not just about security and privacy but about outsourcing human judgement when human life is on the line.
Wait. You still trust human judgment that much?
My company has facilitated filings for hundreds of self-represented litigants who cannot afford lawyers, and has helped swamped public defenders with more cases than they could ever hope to defend without just taking plea deals.
Meanwhile you probably sit around complaining about the prison-industrial complex and the corrupt justice system. Doing nothing. Taking the moral high ground while being utterly worthless.
Platitudes aren't helping anyone.
The military, the department of government responsible for mass murder, should not have any fucking AI in their system, absolutely anywhere. Doubly so without any sort of guardrails.
Why? I can't think of any reason that would not also preclude their use of all computer-assisted tools.
Because no other computer-assisted tools are straight up fucking wrong half the time?
If your AI tools are wrong half the time, you're using them wrong. My legal AI is linked to databases of statutes and case law, providing results more reliable than most legal professionals produce.
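(For anyone wondering what "linked to databases" means mechanically: it's retrieval-augmented generation, retrieve real sources first, then make the model answer only from them. A minimal sketch; every helper name here is hypothetical, not any real product's API:)

```python
# Minimal RAG sketch: ground the model in retrieved statutes/case law
# instead of letting it answer from memory. All names are hypothetical.
from my_vector_store import search_cases  # hypothetical curated legal index

def answer_legal_question(llm, question: str) -> str:
    # 1. Pull the most relevant passages from a curated legal database.
    passages = search_cases(question, top_k=5)
    context = "\n\n".join(p.text for p in passages)
    # 2. Constrain the model to the retrieved sources only.
    prompt = (
        "Answer using ONLY the sources below, and cite each one. "
        "If the sources don't cover the question, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return llm.generate(prompt)
```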
No, I'm not using it wrong. It's just wrong. This is not my opinion; this is a statistical fact that's been studied over and over again. People are already being harassed and endangered and jailed by cops over their own fucking eyeballs or government documents. Now imagine those cops have fucking fighter jets and missiles and give absolutely no fucks. You and your AI can get absolutely fucked. I hope you're disbarred like the other dumbass attorneys who show up with hallucinated laws and cases.
It's not factual. You're just an idiot typing a single prompt, probably with no agentic loop or curated database to keep it in line. Then you get mad like a caveman wondering why sticks only give fire half the time, because you don't fucking understand what you're working with.
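(An "agentic loop" here just means: generate, check every citation against the curated database, and retry until they all resolve. A rough sketch, reusing the hypothetical helpers from the sketch above plus made-up extract_citations and lookup_citation:)

```python
# Agentic verification loop sketch: regenerate until every cited case
# actually resolves in the curated database. All helpers are hypothetical.
from my_vector_store import extract_citations, lookup_citation

def answer_with_verification(llm, question: str, max_tries: int = 3) -> str:
    query = question
    for _ in range(max_tries):
        draft = answer_legal_question(llm, query)
        # Reject any citation that doesn't exist in the database.
        bad = [c for c in extract_citations(draft) if lookup_citation(c) is None]
        if not bad:
            return draft  # every citation checks out against real records
        # Feed the failures back so the next attempt can self-correct.
        query = question + f"\n\nPrevious draft cited unverifiable sources: {bad}"
    raise RuntimeError("no fully verified answer within the retry budget")
```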
I'm not working with anything. I did not conduct these tests. They're conducted by scientists. It took me 12 minutes to realize how completely fucking pointless they are. They even tell you as much, right at the bottom. "Please verify critical facts". If I have to go and fucking Google everything it tells me anyway to verify it then what is even the point?
There are bad-faith actors in government breaking the law every day. People working cases like the ones you did have way more integrity than the bad-faith actors in the DoD.
It sounds like the assertions here are:
"Mass surveillance is not lawful and the government thus agreed not to do that." Which is to say- the government will not do something if it is illegal.
The greater good of the work that the Department of Defense needs to do may justify infringement of some individual liberties.
The Department of Defense is run by lawful actors who can be trusted to make lawful decisions based on their own discretion.
Is this right?
Government can certainly do illegal things. But why ever enter into a contract with ANY organization or business if you're worried they might violate the terms or break the law?
Generally not.
Even if they aren't, Anthropic started doing business with them months ago. Backing out now for this reason would be a pretense.
I have a question about those guardrails. At any point, did any of your accounts get disabled for discussing abuse in this (or any) context?
(I'm guessing this happened zero times, which probably means those guardrails are just irritating suggestions designed to keep you prompting...)
Not cancelled. But they may have been flagged internally, I don't know.
We weren't violating their terms, only their built-in model guidelines. American models are usually very sensitive; they'd rather err on the side of blocking content than risk allowing questionable content that is lawful.
But even with adjusted prompts, they didn't yield reliable results. So we have to use uncensored open-weights models for many things. They're not SOTA, but better than nothing.
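(For the curious, "uncensored open-weights models" just means running the model yourself, so there's no provider-side refusal layer between you and the weights. A minimal sketch using Hugging Face transformers; the model name is a placeholder, not a recommendation:)

```python
# Run an open-weights model locally; no hosted API that can refuse the task.
# The model name below is a placeholder; substitute a checkpoint you've vetted.
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model="some-org/some-open-model",  # placeholder
    device_map="auto",                 # spread across available GPU/CPU
)

result = generate(
    "Summarize the key facts of the following homicide case file:\n...",
    max_new_tokens=512,
)
print(result[0]["generated_text"])
```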