this post was submitted on 27 Apr 2026
1266 points (99.1% liked)
Political Memes
However, that's not the point. The point is that companies, when brought into, say, the Pennsylvania Supreme Court on obscenity charges, can say they did everything technically possible to filter out those words, so they are not liable under whatever law applies -- this usually makes prosecutors fail at trial, or reconsider bringing charges at all.
If you let anyone say the no-no words, and someone sends a rape threat to Nancy Pelosi on your platform, you could be liable for harassment, hosting obscenity (a real charge in multiple US states), and other such financial annoyances.
So it is cheaper to just have a set of policies and procedures in place, even if it objectively makes your platform worse, and even if it objectively is not effective, simply because it looks better in court and gets you out of more fines. See: the New Zealand law spurred on by the Christchurch shooter, which essentially requires every website to censor violent images and manifestos or pay some ridiculous $5 million NZD fine for every DAY the content stays up after being reported. If companies can use AI to filter away anything that breaks a law like that, even if it harms other users, they're going to do so.
Companies, like all parasites, will actively shy away from poisonous food sources and pick other directions to go in.
Is a veiled threat not a threat that would hold up in (American) court?
It is, but it takes the blame off the platform entirely.
To word it a different way: if I ran a service where, no matter what you wanted to say, I would write down your words, read them, and run through town shouting them until I found the person they were directed at and then shouted them at that person... I would be as liable for the words being said as the person paying me to say them.
If, however, I have a strict policy where I will only do the above after I carefully review and moderate your words, and you managed to sneak in a tongue twister that says something dirty that I didn't realize until after I shouted it... I am no longer liable. I did everything a reasonable person could expect; you are the only one liable.
When people sue in the US (and when companies really fuck up), they sue the person liable plus every other party that could plausibly be included. The parties then shift blame around pretrial and each try to prove they are not liable for one reason or another, to get dismissed from the case. If that fails, each party sued essentially gets a trial on its own specific liability, which has to be separately proved in court; and if it makes it that far, in front of a jury (or a panel of judges, or a single judge, depending on the state and the kind of action).