this post was submitted on 29 Apr 2026
245 points (95.9% liked)
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
no, not fuck AI. keep the internet private
Agree. Sam Altman is the face of evil, just not for keeping user data secure (at least not in this particular instance, in this particular way; they are 1000% selling your data in many other ways)
I got some news for you: the internet hasn't been private in a VERY long time.
Absolutely not.
They already have a fucking way to prevent this, and they opted not to use it for PR reasons. They are complicit: they provided a service that aided the planning, then decided to continue that service and allow further planning.
If you post a message to a website, that message is not private from the website regardless of the method they use to receive it. They have the moral responsibility to respond to threats to life regardless of the legal responsibility they are arguing they don't have.
If I put a cork board up in front of my house and someone pins threats to it, then once I notice them it's my responsibility to act on that.
this is more akin to asking a library for information
it's really not. more like gathering a crowd of a few billion people, asking them a question, hearing the loudest answer and assuming it's correct
as far as I know, OpenAI is not hosting the largest forum in the world
No, they are just training their model on it?
https://openai.com/index/openai-and-reddit-partnership/
Like isn't this common knowledge?
There is a huge difference between hosting an archive of conversations that took place, and providing a place where you can participate in conversations.
This is the equivalent of looking at the archives of debates transcribed in newspapers. When you do that, you are not participating in a debate, you are reading the transcript of a debate.
The model responds based on conversations it's trained on? It's a bespoke response. It's not simply showing a browsable list of responses, it's giving particular ones.
It's literally feeding these mentally ill people responses that a human, with the same context, would be legally culpable for.
If you go to the library and tell the librarian you are planning to shoot up your school and ask where you can find books to help with that, I bloody well hope they would report you. Because it sounds like that is basically the equivalent of what happened here. It's not like someone using a library to privately access information and then using it in a harmful way. OpenAI apparently knew exactly what this person was doing on their platform and (allegedly) decided it was better for their bottom line to look the other way. At that point they have a clear moral obligation to act, imo.
They train their AI on your data.
That is not really a privacy issue.
I am all for privacy, but you can't let a company collect all that data to begin with (especially one that clearly states it will leak your information, and that has a history of disrespecting privacy and copyright) and then cry about privacy.
That company wanted to have the data. Even if you don't want that data shared with the government, they carry the responsibility of having been able to do something.
A gun manufacturer carries the weight of responsibility for what is done with those weapons. That is just how it is. Even if they meet every requirement, and even if you don't want to ban the existence of these weapons, the manufacturer carries the responsibility.
If you want a private internet, use a private internet. Stop supporting big data while crying about surveillance. Big data is always one law away from a surveillance state.