this post was submitted on 29 Apr 2026
245 points (95.9% liked)
Fuck AI
6809 readers
1096 users here now
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
founded 2 years ago
this is more akin to asking a library for information
It's really not. It's more like gathering a crowd of a few billion people, asking them a question, hearing the loudest answer, and assuming it's correct.
As far as I know, OpenAI is not hosting the largest forum in the world.
No, they're just training their model on it?
https://openai.com/index/openai-and-reddit-partnership/
Like isn't this common knowledge?
There is a huge difference between hosting an archive of conversations that took place, and providing a place where you can participate in conversations.
This is the equivalent of looking at the archives of debates transcribed in newspapers. When you do that, you are not participating in a debate; you are reading the transcript of one.
The model responds based on the conversations it was trained on, but the response is bespoke. It's not simply showing a browsable list of responses; it's generating particular ones.
It's literally feeding these mentally ill people responses that a human, with the same context, would be legally culpable for.
If you go to the library and tell the librarian you are planning to shoot up your school and ask where you can find books to help with that, I bloody well hope they would report you. Because it sounds like that is basically the equivalent of what happened here. It's not like someone using a library to privately access information and then using it in a harmful way. OpenAI apparently knew exactly what this person was doing on their platform and (allegedly) decided it was better for their bottom line to look the other way. At that point they have a clear moral obligation to act, imo.