this post was submitted on 23 Apr 2026
763 points (94.2% liked)
Fuck AI
6809 readers
3126 users here now
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
founded 2 years ago
What is dangerous AI?
AI with an attached 7.62mm machinegun.
Brought to you by Boston Dynamics, whoops, I mean Black Mirror.
"Phased plasma rifle in the 40-watt range."
"Hey, just what ya see here, pal!"
I'd go with weapons systems designed to remove humans from the decision. Tools that people use to approve medications or treatment without actually understanding what they're approving. Cars that remove human judgement from uncertain circumstances. AI systems that make employment decisions to shield people from responsibility or legal culpability.
Basically any situation with real consequences where you're taking a person out of the responsibility or decision making loop.
Also certain non-LLM AI technologies for extracting information and patterns from interconnected data sets: basically, automated mass surveillance systems.
Not sure if serious or not, but just one tangent could be AI that is connected to the internet whilst also being 'uncontained/unrestrained'. I imagine there's a fair bit of damage that could be done by software which can distribute itself and alter its own code or create new code under the radar.
Nah, you're falling for the hype. AI systems currently use a fairly chunky amount of compute resources and storage space. It doesn't matter whether it's able to distribute itself if it can't really move, because it's too big.
Then there's the part where it's not volitional like we are. Current techniques are basically pattern recognition and pattern extrapolation. They need an input to feed off. They don't need to be contained because they don't want to escape. They don't want at all.
The part of their code that can be edited isn't the part that matters. That part's the part that shuffles requests into the system and provides tools for interoperation with other stuff. The actual LLM is a big, inscrutable blob of numeric descriptors that map to other numeric descriptors to establish a set of weights for pattern handling. Editing it is called "training" and requires immense resources.
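To make the split concrete, here's a toy sketch (not any real model; the sizes and names are made up for illustration). The `forward` function is the kind of glue code that's trivially editable, while everything the model "knows" lives in the opaque weight arrays, which you can only change meaningfully by retraining:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained model: in a real LLM these would be billions of
# floats whose individual values are unreadable to a human.
W1 = rng.standard_normal((4, 8))   # input -> hidden weights
W2 = rng.standard_normal((8, 3))   # hidden -> output weights

def forward(x):
    # The "editable" wrapper: you can rewrite this freely, but it doesn't
    # change what the model has learned -- that lives entirely in W1/W2.
    h = np.maximum(x @ W1, 0)      # ReLU hidden layer
    return h @ W2

out = forward(np.ones(4))
print(out.shape)
```

An LLM editing its own wrapper code wouldn't alter its capabilities any more than rewriting `forward` above changes the numbers in `W1` and `W2`.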
You can grab some pretty good models freely on the Internet and try to build your own AI-powered worm. It's not nearly as useful as just creating a worm.