
Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.


Lawton Chiles Middle School in Oviedo went into lockdown Tuesday.

Its AI weapons detection system flagged a student holding a clarinet in a way similar to how someone might hold a rifle. It “triggered the Code Red to activate,” the school’s principal told families in an automated message.

[–] FriendOfDeSoto@startrek.website 20 points 21 hours ago (3 children)

Is this really a case of fuck AI? To anybody outside the US this paragraph reads like fucking satire. From within, where kids learn how to crouch under desks, hide behind bulletproof whiteboards or something, and lock down better than the CIA, this doesn't really move the needle, does it? The trauma is already there with all the drills and is eternal, as is the 2nd amendment. And this is one area where you would prefer a false positive over a false negative. So for me this isn't so much fuck AI as fuck every lawmaker in the US since the Civil War.

[–] grindemup@lemmy.world 6 points 7 hours ago (1 children)

I would argue that your point applies to every use case for AI. AI isn't responsible for any of the bad shit it's associated with; the blame lies with humans 100% of the time.

[–] FriendOfDeSoto@startrek.website 1 points 6 hours ago (1 children)

Is every scenario with so-called AI in it caused by humans? Sure. That's not really my point, though. It was humans who created the dumb situation around private gun ownership that eventually made school shootings something schools need to prepare for. I would tolerate the use of so-called AI here under these dumb circumstances, and moreover would tolerate a false positive like this. I feel similarly positive about the use of models in medicine - if and when it helps. Or as a tool for people with disabilities. Et cetera.

Normally we lambast the very dumb applications of so-called AI here: the ones that get lawyers in trouble, the ones forced into areas where they're unnecessary, the ones that senselessly boil away drinking water, or that ask children for nudes, or - sadly - the ones that drive teenagers to suicide. We lambast all the peddlers of so-called AI with their dumb predictions about how their faulty products will revolutionize everything. That's the spirit of "Fuck AI." My point was that this story is less in keeping with that spirit. So-called AI might actually help keep a bad situation from getting worse.

[–] AnarchistArtificer@slrpnk.net 1 points 1 hour ago

I see your point, but the concerning bit about the tech being used in cases like this is that it helps pave the way for more mass surveillance. Plus, a high rate of false positives doesn't necessarily mean a low rate of false negatives - in other words, it doesn't tell us whether the system is actually effective.

The caveat you mention near the end of your first paragraph is key here: "if and when it helps". So many of these systems have not been proven to work (or, in some cases, have been proven not to work), and they are exorbitantly expensive. Given that AI has been pushed into so many domains where it is not wanted or helpful, I am not particularly hopeful about this particular case, even though we only have very limited info from this false positive. The whole mess is complicated by the fact that it's exceptionally hard to prove or disprove whether these systems work, because the vast majority of them are black boxes surrounded by even more opaque financial fuckery. To me, this definitely fits the spirit of the community.

[–] prole@lemmy.blahaj.zone 2 points 6 hours ago

It's fuck AI because the system should already be aware that clarinets exist in schools.

[–] shrugs@lemmy.world 16 points 16 hours ago

It's so sad. And here is some food for thought. Change my view:

All these drills will cause more kids to think a school shooting is a viable idea when they can't take bullying or abuse any longer. In the long run, this causes more dead children than any drill is able to prevent.

Not to mention all the other psychological problems this creates for young children (living in fear...).

It's like the anti-drug campaigns held in schools in the '80s and '90s: DARE.

https://priceonomics.com/dare-the-anti-drug-program-that-never-actually/

Students who went through DARE weren't any less likely to do drugs than the students who didn't. In fact, there's some well-regarded research showing that some groups of students were actually more likely to do drugs if they went through DARE.

People just don't understand how the brain works. Ideas grow rather than shrink the more you think about them.

If you want to stop smoking, it's ineffective to tell yourself "I don't want to smoke," because that just reinforces the thought of smoking.

Publish a book about suicide and suicide rates increase in the population.

This is common knowledge, yet we still tell kids through weekly drills that a school shooting is something likely to happen. A few will take this as the basis for their horrible actions, and others suffer from the needless fear it instills.