this post was submitted on 21 Apr 2026
896 points (99.4% liked)

Fuck AI

[–] supersquirrel@sopuli.xyz 1 points 3 days ago (3 children)

Yeah, and FPV drones flown by a human pilot can easily outperform "AI", so...

[–] sobchak@programming.dev 1 points 2 days ago* (last edited 2 days ago) (1 children)

Idk if that's true. I believe autonomous drones can now beat humans in FPV racing. Ukraine now has autonomous drones that can't be jammed and function under GPS denial, so they can go further than fiber optic tethered drones.

[–] supersquirrel@sopuli.xyz 1 points 2 days ago* (last edited 2 days ago)

I believe autonomous drones can now beat humans in FPV racing.

On a repeating, static track, maybe. We, however, are talking about war, where nothing repeats quite the same and the landscape is always changing and being changed.

[–] IrateAnteater@sh.itjust.works 2 points 3 days ago (1 children)

On a 1:1 basis, maybe. You can expect that to change as more flight data is pulled in. This type of very narrowly defined problem with enough training data is where AI becomes actually useful, as opposed to the online slop generators.

The other half of the AI vs human drone story is drone swarms. Even if the humans remain better pilots, there's always going to be a limit to how many drones a single person can fly, whereas AI can just keep scaling up the quantities.

AI is bullshit, but it's dangerous bullshit.

[–] supersquirrel@sopuli.xyz 1 points 3 days ago* (last edited 3 days ago) (2 children)

Sure, and there is always going to be a limit to how many drones are useful? Especially with the proliferation of airburst ~30mm guns mounted on helicopters and ground-based vehicles? The Gepard ain't a new idea...

Pattern matching is a useful tool for war right up until the enemy evolves.

[–] Canconda@lemmy.ca 1 points 3 days ago (1 children)

You said humans only require PB&J. Last I checked, military pilots require hundreds of hours of flight training, multi-million-dollar helicopters, and preexisting training facilities with experienced faculty.

[–] supersquirrel@sopuli.xyz 1 points 3 days ago* (last edited 3 days ago) (1 children)

Well, yes, and by the way, all of that is a prerequisite to having an effective unmanned aerial force as well.

The thinking error you are making here is believing AI gives you a way to bullshit around the need for human talent and human-curated datasets constantly updated by human experts.

There is not.

[–] Canconda@lemmy.ca -1 points 3 days ago* (last edited 3 days ago)

Wow, you really don't get it. The fact that AI can't sustain your anti-AI helicopters IS MY POINT.

When the helicopters and human pilots run out, they'll still have less effective but still deployable AI drones.

Without that preexisting infrastructure, you can't bring the anti-AI helicopters back.

[–] IrateAnteater@sh.itjust.works 1 points 3 days ago (2 children)

Sure, and there is always going to be a limit to how many drones are useful?

In a word, no. Not for the foreseeable future. Entropy being what it is, only one has to get through, and I could theoretically send 1,000,000 at once. Air burst rounds may take out 99.9%, but that still leaves a lot of damage occurring.

[–] Canconda@lemmy.ca 0 points 3 days ago

Air burst rounds may take out 99.9%, but that still leaves a lot of damage occurring.

Same thing with AI cyberattacks. It won't be the complexity so much as the scale: not just the number of attacks, but the ability to instantly cross-analyze a database of hardware and software vulnerabilities to identify weaknesses in outdated or poorly configured networks.

The people cherry-picking examples of AI failures, such as coding and fact-checking, are ignoring the raw input and output potentials at play.

[–] supersquirrel@sopuli.xyz 0 points 3 days ago* (last edited 3 days ago) (1 children)

This is not how warfare works; there is always a practical limit to how much force of a particular type is worth concentrating. You don't know what you are talking about. You keep telling me I should change my thinking based on projections where you assume infinite energy will be provided by datacenters, or that it is easy to concentrate an infinite number of drones against the enemy without there being counters or inherent inefficiencies.

Just last week Ukraine shared a video of a single drone pilot destroying a Russian Rubicon logistics hub for Russian drones. One person destroying a concentration of many smaller drones is not an impossibility; I don't know why you think it is. Scaling this strategy up is also less useful when you don't have the human intelligence work done to actually locate the target and bring together all the necessary operational elements to hit it.

AI is a useful pattern matching tool when the patterns don't change much, that is about it.

It is the humans you need to worry about not the AI.

[–] Canconda@lemmy.ca 0 points 3 days ago* (last edited 3 days ago) (1 children)

Air burst rounds may take out 99.9%, but that still leaves a lot of damage occurring.

This is literally how war is playing out right now. I can't take you seriously.

[–] supersquirrel@sopuli.xyz 1 points 3 days ago* (last edited 3 days ago) (2 children)

Why are you obsessed with the fact that counter-UAV solutions aren't 100% effective?

Nothing is 100% effective?

[–] IrateAnteater@sh.itjust.works 1 points 3 days ago (1 children)

Exactly. And that's the entire point. If the human success rate is (just for example) 1 out of every 100, I only have so many pilots, so I have a capped number of successes. If I have AI pilots, even if they are only half as good as humans, I can now increase my total number of successes, since I have effectively an infinite number of pilots.

That ability to bring more at once also opens more options. Overwhelming defenses may not be possible if you can only fly 1,000 drones simultaneously due to the number of pilots available. Throw 10,000 AI drones at it, and a 99% attrition rate still gets you 100 drones on target.
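The arithmetic behind this claim is simple enough to sketch; all figures below (the 99% attrition rate, the drone counts) are the commenter's hypotheticals, not real military data:

```python
# Back-of-envelope attrition arithmetic using the figures in the comment above.
# All numbers are illustrative hypotheticals, not sourced data.

def drones_on_target(launched: int, attrition_rate: float) -> int:
    """Expected number of drones surviving a given attrition rate."""
    return round(launched * (1 - attrition_rate))

# Pilot-limited force vs. an AI swarm at the same 99% attrition:
print(drones_on_target(1_000, 0.99))   # 10 on target
print(drones_on_target(10_000, 0.99))  # 100 on target
```

The point of the sketch: at a fixed attrition rate, successes scale linearly with how many drones you can launch, which is exactly where removing the per-pilot cap matters.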

[–] supersquirrel@sopuli.xyz 1 points 3 days ago (1 children)

You have again done the pattern of invoking an infinity with hand-waving. 10,000 drones is an immense amount of money. If each of those drones costs $2,000, a very reasonable price for a military attack drone, that is $20 million you just dropped on a single attack.

$20 million is a serious amount of money; you can buy a whole lot of counter-UAS systems for that, and those systems aren't a single-use, disposable tool like the assault drones you dumped all of your money into.

A Shahed costs closer to ~$30,000 than $2,000, so this point holds even more true for longer-range flying bombs.

[–] IrateAnteater@sh.itjust.works 1 points 3 days ago

Considering modern militaries throw around missiles that cost north of $1,000,000 each, $20,000,000 for 100 successful strikes on a defended target is still operating at a discount.
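Putting the thread's rough figures together (all prices here are the commenters' estimates, not sourced procurement data), the cost-per-strike comparison works out like this:

```python
# Cost comparison using the rough prices quoted in the thread
# (commenters' estimates, not sourced procurement data).
drone_cost = 2_000        # cheap attack drone, per the comment
swarm_size = 10_000
swarm_cost = swarm_size * drone_cost          # $20,000,000 for the swarm
survivors = round(swarm_size * (1 - 0.99))    # 100 drones past 99% attrition
cost_per_strike = swarm_cost // survivors     # $200,000 per successful strike

missile_cost = 1_000_000  # modern cruise missile, per the comment
print(cost_per_strike, missile_cost)          # 200000 vs 1000000
```

On these assumed numbers, each successful drone strike costs about a fifth of one missile, which is the "discount" being claimed.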

[–] Canconda@lemmy.ca -1 points 3 days ago* (last edited 3 days ago)

Nothing is 100% effective?

Oh, now you're okay with things not being perfect? Fuck, buddy, lmao. You're being ridiculous.

[–] Canconda@lemmy.ca 1 points 3 days ago* (last edited 3 days ago) (2 children)

But, like, obviously the video you watched to determine that the human pilot was better will be used to train that AI.

This is what scares me the most: how many of the people who are against AI appear to have decided it will never supersede human capabilities.

I feel like a lot of people are expecting the stock market bubble bursting to solve this? IMO that's just going to open Pandora's box.

[–] supersquirrel@sopuli.xyz 3 points 3 days ago* (last edited 3 days ago) (1 children)

Look, if this is a religious/spiritual belief system around AI for you, as it is for many people, I can't convince you AI is bullshit. But there is simply no evidence that AI is going to take over, that tech companies aren't completely bullshitting, that AI is profitable, or that people actually want it for work... MOST IMPORTANTLY, only a SMALL FRACTION of the datacenters needed to build these supposedly unstoppable AIs are actually breaking ground and being built.

The thing that limits AI is that we physically cannot build enough datacenters to mitigate its stupidity and hallucination enough to compete with humans on things that actually matter.

Yes, you could train an AI on videos of successful human-piloted interceptions, but you still have to make that dataset, you still have to update it every time battlefield tactics change, and you still have to figure out how to filter out all the nonsense the AI introduces in training.

In other words, AI is only unstoppable if it is fed an impossible amount of energy; invoking that is a trick used to handwave away the serious problems at the heart of current AI/LLM-based design.

A human, by contrast, is powered by a PB&J sandwich...

[–] Canconda@lemmy.ca 0 points 3 days ago* (last edited 3 days ago) (1 children)

"It's not happening fast enough so it's not happening at all"

Between opening with the ad hominem and closing with a laughable strawman... you've got your head in the sand, my dude.

AI being bullshit is not mutually exclusive with it being an existential threat to our freedom.

[–] supersquirrel@sopuli.xyz 3 points 3 days ago* (last edited 3 days ago) (1 children)

You are the one acting foolish, assuming that because you can imagine something becoming existentially dangerous, when it has shown zero evidence of doing so, we should prioritize focusing on it over things like Climate Change that are actual existential threats.

Yes, anything could become an existential threat. I will, however, focus on the most realistic and likely existential threats, because otherwise I will spend my whole life worried that the sky is falling while propping up the value of snake-oil-salesman tech companies.

I would suggest ingesting less scifi slop about AI and taking an interest in actual existential issues threatening us.

Where is my strawman argument?

[–] Canconda@lemmy.ca 0 points 3 days ago (1 children)

Where is my strawman argument?

it has shown zero evidence of doing so

Like, were you born in 2022? You're so angry about this. This is exactly what I'm afraid of: that a lot of people who aren't hopping on board with AI will just sit on the internet and gloat from their limited perspective until it's too late for us to do anything collective that doesn't involve violence.

[–] supersquirrel@sopuli.xyz 0 points 3 days ago* (last edited 3 days ago) (1 children)

Again, stop trying to fear monger about a perceived existential threat while providing no evidence, when there are ACTUAL existential threats like Climate Change that we are facing.

[–] Canconda@lemmy.ca -1 points 3 days ago* (last edited 3 days ago) (1 children)

fear monger

Calm down. This is Lemmy. Like 8 people are going to see this, and all of them can form their own opinions. Unlike you.

[–] supersquirrel@sopuli.xyz 1 points 3 days ago* (last edited 3 days ago) (1 children)

Why do I have to calm down because this is lemmy?

Are you not putting effort into your conversation or investing yourself in defending your points just because you don't think this post will be popular?

Do you think I give a shit about that?

[–] Canconda@lemmy.ca -1 points 3 days ago (1 children)

No, I'm not putting effort into a conversation with an impudent ass.

[–] foodandart@lemmy.zip 4 points 3 days ago (2 children)

LOL! Alright, you two... do I have to send you to the timeout corners?

LOL!

[–] chocrates@piefed.world 1 points 3 days ago (1 children)

For sure, I think the current crop of AI companies might die when the market crashes, but the technology is here to stay.

[–] Canconda@lemmy.ca 1 points 3 days ago* (last edited 3 days ago) (1 children)

Thank you. Exactly. All those assets will just change hands. Probably not even that: when the companies are liquidated, they'll probably be bought up by the shareholders that were pushing for this in the first place.

If the USA doesn't have an EPA or other agencies stepping in, then we may see real ecological damage unless Americans physically prevent their construction.

[–] foodandart@lemmy.zip 1 points 3 days ago (1 children)

I think electricity bills quadrupling and quintupling in cost will step in and put a pinch on much of this horseshit. States are already stepping up to limit datacenters; Maine just put a ban on them in the state. Infrastructure matters.

[–] chocrates@piefed.world 1 points 3 days ago* (last edited 3 days ago)

There is still innovation in the space. There are some models that aren't as GPU- (and maybe power-) hungry.

I think we need to be prepared to confront LLMs, and whatever comes after, for the long haul.

Today's iteration sucks for so many reasons, but the core of the problem, I think, will be there for a long time.

Hell, we may crack fusion in the next 20 years, and we will still have the fundamental problems we are grappling with.