this post was submitted on 21 Mar 2026
86 points (97.8% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.


I worked as a software engineer.
AI is supposed to replace programmers, or at least help you write code.

But I never really wrote a lot of code in the first place??
I looked up libraries that do what I need and then wrote a bit of code in-between to link our API or GUI to the right functions of the selected library.

And these libraries were tested, functional and most of all consistent and reliable.
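To make the point concrete, here is a minimal sketch of the kind of glue code described above: a thin wrapper that connects an application to a well-tested library function instead of reimplementing the logic. (The request-handling function and its payload format are hypothetical; the library call is from Python's standard library.)

```python
# Glue code: parse the application's input, then delegate the
# actual work to a tested, deterministic library function.
import statistics

def handle_average_request(raw_values: str) -> float:
    """Parse a comma-separated payload and hand the real work
    to the standard library instead of writing it ourselves."""
    values = [float(v) for v in raw_values.split(",")]
    return statistics.mean(values)

# The same input always gives the same output.
print(handle_average_request("1,2,3,4"))  # 2.5
```

The wrapper is a few lines; the hard part (a correct, tested `mean`) is someone else's reliable wheel.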

Now what do you want me to do? Ask a non-deterministic LLM to implement the code from scratch every time I need it in my project?
That doesn't make sense at all.

That's like building a car and asking somebody to make you a new wheel every day. And every wheel will be slightly different from the previous one. So your car will drive like shit.

Instead, why not just ask a reputable wheel manufacturer to make you 4 wheels? You know they will work. And in the case of programming, people are literally giving away good, reliable wheels for free! (free libraries and APIs)

Why use LLMs at all?

[–] homes@piefed.world 13 points 7 hours ago (1 children)

Very little of what we’re told to use AI for makes any sense, because people were already trained and already very good at doing those things. Using AI to do them instead not only costs money to implement, it costs many people their jobs through layoffs, and the end result is usually crap. Then there’s a huge investment of money and time in fixing the crap so the end result is finally a usable product.

The reason why AI is being pushed so hard is because of all the money that’s been invested in it and the lie that it’s actually worthwhile. No one wants to admit that it isn’t, at least not the people who invested so much money and time into it.

Eventually, the bubble is going to pop, and our entire economy will probably crash as a result.

[–] lime@feddit.nu 18 points 7 hours ago (2 children)

there is one area where it excels though: bullshitting. that's why c-levels and aspirational middle management are so impressed, because their roles are all about bullshit.

[–] Mniot@programming.dev 4 points 5 hours ago (1 children)

Even this is disappointing. LLM bullshit is only impressively fluent compared to older generative systems. (It is very impressive compared to them. It just should have stayed in academia longer and its components could develop into useful things. Instead everyone's falling over themselves about a kick-ass demo.)

[–] lime@feddit.nu 5 points 5 hours ago

yeah it's the middle-management thing again. "wow it can answer emails" "wow it can shit out demos" "wow it can follow an api spec". as internet hippo so aptly put it, they saw that it could do the job of a manager and concluded that it was sentient rather than coming to the correct conclusion that managers aren't.

[–] homes@piefed.world 6 points 7 hours ago* (last edited 7 hours ago) (1 children)

I’d argue it’s just that people who operate at those levels are terrible at detecting AI bullshit. If you spend more than the bare minimum of effort (or intelligence), it’s pretty obvious when you’re reading AI slop.

So, maybe it’s useful for that, but not particularly better at it than a human.

[–] lime@feddit.nu 7 points 7 hours ago (1 children)

yeah some people seem extremely susceptible.

i will admit that my detection skill has been improved by using local models, because i studied machine learning at uni twelve years ago and jumped at the opportunity when the hype cycle began. but it just hasn't gotten good at anything concrete. it improves marginally at certain tasks, only to fail in more subtle ways every time. it's getting better not at being a tool, but at disguising itself as one.

[–] homes@piefed.world 2 points 6 hours ago* (last edited 6 hours ago) (1 children)

Yeah, it all seemed so very promising back then, but those promises really never seemed to materialize… I’m just so disappointed.

At least I didn’t invest billions of dollars into it.

[–] lime@feddit.nu 4 points 6 hours ago (1 children)

i mean it still could lead to something...

not by the current big actors, but sometime in the future hopefully.

[–] homes@piefed.world 2 points 6 hours ago (1 children)

Oh, I’m sure that’s true, but probably something quite different than what we are being promised and much further down the road. Like how VR was hyped a lot in the early 90s, but we really didn’t get anything like that until quite recently, and it’s not quite the same.

[–] lime@feddit.nu 2 points 6 hours ago* (last edited 5 hours ago)

yeah, the tech just wasn't there for vr. just like how llms aren't the be-all and end-all of generative machine learning models. agents are getting close, but with the tech we currently have there is no way it could reach the promised agi status.

i actually protested to my professor about this when we were working with neural networks in 2014. we were doing handwriting recognition and i told him "this isn't ai". he shot back "oh really? then write me a paper on why" and i couldn't do it, because while i could describe what ai is not, i could not define what it actually is. that feels like the main question we should be solving for, rather than "how to get statistical text generators to seem clever".