this post was submitted on 21 Mar 2026
86 points (97.8% liked)
Fuck AI
you are viewing a single comment's thread
there is one area where it excels though: bullshitting. that's why c-levels and aspirational middle management are so impressed, because their roles are all about bullshit.
Even this is disappointing. LLM bullshit is only impressively fluent compared to older generative systems. (It is very impressive compared to them. It just should have stayed in academia longer, where its components could have developed into useful things. Instead everyone's falling over themselves about a kick-ass demo.)
yeah it's the middle-management thing again. "wow it can answer emails" "wow it can shit out demos" "wow it can follow an api spec". as internet hippo so aptly put it, they saw that it could do the job of a manager and concluded that it was sentient rather than coming to the correct conclusion that managers aren't.
I’d argue it’s just that people who operate at those levels are terrible at detecting AI bullshit. If you spend more than the bare minimum of effort (or intelligence) on checking, it’s pretty obvious when you’re reading AI slop.
So, maybe it’s useful for that, but not particularly better at it than a human.
yeah some people seem extremely susceptible.
i will admit that my detection skill has been improved by using local models, because i studied machine learning at uni twelve years ago and jumped at the opportunity when the hype cycle began. but it just hasn't gotten good at anything concrete. it improves marginally at certain tasks, only to fail in more subtle ways every time. it's getting better not at being a tool, but at disguising itself as one.
Yeah, it all seemed so very promising back then, but those promises really never seemed to materialize… I’m just so disappointed.
At least I didn’t invest billions of dollars into it.
i mean it still could lead to something...
not by the current big actors, but sometime in the future hopefully.
Oh, I’m sure that’s true, but probably something quite different than what we are being promised and much further down the road. Like how VR was hyped a lot in the early 90s, but we really didn’t get anything like that until quite recently, and it’s not quite the same.
yeah, the tech just wasn't there for vr. just like how llms aren't the be-all and end-all of generative machine learning models. agents are getting closer, but with the tech we currently have there is no way it could reach the promised agi status.
i actually protested to my professor about this when we were working with neural networks in 2014. we were doing handwriting recognition and i told him "this isn't ai". he shot back "oh really? then write me a paper on why" and i couldn't do it, because while i could describe what ai is not, i could not define what it actually is. that feels like the main question we want to be solving for, rather than "how to get statistical text generators to seem clever".