this post was submitted on 15 Feb 2026
1234 points (99.7% liked)

Fuck AI

5751 readers
2393 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

link to archived Reddit thread; original post removed/deleted

(page 2) 50 comments
[–] FlashMobOfOne@lemmy.world 36 points 13 hours ago (3 children)

Jesus Christ, you have to have a human validate the data.

[–] BlameTheAntifa@lemmy.world 4 points 8 hours ago

But that would mean paying someone for work. The CEOs want to replace humans.

[–] 474D@lemmy.world 31 points 13 hours ago (1 children)

Exactly, this is like letting Excel auto-fill finish the spreadsheet and going, "looks about right."

[–] FlashMobOfOne@lemmy.world 24 points 13 hours ago (5 children)

And that's a good analogy, as people have posted screenshots of Copilot getting basic addition wrong in Excel.

Whoever implemented this agent without proper oversight needs to be fired.

[–] hector@lemmy.today 22 points 12 hours ago (1 children)

Except the CEO and executives ultimately responsible will blame their underlings, who will be fired, even though it was an executive-level decision. They didn't get to the pinnacle of corporate governance by admitting mistakes. That's not what they were taught at their Ivy League schools; they were taught to lie, cheat, and steal, and then slander their victims to excuse it.

It was bad before the current president set his outstanding example for the rest of the country. See what being a lying, cheating piece of shit gets you? Everything. Nothing matters. We have the wrong people in charge across the board, from business to government to institutions.

[–] jacksilver@lemmy.world 9 points 10 hours ago

LLMs can't really do math, so if there is any analysis being done, the numbers will typically be junk. Unless the LLM is writing the code to do the math, but then you have to validate the code.
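A minimal sketch of the kind of check this implies, in Python. The helper name and the numbers are made up for illustration: instead of trusting a total an LLM reports, recompute it yourself and compare.

```python
from decimal import Decimal


def validate_reported_total(rows, reported_total, tol=Decimal("0.01")):
    """Recompute the sum ourselves instead of trusting the model's arithmetic.

    Returns (matches, actual_total) so the caller can see the real number.
    """
    actual = sum(Decimal(str(v)) for v in rows)
    matches = abs(actual - Decimal(str(reported_total))) <= tol
    return matches, actual


# Hypothetical example: the model summarized these rows as 1529.00,
# but the real total is 1530.00 -- the check catches the off-by-one.
ok, actual = validate_reported_total([512.10, 498.25, 519.65], 1529.00)
```

Using `Decimal` rather than floats keeps the comparison exact for currency-style figures, which matters when the discrepancies you're hunting are small.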

[–] ravelin@lemmy.ml 9 points 10 hours ago

Hahahhahaha hahahahahahaha haaaaaaaaa

[–] wonderingwanderer@sopuli.xyz 35 points 14 hours ago

Dumbasses. Mmm, that's good schadenfreude.

[–] ladicius@lemmy.world 8 points 11 hours ago

Nice. Really, I like it when management is dumb as fuck. It's a world of never-ending joy.

I was trying to figure out why the stock market is so high.

[–] sukhmel@programming.dev 50 points 15 hours ago (8 children)

Joke's on you, we make our decisions without asking AI for analytics. Because we don't ask for analytics at all.

[–] PhoenixDog@lemmy.world 26 points 13 hours ago

I don't need AI to fabricate data. I can be stupid on my own, thank you.

[–] cronenthal@discuss.tchncs.de 81 points 17 hours ago (9 children)

I somehow hope this is made up, because doing this without checking and finding the obvious errors is insane.

[–] joostjakob@lemmy.world 7 points 10 hours ago (1 children)

Having worked in departments providing data all my career, I'm not surprised in the slightest that people do not care in any way about where the numbers they got come from.

[–] rozodru@piefed.world 29 points 14 hours ago (3 children)

As someone who has to deal with LLMs/AI daily in my work in order to fix the messes they create, this tracks.

AI's sole purpose is to provide you with a positive solution. That's it. That positive solution doesn't even need to be accurate, or even exist. It's built to present a positive "right" solution without taking the steps to get to that "right" solution, so the majority of the time that solution is going to be a hallucination.

You see it all the time. You can ask it something tech-related, and to get to that positive "right" solution it'll hallucinate libraries that don't exist, or programs that don't do what it claims they do. Because logically, to the LLM, this is the positive "right" solution, reached WITHOUT any steps to confirm that the solution even exists.

So in the case of OP's post, I can see it happening. They told the LLM they wanted three months of analytics, and rather than take the steps to get to an accurate result, it skipped those steps and served up a positive solution.

Don't use AI/LLMs for your day-to-day problem solving; you're wasting your time. OpenAI, Anthropic, Google, etc. have all tuned these things to provide you with "positive" solutions so you'll keep using them. They just hope you're not savvy enough to call out their LLMs when they're clearly and frequently wrong.
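One cheap guard against the hallucinated-library failure mode described above (a sketch, not anything these vendors ship): before trusting generated code, check that each module it imports actually resolves in your environment. `hyperjsonx` below is a made-up name standing in for the kind of plausible-sounding package an LLM invents.

```python
import importlib.util


def module_exists(name: str) -> bool:
    """True only if the named module can actually be found locally."""
    return importlib.util.find_spec(name) is not None


# "json" ships with Python; "hyperjsonx" is a hypothetical hallucination.
module_exists("json")
module_exists("hyperjsonx")
```

`find_spec` consults the import machinery without executing the module, so it's safe to run on names you don't yet trust.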
