Jesus Christ, you have to have a human validate the data.
But that would mean paying someone for work. The CEOs want to replace humans.
Exactly. This is like letting Excel's auto-fill finish the spreadsheet and going "looks about right."
And that's a good analogy, as people have posted screenshots of Copilot getting basic addition wrong in Excel.
Whoever implemented this agent without proper oversight needs to be fired.
Except the CEO and executives ultimately responsible will blame their underlings, who will be fired, even though it was an executive-level decision. They didn't get to the pinnacle of corporate governance by admitting mistakes. That's not what they were taught at their Ivy League schools; they were taught to lie, cheat, and steal, and then slander their victims to excuse it.
It was bad before the current president set his outstanding example for the rest of the country. See what being a lying, cheating piece of shit gets you? Everything. Nothing matters. We have the wrong people in charge across the board, from business to government to institutions.
LLMs can't really do math; they predict plausible-looking tokens rather than actually compute, so if there's any analysis being done, the numbers will typically be junk. Unless the LLM is writing the code to do the math, but then you have to validate the code.
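A minimal sketch of what that validation looks like in practice (the monthly figures and the "LLM total" below are invented purely for illustration):

```python
# Hypothetical data: three months of figures and a total the LLM "reported."
rows = [("Jan", 1200.50), ("Feb", 980.25), ("Mar", 1110.00)]

llm_claimed_total = 3390.75  # number the model asserted in its prose answer

# Never accept an LLM's arithmetic on faith; recompute it deterministically.
actual_total = sum(amount for _, amount in rows)

if abs(actual_total - llm_claimed_total) > 0.01:
    print(f"LLM said {llm_claimed_total}, data says {actual_total}: reject the analysis")
else:
    print("Totals match")
```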
Hahahhahaha hahahahahahaha haaaaaaaaa
Dumbasses. Mmm, that's good schadenfreude.
Nice. Really, I like it when management is dumb as fuck. It's a world of never-ending joy.
I was trying to figure out why the stock market is so high.
Joke's on you, we make our decisions without asking AI for analytics. Because we don't ask for analytics at all.
I don't need AI to fabricate data. I can be stupid on my own, thank you.
I almost hope this is made up, because doing this without checking and catching the obvious errors is insane.
Having worked in departments providing data all my career, I'm not surprised in the slightest that people do not care where the numbers they're given come from.
As someone who has to deal with LLMs/AI daily in my work in order to fix the messes they create, this tracks.
AI's sole purpose is to provide you with a positive solution. That's it. That positive solution doesn't even need to be accurate, or even to exist. It's built to produce a positive "right" solution without taking the steps to get to that "right" solution, so the majority of the time that solution is going to be a hallucination.
You see it all the time. You can ask it something tech-related, and in order to get to that positive "right" solution it'll hallucinate libraries that don't exist, or programs that don't do what it claims they do, because to the LLM that is the positive "right" solution, reached without taking any steps to confirm the solution even exists.
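One cheap sanity check before trusting an LLM-suggested import (a sketch in Python; the package names here are just examples):

```python
import importlib.util

def module_exists(name: str) -> bool:
    """Return True only if Python can actually locate the module on this system."""
    return importlib.util.find_spec(name) is not None

# "requests" is a real package (if installed); the second name is the kind
# of plausible-sounding library an LLM will happily invent.
print(module_exists("requests"))
print(module_exists("totally_made_up_pkg"))
```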
So in the case of OP's post, I can see it happening. They told the LLM they wanted three months of analytics, and rather than take the steps to get to an accurate solution, it skipped those steps and served up a positive one.
Don't use AI/LLMs for your day-to-day problem solving; you're wasting your time. OpenAI, Anthropic, Google, etc. have all programmed these things to provide you with "positive" solutions so you'll keep using them. They just hope you're not savvy enough to call out their LLMs when they're clearly and frequently wrong.
This is probably real, as it isn't the first time something like this has happened: https://www.theguardian.com/technology/2025/jun/06/high-court-tells-uk-lawyers-to-urgently-stop-misuse-of-ai-in-legal-work