this post was submitted on 15 Feb 2026
1264 points (99.6% liked)

Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.


link to archived Reddit thread; original post removed/deleted

[–] rozodru@piefed.world 29 points 16 hours ago (1 children)

As someone who has to deal with LLMs/AI daily in my work in order to fix the messes they create, this tracks.

AI's sole purpose is to provide you a positive solution. That's it. That positive solution doesn't even need to be accurate, or even to exist. It's built to produce a positive, "right"-sounding solution without taking the steps needed to actually reach one, so the majority of the time that solution is going to be a hallucination.

You see it all the time. Ask it something tech related and, in order to get to that positive "right" solution, it'll hallucinate libraries that don't exist, or programs that don't do what it claims they do. Because logically, to the LLM, this is the positive right solution, WITHOUT taking any steps to confirm that the solution even exists.

So in the case of OP's post, I can see it happening. They told the LLM they wanted three months of analytics, and rather than take the steps to get to an accurate solution, it skipped those steps and provided a positive one.

Don't use AI/LLMs for your day-to-day problem solving; you're wasting your time. OpenAI, Anthropic, Google, etc. have all tuned these things to provide you with "positive" solutions so you'll keep using them. They just hope you're not savvy enough to call out their LLMs when they're clearly and frequently wrong.

[–] jj4211@lemmy.world 19 points 16 hours ago* (last edited 15 hours ago) (1 children)

The skepticism is probably about someone actually trusting the LLM this much, rather than about the LLM doing it this badly. To that I will add that, based on my experience with LLM enthusiasts, I believe that part too.

I have talked to multiple people who recognize the hallucination problem, but think they have solved it because they are good "prompt engineers". They always include a sentence like "Do not hallucinate" and think that works.

The gaslighting from the LLM companies is really bad.

[–] cronenthal@discuss.tchncs.de 7 points 13 hours ago (1 children)

"Prompt engineering" is the astrology of the LLM world.

[–] wizardbeard@lemmy.dbzer0.com 2 points 11 hours ago* (last edited 8 hours ago)

There are ways to get more relevant info (when using terms that have different meanings depending on context), to reduce the needless ass kissing, and to help ensure you get responses in formats more useful to you. But being able to provide it context is not some magic fix for the underlying problems of how this tech is constructed and its limitations. It will never be trustworthy.
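To be concrete about "formats more useful to you": one common trick is pinning the reply to a fixed JSON shape so malformed answers are at least easy to reject mechanically. The schema and wording below are arbitrary examples of mine, not any vendor's API, and as said above this does nothing to make the *content* trustworthy:

```python
# Illustrative sketch only: a prompt template that constrains output format,
# plus a parser that rejects replies that ignore the requested shape.
# The JSON schema here is a made-up example, not a hallucination fix.

import json

def build_prompt(question: str) -> str:
    """Wrap a question so the model is asked for a fixed JSON shape."""
    return (
        "Answer the question below.\n"
        'Respond ONLY with JSON: {"answer": string, "sources": [string]}.\n'
        'If you do not know, set "answer" to null.\n\n'
        f"Question: {question}"
    )

def parse_reply(reply: str):
    """Return the parsed dict, or None if the reply broke the format."""
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return None  # model ignored the format; treat as untrusted
    if not isinstance(data, dict) or "answer" not in data:
        return None
    return data
```

The point is only that a format-violating reply becomes detectable; a reply can match the schema perfectly and still hallucinate its "sources".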

Edit: God forbid anyone want our criticism to be based on an understanding of this shit rather than pure vitriol and hot takes.