To be clear: this isn't an AI problem; the LLM is doing exactly what it's being told to.
This is an Openclaw problem, with the platform itself doing very, very stupid things with the LLM lol
We are hitting the point now where, tbh, LLMs on their own in a glass box feel pretty solid performance-wise. They're still prone to hallucinating, but the addition of the Model Context Protocol for tooling makes them way less prone to it, cuz they now have the tooling to sanity-check themselves automatically, and/or check first and then tell you what they found.
E.g. an MCP tool to search Wikipedia and report back with "I found this wiki article on your topic" or whatever.
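For anyone curious what that actually looks like, here's a minimal sketch of such a tool in Python. To be clear, this is just an illustration: it assumes the official Python MCP SDK (the `mcp` package with its FastMCP helper) and Wikipedia's public opensearch endpoint, and the server/tool names and wording are made up.

```python
# Minimal sketch of an MCP tool server that lets an LLM sanity-check itself
# against Wikipedia. Assumes the official Python MCP SDK ("mcp" package) and
# Wikipedia's public opensearch endpoint; names and wording are illustrative.
import json
import urllib.parse
import urllib.request

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("wiki-search")  # hypothetical server name

@mcp.tool()
def search_wikipedia(topic: str) -> str:
    """Search Wikipedia and report back the top matching article titles."""
    query = urllib.parse.urlencode({
        "action": "opensearch",
        "search": topic,
        "limit": 5,
        "format": "json",
    })
    with urllib.request.urlopen(f"https://en.wikipedia.org/w/api.php?{query}") as resp:
        _, titles, _, urls = json.load(resp)
    if not titles:
        return f"No Wikipedia article found for '{topic}'."
    hits = "\n".join(f"- {t}: {u}" for t, u in zip(titles, urls))
    return f"I found these Wikipedia articles on your topic:\n{hits}"

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio so an agent/client can call the tool
```

The agent's client launches this, advertises `search_wikipedia` to the model, and when the model calls it, it gets real titles and URLs back to ground its answer on instead of guessing.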
The new problem now is platforms that "wrap" LLMs having a "garbage in, garbage out" problem, where they inject their "bespoke" stuff into the LLM context to "help", but it actually makes the LLM act stupider.
Random example: GitHub Copilot agents get a "tokens used" thing quietly/secretly injected into their context periodically, looks like every ~25k tokens or so.
I dunno what the exact wording they used is, but it makes the LLM start hallucinating a concept of a "deadline" or "time constraint" and start taking shortcuts, justifying it with stuff like "given time constraints I won't do this job right".
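I don't know Copilot's actual wording or mechanism, so take this as a purely hypothetical sketch of the general pattern: a wrapper quietly appending a usage note to the context the model sees.

```python
# Hypothetical sketch of a platform-side "helper" injection, NOT Copilot's
# actual implementation or wording. Every ~25k tokens the wrapper slips a
# usage note into the conversation the model sees.
BUDGET_NOTE_EVERY = 25_000  # assumed interval, per the observation above

def maybe_inject_usage_note(messages, tokens_used, last_note_at):
    """Append a token-usage note to the context once per interval."""
    if tokens_used - last_note_at >= BUDGET_NOTE_EVERY:
        messages.append({
            "role": "system",
            # Invented wording; the model can misread a note like this as a
            # deadline and start "taking shortcuts given time constraints".
            "content": f"Note: {tokens_used} tokens used so far in this session.",
        })
        last_note_at = tokens_used
    return messages, last_note_at
```

Nothing in that note says "hurry up", but a statistical text-completer is happy to read it that way.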
It's kinda weird how random stuff that seems innocuous and is meant to help can actually make the LLM worse instead of better.
I don't think we've overcome the half-glass-of-wine issue; rather, we've papier-mâchéd over some fundamental flaws in precisely what is happening when an LLM creates the appearance of reason. In doing so we're baking a certain amount of sawdust into the cake, and given that no substantive advances have really been made since maybe the 4, 4.5 days, with most of the "improvements" coming from basically better engineering, it's clear we've hit an asymptote in what these models are and will be capable of, and it will never manifest into a full reasoning system that can self-correct.
There is no amount of engineering sandblasting that can overcome issues which are fundamental to the model's structure. If the rot is in the bones, it's in the bones.
Nah, there have been huge advancements in the past few months; you are definitely out of touch if you haven't witnessed them.
Recent models have gotten WAY better at "second-guessing" themselves, and not acting nearly so confidently wrong.
That isn't an LLM issue at all; it has nothing to do with LLMs, in fact. That's a problem with Stable Diffusion, which is an entirely different kind of AI, but yeah, that issue is fundamental to what Stable Diffusion is.
I mean, that's not much different from any other tech; a LOT of advanced tech we have today is dozens and dozens of separate bits of engineering all working in tandem to create something more meaningful.
Your smartphone has countless distinct advancements in different types of technology that come together to make a useful device, and if you removed any one of those pieces, it would be substantially less useful as a tool.
So yeah, I personally will very much count the other pieces of the puzzle advancing as the system as a whole advancing.
LLMs today are quite a bit better than ones a year ago, by a large degree, and the tooling around them has also improved a lot. The proliferation of Model Context Protocol tools is proving to be a massive part of the system as a whole becoming something actually very useful.
I'm not out of touch whatsoever. I'm in the cut, and I've been here since long before LSTMs, and even perceptrons. I can almost promise you I'm deeper into this world than you'll ever be. I publish on this stuff.
No, they aren't. They've stalled, and it's very clear they've stalled. There have been improvements in some of the background engineering that create the illusion of model improvement, but this is fundamentally different from the improvements we saw from the earliest transformers to GPTs, from 2021 to 2023/4.
No, it is. And there is no clear way around it. It is an LLM issue because it's a transformer issue, and it might even go deeper and be a backprop issue.
The "wine glass half full" thing, I assume, is you referring to the problem surrounding trying to image generate a specific glass of wine, or similar issues of "generate a room that definitely doesnt have an elephant in it, its devoid of any elephants, zero elephants in the room"
This is specifically a stable diffusion problem, and doesnt really apply to LLMs in the same manner.
Its not a problem specific to any model. Its present in all LLM's and possibly/ probably all transformers, and potentially even deeper. I get you don't get it, so just go take a break.
Not being able to generate something like a glass of wine is just a symptom of something far more significant.
It's built in layers, and the layers that are improving are not the LLMs themselves; it's the layers that sit between the user and the LLM that are improving, which creates the illusion that the LLMs are improving. They're not. TropicalDingdong knows what they're talking about; you should listen to them.
If you continue to improve the layers between the LLM and the user long enough, you'll end up with something we traditionally used to call a "software program", one that is optimized for accomplishing a task, and you won't need an LLM much, if at all.
You've gotta be living under a rock if you don't think the models themselves have been improving over the last year, lol.
We are bumping into a log-scale problem, where people aren't fully grasping how big a difference going from an x% error rate to a y% error rate makes in actual practice, where it matters.
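Toy numbers to make that concrete (completely made up, not benchmarks of any particular model): per-step error compounds over a multi-step agent task, so a seemingly small drop in error rate is huge in practice.

```python
# Made-up numbers, just to illustrate why per-step error rates compound.
# If an agent task takes 20 dependent steps, the chance of getting through
# the whole chain cleanly is (1 - error_rate) ** steps.
steps = 20
for error_rate in (0.10, 0.05, 0.01):
    success = (1 - error_rate) ** steps
    print(f"{error_rate:.0%} per-step error -> {success:.0%} chance the 20-step task survives")
# 10% -> ~12%, 5% -> ~36%, 1% -> ~82%
```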
Perhaps you didn't notice the forum you're posting in. We're not here because we love hearing slopaganda.
Personally I believe MCP is the new AMP, and I look forward to dancing on its grave.
Care to elaborate? MCP is a fairly basic concept, just a specific kind of web server, so it's not exactly going to go anywhere anytime soon, since you are literally posting on a forum right now that uses the same tech, lol
Sorry, are you talking about MCP, or AP? I don't know why any usage of PieFed (what I'm using) or Lemmy would require MCP.
MCP as a way to make agents appear smart is a smoke screen. We already have APIs to enable different online applications to talk to each other; it's called REST, or hypermedia if you want to get real fancy. We don't need yet another layer on top that obscures web properties and places them behind chatbots benefiting Big Tech megacorps and nobody else.
What part of that did you not understand?
If you think MCP servers benefit "Big Tech megacorps and nobody else", then all I can conclude is that you are technically behind enough that you don't even know how to use Docker, and therefore your argument is coming from a place of naivety.
MCP servers are incredibly simple and easy to self-host, and a few self-hostable models are now competent at invoking them.
Tonnes of FOSS self-hostable software supports wiring it up as well.
Which means anyone can leverage MCP servers to enable LLMs to do whatever they want.
I would compare it to advancements in stuff like Zigbee for IoT devices; it's a simple, lightweight spec that's small enough you can even put it on an ESP32 with ease.
And if you don't see how there's a lot of power in that for private self-hosted users, then you aren't using your imagination enough.
Your attitude towards me and other people in this thread is incredibly distasteful. I know exactly what Docker is. I also know that MCP servers are irrelevant unless we're talking about LLM agents, a technology funded by Big Tech which is dangerous & destructive (hence the forum you are currently posting in).
This conversation is now over. 👋
If you know how to use Docker and claim that agents are only funded by large corps, then you must really be living under a rock and/or don't know how to Google.
There's tonnes of grassroots agentic FOSS platforms available, and self-hostable models that people all over the world have built to run on them.
You're either extremely out of touch or purposefully spreading disinformation if you think MCP-backed agentic options are limited only to "big tech corporations".
Go... google it? I dunno, there's tonnes of options out there now; you are talking like it's still 2024, shit has moved way beyond that now...
You had me up until your first sentence.
Everything I said was very much correct.
LLMs are fairly primitive tools; they aren't super complex, and they do exactly what they say they do.
The hard part is wrapping that up in an API that is actually readable for a human to interact with, because the lower-level abstract data an LLM takes in and spits out isn't useful for us.
And then even harder is wrapping THAT API in another one that makes the input/output USEFUL for a human to interact with.
You have layers upon layers of abstraction on top of the tool to make it go from a bunch of raw float values a human wouldn't understand to a tool that actually does a thing.
That "wrapper" is what one calls the "platform".
And making a platform that doesn't fuck it up is actually very, very hard, and very, very easy to get wrong. Even a small tweak can substantially shift how the whole thing behaves.
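To show how raw that bottom layer really is, here's a rough sketch assuming the Hugging Face transformers library, with GPT-2 standing in as a small example model: the model itself just turns token IDs into a grid of floats, and everything human-usable is wrapping.

```python
# Rough sketch of the "bottom layer" an LLM platform wraps. Assumes the
# Hugging Face transformers library and GPT-2 as a small stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# What the model actually produces: a tensor of floats, one score per
# vocabulary entry per position. Not answers, not sentences.
print(out.logits.shape)            # e.g. torch.Size([1, 5, 50257])

# One tiny slice of "platform" work: pick the most likely next token
# and turn it back into text a human can read.
next_id = int(out.logits[0, -1].argmax())
print(repr(tok.decode([next_id])))  # likely ' Paris'
```

Everything past that (chat templates, sampling, tool calling, the UI) is the wrapper, i.e. the "platform".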
Think of it a lot like an engine in a car. The LLM is the engine, which on its own is not super useful. You have to actually connect that engine to something to make it do anything useful.
And even just doing that isn't very useful if you can't control it, so we take the engine and wrap it up in a bunch of layers of stuff that allow a human to control it and direct it.
But, turns out, when you put a V6 engine inside a car, even a tiny bit of getting the engineering wrong can cause all sorts of problems with the engine and make it fail to start, or explode, or fall out of the car, or stall out, or break, or leak... and unlike car engines, these engines are very, very new, and most engineers are only just now starting to break ground on learning how to control them, steer them, and stop them from tearing themselves out of the car, lol.
So, to bring this back to the original post:
Most LLMs (engines) are actually pretty good nowadays, but the problem was that Clawdbot (a specific brand of car manufacturer) super fucked up the way they designed their car, so the car itself had a very, very stupid engineering mistake. I.e. in this case, the brakes didn't work well enough and the car drove off a cliff.
That has nothing to do with how good the engine is or isn't; the engine was just doing its job. The problem was with some other part of the car entirely, the part of the car Clawdbot made that wraps around the engine.
You keep asserting they do exactly what they say they do.
Who is "they"
When using the word "they", in English it refers the the last primary subject you referred to, so you should be able to infer what "they" referred to in my sentences. I'll let you figure it out.
"I love wrenches, they are very handy tools", in this sentence, the last subject before the word "they" was "wrenches", so you should be able to infer that "they" referred to "wrenches" in that sentence.
Ok, well, I was actively trying to avoid jumping to the conclusion that your assertion was that an LLM can tell you what it does.
I was actively avoiding that conclusion as an act of charity.
Yeah, that's not what I was saying.
Hence my attempt to give you the space to provide clarity.
For me, this isn't a pissing contest. I'm trying to provide you with the latitude to clarify your position. I'll be honest, I didn't appreciate your condescending lecture on the English language.
I apologize for any confusion.
I meant LLMs are what they say they are in a non-literal sense.
Akin to ascribing the same to any other tool.
"I like wrenches 'cause they are what they say they are, nothing extra to them", in that sort of way.
In the sense that the tool is very transparent in function. No weird bells or whistles; it's a simple machine whose function you can see merely by looking at it.
I think I understand your point now.
I still would want to apply pressure to it, because I disagree with the spirit of your assessment.
Once a model is trained, it becomes functionally opaque. Weights shift... but WHY? What does that vector MEAN?
I think wrenches are good. Will a 12mm wrench fit a 12mm bolt? Yes.
In the bizarre world of LLMs, the answer to everything is not "yes" or "no"; it's "maybe, maybe not, within statistical bounds... try it... maybe it will... maybe it won't... and by the way, just because it fit yesterday is no guarantee it will fit again tomorrow... and I can't actually definitively tell you why that is for this particular wrench".
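To put the "maybe, maybe not" part in code (toy probabilities I made up, not taken from any real model):

```python
# Toy sketch of the "maybe, maybe not, within statistical bounds" point:
# the same prompt, sampled twice, is allowed to come back different.
import random

# Hypothetical answer distribution for "Will a 12mm wrench fit a 12mm bolt?"
answers = {"Yes": 0.9, "No": 0.07, "It depends": 0.03}

for attempt in (1, 2):
    pick = random.choices(list(answers), weights=list(answers.values()), k=1)[0]
    print(f"attempt {attempt}: {pick}")
# Two runs can disagree, and nothing in the weights tells you WHY the model
# leans 90/7/3 for this particular "wrench".
```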
LLMs do something, and I agree they do that something well. I further agree with the spirit of most of the rest of your analysis: abstraction layers are doing a lot of heavy lifting.
I think where I fundamentally disagree is with the claim that "they do what they say they do" by any definition beyond the simple tautology that everything is what it is.
I guess I was referring to the fact that there are a lot of tools out there built to do stuff other than what they ought to do.
Like sticking a flashlight onto a wrench, if you will. Now it's not just a wrench, now it's a flashlight too.
But an LLM is... pretty much just what it is, though some people now are trying pretty hard to make it be more than that (and not by adding layers on top; I'm talking about training LLMs to be more than LLMs, which I think is a huge waste of time).
LLMs do not "hallucinate"; they are not sentient. They just spit out incorrect bullshit. All of the time.
I love that humans are inclined to anthropomorphize things. A door can't be sad. A street can't be lonely. The moon can't be wistful. The ocean can't be angry.
But they can... in our heads. And that's real for us.
I think that, at least at a societal level, this part of the human condition has been mostly benign. Just a little bit of spice.
LLMs seem to have short-circuited that part of our brains. We can't even describe the errata of a system without anthropomorphizing it.
"Hallucinate" is the term used for a statistical phenomenon that arises in their output.
You know, you're entitled to your opinions, but you are most certainly not entitled to your facts.
The term "hallucinate" as used by people in AI research: https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
P.S. A lay person's objections to the term's usage in popular media are entirely warranted: it is unnecessary anthropomorphizing. In general, this tendency to ascribe the language of human mental states to the outputs of statistical computer models is deeply problematic. See: https://firstmonday.org/ojs/index.php/fm/article/view/14366
Nothing you linked there contradicts what I said. It expands on it in more specific detail.
LLMs are heuristic, statistical token-prediction engines.
"Hallucination" is a shorthand term for a set of phenomena that arise out of the way the statistical prediction works, where the model will string together sentences that are grammatically correct and sound right, but an LLM has no concept of right/wrong, only the statistically likely next token given the prior context.
That wiki article goes into much more depth on the "why" but it does support my statement.
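A toy illustration of that point (made-up vocabulary and probabilities, not pulled from any real model): the generation step only ever asks "what's likely next", never "is this true".

```python
# Toy illustration with made-up probabilities, not a real model: the
# generation loop only ranks tokens by likelihood, it never checks truth.
import random

# Hypothetical next-token distribution after "The Eiffel Tower is in".
next_token_probs = {
    " Paris": 0.72,    # true, and also the most likely continuation
    " France": 0.20,   # true
    " Berlin": 0.05,   # false, but still fluent and grammatical
    " Las": 0.03,      # start of " Las Vegas": false, still fluent
}

tokens, weights = zip(*next_token_probs.items())
choice = random.choices(tokens, weights=weights, k=1)[0]
print("The Eiffel Tower is in" + choice)
# Nothing in this step distinguishes the true continuations from the false
# ones; "wrong but plausible-sounding" is what gets labeled a hallucination.
```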
I dunno what it is with people linking wiki articles that support the other person's statement and claiming it's the opposite.
... learn to read I guess? I dunno lol.