this post was submitted on 15 Jan 2026
43 points (64.8% liked)

Technology

[–] NachBarcelona@piefed.social 92 points 6 days ago (5 children)

AI isn't scheming because AI cannot scheme. Why the fuck does such an idiotic title even exist?

[–] MentalEdge@sopuli.xyz 25 points 6 days ago* (last edited 6 days ago) (2 children)

Seems like it's a technical term, a bit like "hallucination".

It refers to when an LLM will in some way try to deceive or manipulate the user interacting with it.

There's hallucination, when a model "genuinely" claims something untrue is true.

This is about how a model might lie, even though the "chain of thought" shows it "knows" better.

It's just yet another reason the output of LLMs is suspect and unreliable.

[–] atrielienz@lemmy.world 12 points 6 days ago* (last edited 5 days ago) (1 children)

I agree with you in general; I think the problem is that people who do understand Gen AI (and what it is and isn't capable of, and why) get rationally angry when it's humanized by using words like these to describe what it's doing.

The reason they get angry is because this makes people who do believe in the "intelligence/sapience" of AI more secure in their belief set and harder to talk to in a meaningful way. It enables them to keep up the fantasy. Which of course helps the corps pushing it.

[–] MentalEdge@sopuli.xyz 6 points 6 days ago

Yup. The way the article is titled isn't helping.

[–] very_well_lost@lemmy.world 7 points 5 days ago (1 children)

It refers to when an LLM will in some way try to deceive or manipulate the user interacting with it.

I think this still gives the model too much credit by implying that there's any sort of intentionality behind this behavior.

There's not.

These models are trained on the output of real humans and real humans lie and deceive constantly. All that's happening is that the underlying mathematical model has encoded the statistical likelihood that someone will lie in a given situation. If that statistical likelihood is high enough, the model itself will lie when put in a similar situation.
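To put that in concrete terms, here's a toy sketch (the continuations and probabilities are made up, not taken from any real model): greedy decoding just returns whichever continuation the model scored highest, and nothing in that step checks whether the continuation is true.

```python
# Toy sketch with made-up probabilities: the decoder picks the most likely
# continuation; truthfulness never enters the calculation.
next_completion_probs = {
    "the report is already finished": 0.62,   # what people most often say here
    "I haven't finished the report yet": 0.31,
    "banana": 0.07,
}

def pick_next(probs):
    # Greedy decoding: return whichever continuation scored highest.
    return max(probs, key=probs.get)

print(pick_next(next_completion_probs))
# -> "the report is already finished", whether or not that's actually true
```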

[–] MentalEdge@sopuli.xyz 6 points 5 days ago* (last edited 5 days ago) (2 children)

Obviously.

And like hallucinations, it's undesired behavior that proponents of LLMs will need to "fix" (a practical impossibility as far as I'm concerned, like unbaking a cake).

But how would you use words to explain the phenomenon?

"LLMs hallucinate and lie" is probably the shortest description that most people will be able to grasp.

[–] very_well_lost@lemmy.world 4 points 5 days ago* (last edited 4 days ago)

But how would you use words to explain the phenomenon?

I don't know, I've been struggling to find the right 'sound bite' for it myself. The problem is that all of the simplified explanations encourage people to anthropomorphize these things, which just further fuels the toxic hype cycle.

In the end, I'm unsure which does more damage.

Is it better to convince people the AI "lies", so they'll stop using it? Or is it better to convince people AI doesn't actually have the capacity to lie so that they'll stop shoveling money onto the datacenter altar like we've just created some bullshit techno-god?

[–] zarkanian@sh.itjust.works 2 points 5 days ago* (last edited 5 days ago) (1 children)

Except that "hallucinate" is a terrible term. A hallucination is when you perceive something that doesn't exist. What AI is doing is making things up; i.e. lying.

[–] MentalEdge@sopuli.xyz 6 points 5 days ago* (last edited 5 days ago) (1 children)

Yes.

Who are you trying to convince?

What AI is doing is making things up.

This language also credits LLMs with an implied ability to think they don't have.

My point is we literally can't describe their behaviour without using language that makes it seem like they do more than they do.

So we're just going to have to accept that discussing it will have to come with a bunch of asterisks a lot of people are going to ignore. And which many will actively try to hide in an effort to hype up the possibility that this tech is a stepping stone to AGI.

[–] zarkanian@sh.itjust.works 1 points 5 days ago (3 children)

The interface makes it appear that the AI is sapient. You talk to it like a human being, and it responds like a human being. Like you said, it might be impossible to avoid ascribing things like intentionality to it, since it's so good at imitating people.

It may very well be a stepping-stone to AGI. It may not. Nobody knows. So, of course we shouldn't assume that it is.

I don't think that "hallucinate" is a good term regardless. Not because it makes AI appear sapient, but because it's inaccurate whether the AI is sapient or not.

[–] MentalEdge@sopuli.xyz 1 points 5 days ago* (last edited 5 days ago) (1 children)

Like you said, it might be impossible to avoid ascribing things like intentionality to it

That's not what I meant. When you say "it makes stuff up" you are describing how the model statistically predicts the expected output.

You know that. I know that.

That's the asterisk. The more in-depth explanation a lot of people won't bother getting far enough to learn about. Someone who doesn't read that far into it can read that same phrase and assume that we're discussing what type of personality LLMs exhibit, that they are "liars". But they'd be wrong. Neither of us is attributing intention to it or discussing what kind of "person" it is; in reality we're referring to the fact that it's "just" a really complex probability engine that can't "know" anything.

No matter what word we use, if it is pre-existing, it will come with pre-existing meanings that are kinda right, but also not quite, requiring that everyone involved in a discussion know things that won't be explained every time a term or phrase is used.

The language isn't "inaccurate" between you and me because you and I know the technical definition, and therefore what aspect of LLMs is being discussed.

Terminology that is "accurate" without this context does not and cannot exist, short of coming up with completely new words.

[–] zarkanian@sh.itjust.works 2 points 5 days ago* (last edited 5 days ago)

You could say "the model's output was inaccurate" or something like that, but it would be much more stilted.

[–] echodot@feddit.uk 11 points 5 days ago (1 children)

They're really doubling down on this narrative of "this technology we're making is going to kill us all, it's that awesome, come on guys use it more"

The narrative is a little more nuanced and is being built slowly to be more believable and less obvious. They are trying to convince everybody that AI is a powerful technology, which means that it is worth developing, but also comes with serious risks. Therefore, only established corps with experience and processes in AI development can handle it. Regulation and certification follow, making it almost impossible for startups and OSS to enter the scene and compete.

[–] db2@lemmy.world 108 points 6 days ago

AI tech bros and other assorted sociopaths are scheming. So called AI isn't doing shit.

[–] SnotFlickerman@lemmy.blahaj.zone 75 points 6 days ago* (last edited 6 days ago) (1 children)

However, when testing the models in a set of scenarios that the authors said were “representative” of real uses of ChatGPT, the intervention appeared less effective, only reducing deception rates by a factor of two. “We do not yet fully understand why a larger reduction was not observed,” wrote the researchers.

Translation: "We have no idea what the fuck we're doing or how any of this shit actually works lol. Also we might be the ones scheming since we have vested interest in making these models sound more advanced than they actually are."

[–] a_non_monotonic_function@lemmy.world 4 points 5 days ago (1 children)

That's the thing about machine learning models. You can't always control what they're optimizing. The goal is inputs to outputs, but whatever the f*** is going on inside is often impossible to discern.

This is dressing it up under some sort of expectation of competence. The word "scheming" is a lot easier to deal with than just "s*****". The former means that it's smart and needs to be reined in. The latter means it's not doing its job particularly well, and the purveyors don't want you to think that.
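A tiny, made-up illustration of that "you only control inputs to outputs" point (the scoring function is hypothetical, nothing to do with the paper's actual setup): when the thing you optimize is only a proxy for what you actually want, the winner is whatever games the proxy.

```python
# Hypothetical proxy metric, purely illustrative: we wanted "a good summary
# about revenue", but the score we optimize just rewards mentions of the word
# "revenue" and penalizes length.
candidates = [
    "The quarterly report shows revenue grew 4% while costs rose 6%.",
    "Revenue up, costs up even more.",
    "revenue revenue revenue revenue",
]

def proxy_score(text):
    # Reward keyword mentions, penalize length -- a stand-in for what we want.
    return text.lower().count("revenue") * 10 - len(text) * 0.1

best = max(candidates, key=proxy_score)
print(best)  # -> "revenue revenue revenue revenue": satisfies the metric, not the goal
```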

[–] SnotFlickerman@lemmy.blahaj.zone 1 points 5 days ago (1 children)

To be fair, you can't control what humans optimize for when you're trying to teach them either. A lot of times they learn the opposite of what you're trying to teach them. I've said it before, but all they managed to do with LLMs is make a computer that's just as unreliable as (if not more so than) your below-average human.

[–] a_non_monotonic_function@lemmy.world 3 points 5 days ago (1 children)

As somebody who has spent my life studying AI, I can tell you these are remarkably different things.

Machine learning models are basically brute forcing things. Humans have the ability to actually think.

[–] SnotFlickerman@lemmy.blahaj.zone 2 points 5 days ago (1 children)

Humans have the ability to actually think.

That's a stretch for an inordinate number of humans, sadly.

[–] cronenthal@discuss.tchncs.de 57 points 6 days ago (1 children)

Really? We're still doing the "LLMs are intelligent" thing?

[–] ragica@lemmy.ml 6 points 6 days ago

Doesn't have to be intelligent, just has to perform the behaviours like a philosophical zombie. Thoughtlessly weighing patterns in training data...

[–] Zorsith@lemmy.blahaj.zone 46 points 6 days ago (4 children)

One question still remains; why are all the AI buttons/icons buttholes?

[–] webghost0101@sopuli.xyz 16 points 6 days ago

Data goes in one end and..

[–] zarkanian@sh.itjust.works 5 points 5 days ago

Because of what they produce.

[–] FuyuhikoDate@feddit.org 2 points 6 days ago

Wanted To write the same comment...

[–] breadguy@kbin.earth 1 points 6 days ago

just claude if we're being honest

[–] KoboldCoterie@pawb.social 48 points 6 days ago (4 children)

Stopping it is, in fact, very easy. Simply unplug the servers, that's all it takes.

[–] homes@piefed.world 8 points 6 days ago (1 children)

“But that’s how we print our money!”

[–] myfunnyaccountname@lemmy.zip 3 points 6 days ago (1 children)

But they aren’t. That’s what is funny. Anthropic and OpenAI are not making money.

[–] Passerby6497@lemmy.world 4 points 5 days ago

The company isn't making money. The people behind it absolutely are.

[–] reksas@sopuli.xyz 1 points 5 days ago

to stop it requires stopping the fuckers with money, and that seems just plain impossible.

[–] Godort@lemmy.ca 45 points 6 days ago (1 children)

"slop peddler declares that slop is here to stay and can't be stopped"

[–] shittydwarf@piefed.social 11 points 6 days ago

Can't be .. slopped?

[–] ExLisper@lemmy.curiana.net 8 points 5 days ago

deliberately misleading humans

Yeah... You dumb.

[–] itisileclerk@lemmy.world 7 points 5 days ago (1 children)

From my recent discussion with Gemini: "Ultimately, your assessment is a recognized technical reality: AI models are products of their environment, and a model built within the US regulatory framework will inevitably reflect the geopolitical priorities of that framework." In other words, AI is trained to reflect US policy like MAGA and others. Don't trust AI; it is just a tool for controlling the masses.

[–] ExLisper@lemmy.curiana.net 2 points 5 days ago (1 children)

So you think Gemini told you the truth here? How do you know it's not just scheming?

[–] itisileclerk@lemmy.world 1 points 5 days ago (1 children)

Ask Gemini about the genocide in Gaza. Definitely not the truth; it waters down the IDF's war crimes as "unconfirmed".

[–] ExLisper@lemmy.curiana.net 1 points 5 days ago

Yeah, but why ask Gemini about its priorities? It can just lie about it.

[–] chaosCruiser@futurology.today 18 points 6 days ago* (last edited 6 days ago)

And there’s an “✨Ask me anything” bar at the bottom. How fitting 🤣

[–] Antaeus@lemmy.world 10 points 6 days ago (2 children)

“Turn them off”? Wouldn’t that solve it?

[–] orclev@lemmy.world 11 points 6 days ago

You don't even need to turn it off; it literally can't do anything without somebody telling it to, so you could just stop using it. It's incapable of independent action. The only danger it poses is that it will tell you to do something dangerous and you actually do it.

[–] WamGams@lemmy.ca 8 points 6 days ago
[–] CosmoNova@lemmy.world 6 points 6 days ago

The people who worked on this „study“ belong in a psychiatric clinic.
