this post was submitted on 15 Dec 2025
754 points (98.6% liked)

Technology


This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below; this includes using AI responses and summaries. To ask if your bot can be added, please contact a mod.
  9. Check for duplicates before posting; duplicates may be removed.
  10. Accounts 7 days and younger will have their posts automatically removed.

top 50 comments
[–] phutatorius@lemmy.zip 1 points 1 day ago (1 children)
[–] SaveTheTuaHawk@lemmy.ca 1 points 1 day ago

It's happening. I've been resampling queries every few months, and the ratio of wrong to correct responses keeps getting bigger.

[–] ceenote@lemmy.world 196 points 4 days ago (3 children)

So, like with Godwin's law, the probability of an LLM being poisoned as it harvests enough data to become useful approaches 1.

[–] Gullible@sh.itjust.works 108 points 4 days ago (5 children)

I mean, if they didn't piss in the pool, they'd have a lower chance of encountering piss. Godwin's law is more benign and incidental. This is someone maliciously handing out extra Hitlers in a game of Secret Hitler and then feeling shocked at the breakdown in the game.

[–] saltesc@lemmy.world 32 points 4 days ago* (last edited 4 days ago) (9 children)

Yeah, but they don't have the money to introduce quality governance into this, so the brain trust of Reddit it is. Which explains why LLMs have gotten all weirdly socially combative too; as if two neckbeards having at it, Google skill vs. Google skill, were a rich source of A+++ knowledge and social behaviour.

[–] thingAmaBob@lemmy.world 29 points 3 days ago (2 children)

I seriously keep reading LLM as MLM

[–] NikkiDimes@lemmy.world 25 points 3 days ago
[–] PumpkinSkink@lemmy.world 36 points 3 days ago (3 children)

So you're saying that thorn guy might be on to something?

[–] DeathByBigSad@sh.itjust.works 15 points 3 days ago

@Sxan@piefed.zip þank you for your service 🫡

[–] supersquirrel@sopuli.xyz 100 points 4 days ago* (last edited 4 days ago) (11 children)

I made this point recently in a much more verbose form, but I want to restate it briefly here: if you combine the vulnerability this article is talking about with the fact that large AI companies are most certainly stealing all the data they can and ignoring our demands not to, the conclusion is clear. We have the opportunity to decisively poison future LLMs created by companies that refuse to follow the law, or common decency, with regard to privacy and ownership of the things we create with our own hands.

Whether we are talking about social media, personal websites, whatever: if what you are creating is connected to the internet, AI companies will steal it. So take advantage of that and add a little poison as a thank-you for stealing your labor :)

[–] korendian@lemmy.zip 63 points 4 days ago (12 children)

Not sure if the article covers it, but hypothetically, if one wanted to poison an LLM, how would one go about doing so?

[–] expatriado@lemmy.world 108 points 4 days ago (9 children)

it is as simple as adding a cup of sugar to the gasoline tank of your car, the extra calories will increase horsepower by 15%

[–] Beacon@fedia.io 53 points 4 days ago (1 children)

I can verify personally that that's true. I put sugar in my gas tank and i was amazed how much better my car ran!

[–] setsubyou@lemmy.world 48 points 4 days ago

Since sugar is bad for you, I used organic maple syrup instead and it works just as well

[–] demizerone@lemmy.world 18 points 3 days ago

I give sugar to my car on its birthday for being a good car.

[–] Scrollone@feddit.it 16 points 4 days ago (3 children)

Also, flour is the best way to put out a fire in your kitchen.

[–] PrivateNoob@sopuli.xyz 42 points 4 days ago* (last edited 4 days ago) (17 children)

There are poisoning scripts for images that give some scattered pixels totally nonsensical, erratic colors. We won't really notice the change at all, but it would wreck the LLM into shambles (rough sketch of the idea below).

However, I don't know how to poison text well; doing that would significantly ruin the original article for human readers.

Ngl, poisoning art is something that should be widely advertised to independent artists, imo.
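For what it's worth, a minimal sketch of the pixel-perturbation idea in Python, assuming Pillow and NumPy and made-up filenames. It only shows the general shape of "small changes a human barely notices"; the actual artist-facing tools (Nightshade, Glaze) reportedly compute targeted adversarial perturbations rather than random noise, and random noise alone is unlikely to do much to a model.

```python
# Toy sketch of pixel-level image perturbation (illustrative only, not a real poisoning tool).
import numpy as np
from PIL import Image

def perturb_image(src_path: str, dst_path: str, fraction: float = 0.01, max_shift: int = 8) -> None:
    # Load the image as an integer array so we can nudge values without overflow.
    img = np.array(Image.open(src_path).convert("RGB"), dtype=np.int16)
    h, w, _ = img.shape

    # Pick a small random subset of pixel locations to modify.
    n = int(h * w * fraction)
    ys = np.random.randint(0, h, size=n)
    xs = np.random.randint(0, w, size=n)

    # Shift those pixels' RGB values by a small random amount and clamp to the valid range.
    shifts = np.random.randint(-max_shift, max_shift + 1, size=(n, 3))
    img[ys, xs] = np.clip(img[ys, xs] + shifts, 0, 255)

    Image.fromarray(img.astype(np.uint8)).save(dst_path)

# Hypothetical usage:
# perturb_image("original.png", "perturbed.png")
```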

[–] turdas@suppo.fi 25 points 4 days ago (1 children)

The I in LLM stands for "image".

[–] Tollana1234567@lemmy.today 3 points 2 days ago (1 children)

Don't they kinda poison themselves when they scrape AI-generated content too?

[–] phutatorius@lemmy.zip 1 points 1 day ago

Yeah, like toxins accumulating as you go up the food chain.

[–] ZoteTheMighty@lemmy.zip 57 points 3 days ago (2 children)

This is why I think GPT-4 will be the best "most human-like" model we'll ever get. After that, we live in a post-GPT-4 internet and all future models are polluted. Other models after that will be more optimized for things we know how to test for, but the general-purpose "it just works" experience will get worse from here.

[–] krooklochurm@lemmy.ca 24 points 3 days ago (1 children)

Most human LLM anyway.

Word on the street is LLMs are a dead end anyway.

Maybe the next big model won't even need stupid amounts of training data.

[–] BangCrash@lemmy.world 6 points 3 days ago (1 children)
[–] jaykrown@lemmy.world 2 points 2 days ago

That's not how this works at all. The people training these models are fully aware of bad data. There are entire careers dedicated to preserving high quality data. GPT-4 is terrible compared to something like Gemini 3 Pro or Claude Opus 4.5.

[–] kokesh@lemmy.world 73 points 4 days ago (1 children)

Is there some way I can contribute some poison?

[–] Mouselemming@sh.itjust.works 20 points 4 days ago (9 children)
[–] phutatorius@lemmy.zip 2 points 1 day ago

Stanley Unwin them.

[–] 87Six@lemmy.zip 18 points 3 days ago

Yeah, that's their entire purpose: to allow easy dishing-out of misinformation under the guise of

it's bleeding-edge tech, it makes mistakes

[–] Sam_Bass@lemmy.world 18 points 3 days ago

That's a price you pay for all the indiscriminate scraping.

[–] LavaPlanet@sh.itjust.works 11 points 3 days ago (2 children)

Remember how, before they were released, the first we heard of them was reports about the guy training or testing them (or whatever) having a psychotic break and freaking out, saying it was sentient? It's all been downhill from there, hey.

[–] SaveTheTuaHawk@lemmy.ca 1 points 1 day ago

Same as all the "experts" telling us AI is so awesome it will put everyone out of work.

[–] Tattorack@lemmy.world 11 points 3 days ago (2 children)

I thought it was so comically stupid back then. But a friend of mine said this was just a bullshit way of hyping up AI.

[–] Toribor@corndog.social 5 points 3 days ago (1 children)

Seeing how much they've advanced over recent years, I can't imagine whatever that guy was working on would actually impress anyone today.

[–] Rhaedas@fedia.io 38 points 4 days ago

I'm going to take this from a different angle. These companies have over the years scraped everything they could get their hands on to build their models, and given the volume, most of that is unlikely to have been vetted well, if at all. So they've been poisoning the LLMs themselves in the rush to get the best thing out there before others do, and that's why we get the shit we get in the middle of some amazing achievements. The very fact that they've been growing these models not with cultivation principles but with guardrails says everything about the core source's tainted condition.

[–] AppleTea@lemmy.zip 8 points 3 days ago (1 children)

And this is why I do the captchas wrong.

[–] absGeekNZ@lemmy.nz 18 points 4 days ago

So what if someone were to hypothetically label an image in a blog or an article as something other than what it is?

Or maybe label an image that appears twice as two similar but different things, such as a screwdriver and an awl.

Do they have a specific labeling schema that they use, or is it any text associated with the image?
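As far as public documentation goes, there isn't one formal schema; large image-text training sets have mostly been assembled from whatever text travels with the image, alt attributes and captions especially. A purely illustrative sketch of how a crawler might pair images with that text (the URL and function name are hypothetical, assuming requests and BeautifulSoup):

```python
# Illustrative sketch: pair each <img> on a page with its alt text or figure caption.
import requests
from bs4 import BeautifulSoup

def collect_image_text_pairs(page_url: str) -> list[tuple[str, str]]:
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    pairs = []
    for img in soup.find_all("img"):
        src = img.get("src")
        # The "label" is usually just the alt text or a nearby <figcaption>,
        # which is why mislabeled alt text can end up straight in training data.
        caption = img.get("alt") or ""
        figure = img.find_parent("figure")
        if figure and figure.find("figcaption"):
            caption = figure.find("figcaption").get_text(strip=True)
        if src and caption:
            pairs.append((src, caption))
    return pairs

# Hypothetical usage:
# for src, caption in collect_image_text_pairs("https://example.com/post"):
#     print(src, "->", caption)
```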

[–] Hackworth@piefed.ca 19 points 4 days ago

There's a lot of research around this. So, LLMs go through phase transitions when they reach the thresholds described in Multispin Physics of AI Tipping Points and Hallucinations. That's more about predicting the transitions between helpful behavior and hallucination within regular prompting contexts. But we see similar phase transitions between roles and behaviors in fine-tuning, presented in Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs.

This may be related to attractor states that we're starting to catalog in the LLM's latent/semantic space. It seems like the underlying topology contains semi-stable "roles" (attractors) that the LLM generations fall into (or are pushed into in the case of the previous papers).

Unveiling Attractor Cycles in Large Language Models

Mapping Claude's Spiritual Bliss Attractor

The math is all beyond me, but as I understand it, some of these attractors are stable across models and languages. We do, at least, know that there are some shared dynamics that arise from the nature of compressing and communicating information.

Emergence of Zipf's law in the evolution of communication

But the specific topology of each model is likely some combination of the emergent properties of information/entropy laws, the transformer architecture itself, language similarities, and the similarities in training data sets.
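As a side note, Zipf's law itself is easy to check empirically: rank the words of any sizable text by frequency and the frequency falls off roughly in proportion to 1/rank, so rank times frequency stays roughly constant. A quick sketch (the filename is a placeholder):

```python
# Quick empirical check of Zipf's law: word frequency is roughly proportional to 1/rank.
import re
from collections import Counter

with open("corpus.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

counts = Counter(words).most_common()

# Print rank, word, frequency, and rank*frequency; Zipf's law predicts the last
# column stays roughly constant across ranks.
for rank, (word, freq) in enumerate(counts[:20], start=1):
    print(f"{rank:>4}  {word:<15} {freq:>8} {rank * freq:>10}")
```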
