this post was submitted on 06 Mar 2026
307 points (94.8% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

[–] okwhateverdude@lemmy.world 22 points 1 day ago (1 children)

"Different, nondeterministic things on every install" Massive doubt. I know this is the Fuck AI comm, but know thine enemy. Models are simply incapable of true randomness. They are worse than humans, even. It takes great effort to introduce entropy and get a truly out-of-distribution result. Yes, there very likely will be a "worm" among people who have existing relationships with token providers, where the agent can surreptitiously use API keys lying around, but that's a tiny number of people.

[–] apparia@discuss.tchncs.de 13 points 23 hours ago (1 children)

What? They're just computer programs. Almost all computers have high quality entropy sources that can generate truly random numbers. LLMs' whole thing is basically turning sequences of random numbers into sequences of less random stuff that makes sense. They have a built-in dial for nondeterminism, and it's almost never at zero.

I feel like I'm missing your meaning because the literal interpretation is nonsense.

[–] okwhateverdude@lemmy.world 6 points 22 hours ago (2 children)

Yes and no. The models themselves are just a big pile of floating point numbers that represent a compression of the dataset they were trained on. The patterns in that dataset will absolutely dominate the output of the model even if you tweak the inference parameters. Try it: ask it ten times to make a list of 20-30 random words, each time in a fresh context. The alignment between those lists will be uncanny. Hell, you'll even see repeats within a single list. The size of the model matters here, with the small ones (especially quantized ones) having fewer patterns, or bigger semantic gravity wells. But even the big boys will give you the same mostly fixed slop patterns. Unless you are specifically introducing more entropy into the prompt, you can mostly treat a fixed prompt as a function with a somewhat deterministic output (within given bounds).
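A toy illustration of that "semantic gravity well" effect (the vocabulary and the Zipf-like weights below are invented, not taken from any real model): when the underlying distribution is heavily peaked, independently seeded "random word" lists overlap with each other and repeat internally, no matter how fresh the context is.

```python
import random

# Stand-in "model": a word distribution with a steep Zipf-like peak,
# mimicking a model whose training data dominates its sampling.
vocab = [f"word{i}" for i in range(1000)]
weights = [1.0 / (rank + 1) ** 2 for rank in range(1000)]

def random_word_list(rng, n=25):
    """Ask the toy 'model' for n random words (sampling with replacement)."""
    return rng.choices(vocab, weights=weights, k=n)

# Ten independent seeds play the role of ten fresh chat contexts.
lists = [random_word_list(random.Random(seed)) for seed in range(10)]

# The top-ranked words show up in nearly every list, and each list
# contains repeats -- the peaked distribution dominates the "randomness".
overlap = set(lists[0]) & set(lists[1])
```

This is only an analogy, of course: real models are peaked for semantic reasons rather than by an explicit weight table, but the sampling math behind the repeats is the same.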

This means that the claims in the OP are simply not true. At least, not without some caveats and specific workarounds to make them true.

[–] Tiresia@slrpnk.net 5 points 22 hours ago (1 children)

At least, not without some caveats and specific workarounds to make them true

Luckily hackers are terrible at doing that, otherwise we might be in trouble.

[–] okwhateverdude@lemmy.world 1 points 21 hours ago

Haha, you're not wrong. All I am pointing out is that inducing enough true randomness in an agent to make an agent worm hard to fight is itself really difficult, and an understudied thing in general. I have done experiments on introducing entropy into prompts, and it is very difficult to thread the needle between instruction following and entropy. I've only seen one other dude posting experiments on attempting to introduce entropy into prompts.

[–] shoo@lemmy.world 1 points 16 hours ago* (last edited 16 hours ago)

Ask it ten times to make list of 20-30 random words

This is true of out-of-the-box models, but it's not a universal rule. You could turn the temperature all the way up and get something far more random, probably to the point of incoherence.

The trick is balancing that against keeping the model doing something useful. If you're clever, you could leverage /dev/random or similar as a tool to manually inject randomness while keeping the decoding itself deterministic.
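A minimal sketch of that idea, assuming an agent framework that lets you template prompts (the function name and prompt wording are hypothetical): pull a nonce from the OS entropy pool (`os.urandom`, which draws from /dev/urandom on Linux) and splice it into an otherwise fixed prompt, so two identical requests diverge even at temperature 0.

```python
import base64
import os

def entropy_seeded_prompt(task: str, nbytes: int = 16) -> str:
    """Prepend an OS-entropy nonce so otherwise-identical prompts differ.

    os.urandom reads from the operating system's entropy source, so the
    nonce is unpredictable even though the decoding step may be greedy.
    """
    nonce = base64.b32encode(os.urandom(nbytes)).decode().rstrip("=")
    return f"Entropy nonce (use it to vary your choices): {nonce}\n{task}"

# Two calls with the same task produce different prompts.
p1 = entropy_seeded_prompt("List 25 random words.")
p2 = entropy_seeded_prompt("List 25 random words.")
```

Whether the model actually uses the nonce to vary its output is exactly the needle-threading problem described upthread; the sketch only guarantees the inputs differ.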