Interesting, thanks for doing the research!
As an extreme non-expert, I would say "deliberate removal of a part of a model in order to study the structure of that model" is a somewhat different concept from "intrinsic and inexorable averaging of language by LLM tools as they currently exist". They may well involve similar mechanisms, though, and that may be what the OP is referencing; I don't know enough of the technical side to say.
That paper looks pretty interesting in itself; other issues aside, LLMs are really fascinating in the way they build (statistical) representations of language.
What? They're just computer programs. Almost all computers have high-quality entropy sources that can generate truly random numbers. LLMs' whole thing is basically turning sequences of random numbers into sequences of less random stuff that makes sense. They have a built-in dial for nondeterminism (the sampling temperature), and it's almost never set to zero.
I feel like I'm missing your meaning because the literal interpretation is nonsense.
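To make the "dial" part concrete, here's a minimal sketch of temperature sampling (plain Python/NumPy, not any particular model's actual code): the model's raw scores get divided by a temperature before you draw a token, so near zero it collapses to a deterministic argmax and higher values make the output more random.

    import numpy as np

    rng = np.random.default_rng()  # seeded from the OS entropy source by default

    def sample_next_token(logits, temperature=0.8):
        # logits: the model's raw score for every token in the vocabulary
        logits = np.asarray(logits, dtype=np.float64)
        if temperature <= 0:
            return int(np.argmax(logits))        # dial at zero: fully deterministic
        scaled = logits / temperature            # the nondeterminism dial
        scaled -= scaled.max()                   # for numerical stability
        probs = np.exp(scaled)
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))  # randomness enters here

    # Toy example: a 4-token "vocabulary" with made-up scores.
    print(sample_next_token([2.0, 1.0, 0.5, -1.0], temperature=0.8))

Run it a few times and the answer varies; set temperature to 0 and it's the same token every time.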