this post was submitted on 23 Feb 2026
195 points (98.5% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.


AI and legal experts told the FT this "memorization" ability could have serious ramifications for AI groups' battle against dozens of copyright lawsuits around the world, as it undermines their core defense that LLMs "learn" from copyrighted works but do not store copies.

Sam Altman would like to remind you that each Old Lady at a Library consumes 284 cubic feet of oxygen a day from the air.

Also, hey, at least they probably made sure to destroy the physical copies they ripped into their hopelessly fragmented CorpoNapster fever dream. The law is the law.

[–] riskable@programming.dev -2 points 8 hours ago (1 children)

You're missing the boat entirely. Think about how an AI model is trained: it reads a section of text (one context window at a time), converts it into tokens, then nudges each floating-point weight up or down a little based on how well the model predicted the next token from the ones before it.

It does this trillions of times on zillions of books, articles, artificially-created training text (more and more of this), and other similar things. After all of that, you get a great big stream of floating-point values that gets written out to a file. That file represents a bazillion statistical probabilities, so that when you give the model a stream of tokens, it can predict the next one.
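The idea in the paragraph above can be caricatured as a bigram counter. This is a drastic simplification of my own (a real LLM uses gradient descent over billions of weights, not a count table), but it shows the same core shape: a corpus goes in, numbers come out, and the numbers let you guess the next token.

```python
# Toy sketch (my own example, NOT how a real transformer works): reduce a
# corpus to statistics, then predict the next token from those statistics.
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count how often each token follows each other token."""
    tokens = corpus.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, token: str):
    """Return the statistically most likely next token, if any."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

model = train("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # -> "cat" (follows "the" twice, "mat" once)
```

Note that `model` here stores only counts, not the corpus itself; you can recover likely continuations, but not the original text in general.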

That's all it is. It's not a database! It hasn't memorized anything. It hasn't encoded anything. You can't decode it at all because it's a one-way process.

Let me make an analogy: let's say you had a collection of dice. You roll each one, individually, a trillion times and record the results. Except you're not just rolling them, you're leaving them in their current state and tossing them up into a domed ceiling (like one of those dice popper things). After that's all done, you'll find out that die #1 is slightly imbalanced and wants to land on the number two more than any other number. Except when the starting position is two, then it's likely to roll a six.

With this amount of data, you could predict the next roll of any die based on its starting position and be right a lot of the time. Not 100% of the time. Just more often than would be possible if it was truly random.

That is how an AI model works. It's a multi-gigabyte file (note: not terabytes or petabytes, which is what it would take to contain a "memorized" collection of millions of books) containing loads of statistical probabilities.
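The size claim above can be checked with back-of-envelope arithmetic. The figures below are rough numbers I'm assuming for illustration (they're not from the comment or the article): millions of plain-text books occupy terabytes, orders of magnitude more than a multi-gigabyte model file.

```python
# Back-of-envelope check of the size argument, with assumed round numbers.
books = 5_000_000            # assumed library size
bytes_per_book = 1_000_000   # ~1 MB of plain text per book (rough)
model_bytes = 20 * 10**9     # a "multi-gigabyte" model, say 20 GB

corpus_bytes = books * bytes_per_book
print(f"corpus ~ {corpus_bytes / 10**12:.0f} TB, model ~ {model_bytes / 10**9:.0f} GB")
print(f"corpus is ~{corpus_bytes / model_bytes:.0f}x the size of the model")
```

Under these assumptions the corpus is roughly 250 times larger than the model, though note this only rules out *verbatim, uncompressed* storage of everything; it says nothing about whether specific passages can still be reproduced, which is what the article reports.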

To suggest it's just a shitty form of encoding is to say that a record of 100 trillion random dice rolls can be used to reproduce reality.

[–] supersquirrel@sopuli.xyz 3 points 8 hours ago* (last edited 8 hours ago)

> That's all it is. It's not a database! It hasn't memorized anything. It hasn't encoded anything. You can't decode it at all because it's a one-way process.

No, it isn't a one-way process; the literal point of this article is that you functionally can decode it.