this post was submitted on 05 May 2026
288 points (99.0% liked)
Technology
Use tape libraries for the moment, with hard drives acting as a cache in front of them? That doesn't have to mean moving the whole backing store to tape, just predicting what likely won't be used soon and letting the storage layer indicate "go look on tape for this item". Obviously, that can result in much higher retrieval latency for cold items, but as long as you (a) do predictive fetching with a reasonably good algorithm and (b) have a lot of hard drives, which I'm sure the Internet Archive does, I'd think that tape should be workable.
https://en.wikipedia.org/wiki/Tape_library
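The cache-over-tape idea above can be sketched in a few lines. Everything here is hypothetical (the `TieredStore` class, the simulated tape tier as a dict); a real tape library would involve a robot mounting a cartridge, with seconds-to-minutes latency on the cold path:

```python
import collections

class TieredStore:
    """Sketch of an HDD cache (LRU) in front of a simulated tape tier."""

    def __init__(self, cache_capacity):
        self.cache = collections.OrderedDict()  # item_id -> data, in LRU order
        self.capacity = cache_capacity
        self.tape = {}  # stand-in for the cold tier; really a tape library

    def put_cold(self, item_id, data):
        """Archive an item directly to the cold tier."""
        self.tape[item_id] = data

    def get(self, item_id):
        """Return (data, tier). Cache misses fall through to tape."""
        if item_id in self.cache:
            self.cache.move_to_end(item_id)  # mark as recently used
            return self.cache[item_id], "hdd"
        data = self.tape[item_id]  # slow path: "go look on tape for this item"
        self._admit(item_id, data)
        return data, "tape"

    def prefetch(self, predicted_ids):
        """Predictive fetching: stage items the predictor expects to be
        requested soon, so later reads hit the HDD tier instead of tape."""
        for item_id in predicted_ids:
            if item_id not in self.cache and item_id in self.tape:
                self._admit(item_id, self.tape[item_id])

    def _admit(self, item_id, data):
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        self.cache[item_id] = data
```

The interesting part is `prefetch()`: the quality of the prediction algorithm is what determines how often readers eat the tape-mount latency.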
I'd also guess, though I don't know for sure, that it's probably a lot easier to scale up manufacturing of tapes than of hard drives.
EDIT: Does kind of make me wonder what the open-source options for tiered storage like that are. I've never really gone hunting, but it seems like there'd be a lot of commonality from place to place, and for a lot of the places that do it, it's not really their core competency (that is, they just want something that stores and processes lots of data; data storage itself isn't what they principally care about).
I mean, slow storage is probably a far better fit for archiving/backup use cases, as LLM workloads are really only interested in the fastest storage.
I would bet that a lot of the storage that AI companies are picking up isn't for the model itself, but for storing the huge amount of information that they want to use as their training corpus.
I'd bet that what they do is something like this:
1. Download data and store it in its original form, non-destructively. This is probably not used incredibly frequently. When you see bots sucking down the whole Web, this is the sort of thing that's involved.
2. Have some kind of filtered training corpus. This throws out a lot of stuff that is useless for training, and is generated from #1 by filtering software. It's probably smaller than #1. Probably a lot smaller.
   - Probably some sort of scored index is also generated at this stage, estimating how useful or reliable the data in #2 should be considered; I'd assume that's an input into the training.
3. The model itself, generated via training.
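The filter-then-score stages above might look something like this. All of it is invented for illustration (the function names, the too-short filter, the unique-word "quality" heuristic); real pipelines use classifiers, dedup signals, domain reputation, and so on:

```python
def filter_corpus(raw_docs):
    """Stage 2: drop documents useless for training (here: too short)."""
    return [d for d in raw_docs if len(d["text"].split()) >= 5]

def score_doc(doc):
    """Toy usefulness/reliability estimate in [0, 1], standing in for
    the scored index: here just the ratio of unique words to total."""
    words = doc["text"].split()
    return round(len(set(words)) / len(words), 2)

def build_training_set(raw_docs):
    """Raw crawl (stage 1) -> filtered, scored records fed to training."""
    filtered = filter_corpus(raw_docs)
    return [(d["id"], d["text"], score_doc(d)) for d in filtered]
```

Note the asymmetry that matters for storage: `raw_docs` (stage 1) is touched rarely and sequentially, which is exactly the access pattern tape handles well, while the filtered/scored output gets hammered during training.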
For the data in stage #1, I'd guess that AI companies might be able to use tapes. That being said, it might make sense to use faster storage if it accelerates the time to iterate on improving the filtering software.
But, yeah, for the later stages, tapes probably aren't gonna work.