And an LLM that you could run locally from a flash drive will do most of what it can do.
I mean, no, not at all, but local LLMs are a less energy-reckless way to use AI.
Why not... for the ignorant such as myself?
AI models require a LOT of VRAM to run. Failing that, they need some serious CPU power, but it'll be dog slow.
A consumer model with only a small fraction of the capability of the latest ChatGPT model would need at least a $2,000+ graphics card, if not more than one.
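The back-of-the-envelope math (a rough sketch; bytes per parameter depend on the quantization, and the overhead factor here is just a guess):

```python
# Rough VRAM estimate: weights = parameter count x bytes per parameter,
# plus some overhead for the KV cache and activations.
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def vram_gb(params_billions: float, quant: str = "int4") -> float:
    weights_gb = params_billions * BYTES_PER_PARAM[quant]
    return weights_gb * 1.2  # ~20% overhead: assumed, varies with context length

for params in (9, 70, 405):
    print(f"{params}B @ int4: ~{vram_gb(params):.1f} GB VRAM")
# 9B @ int4: ~5.4 GB (fits a 16 GB card); 70B: ~42 GB; 405B: ~243 GB
```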
Like, I run a local LLM with an RTX 5070 Ti, and the best model I can run on that thing is good for ingesting some text to generate tags and such, but not a whole lot else.
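For the curious, the tag-generation use looks roughly like this (a minimal sketch assuming an Ollama server on its default port; the model name and prompt are examples, not exactly what I run):

```python
import requests

article = "Paste the article text here..."

# Ask a small local model, served by Ollama, for a handful of topic tags.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma2:9b",  # example model; pick whatever fits your VRAM
        "prompt": "Give 5 short topic tags for this article, comma-separated:\n\n"
                  + article,
        "stream": False,  # wait for the full response instead of streaming
    },
    timeout=120,
)
tags = [t.strip() for t in resp.json()["response"].split(",")]
print(tags)
```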
How slow?
Loading up a website with Flash and GIFs on '90s dialup slow... or worse?
Basically, I can run 9B models on my 16 GB GPU mostly fine, getting responses of, let's say, 10 lines in a few seconds.
Bigger models, if they don't outright crash, take like 5x or 10x longer for the same task; so long it isn't even useful anymore.
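Most of that slowdown is layers that don't fit in VRAM spilling over to the CPU. You can see the cliff with something like this (a sketch using llama-cpp-python; the model path is a placeholder and the exact numbers depend on your hardware):

```python
import time
from llama_cpp import Llama

MODEL_PATH = "model.gguf"  # placeholder: any quantized GGUF model

def time_generation(n_gpu_layers: int) -> float:
    # n_gpu_layers controls how many transformer layers get offloaded to VRAM.
    llm = Llama(model_path=MODEL_PATH, n_gpu_layers=n_gpu_layers,
                n_ctx=2048, verbose=False)
    start = time.perf_counter()
    llm("Summarize why local LLMs are slower than hosted ones.", max_tokens=128)
    return time.perf_counter() - start

print("all layers in VRAM:", time_generation(-1))  # -1 = offload everything
print("all layers on CPU:", time_generation(0))    # what happens past your VRAM limit
```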
So very worse.
Like make a query and then go make yourself a sandwich while it spits out a word every other second slow.
There are very small models that can run on mid-range graphics cards and all, but it's not something you'd look at and say, "Yeah, this does most of what ChatGPT does."
I have a model running on a GTX 1660, and I use it with Hoarder to parse articles and create a handful of tags for them, and it's not… great at that.