EncryptKeeper

joined 2 years ago
[–] EncryptKeeper@lemmy.world 2 points 29 minutes ago* (last edited 28 minutes ago)

Like, make a query and then go make yourself a sandwich while it spits out a word every other second. It's that slow.

There are very small models that can run on mid-range graphics cards and all, but it’s not something you’d look at and say “Yeah, this does most of what ChatGPT does.”

I have a model running on a GTX 1660 and I use it with Hoarder to parse articles and create a handful of tags for them, and it’s not… great at that.
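
For the curious, this is roughly what that kind of tagging setup looks like under the hood. This is just a minimal sketch assuming a local Ollama server on its default port; the model name is only an example of something small enough to fit in a GTX 1660’s ~6GB of VRAM, not what Hoarder actually ships with.

```python
# Minimal sketch: ask a local Ollama model to suggest tags for an article.
# Assumes Ollama is running on localhost:11434; "llama3.2:3b" is an example model.
import json
import urllib.request

def tag_article(text: str, model: str = "llama3.2:3b") -> list[str]:
    prompt = (
        "Suggest up to 5 short topic tags for the article below. "
        "Reply with a comma-separated list only.\n\n" + text
    )
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())["response"]
    return [t.strip() for t in reply.split(",") if t.strip()]
```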

[–] EncryptKeeper@lemmy.world 1 points 34 minutes ago* (last edited 32 minutes ago) (3 children)

AI models require a LOT of VRAM to run. Failing that, they need some serious CPU power, but it’ll be dog slow.

A consumer model with only a small fraction of the capability of the latest ChatGPT model would need at least a $2,000+ graphics card to run, if not more than one.
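
Back-of-the-envelope math on why the VRAM adds up so fast: it’s basically parameter count times bytes per weight, plus some overhead for the KV cache and activations (the 1.2x factor here is just a rough assumption).

```python
# Rough VRAM estimate: parameters * bytes per weight, plus ~20% overhead
# for KV cache and activations (the 1.2 factor is an assumption, not a spec).
def vram_gb(params_billions: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

for params, bits in [(8, 16), (8, 4), (70, 16), (70, 4)]:
    print(f"{params}B @ {bits}-bit: ~{vram_gb(params, bits):.0f} GB VRAM")
# 8B @ 16-bit: ~19 GB, 8B @ 4-bit: ~5 GB, 70B @ 16-bit: ~168 GB, 70B @ 4-bit: ~42 GB
```

Even a heavily quantized 70B-class model blows past any single consumer card’s VRAM, which is why the smaller models are what people actually run at home.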

Like, I run a local LLM with an RTX 5070 Ti, and the best model I can run on that thing is good for ingesting some text to generate tags and such, but not a whole lot else.

[–] EncryptKeeper@lemmy.world 7 points 5 hours ago (5 children)

I mean, no, not at all, but local LLMs are a less energy-reckless way to use AI.