AI models require a LOT of VRAM to run. Failing that, they need some serious CPU power, and even then it'll be dog slow.
A consumer model that's only a small fraction of the capability of the latest ChatGPT model already needs a $2,000+ graphics card, if not more than one.
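To put rough numbers on that, here's a back-of-the-envelope sketch of how much VRAM it takes just to load the weights (the quantization and overhead figures are ballpark assumptions, not exact numbers for any specific model):

```python
# Rough VRAM estimate for loading model weights.
# Assumptions: bits_per_weight is the quantization level, ~20% overhead
# covers the KV cache, activations, etc.
def vram_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    weights_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits ≈ 1 GB
    return weights_gb * overhead

print(vram_gb(70, 4))  # ~42 GB: a 70B model at 4-bit won't fit on a 24 GB card
print(vram_gb(8, 4))   # ~4.8 GB: an 8B model at 4-bit fits a mid-range card
```

That's why the "ChatGPT-class" stuff stays out of reach on consumer hardware: the models that fit are the small ones.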
Like, I run a local LLM on an RTX 5070 Ti, and the best model I can fit on that thing is good for ingesting some text to generate tags and such, but not a whole lot else.
Like "make a query and then go make yourself a sandwich while it spits out a word every other second" slow.
There are very small models that can run on mid-range graphics cards and all, but it's not something you'd look at and say "Yeah, this does most of what ChatGPT does."
I have a model running on a GTX 1660 that I use with Hoarder to parse articles and create a handful of tags for them, and it's not… great at that.
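For what it's worth, the tagging itself is just one short prompt against a local endpoint, which is why even a 1660-class card can do it at all. A rough sketch of that kind of call against an Ollama server (the model name, prompt, and article text are placeholders I made up; Hoarder handles this internally):

```python
# Minimal sketch of a tag-suggestion call to a local Ollama server.
# Assumes Ollama is running on its default port (11434).
import json
import urllib.request

def suggest_tags(article_text: str, model: str = "llama3.2:3b") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": "Give 3-5 short topic tags, comma separated, for this article:\n\n"
                  + article_text,
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(suggest_tags("NASA announced a new lunar lander contract today..."))
```

A single short completion like that is about the ceiling for these small models, and even then the tags that come back are hit or miss.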