What about using it without a subscription, though? I'm unsure whether this is good or bad for them: it loses them money, but it also makes their user numbers look good, so idk.
Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related news or articles.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
- Check for duplicates before posting, duplicates may be removed
- Accounts 7 days and younger will have their posts automatically removed.
Approved Bots
At least disable "improve the model for everyone" and only use temporary chats. We can't trust that they'll honor it, though. Duck AI is good for anonymizing your GPT session, but it's very bad at math formatting.
Nah, I'm gonna use my free account to prompt a bunch of inane shit to drive up operating costs while poisoning their training data.
If you really want to hurt them, use a free account and keep asking it to make you inane pictures and stuff. I mean, it will waste energy, but it will cost them money.
All these boycotts I can't join since I never paid for them in the first place 😢
You were just boycotting before it was cool.
Off with their heads! GO self-hosted, go local... toss the rest in the trash can before this crap gets a foothold and fully enshittifies everything.
I would, if I found even a remotely good use case for LLMs. It would be useful for contextual search across a bunch of API documentation and books on algorithms, but I don't want a sycophantic "copilot" or "assistant" that does a job so bad I'd be fired for it, all while being called ableist slurs and getting blacklisted from the industry.
LLMs are already shit. Going local is still burning the world just to run a glorified text production machine
GO self-hosted,
So yours and another comment I saw today got me to dust off an old Docker container I was playing with a few months ago to run deepseek-r1:8b on my server's Intel A750 GPU with 8 GB of VRAM. Not exactly top-of-the-line, but not bad.
I knew it would be slow and not as good as ChatGPT or whatever, which I guess I can live with. I did ask it to write some example Rust code today, which I hadn't even thought to try, and it worked.
But I also asked it to describe the characters in a popular TV show, and it got a ton of details wrong.
8b is the highest number of parameters I can run on my card. How do you propose someone in my situation run an LLM locally? Can you suggest some better models?
Honestly, you pretty much don't. LLMs are insanely expensive to run, as most of the model improvements come from simply growing the model. It's not realistic to run LLMs locally and compete with the hosted ones; it pretty much requires economies of scale. Even if you invest in a 5090, you're going to be behind the purpose-made GPUs with 80 GB of VRAM.
Maybe it could work for some use cases, but I'd rather just not use AI.
You are playing with ancient stuff that wasn’t even good at release. Try these:
A 4b model performing like a 30b model: https://huggingface.co/Nanbeige/Nanbeige4.1-3B
Google's open-source version of Gemini: https://huggingface.co/google/gemma-3-4b-it

Well, not off to a great start.
To be clear, I think getting an LLM to run locally at all is super cool, but saying "go self-hosted" sort of glosses over the fact that getting a local LLM to do anything close to what ChatGPT can do is a very expensive hobby.
Any suggestions on how to get these into GGUF format? I found a GitHub project that claims to convert them, but I'm wondering if there's a more direct way.
It comes down to the amount of VRAM / unified RAM you have. There's no magic that makes an 8b model perform like the top-tier subscription LLMs (likely in the 500b+ range; I wouldn't be surprised if it's trillions).
If you can get to 32b / 80b models, that's where the magic starts to happen.
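Back-of-the-envelope, the VRAM relationship is simple. This is my own rough rule of thumb, not a benchmark from this thread: weight memory is roughly parameter count times bits per weight, plus some headroom for the KV cache and activations.

```python
def vram_gb(params_billions: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Rough weights-only VRAM estimate in GB, with ~20% headroom
    for KV cache and activations. A sketch, not a guarantee."""
    return params_billions * bits_per_weight / 8 * overhead

# An 8b model at 4-bit quantization squeaks onto an 8 GB card...
print(round(vram_gb(8, 4), 1))   # 4.8
# ...but a 32b model at 4-bit already wants ~19 GB:
print(round(vram_gb(32, 4), 1))  # 19.2
```

Which is why the jump from "runs on a consumer card" to "competes with the hosted stuff" is mostly just a hardware bill.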
Going local is taxing on your hardware, which is extremely expensive to replace. Hell, it could soon become almost impossible to replace. I genuinely don't recommend it.
Even if you HAVE to use LLMs for some reason, there are free alternatives right now that let Silicon Valley bleed money, and they're quickly running out of it.
Cancelling any paid subscription probably hurts them more than anything else.
If an LLM is tied to your productivity, going local is about owning and controlling the means of production.
You aren't supposed to run it on the machine you work on anyway; set up a server and send requests to it.
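A minimal sketch of that split, assuming the server runs Ollama (whose `/api/generate` HTTP endpoint listens on port 11434 by default); the hostname and model name below are placeholders, not anything from this thread:

```python
import json
import urllib.request

OLLAMA_URL = "http://my-llm-server:11434/api/generate"  # hypothetical hostname

def build_generate_request(model: str, prompt: str) -> dict:
    # Payload shape for Ollama's /api/generate endpoint;
    # stream=False asks for one JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_generate_request("deepseek-r1:8b", "Write a hello-world in Rust")

# On the work machine nothing runs locally; you just POST the payload:
# req = urllib.request.Request(
#     OLLAMA_URL,
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

The GPU box does the heavy lifting; your working machine only ever sees JSON.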
Why would anyone subscribe? LLMs are rarely actually helpful, and I really tried; I've been a damn tech nerd for decades. But most of the time it just takes longer to get worse results than doing it yourself.
I would not pay 1 buck annually for this. And surely not 30 a month.
They're extremely helpful, just not at a professional level. They can help a student proofread an essay or a content creator come up with a script, but they can't help you code an app from scratch or give you a medical diagnosis.
Didn't say they're 100% useless. They're just 90% useless to me and 10% super helpful. Surely depends on what you actually want from them. But I couldn't think of one area where I might seriously consider dishing out 30 bucks a month for an LLM. Except if I did tons of translations every day, or your proofreading example. But for that, the free tiers would already be enough.
I don't necessarily disagree with you here; I also think that no generative LLM is worth paying for, let alone a subscription at such a ridiculous price. However, I can still at least understand the appeal for a certain niche subset of people who constantly do the few things that a generative LLM like ChatGPT excels at.
You can subscribe to ChatGPT?
Yes. I think it’s like $20 a month.
--
Edit: LMAO so I was fuck-off wrong. It's $10, $30, and $280 per month. At least in my currency (Swedish Crowns).
Don't use the stochastic parrot, and definitely don't fucking shell out 280 a month for it. Holy fuck.
For coding, $280/mo is peanuts compared to how much the Claude API costs.
It's $20 in the US.
Any reference on Trump's donors to back up that Gepeto is the biggest one? I would like to see the top 10 or 100 list...
Great job
Quit? Only a fool would waste their time on it.
Corporate would still use it 😒

