this post was submitted on 12 Feb 2026
1188 points (98.2% liked)

Technology

(page 3) 48 comments
[–] termaxima@slrpnk.net 1 points 6 days ago (1 children)

What about using it without a subscription, though? I'm unsure whether this is good or bad for them: it loses them money, but it also makes their user numbers look good, so idk.

[–] SlimePirate@lemmy.dbzer0.com 1 points 6 days ago

At least disable "improve model for everyone" and only use temporary chats. We can't trust them to honor it, though. Duck AI is good for anonymizing your GPT session, but very bad at math formatting.

[–] Octagon9561@lemmy.ml 0 points 6 days ago (1 children)

I just use Chinese AI atp. DeepSeek and Kimi.

[–] MasterNerd@lemmy.zip -3 points 6 days ago

Nah, I'm gonna use my free account to prompt a bunch of inane shit to drive up operating costs while poisoning their training data.

[–] HubertManne@piefed.social 0 points 6 days ago

If you really want to hurt them, use a free account and keep asking it to make you inane pictures and stuff. I mean, it will waste energy, but it will cost them money.

[–] LibertyLizard@slrpnk.net 136 points 1 week ago (1 children)

All these boycotts I can't join since I never paid for them in the first place 😢

[–] truthfultemporarily@feddit.org 72 points 1 week ago

You were just boycotting before it was cool.

[–] muimota@lemmy.ml 119 points 1 week ago (13 children)

Check the site's social media icons: pure AI slop.

[–] unspeakablehorror@thelemmy.club 54 points 1 week ago (5 children)

Off with their heads! Go self-hosted, go local... toss the rest in the trash can before this crap gets a foothold and fully enshittifies everything.

[–] ZILtoid1991@lemmy.world -2 points 6 days ago (1 children)

I would, if I found even a remotely good use case for LLMs. It would be useful for contextual search over a bunch of API documentation and books on algorithms, but I don't want a sycophantic "copilot" or "assistant" that does a job so bad I'd be fired for it, all while being called ableist slurs and getting blacklisted from the industry.

[–] mushroommunk@lemmy.today 20 points 1 week ago (2 children)

LLMs are already shit. Going local is still burning the world just to run a glorified text production machine

[–] ch00f@lemmy.world 5 points 1 week ago (3 children)

GO self-hosted,

So yours and another comment I saw today got me to dust off an old Docker container I was playing with a few months ago to run deepseek-r1:8b on my server's Intel A750 GPU with 8 GB of VRAM. Not exactly top-of-the-line, but not bad.

I knew it would be slow and not as good as ChatGPT or whatever, which I guess I can live with. I did ask it to write some example Rust code today, which I hadn't even thought to try, and it worked.

But I also asked it to describe the characters in a popular TV show, and it got a ton of details wrong.

8B is the largest parameter count I can run on my card. How do you propose someone in my situation run an LLM locally? Can you suggest some better models?

[–] SirHaxalot@nord.pub 0 points 6 days ago

Honestly, you pretty much don't. LLMs are insanely expensive to run, as most model improvements come from simply growing the model. It's not realistic to run LLMs locally and compete with the hosted ones; it pretty much requires economies of scale. Even if you invest in a 5090, you're going to be behind the purpose-made GPUs with 80 GB of VRAM.

Maybe it could work for some use cases, but I'd rather just not use AI.

[–] lexiw@lemmy.world 9 points 1 week ago (2 children)

You are playing with ancient stuff that wasn’t even good at release. Try these:

A 3B model performing like a 30B model: https://huggingface.co/Nanbeige/Nanbeige4.1-3B

Google's open-source version of Gemini: https://huggingface.co/google/gemma-3-4b-it

[–] ch00f@lemmy.world 1 points 6 days ago (4 children)

Well, not off to a great start.

To be clear, I think getting an LLM to run locally at all is super cool, but saying "go self-hosted" sort of glosses over the fact that getting a local LLM to do anything close to what ChatGPT can do is a very expensive hobby.

[–] ch00f@lemmy.world 2 points 1 week ago (1 children)

Any suggestions on how to get these into GGUF format? I found a GitHub project that claims to convert them, but I'm wondering if there's a more direct way.
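
For what it's worth, the usual route is the conversion script that ships with llama.cpp. A minimal sketch, assuming a llama.cpp checkout (the script name `convert_hf_to_gguf.py` and the `--outtype` values are llama.cpp's and may change between versions):

```python
# Build the llama.cpp HF -> GGUF conversion command as an argv list.
# Sketch only: run it from inside a llama.cpp checkout, where
# convert_hf_to_gguf.py lives.
def build_convert_cmd(model_dir, out_file, outtype="q8_0"):
    return [
        "python", "convert_hf_to_gguf.py", model_dir,
        "--outfile", out_file,
        "--outtype", outtype,  # for smaller quants, follow up with llama-quantize
    ]

cmd = build_convert_cmd("./gemma-3-4b-it", "gemma-3-4b-it.q8_0.gguf")
# From the llama.cpp directory: subprocess.run(cmd, check=True)
```

Many popular models also have pre-converted GGUF uploads on Hugging Face, which skips this step entirely.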

[–] Mika@piefed.ca 2 points 1 week ago

It comes down to the amount of VRAM / unified RAM you have. There is no magic to make an 8B model perform like the top-tier subscription LLMs (likely in the 500B+ range; I wouldn't be surprised if it's trillions).

If you can get to 32B / 80B models, that's where the magic starts to happen.
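
As a rough back-of-the-envelope check (my own rule of thumb, not from the thread): weights take roughly params × bits-per-weight / 8 of memory, plus some headroom for the KV cache and activations.

```python
def vram_estimate_gb(n_params_billion, bits_per_weight=4, overhead=1.2):
    """Very rough VRAM (GB) needed to hold an LLM at a given quantization.

    overhead is a fudge factor for KV cache and activations; real usage
    varies a lot with context length.
    """
    bytes_per_weight = bits_per_weight / 8
    return n_params_billion * bytes_per_weight * overhead

print(vram_estimate_gb(8))   # 8B at 4-bit: ~4.8 GB, fits an 8 GB card
print(vram_estimate_gb(70))  # 70B at 4-bit: ~42 GB, multi-GPU territory
```

Which is why 8B is about the ceiling for an 8 GB card, and the 32B+ models need unified-memory Macs or datacenter GPUs.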

[–] CosmoNova@lemmy.world 3 points 1 week ago (2 children)

Going local is taxing on hardware that is extremely expensive to replace. Hell, it could soon become almost impossible to replace. I genuinely don't recommend it.

Even if you HAVE to use LLMs for some reason, there are free alternatives right now that let Silicon Valley bleed money, and they're quickly running out of it.

Cancelling any paid subscription probably hurts them more than anything else.

[–] Mika@piefed.ca 5 points 1 week ago* (last edited 1 week ago)

If an LLM is tied to making you productive, going local is about owning and controlling the means of production.

You aren't supposed to run it on the machine you work on anyway; set up a server and send requests to it.
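
A minimal sketch of that setup, assuming an Ollama-style server on the LAN (the host address and model name below are placeholders, not from the thread):

```python
import json
from urllib import request

def build_chat_request(prompt, host="http://192.168.1.50:11434",
                       model="deepseek-r1:8b"):
    """Build an HTTP request for an Ollama-style /api/generate endpoint.

    host and model are placeholders -- point them at your own server.
    """
    body = json.dumps({"model": model, "prompt": prompt,
                       "stream": False}).encode()
    return request.Request(
        f"{host}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# From any machine on the network:
# with request.urlopen(build_chat_request("Explain GGUF in one line")) as resp:
#     print(json.loads(resp.read())["response"])
```

That way the GPU box can sit in a closet and your laptop just talks to it over HTTP.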

[–] Dyskolos@lemmy.zip 10 points 1 week ago (12 children)

Why would anyone subscribe? LLMs are rarely actually helpful, and I really tried, as I've been a damn tech nerd for decades. But most of the time it just takes longer to get worse results than doing it yourself.

I would not pay 1 buck annually for this. And surely not 30 a month

[–] Gorilladrums@lemmy.world 1 points 6 days ago (1 children)

They're extremely helpful, just not at a professional level. They can help a student proofread an essay or a content creator come up with a script, but they can't help you code an app from scratch or give you a medical diagnosis.

[–] Dyskolos@lemmy.zip 1 points 6 days ago (1 children)

Didn't say they're 100% useless. They're just 90% useless to me and 10% super helpful. It surely depends on what you actually want from them. But I couldn't think of one area where I might seriously consider dishing out 30 bucks a month for an LLM. Except if I did tons of translations every day, or your proofreading example. But for that, the free tiers would already be enough.

[–] Gorilladrums@lemmy.world 1 points 6 days ago (1 children)

I don't necessarily disagree with you here; I also think that no generative LLM is worth paying for, let alone a subscription with such a ridiculous price. However, I can still at least understand the appeal for a certain niche subset of people who constantly do the few things that a generative LLM like ChatGPT excels at.

[–] Cruxifux@feddit.nl 9 points 1 week ago (1 children)

You can subscribe to chatGPT?

[–] Dojan@pawb.social 22 points 1 week ago* (last edited 1 week ago) (3 children)

Yes. I think it’s like $20 a month.

--

Edit: LMAO, so I was fuck-off wrong. It's $10, $30, and $280 per month, at least in my currency (Swedish kronor).

Don't use the stochastic parrot, and definitely don't fucking shell out 280 a month for it. Holy fuck.

[–] sudoer777@lemmy.ml 1 points 6 days ago (1 children)

For coding $280/mo is peanuts compared to how much the Claude API costs

[–] Jakeroxs@sh.itjust.works 1 points 6 days ago

It's $20 in the US.

[–] sircac@lemmy.world 7 points 1 week ago

Any reference on Trump's donors to back up the claim that Gepeto is the biggest one? I would like to see the top 10 or 100 list...

[–] atropa@piefed.social 4 points 1 week ago
[–] emmy67@lemmy.world 4 points 1 week ago

Quit? Only a fool would waste their time on it.

[–] Mika@piefed.ca 3 points 1 week ago

Corporate would still use it 😒
