this post was submitted on 11 Feb 2026
525 points (99.4% liked)

Fuck AI

5726 readers
817 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
(page 2) 41 comments
[–] NochMehrG@feddit.org 8 points 2 days ago (3 children)

While I usually advise against it, the people I know who are paying customers use it for the one thing it's reasonably good at: wrangling text. They summarize and draft stuff that isn't too important, then just fix it up afterwards instead of writing it all themselves.

[–] CatGPT@lemmy.dbzer0.com 7 points 2 days ago* (last edited 2 days ago)

have they tried CatGPT?

Meow

[–] zr0@lemmy.dbzer0.com 3 points 2 days ago

I can’t quit. If I do, they are going to sell my data. And that would be … bad

[–] brucethemoose@lemmy.world 3 points 2 days ago* (last edited 2 days ago) (2 children)

I was into LLMs before they blew up, messing with GPT-J finetunes named after Star Trek characters in ~2022.

...And I've never had an OpenAI subscription.

It's always sucked. It's always been sycophantic and censored. It's good at certain things, yeah, but other API providers made way more financial sense; ChatGPT subs are basically for the masses who don't really know about LLMs.

[–] how_we_burned@lemmy.zip 2 points 1 day ago

What pisses me off is it won't tell me how to convert codeine to heroin or how to enrich uranium, and how to cook up the HE required to compress the uranium into going critical.

[–] SuspciousCarrot78@lemmy.world 1 points 1 day ago* (last edited 1 day ago) (1 children)

Let's be fair - not all of the masses are so ignorant.

If you compare API vs subscription, you probably get more bang for your buck out of paying $20 USD/month than paying per million tokens via API calls. At least for OAI models. It's legitimately a good deal for heavy users.

For simpler stuff and/or if you have decent hardware? For sure - go local. Qwen3-4B 2507 Instruct matches or surpasses ChatGPT 4.1 nano and mini on almost all benchmarks...and you can run it on your phone. I know because it (or the abliterated version) is my go-to at home. It's stupidly strong for a 4B.

But if you need SOTA (or near to) and are rocking typical consumer grade hardware, then $20/month for basically unlimited tokens is the reason for subscription.
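The subscription-vs-API trade-off above is just arithmetic. A minimal sketch, where the per-token price is an illustrative assumption (not a current OAI list price):

```python
# Break-even sketch: flat monthly subscription vs. pay-per-token API calls.
# API_PRICE_PER_M is an assumed blended input/output rate for illustration only.
SUB_PRICE = 20.00        # USD per month, flat
API_PRICE_PER_M = 2.50   # USD per million tokens (assumption)

def break_even_tokens(sub_usd: float, usd_per_million: float) -> float:
    """Monthly token volume at which the flat subscription starts winning."""
    return sub_usd / usd_per_million * 1_000_000

tokens = break_even_tokens(SUB_PRICE, API_PRICE_PER_M)
print(f"Break-even at {tokens:,.0f} tokens/month")
```

At these assumed rates the subscription wins past 8 million tokens a month; a heavy user chewing through tens of millions of tokens clears that easily.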

[–] brucethemoose@lemmy.world 1 points 1 day ago (1 children)

I just meant OpenAI ChatGPT specifically. There are tons of great API providers (and TBH this is what I mostly use even with a decent PC).

[–] SuspciousCarrot78@lemmy.world 2 points 1 day ago* (last edited 1 day ago) (2 children)

Ah, but a subscription to OpenAI ChatGPT ($20 USD/month) gives you access to ChatGPT 5.3 Codex bundled in, with some really generous usage allowances (well, compared to Claude).

I haven't looked recently, but API calls to Codex 5.2 via OR were silly expensive per million tokens; I can't imagine 5.3 is any cheaper.

To be fair to your point: I doubt many people sign up specifically for this (let's say 20%, if we're making up numbers). It's still a good deal though. I can chew through 30 million tokens in pretty much a day when I'm going hammer and tongs at stuff.

Frankly, I don't understand how OAI remain solvent. They're eating a lot of shit in their "undercut the competition to take over the market" phase. But hey, if they're giving it away, sure, I'll take it.

[–] PoliteDudeInTheMood@lemmy.ca 1 points 1 day ago (1 children)

Opus is heavily throttled outside enterprise tiers. I was regularly blowing through weekly usage limits by Tuesday using Opus. 5.3 on the higher thinking profiles matches or exceeds Opus capabilities, and I have yet to hit a single limit.

If I need to process via API, I will run tests against Anthropic Haiku or Sonnet before trying GPT-5-mini. If I need to use 5.3 and what I'm doing isn't time-critical, I'll use batch processing. Smaller token batches complete very quickly, often in under 2 hours, and at a 50% discount that provides serious cost savings.
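The batch workflow described above boils down to uploading a JSONL file of requests and collecting results later. A minimal sketch of preparing such a file in the OpenAI Batch API's request format; the model id and prompts are placeholders:

```python
import json

# Each line of the batch input file is one self-contained request.
requests = [
    {
        "custom_id": f"task-{i}",            # your own id, echoed back in results
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "gpt-5-mini",           # placeholder model id
            "messages": [{"role": "user", "content": prompt}],
        },
    }
    for i, prompt in enumerate(["Summarize doc A", "Summarize doc B"])
]

with open("batch_input.jsonl", "w") as f:
    for req in requests:
        f.write(json.dumps(req) + "\n")

# Submission would then look roughly like this (untested sketch, OpenAI SDK):
#   file = client.files.create(file=open("batch_input.jsonl", "rb"), purpose="batch")
#   client.batches.create(input_file_id=file.id,
#                         endpoint="/v1/chat/completions",
#                         completion_window="24h")
```

The completed batch comes back as another JSONL file keyed by `custom_id`, which is why the file-based format suits non-time-critical jobs at the discounted rate.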

[–] SuspciousCarrot78@lemmy.world 1 points 1 day ago* (last edited 1 day ago) (1 children)

Yeah me too. Opus 4.5 is awesome but my god...om nom nom go my daily / weekly quotas. Probably I should not yeet the entire repo at it lol.

4.6 is supposedly 2x heavier on quota for not much better output.

Viewed against that, Codex 5.3 @ medium is actual daylight robbery of OAI.

I was just looking at benchmarks, and even smaller 8-10B models are now around 65-70% of Sonnet level (Qwen3-8B, Nemotron 9B, Critique) and 110-140% of Haiku.

If I had the VRAM, I'd switch to local Qwen3 Next (which scores almost 90% of Opus 4.5 on SWE-Bench) and just git gud. Probably I'll just look at smaller models, API calls, and the git gud part.

RTX 3060 (probably what you need for decent Qwen 3 next) is $1500 here :(

For that much $$$ I can probably get 5 years of surgical API calls via OR + actual skills.

PS: how are you using batch processing? How did you set it up?
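For the VRAM question above, the usual back-of-envelope estimate is parameter count times bits per weight, plus some headroom for KV cache and activations. A rough sketch, where the overhead factor and the ~80B parameter count are assumptions for illustration:

```python
# Back-of-envelope VRAM estimate for a quantized model running locally.
# The 1.2x overhead factor (KV cache, activations) is a rough assumption.
def vram_gb(params_billion: float, bits_per_weight: float,
            overhead: float = 1.2) -> float:
    """Weights-only footprint times a fudge factor for runtime overhead."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# An ~80B-parameter model at 4-bit quantization:
print(f"{vram_gb(80, 4):.0f} GB")   # ~48 GB, far beyond a 12 GB RTX 3060
# A 4B model at 4-bit:
print(f"{vram_gb(4, 4):.1f} GB")    # ~2.4 GB, phone/laptop territory
```

Which is why the small-models-plus-API route pencils out: the big local model needs multi-GPU money, while the 4B class fits almost anywhere.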

[–] baggachipz@sh.itjust.works 1 points 1 day ago (1 children)

Frankly, I don't understand how OAI remain solvent. They're eating a lot of shit in their "undercut the competition to take over the market" phase. But hey, if they're giving it away, sure, I'll take it.

The only reason they’re still around is the massive amounts of cash they’re given every couple of months, which goes right into the furnace. It’s just a matter of time until they implode in spectacular fashion. We’re at the point right now where we can take advantage of VC-funded free shit, like how Uber rides were way cheap at the beginning. The difference is, there’s still no path to profitability for OAI, and there never will be.

[–] tackleberry@thelemmy.club -1 points 1 day ago (1 children)

this is why I use Deepseek


They'll just find more ways to force you to use it.
