[–] chunkystyles@sopuli.xyz 2 points 2 days ago (1 children)

Ok, so you're completely delusional.

The current business model is unsustainable. For LLMs to be profitable, they will have to become many times more expensive.

[–] TurdBurgler@sh.itjust.works -1 points 2 days ago* (last edited 2 days ago) (1 children)

What are you even trying to say? You have no idea what these products are, but you think they are going to fail?

Our company does market research and runs pilots with customers; we aren't just devs operating in a bubble pushing AI.

We listen and respond to customer needs and invest in areas that drive revenue, using this technology sparingly.

[–] chunkystyles@sopuli.xyz 1 points 2 days ago (1 children)

I don't know what your products are. I'm speaking specifically about LLMs and LLMs only.

Seriously, research the cost of LLM services and how companies like Anthropic and OpenAI are burning VC cash at an insane clip.

[–] TurdBurgler@sh.itjust.works -2 points 2 days ago* (last edited 2 days ago)

That's a straw man.

You don't know how often we make LLM calls in our workflow automation, what models we're using, what our margins are, or what counts as a high cost to my organization.

That aside, business processes exist to solve problems like this, and the business does a cost-benefit analysis.

We monitor costs via LiteLLM and Langfuse and set budgets with our providers.
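Roughly what that looks like with the LiteLLM Python SDK, as a minimal sketch rather than our actual setup; the model name, budget amount, and prompt are placeholders:

```python
import litellm

# Send a trace of every call to Langfuse
# (assumes LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY are set in the environment)
litellm.success_callback = ["langfuse"]

# Hard spend cap for this process, in USD;
# litellm raises BudgetExceededError once the cap is hit
litellm.max_budget = 5.00

response = litellm.completion(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "Summarize this support ticket: ..."}],
)
print(response.choices[0].message.content)
```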

Similar architecture to the Open Source LLMOps Stack https://oss-llmops-stack.com/

Also, your last note is hilarious to me. "I don't want all the free stuff because the company might charge me more for it in the future."

Our design is decoupled, we run comparisons across models, and the costs are currently laughable anyway. The most expensive part is data loading, but good data lifecycle practices keep that contained.
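The decoupling just means every model sits behind the same completion call, so a comparison is a loop plus litellm's per-call cost estimate. A rough sketch; the model names and prompt are placeholders:

```python
import litellm

# Candidate models behind the same call signature (placeholders)
MODELS = ["gpt-4o-mini", "anthropic/claude-3-5-haiku-20241022"]

def compare(prompt: str) -> dict:
    """Run the same prompt against each candidate and record answer + estimated cost."""
    results = {}
    for model in MODELS:
        resp = litellm.completion(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        results[model] = {
            "answer": resp.choices[0].message.content,
            # Cost estimate from litellm's built-in pricing table
            "cost_usd": litellm.completion_cost(completion_response=resp),
        }
    return results
```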

Inference is cheap and LiteLLM supports caching.
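Caching in LiteLLM is roughly this; a sketch using the default in-process cache, which can be swapped for Redis when multiple workers need to share it:

```python
import litellm

# Enable response caching (in-memory by default)
litellm.cache = litellm.Cache()

# Two identical calls: the second is served from the cache
# instead of hitting the provider again
for _ in range(2):
    resp = litellm.completion(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": "Same prompt both times"}],
        caching=True,
    )
```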

Also, for many tasks you can run local models.
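Running locally is just a different model string, e.g. through Ollama. Again a sketch, assuming an Ollama server is running on the default port with the model already pulled:

```python
import litellm

# Same completion() call, routed to a locally hosted model via Ollama
resp = litellm.completion(
    model="ollama/llama3.1",  # placeholder local model
    messages=[{"role": "user", "content": "Extract the order ID from this email: ..."}],
    api_base="http://localhost:11434",
)
print(resp.choices[0].message.content)
```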