TurdBurgler

joined 4 days ago
[–] TurdBurgler@sh.itjust.works -2 points 1 day ago* (last edited 1 day ago)

That's a straw man.

You don't know how often we make LLM calls in our workflow automation, what models we're using, what our margins are, or what counts as a high cost for my organization.

That aside, business processes solve for problems like this, and the business does a cost benefit analysis.

We monitor costs via LiteLLM and Langfuse, and we set budgets with our providers.
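This isn't the actual LiteLLM/Langfuse setup, just a minimal sketch of the budget idea: track spend per provider and refuse calls past a cap. All the names here (`BudgetGuard`, `charge`) are hypothetical.

```python
# Hypothetical sketch of per-provider budget enforcement -- not the
# real LiteLLM API, just the idea behind provider spend caps.

class BudgetExceeded(Exception):
    pass

class BudgetGuard:
    def __init__(self, caps_usd):
        self.caps = dict(caps_usd)              # provider -> monthly cap in USD
        self.spent = {p: 0.0 for p in caps_usd}

    def charge(self, provider, cost_usd):
        """Record a call's cost; raise if it would blow the budget."""
        if self.spent[provider] + cost_usd > self.caps[provider]:
            raise BudgetExceeded(f"{provider} over its ${self.caps[provider]:.2f} cap")
        self.spent[provider] += cost_usd

guard = BudgetGuard({"openai": 50.0, "anthropic": 50.0})
guard.charge("openai", 0.02)   # fine, well under the cap
print(guard.spent["openai"])
```

In the real stack the gateway tracks spend itself; the point is that the check lives in one place, outside the application code.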

Similar architecture to the Open Source LLMOps Stack https://oss-llmops-stack.com/

Also, your last note is hilarious to me. "I don't want all the free stuff because the company might charge me more for it in the future."

Our design is decoupled, we run comparisons across models, and the costs are currently laughable anyway. The most expensive process is data loading, but good data lifecycles help contain those costs.

Inference is cheap and LiteLLM supports caching.
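LiteLLM ships its own response cache; as a rough illustration of the idea (not the library's API), caching here is just memoizing on the (model, prompt) pair so repeated calls never hit the backend twice. `fake_inference` is a stand-in for a real completion call.

```python
import hashlib
import json

# Rough illustration of response caching keyed on (model, prompt).
_cache = {}

def cached_completion(model, prompt, inference_fn):
    key = hashlib.sha256(json.dumps([model, prompt]).encode()).hexdigest()
    if key not in _cache:
        _cache[key] = inference_fn(model, prompt)   # only pay for inference once
    return _cache[key]

calls = []
def fake_inference(model, prompt):
    calls.append(prompt)
    return f"{model}: answer to {prompt!r}"

a = cached_completion("gpt-4o-mini", "2+2?", fake_inference)
b = cached_completion("gpt-4o-mini", "2+2?", fake_inference)  # cache hit
print(len(calls))  # the backend was only called once
```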

Also for many tasks you can run local models.

[–] TurdBurgler@sh.itjust.works 1 points 1 day ago* (last edited 1 day ago)

It's professional development of an emerging technology. You'd rather bury your head in the sand and say it's not useful?

The only reason not to take it seriously is to reinforce a worldview, instead of looking at how experts in the field are leveraging it or discussing the pitfalls you've encountered.

The Marketing AI hype cycle did the technology an injustice, but that doesn't mean the technology isn't useful for accelerating deterministic processes.

It depends on the methodology. If you're trying to do a direct port, you're probably approaching it wrong.

What matters most to the business is data: your business objects and business logic are what make the business money.

If you focus on those parts and port a portion at a time, you can substantially lower your tech debt and improve developer experience by generating greenfield code, which you can verify, that follows your organization's modern best practices.

One of the main reasons many users complain about the quality of code edited by agents comes down to the current naive tooling: most use sloppy find/replace techniques with regex and basic text tools. As AI tooling improves, we're seeing agents given more IDE-like tools with intimate knowledge of your codebase, using things like code indexing and ASTs. Look into Serena, for example.
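A toy contrast with regex find/replace, using Python's stdlib `ast` module: walking the syntax tree finds where a function is actually *defined*, so a same-named string or comment can't be touched by accident, which is exactly where naive regex edits go wrong.

```python
import ast

# Use the AST to locate real function definitions. A regex for
# "load_data" would also match the docstring and the comment below.
source = '''
def load_data(path):
    """load_data reads a file"""  # mentions load_data, but is not a def
    return open(path).read()
'''

tree = ast.parse(source)
defs = [(node.name, node.lineno)
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef)]
print(defs)  # exactly one real definition, at its true line number
```

Tools like Serena build on language servers for the same reason: edits target symbols, not text.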

[–] TurdBurgler@sh.itjust.works -1 points 1 day ago* (last edited 1 day ago)

Accelerated delivery. We use it for intelligent, verifiable code generation. It's the same work the senior dev was going to complete anyway, but now they cut out a lot of the mundane, time-intensive parts.

We still have design discussions that drive the backlog items the developers work off with their AI; we don't just assign backlog items to bots.

We have not let loose the SaaS agents that blindly pull from the backlog and open PRs, but we are exploring it carefully with older projects that only require maintenance.

And yes, we also use chore bots that are deterministic for maintenance, but these handle the smaller changes the business needs.

There are in fact changes these agents can make well.

[–] TurdBurgler@sh.itjust.works -1 points 2 days ago* (last edited 2 days ago)

Early adopters will be rewarded by having better methodology by the time the tooling catches up.

You're too busy trying to dunk on me to understand that you already have some really helpful tools.

[–] TurdBurgler@sh.itjust.works 1 points 2 days ago

This is why I say some people are going to lose their jobs to engineers using AI correctly, lol.

[–] TurdBurgler@sh.itjust.works -1 points 2 days ago* (last edited 2 days ago) (2 children)

What are you even trying to say? You have no idea what these products are, but you think they are going to fail?

Our company does market research and pilot tests with customers; we aren't just devs operating in a bubble pushing AI.

We are listening and responding to customer needs and investing in areas that drive revenue using this technology sparingly.

[–] TurdBurgler@sh.itjust.works -1 points 2 days ago

These tools are mostly deterministic applications following the same methodology we've used for years in the industry. The development cycle has been accelerated. We are decoupled from specific LLM providers by using LiteLLM, prompt management, and abstractions in our application.

Losing a hosted LLM provider means we point the LiteLLM proxy at something else, without changing the contracts with our applications.
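A hypothetical sketch of what that decoupling looks like, not LiteLLM's actual config format: the application talks to a logical model name, a router maps it to whichever backend is configured, and swapping providers is a config change rather than an application change.

```python
# Hypothetical provider-routing sketch. The application only knows the
# logical name "summarizer"; the route table decides the real backend.
ROUTES = {"summarizer": "openai/gpt-4o-mini"}   # logical -> provider/model

def complete(logical_name, prompt, backends):
    provider, _, model = ROUTES[logical_name].partition("/")
    return backends[provider](model, prompt)

# Stand-in backends; in reality these would be API clients.
backends = {
    "openai":     lambda model, prompt: f"openai[{model}]: {prompt}",
    "selfhosted": lambda model, prompt: f"vllm[{model}]: {prompt}",
}

out1 = complete("summarizer", "hi", backends)
ROUTES["summarizer"] = "selfhosted/llama-3.1-8b"   # failover: config only
out2 = complete("summarizer", "hi", backends)
print(out1)
print(out2)
```

The application's contract (`complete(logical_name, prompt)`) never changes when a provider disappears.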

[–] TurdBurgler@sh.itjust.works 0 points 2 days ago

Well, I typed it with my fingers.

[–] TurdBurgler@sh.itjust.works -1 points 2 days ago

Incorrect, but okay.

[–] TurdBurgler@sh.itjust.works -1 points 2 days ago* (last edited 2 days ago) (2 children)

We use a layered architecture following best practices and have guardrails, observability and evaluations of the AI processes. We have pilot programs and internal SMEs doing thorough testing before launch. It's modeled after the internal programs we've had success with.

We are doing this very responsibly and delivering a product our customers are asking for, with the tools to calibrate minor things based on analytics.

We take data governance and security compliance seriously.
