this post was submitted on 15 Jan 2026
439 points (99.1% liked)
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
OpenAI has been the weakest financial link for a while. Once it falls though... the whole thing implodes.
It's difficult to know what it'll look like on the "other side" of the bubble popping. It'll be very bad, though. Maybe afterward we'll be able to heal. Better be ready to fight... or hunker down.
I don't see the bubble popping at all.
As a software engineer at a big tech org, there's no way we'll ever go back to the world before LLMs. It's just too good to ignore. Does it replace software engineers? No, not all of them, but some. What previously required 70 engineers might now require 60. Five years from now, you might get by on even fewer engineers.
What could cause the bubble to pop? We're rolling out AI code at scale, and we're not seeing incidents increase or key metrics decline. Instead, we're shipping more, faster.
So maybe it's too expensive? This could be the case, but even so, it's just a matter of time before the cost goes down or a company figures out a workflow to use tokens more conservatively.
Anecdotal, but I've had exactly the opposite experience as an engineer.
Interesting!
I have gone through my ups and downs. Lately I've been more and more convinced. I use Claude Code (Opus 4.5) hooked up to our internal Atlassian and Google Drive MCPs. I then of course have to do a lot of writing (gathering requirements, writing context, etc.), but instead of spending two days coding, I'll spend half a day on this and then kick off a CC agent to carry it out.
I then do a self review when it's done and a colleague reviews as well before merge.
And this isn't for architectural work... rather for features, fixing tech debt, etc.
This also has the benefit of jira tickets being 1000x better than in the pre-LLM era.
I'm primarily using Opus 4.5 as well (via Cursor). We've tried pointing it at JIRA/Confluence via MCP and just letting the agent do its thing, but we always get terrible results (even when starting with solid requirements and good documentation). Letting an agent run unsupervised just always makes a mess.
We never get code that conforms to the existing style and architecture patterns of our application, no matter how much we fuss with rules files or MCP context. We also frequently end up with solutions that compromise security, performance, or both. Code reviews take longer than they used to (even with CodeRabbit doing a first pass on every PR), and critical issues are still sneaking through the review process and out to prod.
My team has been diligent enough to avoid any major outages so far, but other teams in the organization have had major production outages that have all been traced back to AI generated code.
I've managed to carve out a workflow that at least produces production-ready code, but it's hardly efficient.
This is almost always slower than if I'd just written the code myself and hadn't spent all that extra time babysitting the LLM. It's also slower to debug if QA comes back with issues, because my understanding of the code is now worse than if I'd written it myself.
I've spoken about this in other comments, but I'm going to repeat it again here because I don't see anyone else talking about it: When you write code yourself, your understanding of that code is always better. Think of it like taking notes. Studies have shown over and over that humans retain information better when they take notes — not because they refer back to those notes later (although that obviously helps), but because by actively engaging with the material while they're absorbing it, they build more connections in the brain than they would by just passively listening. This is a fundamental feature in how we learn (active is better than passive), and with the rise of code generation, we're creating a major learning gap.
There was a time when I could create a new feature and then, six months later, still remember all of the intimate details of the requirements I followed, the approach I took, and the compromises I had to make. Now? I'm lucky to retain that same information for three weeks, and I'm seeing the same in my coworkers.