The most immediately understandable example I heard of this was from a senior developer, who pointed out that LLM-generated code will build a different code block every time it has to do the same thing. So if that function fails, you have to look at multiple incarnations of the same function, rather than saying "oh, let's fix that function in the library we built."
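To make that concrete, here's a toy Python sketch (all names invented, not from any real codebase): two ad-hoc variants of the same parsing job, the kind an LLM might emit in different files on different days, next to the single shared helper a team would normally maintain.

```python
# Illustrative only: two slightly different functions an LLM might
# generate for the same task, versus one maintained library helper.

# Variant A, generated in one file:
def parse_price_a(raw: str) -> float:
    return float(raw.replace("$", "").replace(",", "").strip())

# Variant B, generated later in another file; looks equivalent,
# but silently drops minus signs:
def parse_price_b(raw: str) -> float:
    cleaned = "".join(ch for ch in raw if ch.isdigit() or ch == ".")
    return float(cleaned) if cleaned else 0.0

# The maintained alternative: one function, one place to fix bugs.
def parse_price(raw: str) -> float:
    return float(raw.replace("$", "").replace(",", "").strip())
```

Both variants agree on `"$1,234.50"`, but on `"-$5.00"` variant A returns `-5.0` while variant B returns `5.0`. That's the debugging cost the comment describes: the "same" function fails differently in each incarnation.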
Yeah, code bloat with LLMs is fucking monstrous. If you use them, get used to immediately scouring your code for duplications.
Glad someone paid a bunch of worthless McKinsey consultants what I could’ve told you myself
Writing apps with AI seems pretty cooked. But I've had some great successes using GitHub copilot for some annoying scripting work.
AI works well for mindless tasks. Data formatting, rough drafts, etc.
Once a task requires context and abstract thinking, AI can't handle it.
Almost like it's a desperate bid from the C-suite to blow another stock/asset bubble to keep 'the economy' going. They all knew the housing bubble was going to pop when this started, and now it is.
Funniest thing in the world to me is high- and mid-level execs and managers who believe their own internal and external marketing.
The smarter people in the room realize their propaganda is in fact propaganda, and are rolling their eyes internally that their henchmen are so stupid as to be true believers.
Might be there someday, but right now it’s basically a substitute for me googling some shit.
If I let it go ham, and code everything, it mutates into insanity in a very short period of time.
I'm honestly doubting it will get there someday, at least with the current use of LLMs. There just isn't true comprehension in them, no space for consideration in any novel dimension. If it takes incredible resources for companies to achieve sometimes-kinda-not-dogshit, I think we might need a new paradigm.
A crazy number of devs weren't even using EXISTING code assistant tooling.
Enterprise-grade IDEs already had tons of tooling to generate classes and perform refactoring in a sane, algorithmic, deterministic way.
So many use cases people have tried to sell me on (boilerplate handling), and I'm like "you have that now and don't even use it!"
I think there is probably a way to use LLMs to extract intention and then call real, dependable tools to actually perform the actions. This cult of purity where the LLM must actually be generating the tokens itself... why?
I'm all for coding tools. I love them. They have to actually work, though. The paradigm is completely wrong right now. I don't need it to "appear" good, I need it to BE good.
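A minimal sketch of that split, with every name here hypothetical: the LLM's only job would be emitting a structured intent, and a deterministic handler does the actual edit. (The regex rename is a trivial stand-in; a real IDE would operate on the syntax tree.)

```python
import re

# Hypothetical sketch: the model classifies intent into structured
# data; deterministic tools perform the action.

def rename_symbol(code: str, old: str, new: str) -> str:
    # Deterministic rename via word-boundary regex (stand-in for a
    # real AST-based refactoring tool).
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

HANDLERS = {"rename": rename_symbol}

def execute_intent(intent: dict, code: str) -> str:
    # `intent` would come from the LLM as structured output,
    # e.g. {"action": "rename", "old": "foo", "new": "bar"}.
    handler = HANDLERS[intent["action"]]
    return handler(code, intent["old"], intent["new"])

renamed = execute_intent(
    {"action": "rename", "old": "foo", "new": "bar"},
    "def foo():\n    return foo",
)
print(renamed)  # the renamed source, produced deterministically
```

The point of the design: the fuzzy step (understanding the request) is isolated from the step that touches code, so the edit itself is repeatable and reviewable.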
Of course. Shareholders want results, and not just results for nVidia's bottom line.
These types of articles always fail to mention how well trained the developers were on techniques and tools. In my experience that makes a big difference.
My employer mandates we use AI and provides us with any model, IDE, service we ask for. But where it falls short is providing training or direction on ways to use it. Most developers seem to go for results prompting and get a terrible experience.
I, on the other hand, provide a lot of context through documents and various MCP tooling. I talk about the existing patterns in the codebase and provide sources to other repositories as examples; then we come up with an implementation plan and execute on it with a task log to stay on track. I spend very little time fixing bad code because I spent the setup time nailing down context.
So if a developer is just prompting "Do XYZ", it's no wonder they're spending more time untangling a random mess.
Another aspect is that everyone seems to always be working under the gun and they just don't have the time to figure out all the best practices and techniques on their own.
I think this should be considered when we hear things like this.
No shit sherlock!
I miss the days when machine learning was fun. Poking together useless RNN models with a small dataset to make a digital Trump that talked about banging his daughter, and endless nipples flowing into America. Exploring the latent space between concepts.
It turns every prototyping exercise into a debugging exercise. Even talented coders often suck ass at debugging.
It remains to be seen whether the advent of “agentic AIs,” designed to autonomously execute a series of tasks, will change the situation.
“Agentic AI is already reshaping the enterprise, and only those that move decisively — redesigning their architecture, teams, and ways of working — will unlock its full value,” the report reads.
"Devs are slower with and don't trust LLM based tools. Surely, letting these tools off the leash will somehow manifest their value instead of exacerbating their problems."
Absolute madness.
The biggest value I get from AI in this space is when I get handed a pile of spaghetti and ask for an initial overview.
I am jack's complete lack of surprise.