this post was submitted on 30 Sep 2025
1135 points (98.5% liked)

Technology


"No Duh," say senior developers everywhere.

The article explains that vibe code is often close to functional, but not quite there, so developers have to go in and find where the problems are, resulting in a net slowdown of development rather than productivity gains.

[–] dylanmorgan@slrpnk.net 40 points 1 day ago (1 children)

The most immediately understandable example I heard of this was from a senior developer who pointed out that LLM-generated code will build a different code block every time it has to do the same thing. So if that logic fails, you have to look at multiple incarnations of the same function, rather than saying “oh, let’s fix that function in the library we built.”
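
A minimal sketch of the kind of thing he means (the names here are invented, not from the comment or the article): the same validation logic re-emitted inline in two places, each slightly different, versus one shared helper that can be fixed in a single spot.

```python
# Hypothetical illustration: an LLM pastes the "is this email valid?" check
# inline everywhere it's needed, and each incarnation drifts a little.

def register_user(email: str) -> None:
    if "@" not in email or email.startswith("@"):   # incarnation #1
        raise ValueError("invalid email")
    # ... create the account ...

def invite_user(email: str) -> None:
    if email.count("@") != 1:                        # incarnation #2, subtly different
        raise ValueError("invalid email")
    # ... send the invite ...

# What the senior dev is describing instead: one function in "the library we
# built", so a bug in the check gets fixed exactly once.
def is_valid_email(email: str) -> bool:
    return email.count("@") == 1 and not email.startswith("@")
```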

[–] kescusay@lemmy.world 16 points 1 day ago (2 children)

Yeah, code bloat with LLMs is fucking monstrous. If you use them, get used to immediately scouring your code for duplications.

[–] favoredponcho@lemmy.zip 25 points 1 day ago (2 children)

Glad someone paid a bunch of worthless McKinsey consultants for what I could’ve told you myself.

[–] HugeNerd@lemmy.ca 5 points 1 day ago
[–] OmegaMan@lemmings.world 2 points 1 day ago (2 children)

Writing apps with AI seems pretty cooked. But I've had some great successes using GitHub Copilot for some annoying scripting work.

[–] Canconda@lemmy.ca 2 points 1 day ago (2 children)

AI works well for mindless tasks. Data formatting, rough drafts, etc.

Once a task requires context and abstract thinking, AI can't handle it.
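
For instance, a hypothetical example of the kind of mindless chore meant here (not from the comment), the sort of one-off script these tools usually get right: reshaping a CSV export into JSON.

```python
# Hypothetical one-off formatting script: turn a CSV export into JSON.
# ("export.csv" is a made-up file name for the sketch.)
import csv
import json

with open("export.csv", newline="") as f:
    rows = list(csv.DictReader(f))

with open("export.json", "w") as f:
    json.dump(rows, f, indent=2)
```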

[–] sp3ctr4l@lemmy.dbzer0.com 33 points 1 day ago* (last edited 1 day ago)

Almost like it’s a desperate bid from the C-suite to blow another stock/asset bubble to keep 'the economy' going. They all knew the housing bubble was going to pop when this all started, and now it is.

Funniest thing in the world to me is high and mid level execs and managers who believe their own internal and external marketing.

The smarter people in the room realize their propaganda is in fact propaganda, and are rolling their eyes internally that their henchmen are so stupid as to be true believers.

[–] simplejack@lemmy.world 34 points 1 day ago (1 children)

Might be there someday, but right now it’s basically a substitute for me googling some shit.

If I let it go ham, and code everything, it mutates into insanity in a very short period of time.

[–] degen@midwest.social 29 points 1 day ago (3 children)

I'm honestly doubting it will get there someday, at least with the current use of LLMs. There just isn't true comprehension in them, no space for consideration in any novel dimension. If it takes incredible resources for companies to achieve sometimes-kinda-not-dogshit, I think we might need a new paradigm.

[–] Windex007@lemmy.world 15 points 1 day ago (2 children)

A crazy number of devs weren't even using EXISTING code assistant tooling.

Enterprise-grade IDEs already had tons of tooling to generate classes and perform refactoring in a sane and algorithmic way. In a way that was deterministic.

So many of the use cases people have tried to sell me on (boilerplate handling), and I'm like "you have that now and don't even use it!".

I think there is probably a way to use LLMs to try to extract intention and then call real, dependable tools to actually perform the actions. This cult of purity where the LLM must actually be generating the tokens itself... why?

I'm all for coding tools. I love them. They have to actually work, though. The paradigm is completely wrong right now. I don't need it to "appear" good, I need it to BE good.
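
A rough sketch of the "extract intention, then call a real tool" split described above (every name here is invented for illustration; this isn't any existing IDE's or model vendor's API): the model's only job is to emit a structured intent, and a deterministic routine performs the actual edit.

```python
import json
from dataclasses import dataclass

@dataclass
class RenameIntent:
    old_name: str
    new_name: str

def parse_intent(llm_output: str) -> RenameIntent:
    """The LLM's only job: describe what the user wants as structured JSON."""
    data = json.loads(llm_output)
    return RenameIntent(old_name=data["old_name"], new_name=data["new_name"])

def rename_symbol(source: str, intent: RenameIntent) -> str:
    """Deterministic tool: same input, same output, every time.
    (A real IDE refactoring would walk the syntax tree, not do text replacement.)"""
    return source.replace(intent.old_name, intent.new_name)

# Pretend LLM reply; the model proposes, the deterministic tool performs.
llm_output = '{"old_name": "calc_totl", "new_name": "calc_total"}'
print(rename_symbol("def calc_totl(x): return x", parse_intent(llm_output)))
```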

[–] Blackmist@feddit.uk 1 points 23 hours ago

Of course. Shareholders want results, and not just results for nVidia's bottom line.

[–] ready_for_qa@programming.dev -2 points 17 hours ago (3 children)

These types of articles always fail to mention how well-trained the developers were in the techniques and tools. In my experience that makes a big difference.

My employer mandates we use AI and provides us with any model, IDE, service we ask for. But where it falls short is providing training or direction on ways to use it. Most developers seem to go for results prompting and get a terrible experience.

I, on the other hand, provide a lot of context through documents and various MCP tooling. I talk about the existing patterns in the codebase and point to other repositories as examples; then we come up with an implementation plan and execute on it with a task log to stay on track. I spend very little time fixing bad code because I spent the setup time nailing down context.

So if a developer is just prompting "Do XYZ", it's no wonder they're spending more time untangling a random mess.

Another aspect is that everyone seems to always be working under the gun and they just don't have the time to figure out all the best practices and techniques on their own.

I think this should be considered when we hear things like this.
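
As a rough illustration of the difference the commenter is describing (the endpoint, file names, and conventions below are invented, not theirs): a bare "results" prompt versus a request that front-loads existing patterns, a reference implementation, and a plan.

```python
# "Results prompting": one vague instruction, no context to anchor the model.
results_prompt = "Add pagination to the orders endpoint."

# Context-first prompting: conventions, a reference implementation, and a
# step-by-step plan travel with the request. (All details are invented.)
context_prompt = "\n\n".join([
    "Convention: list endpoints use cursor-based pagination (see docs/pagination.md).",
    "Reference: users_endpoint.py already does this; follow its shape.",
    "Plan:\n"
    "1. Add cursor/limit params to the orders handler.\n"
    "2. Reuse the shared Paginator helper.\n"
    "3. Update the OpenAPI spec and the tests.",
    "Task: " + results_prompt,
])
```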

[–] Goldholz@lemmy.blahaj.zone 8 points 1 day ago

No shit, Sherlock!

[–] Dojan@pawb.social 17 points 1 day ago

I miss the days when machine learning was fun. Poking together useless RNN models with a small dataset to make a digital Trump that talked about banging his daughter, and endless nipples flowing into America. Exploring the latent space between concepts.

[–] aesthelete@lemmy.world 13 points 1 day ago

It turns every prototyping exercise into a debugging exercise. Even talented coders often suck ass at debugging.

[–] Feyd@programming.dev 22 points 1 day ago (3 children)

It remains to be seen whether the advent of “agentic AIs,” designed to autonomously execute a series of tasks, will change the situation.

“Agentic AI is already reshaping the enterprise, and only those that move decisively — redesigning their architecture, teams, and ways of working — will unlock its full value,” the report reads.

"Devs are slower with and don't trust LLM based tools. Surely, letting these tools off the leash will somehow manifest their value instead of exacerbating their problems."

Absolute madness.

[–] DarkDarkHouse@lemmy.sdf.org 11 points 1 day ago (1 children)

The biggest value I get from AI in this space is when I get handed a pile of spaghetti and ask for an initial overview.


I am Jack's complete lack of surprise.
