this post was submitted on 30 Sep 2025
1108 points (98.6% liked)


"No Duh," say senior developers everywhere.

The article explains that vibe code is often close to functional, but not quite, leaving developers to go in and find where the problems are. The result is a net slowdown of development rather than a productivity gain.

[–] chaosCruiser@futurology.today 5 points 45 minutes ago* (last edited 12 minutes ago) (1 children)

About that "net slowdown": I think it's real, but only in specific cases. If the user already knows how to write code well, an LLM might be only marginally useful, or even useless.

However, there are ways to make it useful; it just requires specific circumstances. For example, if you can't be bothered to write a simple loop, you can use an LLM to do it. Hand the boring routine to an LLM, and you can focus on naming the variables in a fitting way or adjusting the finer details to your liking.

Can't be bothered to look up the exact syntax for a function you use only twice a year? Let an LLM handle that, then tweak the details. Now you didn't spend 15 minutes reading Stack Overflow posts that don't answer the exact question you had in mind. Instead, you spent 5 minutes on the whole thing, including the tweaking and troubleshooting.
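For instance, here's a hedged Python sketch of the kind of routine snippets being described (the log file name, its format, and the timestamp are all made up for illustration):

```python
from collections import Counter
from datetime import datetime

# The boring loop: count how often each first field appears in a log.
counts: Counter[str] = Counter()
with open("access.log") as fh:
    for line in fh:
        fields = line.split()
        if fields:
            counts[fields[0]] += 1
print(counts.most_common(5))

# The twice-a-year syntax lookup: strptime format codes.
ts = datetime.strptime("30 Sep 2025 14:05", "%d %b %Y %H:%M")
print(ts.isoformat())
```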

If you have zero programming experience, you can use an LLM to write some code for you, but prepare to spend the whole day troubleshooting something that is essentially a black box to you. Alternatively, you could ask a human to write the same thing in 5-15 minutes, depending on the method they choose.

[–] BilboBargains@lemmy.world 4 points 29 minutes ago

This is a sane way to use an LLM. Also, pick your poison: some bots are better than others at a specific task. It's kinda fascinating to see how other people solve coding problems, and that is essentially on tap with a bot; it will churn out as many examples as you want. It's a really useful tool for learning the syntax and libraries of unfamiliar languages.

At one extreme of the LLM discourse there is insane hype, and at the other a great pessimism, but in the middle is a nice labour-saving educational tool.

[–] Lettuceeatlettuce@lemmy.ml 34 points 8 hours ago* (last edited 8 hours ago)

You mean relying blindly on a statistical prediction engine to attempt to produce sophisticated software without any understanding of the underlying principles or concepts doesn't magically replace years of actual study and real-world experience?

But trust me, bro, the singularity is imminent, LLMs are the future of human evolution, true AGI is nigh!

I can't wait for this idiotic "AI" bubble to burst.

[–] Tollana1234567@lemmy.today 9 points 8 hours ago

So is the profit it was foretold to generate, except it actually costs more money than it generates.

[–] altphoto@lemmy.today 8 points 11 hours ago (2 children)

It's great for stupid boobs like me, but only to get you going. It regurgitates old code; it cannot come up with new stuff. Lately there have been fewer Python errors, but again, the stuff you can do is limited. At least for the free stuff you can get without signing up.

[–] theterrasque@infosec.pub 3 points 4 hours ago* (last edited 4 hours ago) (1 children)

It regurgitates old code, it cannot come up with new stuff.

The trick is, most of what you write is basically old code in new wrapping. In most projects, I'd say the new and novel part is maybe 10% of the code. The rest is things like setting up db models, connecting them to the base logic, setting up views and API endpoints, decoding the message on the UI side, displaying it to the user, handling input back, threading things so the UI doesn't hang, error handling, input data verification, basic unit tests, setting up settings and supporting reading them from a file or env vars, making the UI look not horrible, adding translatable text, and so on and on and on. All of that has been written in some variation a million times before. All of it can be written (and verified) by a half-asleep competent coder.

The actual new interesting part is gonna be a small small percentage of the total code.
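As a hypothetical sketch of that kind of boilerplate, in Python (FastAPI and pydantic are my choice of stand-in here, not something the comment names; the model and routes are invented):

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# A typical data model: nothing novel, just necessary plumbing.
class Item(BaseModel):
    name: str
    price: float

_db: dict[int, Item] = {}  # in-memory stand-in for the real db layer

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item) -> Item:
    # Input verification and error handling: written a million times before.
    if item_id in _db:
        raise HTTPException(status_code=409, detail="item already exists")
    _db[item_id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in _db:
        raise HTTPException(status_code=404, detail="item not found")
    return _db[item_id]
```

None of this is the interesting 10%; it just has to exist, in some variation, in nearly every project.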

[–] altphoto@lemmy.today 2 points 1 hour ago

I totally agree with this. However, you can't get there without coding experience and knowledge of the problem, whether from an education in computer science or experience in the field. I'm a generalist, and I'm loving what I can do at home. But I still get the runaround using AI: I have to read and understand the code to nudge the AI in the right direction, or I end up going in circles.

[–] Smokeless7048@lemmy.world 7 points 9 hours ago

Yeah, I use it for Home Assistant. It's amazingly powerful... and so incredibly dumb.

It will take my if statements and shrink them to a third of the length while being twice as robust... while missing that one of the arguments is in entirely the wrong place.
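A hypothetical Python sketch of that failure mode (the function and entity names are invented; this is not the actual Home Assistant API):

```python
def set_light(entity_id: str, brightness: int) -> None:
    """Stand-in for a real service call."""
    print(f"{entity_id} -> brightness {brightness}")

# Original: verbose, but every argument is where it belongs.
def on_motion(lux: int, threshold: int = 50) -> None:
    if lux < threshold:
        set_light("light.hallway", 255)
    else:
        set_light("light.hallway", 0)

# The "improved" version: a third of the length, both branches in one
# call... and it quietly passes the arguments in the wrong order.
def on_motion_refactored(lux: int, threshold: int = 50) -> None:
    set_light(255 if lux < threshold else 0, "light.hallway")  # swapped!
```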

[–] melfie@lemy.lol 14 points 13 hours ago* (last edited 13 hours ago)

This article sums up a Stanford study of AI and developer productivity. TL;DR: the net productivity boost is a modest 15-20%, and can be anywhere from negative to 10% in complex, brownfield codebases. This tracks with my own experience as a dev.

https://www.linkedin.com/pulse/does-ai-actually-boost-developer-productivity-striking-%C3%A7elebi-tcp8f

[–] Aljernon@lemmy.today 15 points 14 hours ago

Senior Management in much of Corporate America is like a kind of modern Nobility in which looking and sounding the part is more important than strong competence in the field. It's why buzzwords catch like wildfire.

[–] ChaoticEntropy@feddit.uk 16 points 15 hours ago* (last edited 15 hours ago) (1 children)

Are you trying to tell me that the people wanting to sell me their universal panacea for all human endeavours were... lying...? Say it ain't so.

[–] SparroHawc@lemmy.zip 3 points 14 hours ago (1 children)

I mean, originally they thought they had come upon a magic bullet. Turns out it wasn't the case, and now they're going to suffer for it.

[–] Feyd@programming.dev 1 points 4 hours ago* (last edited 4 hours ago)

You're assuming honesty, and they've earned the opposite posture.

[–] sadness_nexus@lemmy.ml 5 points 13 hours ago (1 children)

I'm not a programmer in any sense. Recently, I made a project where I used Python and a Raspberry Pi and had to train some small models on the KITTI dataset. I used AI to write the broad structure of the code, but in the end it took me a lot of time going through the Python documentation, as well as the documentation of the specific tools/modules I used, to actually get the code working. Would an experienced programmer get the same work done in an afternoon? Probably. But the code the AI output still had a lot of flaws. Someone who knows more than me would probably write better prompts and better follow-up requirements, and probably get a better structure from the AI, but I doubt they'd get complete code. In the end, even to use AI efficiently, you have to know what you're doing, and you still have to polish the code into something that actually works.

[–] spicehoarder@lemmy.zip 3 points 10 hours ago* (last edited 10 hours ago)

From my experience, AI just seems to be a lesson in overfitting. You can't use it to do things nobody has done before. Furthermore, you only really get good responses from prompts related to JavaScript.

[–] MrSulu@lemmy.ml 5 points 13 hours ago

Perhaps it should read: "All AI is overhyped, overdone, and we should be over it."

[–] badgermurphy@lemmy.world 12 points 19 hours ago (5 children)

I work adjacent to software developers, and I have been hearing a lot of the same sentiments. What I don't understand, though, is the magnitude of this bubble then.

Typically, bubbles seem to form around some new market phenomenon or technology that threatens to upset the old paradigm and usher in a new boom. Those market phenomena then eventually take their place in the world based on their real value, which is nowhere near the level of the hype, but still substantial.

In this case, I am struggling to find examples of the real benefits of a lot of these AI assistant technologies. I know that there are a lot of successes in the AI realm, but not a single one I know of involves an LLM.

So, I guess my question is, "What specific LLM tools are generating profits or productivity at a substantial level well exceeding their operating costs?" If there really are none, or if the gains are only incremental, then my question becomes an incredulous, "Is this biggest in history tech bubble really composed entirely of unfounded hype?"

[–] brunchyvirus@fedia.io 1 points 3 hours ago

I think right now companies are competing until there are only 1 or 2 left that clearly own the majority of the market.

Afterwards they will devolve back into the same thing search engines are now: a cesspool of sponsored ads and links to useless SEO blogs.

They'll just become gatekeepers of information again, and the only ones that will be heard are the ones who pay a fee or game the system.

Maybe not though, I'm usually pretty cynical when it comes to what the incentives of businesses are.

[–] JcbAzPx@lemmy.world 9 points 14 hours ago

This strikes upon one of the greatest wishes of all corporations: a way to get work done without having to pay people for it.

[–] SparroHawc@lemmy.zip 21 points 18 hours ago (1 children)

From what I've seen and heard, there are a few factors to this.

One is that the tech industry right now is built on venture capital. In order to survive, they need to act like they're at the forefront of the Next Big Thing in order to keep bringing investment money in.

Another is that LLMs are uniquely suited to extending the honeymoon period.

The initial impression you get from an LLM chatbot is significant. This is a chatbot that actually talks like a person. A VC mogul sitting down to have a conversation with ChatGPT, when it was new, was a mind-blowing experience. This is a computer program that, at first blush, appears to be able to do most things humans can do, as long as those things primarily consist of reading things and typing things out - which a VC, and mid/upper management, does a lot of. This gives the impression that AI is capable of automating a lot of things that previously needed a live, thinking person - which means a lot of savings for companies who can shed expensive knowledge workers.

The problem is that the limits of LLMs are STILL poorly understood by most people. Despite constructing huge data centers and gobbling up vast amounts of electricity, LLMs still are bad at actually being reliable. This makes LLMs worse at practically any knowledge work than the lowest, greenest intern - because at least the intern can be taught to say they don't know something instead of feeding you BS.

It was also assumed that bigger, hungrier LLMs would provide better results. Although they do, the gains are getting harder and harder to reach. There needs to be an efficiency breakthrough (and a training breakthrough) before the wonderful world of AI can actually come to pass because as it stands, prompts are still getting more expensive to run for higher-quality results. It took a while to make that discovery, so the hype train was able to continue to build steam for the last couple years.

Now, tech companies are doing their level best to hide these shortcomings from their customers (and possibly even themselves). The longer they keep the wool over everyone's eyes, the more money continues to roll in. So, the bubble keeps building.

[–] badgermurphy@lemmy.world 6 points 15 hours ago

The upshot of this and a lot of the other replies, here and elsewhere, seems to be that one big difference between this bubble and past ones is that so much of the global economy is now tied to its fate that the entire financial world is colluding to delay the inevitable, given the expected severity of the consequences.

[–] leastaction@lemmy.ca 9 points 16 hours ago

AI is a financial scam. Basically companies that are already mature promise great future profits thanks to this new technological miracle, which makes their stock more valuable than it otherwise would be. Cory Doctorow has written eloquently about this.

[–] TipsyMcGee@lemmy.dbzer0.com 6 points 18 hours ago

When the AI bubble bursts, even janitors and nurses will lose their jobs. Financial institutions will go bust.

[–] arc99@lemmy.world 11 points 19 hours ago* (last edited 19 hours ago) (9 children)

I have never seen AI-generated code that is correct. Not once. I've certainly seen it be broadly correct and used it for the gist of something. But normally it fucks something up: imports, dependencies, logic, API calls, or a combination of all of them.

I sure as hell wouldn't trust it without reviewing it thoroughly. And anyone stupid enough to use it blindly through "vibe" programming deserves everything they get. Most likely that will be a massive bill and code that is horribly broken in some serious and subtle way.

[–] bountygiver@lemmy.ml 1 points 4 hours ago

For me it typically doesn't cause syntax errors; the main thing it fucks up is what you specifically told it to do, where the output straight up does not perform the way your specification requires. If it were just syntax errors, at least the compiler could catch them; this way, you won't even know unless you bother testing the output.
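An invented example of that class of bug: the code below is syntactically fine and runs without complaint, and only a test against the spec exposes it.

```python
# Spec: return the last n items of xs, preserving order.
def tail(xs: list, n: int) -> list:
    return xs[:n]  # no syntax error anywhere, just the wrong slice
                   # (the correct slice would be xs[-n:])

def test_tail() -> None:
    assert tail([1, 2, 3, 4], 2) == [3, 4]

if __name__ == "__main__":
    test_tail()  # AssertionError: the output ignores the spec
```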

[–] theterrasque@infosec.pub 4 points 14 hours ago* (last edited 14 hours ago) (2 children)

I've used Claude Code to fix some bugs and add some new features to some of my old, small programs and websites. Not things I can't do myself, but things I can't be arsed to sit down and actually do.

It's actually gone really well, producing clean and solid code: easily readable, correct, with error handling and even comments explaining things. It even took a GUI stream-processing program I had and wrote a server/webapp with the same functionality, and it was able to extend it with a few new features I'd been thinking of adding.

These are not complex things, but a few of them were 20+ files big, and it managed not only to navigate the code but to understand it well enough to add features with changes touching multiple files (model, logic, and view layers, for example, or refactoring a too-big class and updating all references to use the new classes).

So it's absolutely useful and capable of writing good code.

[–] chicagohuman@lemmy.zip 3 points 14 hours ago

This is the truth. It has tremendous value but it isn't a solution -- it's a tool. And if you don't know how to code or what good code looks like, then it is a tool you can't use!

[–] ikirin@feddit.org 5 points 17 hours ago* (last edited 17 hours ago) (1 children)

I've seen and used AI for snippets of code and it's pretty decent at that.

With my colleagues I always compare it to a battery-powered drill. It's very powerful and can make shit a lot easier. But you wouldn't try to build furniture from scratch with only a battery-powered drill.

You need the knowledge to use it - and also saws, screws, the proper bits for those screws and so on and so forth.

[–] setsubyou@lemmy.world 5 points 15 hours ago* (last edited 15 hours ago)

What bothers me the most is the amount of tech debt it adds by using outdated approaches.

For example, I recently used AI to create some Python scripts that use polars and altair to parse some data and draw charts. It kept insisting on bringing in pandas so it could convert the polars dataframes to pandas dataframes just to pass them to altair. When I told it that altair can use polars dataframes directly, that helped, but two or three prompts later it would try to solve problems by adding the conversion again.
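For illustration, roughly the pattern being described, with toy data (per the comment, recent altair versions accept polars dataframes directly):

```python
import altair as alt
import polars as pl

df = pl.DataFrame({"x": [1, 2, 3], "y": [4, 1, 3]})

# What the LLM kept suggesting: a pointless round-trip through pandas.
# chart = alt.Chart(df.to_pandas()).mark_line().encode(x="x", y="y")

# What works without the extra dependency: pass the polars frame as-is.
chart = alt.Chart(df).mark_line().encode(x="x", y="y")
chart.save("chart.html")
```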

This makes sense too, because the training material, on average, is probably older than the change that enabled altair to use polars dataframes directly. And a lot of code out there just uses pandas in the first place.

The result is that in all these cases, someone who doesn’t know this would probably be impressed that the scripts worked, and just not notice the extra tech debt from that unnecessary dependency on pandas.

It sounds like it’s not a big deal, but these things add up and eventually, our AI enhanced code bases will be full of additional dependencies, deprecated APIs, unnecessarily verbose or complicated code, etc.

I feel like this is one aspect that gets overlooked a bit when we talk about productivity gains. We don’t necessarily immediately realize how much of that extra LoC/time goes into outdated code and old fashioned verbosity. But it will eventually come back to bite us.
