this post was submitted on 14 May 2026
455 points (95.8% liked)

Technology

84648 readers
4406 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
(page 2) 50 comments
sorted by: hot top controversial new old
[–] dejected_warp_core@lemmy.world 39 points 1 day ago (2 children)

(X) Doubt

As a Sr. Engineer, I completely get that my situation may be wildly different from what's cited in the article.

Right now, I'm using AI "in the loop" rather than "as the loop". That's a big difference. And I'm getting my ass kicked routinely on review for dumb-ass things that I'm letting slide from AI generated output. And rightly so. Plus, models routinely lead me down sub-optimal blind alleys while dreaming up really stupid ways to fix problems. The level of (re)prompting I have to provide to suggest to get decent quality results converges on a post-grad that has encyclopedic knowledge of software engineering as it exists online, but with zero real-world experience. It's both impressive and dangerous as a replacement for software engineering.

In the mode I describe above, I'm not losing the ability to do anything. I can see how one could surrender some coding chops or familiarity with a whole language or stack, in favor of automation. But all you have to do is not do that.

I will say that as a rapid-prototyping technology, It's nothing short of miraculous. I've watched junior engineers knock together medium-weight applications, complete with browser UI/UX and decent workflow, in less than a week. This is great for showing value or putting something semi-functional in front of management and/or customers. But pivoting those prototypes into something maintainable is an utter nightmare. Depending on how beholden to AI and forever prompt-looping with "skills" and MCPs you want to be, I suppose it's possible to just keep mashing the AI button. But at some point, you're going to need to get inside there to fix security problems or bugs that elude this workflow. What then?

[–] tinfoilhat@lemmy.ml 17 points 1 day ago (1 children)

I joined a project that was forced to use some vibe coded solution that an intern cooked up -- marketed as "solution for data pipelining".

There are no tests, every semantic query calculates embeddings every time, and there is help together with so much bubble gum and "glue code" that nobody feels confident with any of the data were showing our customer.

It's great for rapid prototyping, and then straight to the trash.

[–] northface@lemmy.ml 11 points 1 day ago* (last edited 1 day ago)

Thing is, as we all know, prototypes rarely make it to the trash bin if managers and product owners have a stake in the project. Which becomes an even bigger problem now that minimal amounts of humans are involved in producing said prototypes.

I had a meeting with a customer who proudly proclaimed they do "full-on agentic coding" at their startup, and one of their developers mentioned their entire codebase has been rewritten three times in the past week before the meeting took place. I do not have high hopes for their project ever being refactored by humans involved in anything else than light UAT before customer demo time.

load more comments (1 replies)
[–] vogi@piefed.social 48 points 1 day ago (2 children)

Its a silver lining of AI that you can easily tell whos a big baby idiot and whos actually worth engaging with.

[–] very_well_lost@lemmy.world 46 points 1 day ago (6 children)

Preach.

The AI "revolution" is the thing that finally killed my imposter syndrome as a software engineer. Not because I can write better code than AI (that's a very low bar), but from listening to all these breathless idiots talk about how they're "10x-ing my productivity!" or how "AI has replaced search for me!" or how "In 6 months no one will have to manually write code anymore!"

[–] schema@lemmy.world 13 points 1 day ago* (last edited 1 day ago)

Similar for me. What i find ironic is that AI already ran into a brick wall. It's inherent statelessness by design means that AI is unlikely to be suited for anything more than isolated well defined tasks in the near future. Still usable as a tool, but without someone who is actually experienced, it will result in disaster.

and even in smaller tasks it can fucks up, especially if the person prompting it is incapable of writing the code themselves as they don't know how to properly design it and don't spot the issues. Like everything with AI, it looks impressive at first glance until you look at it for more than 10 seconds and spot the metaphorical 6th finger.

What we see currently with AI getting "better" at coding is more or less duct tape to make it work. Basically, they create the agents to bolt on the state, more layers between user and model. Iterative processes to make the answers better, etc, and to create "memory", which in essence is just an ever growing prompt managed by the agent. But in the end, this won't fix the inherent problem, so it will only do so much and is already hitting another ceiling. It introduces state decay. With the agent method its not really possible to "take away" memory, so if you gave it multiple versions of the same code (as you would if you work with AI), the AI never really forgets about old code. It can supress it through agent instructions (more duct tape), but the more there is the more it bleeds through, which can make the AI reintroduce old code or base assumptions on outdated things.

There is no fix without changing the inherent way how models work, which would introduce complexity beyond what is currently feasible in computing (and the current AI is already gobbling up all computing reaoureces as is)

[–] foodandart@lemmy.zip 5 points 1 day ago (3 children)

As someone that wanted to write but never had the time to learn, what's so bad about writing code manually?

Seems like if you can learn to do it well, you will be fairly well set with that skill.

[–] very_well_lost@lemmy.world 15 points 1 day ago* (last edited 1 day ago)

If you're someone who cares at all about the quality and consistency of your craft, there's absolutely nothing wrong with manually writing code.

If you're a misanthropic "techno-feudalist" who thinks of code as nothing more than an asset to sell, then pumping out as much code as quickly as possible without any human intervention is a very attractive proposition.

Tech, sadly, is absolutely infested with these people at all levels.

[–] FauxLiving@lemmy.world 7 points 1 day ago

You will still need to learn programming manually.

The process of struggling to understand and synthesize working code is a critical part of learning. Skipping it feels easier, but you're hurting your ability to understand coding.

Sure, you can make an LLM generate code and if you're inexperienced it can outperform you on the basic tasks that you're given as exercises. This is a trap that a lot of students fall into. It's very easy to let LLMs do the 'hard work' part of learning while you just read the textbook or watch a video. Unfortunately, the hard part is the part that builds your skillset.

It's just like how you can't just watch a video about physical fitness and then use a robot to lift the weights for you. Sure, you get to the end of your sets faster and you're not physically tired and sore but you won't actually benefit in the ways that matter.

load more comments (1 replies)
load more comments (4 replies)
[–] TotalCourage007@lemmy.world 7 points 1 day ago

Honestly yeah its like wearing a huge red AI flag. Can't imagine being stupid enough to fall in love with a not-secure CHATBOT.

[–] ImgurRefugee114@reddthat.com 166 points 1 day ago (2 children)

Lol! Losers. I've been programming for almost two decades and extensive use of AI hasn't compromised my skills AT ALL! These slop machines can't hope to compete with the quantity and magnitude of subtle bugs I write. My code was terrible long before I made bots have mental breakdowns trying to work with it.

[–] Goodeye8@piefed.social 30 points 1 day ago (2 children)

AI also gives you the benefits of a middle manager. If everything works as intended you take the credit but if something breaks that's not your fault, AI made the mistake. If they try to put the blame on you just say you have 6 agents working on 6 different domains all cross-reviewing their commits and you can't be expected to review every single line of code yourself. Time to play corporate like a damned fiddle!

load more comments (2 replies)
[–] bishoponarope@lemmy.world 19 points 1 day ago

Saved me a paragraph there.

[–] collapse_already@lemmy.ml 51 points 1 day ago (8 children)

We have been interviewing for entry level positions and the new grads know less than ever before. I don't really care what they know, I am looking for evidence that they can think, but I usually ease them into thinking scenarios by asking easy foundational questions like how many bits in a byte. You would think I was asking for them to explain the Shrodinger wave equations... One candidate was waivering between 13 and 17...

[–] andallthat@lemmy.world 29 points 1 day ago (3 children)
[–] collapse_already@lemmy.ml 8 points 1 day ago (2 children)

Two nibbles is an acceptable answer.

[–] rob_t_firefly@lemmy.world 6 points 1 day ago (1 children)

I have two nibbles. My cat had six.

load more comments (1 replies)
[–] HaraldvonBlauzahn@feddit.org 3 points 1 day ago

It all depends on whether the CPUs kibibyte flag is set!

load more comments (1 replies)
[–] MajorasTerribleFate@lemmy.zip 16 points 1 day ago (1 children)

Computers famously love prime numbers greater than 2 as a foundation for structure and logic.

load more comments (1 replies)
[–] Feathercrown@lemmy.world 7 points 1 day ago (1 children)

Knowing this is my competition makes me feel much better about myself

load more comments (1 replies)
[–] foodandart@lemmy.zip 5 points 1 day ago (8 children)

..easy foundational questions like how many bits in a byte..

GTFO.

I mean, yeah.. perhaps it's to be expected. https://www.theverge.com/22684730/students-file-folder-directory-structure-education-gen-z - if this is true, it's as the methods of using computers and various devices has been infantilized and made too easy.

Yeah.. let's obscure the inner working of computing and make the process as opaque to the user as possible. It'll be fine.. no negative consequences at all.

Colleges do not matriculate anymore (that's in the British sense of the word, where one has to show actual knowledge in the degree field one is seeking before enrolling, and TBH, they haven't done so for a very long time, actually..) so this is what we get.

Higher ed in the US is just about da moneys..

[–] mnemonicmonkeys@sh.itjust.works 3 points 20 hours ago (1 children)

I can't wrap my head around how the people in the article get anything done on the computer.

Sure, I could have File Explorer search for a file in theory, but it's ridiculously slow and often fails to find the files I actually want. It's way faster to just have things organized on a day-to-day basis

[–] foodandart@lemmy.zip 2 points 18 hours ago

Oddly enough I've always sorted current working files by date.

Then when backup time comes I'll look at the last dated file in the archive, then go to that date in my current work folder and everything newer goes into the backup. Once it's in the main backup folder, I then sort the files into year and project.

Still, on my system (a MacPro from the Olden Times when Steve Jobs was still kicking) I have 4 drives, so it's crucial to know what is where.

load more comments (7 replies)
load more comments (4 replies)
[–] jj4211@lemmy.world 36 points 1 day ago (2 children)

I just don't get it, even the purportedly best models screw things up so much that I can't just leave them to the job without reviewing and fixing the mess they made... And I'm also drowning in pull requests that turn out to be broken as it proudly has "co authored by Claude" in it... Like it manages to pass their test case but it's so messed up that it's either explicitly causing problems, or had a bunch of unrelated changes randomly.

I feel like I'm being gaslit as I keep reading that there are developers that feel they successfully offloaded the task of coding.

Closest I got was a chore that had a perfect criteria "address all warnings from the build". Then let it go and iterate. Then after 50 rounds each round saying "ok should be done now, everything is taken care of, just need to do a final check". It burned though most of my monthly quota doing this task before succeeding. Then I look at the proposed change... And it just added directives to the top of every file telling the tools to disable all the warnings... This was the best opus 4.6 could do...

Now sure, I can have it tear through a short boiler plate and it notice a pattern I'm doing and tab through it. But I haven't see this "vibe" approach working at all...

[–] kescusay@lemmy.world 28 points 1 day ago (1 children)

I feel like I'm being gaslit as I keep reading that there are developers that feel they successfully offloaded the task of coding.

That's because you are being gaslit.

The people making those claims are either a) not developers in the first place, with no awareness of just how shit the "products" they're pushing are, b) paid astroturfers trying to prop up AI, or c) former actual developers who've become addicted to the speed that's possible with AI who are downplaying how crappy their own code quality has become because they have no familiarity with their codebase anymore and have forgotten how to do so much as a for loop.

All these people claiming 10x or 100x gains, and everything they're making is garbage no one should or would touch with a ten-foot pole.

[–] boogiebored@lemmy.world 10 points 1 day ago (6 children)

there are also the low tier coders who have ai making better code than they could have produced.

load more comments (6 replies)
[–] flandish@lemmy.world 13 points 1 day ago

what it seems to be doing, in your case and others i have seen, is pushing the burden onto those who “care” and really fully grok (no pun intended) the concept of a real code review. it’s exhausting.

[–] circuitfarmer@lemmy.world 27 points 1 day ago (1 children)

When you start relying on something else, it's quite natural and expected to no longer be good at the thing now being done for you.

But in this context, it's a net negative. While you can certainly write more code while using the tool, you're almost always writing worse code. And you still get the atrophy, so the result overall: now you're not good at the thing, and neither is the tool you're using.

And remember, AI models need constant retraining as systems and approaches are updated, languages change, etc. Where is that training data going to come from? From the people now worse at coding than they were before.

[–] foodandart@lemmy.zip 8 points 1 day ago

The atrophy scares the hell out of me.

Years ago, I would often have long conversations with my dad about how manual skill sets in the trades (my training) and in engineering in the field (which was his bailiwick) were being lost to the pivot towards college degrees for every student, including the ones that preferred to work with their hands.

Three decades on, I witnessed the full turn when construction firms had to - and still have to - mass import workers from Central and South America (legally and illegally) just to get things built. NGL, there are some scary good builders that have been brought in, and those people work insanely hard.

Yes, it's slowly pivoting back as more boys and men opt for the trades and become journeymen and apprentices, but to get the skillsets needed to get to a master's level, you're looking at at least 20k hours. Wer're still a decade out - at best - before we get enough kids through the system and into steady work that they can step up and strike out on their own and make crazy bank. Skilled craftsmen and women can earn 100 bucks an hour - easily - in the right markets, and the rich folks will be glad to pay.

Goddamn, it's gonna be scary until that sorts itself out in another decade or so (and that does pin itself on the hope the financially feckless idiot in the White House doesn't torpedo the economy..)

[–] thericofactor@sh.itjust.works 75 points 1 day ago (2 children)

I notice getting lazier. Even adding a. gitignore file I ask Claude now. It takes longer than typing it myself and costs more probably. But I don't have to do anything but wait a few seconds.

[–] cecilkorik@lemmy.ca 53 points 1 day ago (1 children)

If I was paying for it, hell naw. But if my employer not only is willing to pay for it, but considers it a performance metric? I'm going to use it for fucking everything. These are the incentives they give me, I'm going to follow the incentives. Talking to Claude is what they pay me for, apparently.

But like the article says, if I don't continue practicing on my own code in my unpaid off-work hours, I imagine I'd be regressing in my skills too. I do that because I enjoy it as a hobby, but if I didn't, I could see myself and probably a lot of other people getting rugpulled by this.

[–] Wfh@lemmy.zip 24 points 1 day ago* (last edited 1 day ago)

I'm not using it for the incentive. I'm using it to avoid punishment. The company I work for made it mandatory to use it daily. So I'm tokenmaxxing bullshit tasks so I can focus on interesting ones, but yeah I already feel it's making me lazy because I sometimes can't be bothered to read a log anymore. We are truly fucked.

This company is working on terrible assumptions. They spent years hunting for the best engineers in the country (or so they pretend to anyway) and suddenly decided that

  • we are average at best and it is better and faster than most of us (it's not)
  • software engineers don't like to write code anyway (we do, at least when the challenge is interesting)
  • it will forever be more affordable than properly qualified engineers (oh boy it won't)
  • a PM with Claude is as qualified as us to bring features to production (talk about tech stack suicide)
  • etc.

They either have drunk the propaganda koolaid and betting everything on this lie, or are so arrogant they think we can succeed where the largest AI investors in the world utterly failed (see GitHub that can't even get 3 nines of availability since the switched to full-ai-code).

[–] meme_historian@lemmy.dbzer0.com 37 points 1 day ago (1 children)

The thing that scares me (and why I've stopped using it): my brain automatically reaches for the shortcut whenever I would have to do deep thinking/planning.

I have ADD, so getting my brain to focus and work on a task is not an easy feat to begin with. Now I've found myself multiple times a day unable to will myself to think about a problem but rather deferred to Claude. It's seriously fucked up.

[–] NoForwadSlashS@piefed.social 12 points 1 day ago (3 children)

That's not even diminished coding ability, that's diminished thinking ability.

And herein lies the reason AI is being pushed at all costs.

load more comments (3 replies)
[–] Appoxo@lemmy.dbzer0.com 32 points 1 day ago (10 children)

For those unable to code without AI:
What even is your contribution outside of a glorified typing monkey that can parse code but is unable to write it?
It's like a paramedic not being trained at all for a medical emergency response but sent there regardless to just stand and observe the patient while writing notes about the sounds they make while dying.

[–] Luckyfriend222@lemmy.world 27 points 1 day ago (1 children)

So this is going to invoke a multitude of downvotes, but here goes.

I will give you an example. I can read a bit of python code, not the advanced stuff, but enough to understand to a large degree what the code does. Last week, I had the need to add a button to Netbox that will download a multitude of device configs that are being rendered via config templates. This use case helps a whole department apply configs, without having to create them by hand.

I knew Netbox has a very powerful plugins ecosystem. The way the base code is written grants the capability of adding any type of plugin you might need in your unique environment. I used Claude to create this plugin for me. I wrote a very specific spec file, told it to utilise the already built pynetbox plugin and ensure it uses nothing fancy that is not sustainable. It created the plugin, helped me with pip installing it, and I deployed it on my dev environment where I tested it extensively.

My alternative to using claude: Asking our internal development team to write something like this. I would need to wait 3 weeks to even get a spot on their meeting for the request, just to then be told their backlog is full with customer code and they won't be able to help. This plugin will help our support team with fewer calls, because the configs are accurately built according to the source of truth (Netbox) and will need less human input. So in the greater scheme of the company, that is a net positive.

What I will do when Netbox updates, is update my dev environment, install the plugin, and test it. If something broke, I will troubleshoot it, of course I will be using Claude with error logs etc, then update the plugin code to work on the new netbox. Is this ideal? Probably not. Is it the only way to get this done? Maybe not either. Is it all I can do at this very moment? Yes.

My specialist fields are the lower levels. Hardware, hypervisors and setting up VMs + System Software. I need code from time to time to get something functional done. I don't write whole systems with Claude, that is just ridiculously naive. But small pieces of functional code that solves a single small problem, I honestly don't understand the problem with that.

My 2c.

[–] Appoxo@lemmy.dbzer0.com 21 points 1 day ago (2 children)

But you arent a dev as a main job.
This is talking about developers, employed as developers, beginning to being inept to be developers and (not offense) being not worth much more than what your technical abbilities already provide.
So what's their point?

It's like someone being employed as a translator, is able to hear the language and sort of understand it but every translation is done through deepL or google translate.
So why should I a translator instead of using paid deepL directly and proofread it using google translate to make sure it didnt generate (mostly) nonsense?
Isnt this mostly the point of a trained professional to being better than a self taught amateur?

load more comments (2 replies)
load more comments (9 replies)
[–] farmgineer@nord.pub 31 points 1 day ago (6 children)

This is why I don't use it for coding at all.

load more comments (6 replies)
[–] Matty_r@programming.dev 19 points 1 day ago (1 children)

Go ahead, use your AI to replace all of your own skills. The rest of us will gladly take your job when you can no longer troubleshoot problems.

[–] jj4211@lemmy.world 10 points 1 day ago (1 children)

Based on my experience with LLM and developers I personally know, my only assumption is they don't have the skills in the first place...

In corporate world there are a lot of "developers" that actually act kind of like codegen. They just throw plausible sounding bullshit into an editor and hope for the best. Two examples:

Once asked to help a team speed something that ran slow, even by their low standards. Turned out they had made their own copy file routine instead of using the standard library one, and sucked the file into memory, expanding array 512 bytes at a time, and then wrote it out, 512 bytes at a time. I made the thing nearly instant by just making it a call to the standard library function to copy a file.

While helping with a separate problem, I noticed their solution for transferring some file with an indeterminate version number in the middle of the file name. It was a huge mess, but the most illustrative line was the line in their Java application declaring a string "ls /path/with/file|grep prefix.*.extension".....

Lots of human slop out there that AI can actually compete with.

load more comments (1 replies)
[–] CaptainBasculin@lemmy.dbzer0.com 10 points 1 day ago (4 children)

It's really useful in creating base templates, but anything further than that and you won't be able to read "your own" codebase if you depend too much on AI.

load more comments (4 replies)
[–] gokayburucdev@lemmy.world 9 points 1 day ago

Muscles that are not used lose their function.It weakens and eventually becomes unusable.As humans' ability to ask questions of artificial intelligence increases, its ability to learn and store information is disappearing.If the brain can obtain something easily, it doesn't feel the need to take precautions regarding it.Therefore, memorizing code doesn't involve things like writing it anymore.It only keeps the recognition function active when it sees it.

load more comments
view more: ‹ prev next ›