this post was submitted on 09 Sep 2025
519 points (98.5% liked)
Technology
you are viewing a single comment's thread
Personal Anecdote
Last week I used the AI coding assistant within JetBrains DataGrip to build a fairly complex PostgreSQL function.
It put together a well-organized, easily readable function, complete with explanatory comments, that failed to execute because it was absolutely littered with errors.
I don't think it saved me any time but it did help remove my brain block by reorganizing my logic and forcing me to think through it from a different perspective. Then again, I could have accomplished the same thing by knocking off work for the day and going to the driving range.
Hey, look at the bright side, as long as you were chained to your desk instead, that's all that matters.
At one point I tried to use a local model to generate something for me. It was full of errors, but after some searching online for a library or existing examples, I found a GitHub repo that was almost an exact copy of what it had generated. The comments were the same, and the code was mostly the same, except this version wasn't fucked up.
It turns out text prediction isn't that great at understanding the logic of code. It's only good at copying existing code, but it doesn't understand why that code works, so the predictive model fucks things up when it takes a less likely result. Maybe if you turned the temperature down so it only gave the highest-probability prediction it wouldn't be horrible, but at that point you might as well just search online and copy the code it was going to generate anyway.
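For what it's worth, the "turn the temperature down" idea boils down to a few lines of sampling code. This is a toy sketch, not any real model's API: it assumes the model hands back a dict mapping candidate tokens to logit scores, which is a made-up interface for illustration. At temperature zero it degenerates into greedy decoding, i.e. always copying the single most likely continuation.

```python
import math
import random


def sample_next_token(logits, temperature=1.0):
    """Pick the next token from a {token: logit} dict.

    As temperature approaches 0, this reduces to greedy decoding:
    the single highest-scoring prediction wins every time.
    """
    if temperature <= 1e-6:
        # Greedy: take the most likely token only.
        return max(logits, key=logits.get)
    # Softmax with temperature scaling (subtract max for stability).
    scaled = {tok: score / temperature for tok, score in logits.items()}
    top = max(scaled.values())
    weights = {tok: math.exp(v - top) for tok, v in scaled.items()}
    total = sum(weights.values())
    # Weighted random draw over the candidates.
    r = random.random() * total
    acc = 0.0
    for tok, w in weights.items():
        acc += w
        if acc >= r:
            return tok
    return tok  # fallback for floating-point rounding
```

So `sample_next_token({"repo_code": 3.0, "mangled_copy": 1.0}, temperature=0.0)` always returns the memorized high-probability continuation, while higher temperatures let the less likely (and, per the comment above, often broken) variants through.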
But… how else do we sell our tool as a super-intelligent, sentient do-it-all?
The bigger problem is that your skills are weakened a bit every time you use an assistant to write code.
Not when you factor in that you are now doing code review for it and fixing all its mistakes.
It depends how you're using it. I use it for boilerplate code, for stubbing out classes and functions where I can tell it clearly what I want, for finding inconsistencies I might have missed, to advise me on possible tools and approaches for small things, and as a supplement to the documentation when I can't find what I'm looking for. I don't use it for architecting new things, writing complex and specialized code, or as a replacement for documentation. I feel like I have it fairly well contained to what it does well, so I don't waste my time on what it does badly, and it isn't really eating away at my coding brain because I still do the tricky bits myself.
This is exactly how it's meant to be used. People who think it's to be used for more than what you've described are not serious people.
There is no "meant to be used". LLMs were not created to solve a specific problem.
That is just dumb.
Your skills are weakened even more by copying code from someone else, because you use even less of your brain to complete your task.
Yet you people don't complain about that part at all and do it yourselves all the time. For some it is even the preferred method of work.
"Using your skills less means they get weaker, who would have thought!"
With your logic, you shouldn't use any form of help to code. Programmers should just lock themselves in a big black box until their project is finished, that will make sure their skills aren't "weakened" by using outside help.
No, that's not the same thing. It's the difference between looking up how to do something and having it done for you.
There have been multiple articles recently that show AI weakens skills.
https://www.forbes.com/sites/chriswestfall/2024/12/18/the-dark-side-of-ai-tracking-the-decline-of-human-cognitive-skills/
Btw there’s no need to add strawman arguments with scenarios I didn’t mention.