this post was submitted on 18 Mar 2026
699 points (98.5% liked)
Technology
Tests should be written from requirements. Using LLMs to write tests after the code is written (probably also by LLMs) is a huge anti-pattern:
The model looks at what the code is doing and writes tests that pass (or fail because they bungle the setup). What the model does not do is understand what the code needs to do and write tests that ensure that functionality is present and correct.
Tests are the thing that should get the most human investment, because they anchor the project to its real-world requirements. You will have tons more confidence in your vibe-coded appslop if you at least thought through the test cases and built those out first. Then, whatever the shortcomings of the AI codebase, if the tests pass you know it is doing something right.
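A minimal sketch of what "tests from requirements" can look like in practice. The function name and the discount requirement here are hypothetical, invented for illustration: the point is that the test encodes the requirement ("a discount must never push a total below zero") before any implementation exists, rather than mirroring whatever the generated code happens to do.

```python
# Requirement, written down before the implementation:
# applying a discount must never drop an order total below zero.
# apply_discount is a hypothetical function under test.

def apply_discount(total: float, discount: float) -> float:
    # Stub implementation so the example runs; in a requirements-first
    # workflow this body is written (or generated) *after* the tests.
    return max(total - discount, 0.0)

def test_discount_never_goes_negative():
    # The requirement, not the implementation, dictates this expectation.
    assert apply_discount(10.0, 25.0) == 0.0

def test_discount_reduces_total():
    assert apply_discount(10.0, 3.0) == 7.0

test_discount_never_goes_negative()
test_discount_reduces_total()
```

If a model-generated implementation later violates one of these, the failure points at a broken requirement rather than at a test that was reverse-engineered from the code.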
Honestly, I've never been on a team that stuck to TDD. As you test your stuff and come to understand whatever libraries and APIs you're calling, you modify your implementation as you go.
For public-facing methods, especially ones called by customers, having pre-agreed-upon tests matters more, but usually that's at the integration-test and system-test level. I usually use AI for unit testing and read what was written. Tests end up being a lot of harness writing and mock setup that you delegate to the model, and if there are gaps or incorrect requirements, you change them.
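The kind of harness-and-mock boilerplate described above might look like this sketch, using Python's standard `unittest.mock`. The `gateway` dependency and `charge_order` function are hypothetical, not from the thread; the mock setup is the mechanical part one might delegate to a model and then review.

```python
from unittest.mock import Mock

def charge_order(gateway, order_id: str, amount: float) -> bool:
    # Code under test: charges an order through an injected
    # payment-gateway dependency and reports success.
    receipt = gateway.charge(order_id, amount)
    return receipt["status"] == "ok"

def test_charge_order_success():
    # Mock setup: the repetitive harness work being delegated.
    gateway = Mock()
    gateway.charge.return_value = {"status": "ok"}

    assert charge_order(gateway, "o-1", 9.99) is True
    # Verify the collaborator was called exactly as expected.
    gateway.charge.assert_called_once_with("o-1", 9.99)

test_charge_order_success()
```

Reviewing generated tests like this mostly means checking that the asserted interactions match the real contract of the dependency, not just whatever the implementation currently does.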
I would never let the agent define the code structure. It doesn't understand business processes, or what might need to be extended or rewritten later.
I've been doing software for a while; I know how to review code. I don't vibe code. I let the model implement boilerplate and mapping functions while I do other stuff, like manual testing or talking with product. Done correctly, you can incorporate generative models into your workflow without fully handing over control.