I know how to use parametrised tests, but thanks.
Tests are still much more repetitive than application code. If you're testing a wrapper around some API, each test may need you to mock a different underlying API call. (Mocking all of them at once would hide things.) Each mock is different, so you can't just extract it somewhere, but it's still repetitive.
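For instance, a self-contained sketch of that repetition, with a hypothetical payments wrapper standing in for the real thing (none of these names are from an actual library):

```python
from types import SimpleNamespace
from unittest import TestCase, main
from unittest.mock import patch

def _unimplemented(**kwargs):
    raise NotImplementedError  # the real client would do HTTP here

# Stand-in for the underlying API client.
api = SimpleNamespace(create_charge=_unimplemented, create_refund=_unimplemented)

# The thin wrapper under test.
def charge(cents):
    return api.create_charge(amount=cents)

def refund(charge_id):
    return api.create_refund(charge=charge_id)

class TestWrapper(TestCase):
    def test_charge_forwards_amount(self):
        # Mock only the one underlying call this test exercises.
        with patch.object(api, "create_charge", return_value={"status": "ok"}) as mock_charge:
            result = charge(cents=500)
        mock_charge.assert_called_once_with(amount=500)
        self.assertEqual(result["status"], "ok")

    def test_refund_passes_charge_id(self):
        # Structurally near-identical, yet a different mock target, so
        # the mock setup can't simply be extracted and shared.
        with patch.object(api, "create_refund", return_value={"status": "ok"}) as mock_refund:
            result = refund(charge_id="ch_1")
        mock_refund.assert_called_once_with(charge="ch_1")
        self.assertEqual(result["status"], "ok")

if __name__ == "__main__":
    main()
```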
If you need three tests, each of which requires a (real or mock) user, a certain directory structure to be present somewhere, and input data to be fetched from somewhere, those are three things that, even if you streamline them, need to be done in each test. I have been involved in a project where we originally followed the principle of "if you need a user object in more than one test, put it in `setUp` or in a shared fixture", and the result was a rapidly growing, unwieldy shared setup between tests. And if you ever want to change one of those tests, you'd better hope you only need to add to it, not change what's already there, otherwise you break all the other tests.

For this reason, zealous application of DRY is not a good idea with tests, and so they are a bit repetitive. That is an acceptable trade-off, but also a place where an LLM can save you some time.
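To illustrate the shared-setup trap with a contrived, runnable sketch (all names made up):

```python
import unittest

def list_reports(user, files):
    # Toy implementation, just enough to run the tests.
    return files

def delete_report(user, path):
    if user["role"] != "admin":
        raise PermissionError(path)

class TestReports(unittest.TestCase):
    def setUp(self):
        # Shared fixture: every test below silently depends on these
        # exact values. Flip role to "admin" for some new test's sake
        # and test_viewer_cannot_delete breaks with it.
        self.user = {"name": "alice", "role": "viewer"}
        self.files = ["reports/q1.csv"]

    def test_viewer_can_list_reports(self):
        self.assertEqual(list_reports(self.user, self.files), self.files)

    def test_viewer_cannot_delete(self):
        with self.assertRaises(PermissionError):
            delete_report(self.user, self.files[0])

if __name__ == "__main__":
    unittest.main()
```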
Ah, the end of all coding discussions, "if this is a problem for you, your code sucks." I mean, you're not wrong, because all code sucks.
LLMs are like the junior dev. You have to review their output because they might have screwed up in some stupid way, but that doesn't mean they're not worth having.
I absolutely agree. My point is that if you need complex setup, there's a good chance you can reuse it and replace only the data that's relevant for your test instead of constructing it every time.
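Something like this, as a minimal sketch with made-up fields:

```python
def make_user(**overrides):
    # Sensible defaults cover the common case; each test overrides
    # only the fields that actually matter to it.
    user = {"name": "alice", "role": "viewer", "active": True}
    user.update(overrides)
    return user

# Each call states only the data relevant to that test:
assert make_user()["role"] == "viewer"
assert make_user(role="admin")["role"] == "admin"
assert make_user(active=False)["name"] == "alice"  # defaults still apply
```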
But yes, there's a limit here. We currently have a veritable mess because we populate the database with fixture data so we have enough data to avoid setup logic in each test. Changing that fixture data causes a dozen tests to fail across suites. Since I started at this org, I've been pushing against that and have introduced the repository pattern so we can easily mock DB calls.
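The shape of it, as a sketch with hypothetical names rather than our actual code:

```python
from typing import Protocol
from unittest.mock import Mock

class UserRepository(Protocol):
    # The only seam application code sees; the real implementation
    # talks to the database, the test double doesn't need one.
    def get_by_id(self, user_id: int) -> dict: ...

def greeting(repo: UserRepository, user_id: int) -> str:
    user = repo.get_by_id(user_id)
    return f"Hello, {user['name']}!"

# In a test, the repository is a one-line mock -- no fixture rows needed.
repo = Mock()
repo.get_by_id.return_value = {"name": "alice"}
assert greeting(repo, 1) == "Hello, alice!"
```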
IMO, reused logic/structures should be limited to one test suite. But even then, rules are meant to be broken; just make sure you can justify it.
I'm still not convinced that's the case, though. A typical mock takes a minute or two to write; most of the time is spent thinking about which cases to hit or refactoring code to make testing easier. Working with the LLM takes at least that long, especially if you count reviewing the generated code and whatnot.
Right, and I don't want a junior dev writing my tests. Junior devs are there to be trained with the expectation that they'll learn from mistakes. LLMs don't learn, they're perennially junior.
That's why I don't use them for code gen and instead use them for research. Writing code is the easy part of my job, knowing what to write is what takes time, so I outsource as much of the latter as I can.