Unit testing with LLMs is just asking an AI to hallucinate requirements.
Tests are the artifact that documents expected behavior, which makes them the worst candidate for code generation.