Using AI to find errors that can then be independently verified sounds reasonable.
The danger lies in assuming that it will find all errors, or that an AI once-over is "good enough". That, after all, is what the wealthiest AI proponents are most interested in: a fully automated process with as few costly humans as possible.
The lesser dangers are 1) that the human using the tool loses or weakens their own ability to find bugs without external help, and 2) that the AI flags something that isn't a bug, and the human "fixes" it without understanding that it wasn't wrong in the first place.