Code written with the help of an LLM and openly reviewed is different from what happened with Lutris, where the developer decided to obfuscate their use of AI-generated code.
The approach you suggest, a total ban, is one I can agree with in principle, and I think it's noble. In practice, though, it could lead to people accusing each other of using AI code whether or not it actually happened, or to contributors simply hiding their usage and submitting anyway without the reviewers knowing, which is counter-productive.
I've followed Lemmy development for 3 years now; the devs' approach is slow and steady, to a fault in some people's views. I think it's a better use of open source resources to encourage candor and honesty. If the repo gets spammed with AI-generated PRs, a blanket ban will probably follow, but contributors who accurately document and report their use of AI help direct reviewers' attention, making it easier to ensure the code isn't slop or full of hallucinations.