The guy replying to me is (as far as I can tell) the sole owner and moderator of .wtf, which is the instance I've been using up until this point. I kinda already knew they allowed AI slop, as there's nothing in the rules that says otherwise, but this interaction really sealed my decision. "Hey, person who makes music. If you don't like another musician using the fascist plagiarism machine, how about you offer to create art for them? After all, if people simply donated their time and effort, maybe they wouldn't have to resort to pissing in the face of their fellow artists of a different medium. Think about it."
Also, I think you can donate to the instance in crypto?
Fuck right off with that.
On another note: PeerTube itself uses Whisper for automatic subtitle generation. It's something I don't LIKE, but I approached the devs about it and they responded very thoughtfully. I'll admit I don't know all the differences between locally run, open-source models used for accessibility and the horrible plagiarism machines we all despise. I suspect they're still built on exploitative tech / trained on stolen data and whatnot, and Whisper being a product of OpenAI doesn't inspire confidence, but Framasoft only uses it to transcribe speech, not generate it. That's hardly "generative" at all, is it? It's just creating subtitles. Now, that doesn't mean the program itself is ethical given how it was likely created (as the devs acknowledge), and we SHOULD push for ethical, FLOSS ways of doing these sorts of things. I'm sure it can be done; speech recognition wasn't exploitative before the AI boom, right? This is where my knowledge ends and I ask for feedback. Any thoughts?