You have yet to suggest or confirm otherwise, so my point stands that your original post is unhelpful and non-contributive.
The issue is that you didn't verify anything the text-prediction machine told you before posting it as confirmation of someone else's point, and then slid into a victimized, self-righteous position when pushed back on. One of the worst things about how we treat LLMs is comparing their output to a human's -- they are not, figuratively or literally, the culmination of all human knowledge, and the only human-like fault they have is failing to check the validity of their answers. To use an LLM responsibly, you have to already know roughly the answer to what you're asking and be able to fact-check the response. If you don't do that, you're using it wrong. It's fine for programming, where correctness is governed by a small set of rules, or for surfacing patterns where we're limited, but don't treat it as a source of knowledge when it constantly crosses its wires.
Yeah, I feel you. I don't think the content is necessarily bad, but LLM output posing as a factual post needs, at a bare, bare minimum, to include the sources the bot used to synthesize its response -- and, ideally, a statement from the poster that they checked and verified against all of them. As it stands, no one except the author has any means of checking any of that; it could be entirely made up, and is very likely misleading. All I can say is that it sounds good, I guess, but a vastly more helpful response would have been a simple link to a reputable source article.