this post was submitted on 21 Apr 2026
462 points (95.7% liked)
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
founded 2 years ago
you are viewing a single comment's thread
Read the article. Their definition of "sabotage" includes not using AI tools.
I guess the wording says "sabotaging AI strategy," so it's our fault for falling into the intended misreading?
yeah I guess I fall under that definition as well
sure, I am an AI skeptic. I work in engineering; I should be critical of any tool. that doesn't mean I'm sabotaging the company's strategy, unless the strategy is to blindly implement AI tools. in which case, yeah, sure. but surely that's not the actual strategy, right?
anyways, short story time:
our company had a demo of an AI drawing creation tool. it was not very impressive, and their team couldn't answer many questions about how it works. it didn't seem like it would provide any value to us: it couldn't do complex drawings, and simple drawings take little time and effort to create anyway. so the moderate-complexity work is where it could shine, which coincidentally is also where junior people train their skills to become intermediates and seniors.

I'm not going to choose to turn our jobs into reviewing AI output, because that's bad for the company: it leads to poor job satisfaction, poor work quality, and inexperienced team members.
our CTO is vibe coding a bunch of tools for us right now. his approach is basically to not validate anything and let people find the issues. I report those issues when I find them, and I go looking for them when I suspect something is wrong. does that make me a saboteur, for trying to correct something?
the same guy has also used Claude to create some technical specifications and document summaries, and sent them out externally. the external team had many questions because the documents didn't make sense. a bad look for our company, I think, plus lots of wasted time tracking down the real information and then going back and correcting it later. am I a saboteur because I don't blindly adopt AI document generation, and keep asking who validated certain information instead of just using it and proceeding with my work?
Funnily enough, they count even "using unapproved AI tools" as sabotage. So I would say your actions fall under high treason?