That's just the ones who admit it.
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
And we'll do it again!
We did it Reddit! We stopped AI!
Read the article. Their definition of "sabotage" includes not using AI tools.
I guess the wording says "sabotaging AI strategy" so it's our fault for the intended misunderstanding?
yeah I guess I fall under that definition as well
sure, I am an AI skeptic. I work in engineering. I should be critical of any tools. that doesn't mean that I'm sabotaging the company's strategy, unless their strategy is to blindly implement AI tools. in which case yeah sure, but like surely that's not the actual strategy, right?
anyways, short story time:
-
our company had a demo for an AI drawing creation tool. it was not very impressive and their team couldn't answer many questions about how it works. it didn't seem like it would provide any value to us since it couldn't do complex drawings and simple drawings take little time and effort to create. so the moderate complexity stuff is where it could shine, which coincidentally is also where junior people train their skills to become intermediates and seniors. and like I'm not going to choose to turn our jobs into reviewing AI output, because that's bad for the company - it leads to poor job satisfaction, poor work quality, and inexperienced team members
-
our CTO is vibe coding a bunch of tools for us right now. his approach is to basically not validate anything and let people find issues. I report these issues when I find them, as well as go looking for them if I suspect something is wrong. does that make me a saboteur? trying to correct something?
-
same guy also has used Claude to create some technical specifications/document summaries, and sent them out externally. the external team had many questions because the document didn't make sense. bad look for our company, I think, and lots of wasted time trying to figure out that information and then going back and correcting it later. am I a saboteur because I don't blindly adopt AI document generation and keep asking who validated certain information instead of just using it and proceeding with my work?
Funny enough they count even "using unapproved AI tools" as sabotage. So I would say your actions fell under high treason?
My boss asked people with tech skills to chime in about AI, so I made a quick report of a bunch of things that AI sounds like it'd be useful for in my line of work, and why it wouldn't be, with examples of times when a real human made the same kind of mistake while I've been working there, and how much that mistake cost the company. He decided not to pursue AI. You can call it sabotaging the AI strategy, or you can call it helping keep the company from making a major fuckup, take your pick.
I'd call it helping form the AI strategy. Sabotaging it would be waiting until they make one and then not following it.
I see they're laying the groundwork for a "stabbed in the back" narrative, when AI inevitably bursts.
DING-DING-DING
Fuck this ad for AI. It's trying to make it seem like workers don't use AI because they're scared. Only 8% said they were scared. The rest of us, the other 92%, aren't scared we're going to be replaced by AI. We see how shitty AI is and we don't like it because it sucks and makes things slower, not faster.
The super-users we surveyed were around 3x more likely to have received both a promotion and pay raise in the past year, compared to employees who have been slow to adopt these tools
I do agree with this point. One of my team members recently got a lot of brownie points because he's been doing AI demos. The execs love him because he's visibly following orders. Does he generate way more code than everyone else? YES, this is actually a horrible thing, but execs are clueless and think more code == more better. Is he more productive than others? Definitely not. The hot garbage he's generating is just bug-ridden tech debt.
I guess I'm sabotaging our AI rollout by getting out of the way. You wanna inject AI everywhere? Fine, do it. I'm not gonna review it though. If you can't take the time to write something, I'm not going to spend my time reading it.
About five years from now, there will be so much garbage code with unfixable bugs. It's difficult for me to imagine what kind of collapse this will cause, or how we will recover from it, which might take another decade. Fortunately we might be fighting each other with spears over fresh water by then, so we will have bigger problems to not solve.
I'd say be hopeful, but I don't know.
I am a software developer, and there have absolutely been times where a temp fix becomes permanent, but I've also had times where my boss has told me to clean up tech debt, or where I've been able to say "look, this whole chunk of code is both wrong and unmaintainable" (wrong as in it didn't do the thing correctly, but it looked correct-ish) and I've been allowed to just rewrite the broken code from scratch.
Idk, I also feel like at a certain point the code's bugs might be so obvious and troublesome that companies are forced to actually deal with the problem code, and when that happens will be different for every company and every program.
God, I can't imagine having to review code from that guy. What an ass
Oh no, it's so irresponsible of europesays.com to publish this practical list of ways to sabotage your company's AI rollout. Hopefully no other outlets include longer, more detailed lists, or we might see this kind of behavior start to spread:
The sabotage entails entering proprietary information into public AI tools, or using unapproved AI tools. Some employees report outright refusing to use AI tools. Others have even admitted to tampering with performance reviews or intentionally generating low-output work to make AI appear less effective.
This is amateur work. I've seen someone volunteer to head the staff AI training and, in the presentation, outline how bad AI is (e.g., terrible for the environment, not reliable, all true things), and also just put out the most half-assed training rollout. It had the effect of half the staff intentionally or unintentionally doing other forms of sabotage.
The balls on that guy, damn.
Balls? That's just doing your job
The sabotage entails entering proprietary information into public AI tools, or using unapproved AI tools.
Not sure how that one sabotages the company's AI strategy. That's just plain old data insecurity. Posting the same information to a forum would accomplish the same harm.
If the data leaks via an LLM, it discredits the LLM. If it leaks via a forum, it discredits the forum.
Not really imo. People will blame the leakers, not the llm, and they wouldn't be wrong. There's nothing you can do to stop people from leaking info into the public other than the threat of job loss and a massive lawsuit.
What would discredit the llm is if the llm provider violated their contract and used the data for something their customers didn't agree to.
And the CEO's phone number is 867-5309. I got it!
same number that i enter at grocery store checkouts!
if it's output by an ai, it can't be copyrighted.
That just sounds like the employees are using AI as asked of them, but the company's own offerings/tools are bad, or they're given bad goals, so they just turn to one of the major AI companies, like ChatGPT, since it's all AI anyway. That's not overt sabotage.
Some employees report outright refusing to use AI tools.
So having morals is sabotage now?
Only a living wage can prevent warehouse fires.
Too soon?
unfortunately too late, if only that poor ceo had known, he could have prevented this 😂
The AI First companies will reap what they sow.
You gotta pump those up! Those are rookie numbers.
From the article:
An Anthropic study released last month found AI is already theoretically capable of completing the majority of tasks associated with computer science, law, business, and finance, and other major white-collar fields
There's a huge difference between "capable of completing the majority of tasks" and "capable of completing the majority of tasks WELL".
Sure, you can have an AI code your web app or mobile app, for example. But it will be riddled with bugs and bloated with inefficient code.
And from what I've seen, it's not getting noticeably better at that.
But the AI companies won't acknowledge that, of course. They will continue selling the snake oil that cures everything that ails you.
An Anthropic study says AI can do everything
Crack dealer says crack is awesome
🙄
Honestly, the only reason I use that LLM shit now is because the job market in tech is getting a bit like the hunger games and my employer strongly encourages its use - and even then, the vast majority of my usage is “fancy search engine”. Even the boilerplate it gives me sometimes is like… really weirdly styled and quite often has to be corrected to be fit for purpose. I cannot understand people who just slap that shit in without even bothering to check it.
I cannot understand people who just slap that shit in without even bothering to check it.
Really? Because that's exactly what I do. I'm metered by my spend, so I produce just absolutely wholesale volumes of slop, don't review it or even look at it and just am like fine here you go, this is what you want. No one else reviews it, it gets piled on top of other slop, shits going to really start piling up and breaking. Execs won't care, they'll start firing people anyways, it'll continue to pile up, problems will multiply...
I mean the alternative is do nothing, get fired. Or contribute to a downfall, survive a little longer maybe, still get fired. Or fix it, produce really good work at antagonizing levels of fixing everything, and still get fired. So what's not to understand here?
Well, I do my best to fight the good fight and make systems that aren’t impossible to maintain, because I work at an oncology biotech and despite all its flaws it’s a very compelling and worthwhile mission to me overall. So I do my best to not treat things too transactionally.
The company I work for is pushing Claude and has set up a bunch of training sessions for us; after noticing the lack of use, they're instructing us to plan to attend one of these sessions. I keep ignoring every push they make. Why the hell would I train this hallucinating liar to take my job? It's crazy to expect us to use the thing they want to eventually replace us with.
Use it. Fuck it. You are cooked either way. It's not going to replace you; it's basically the next evolution of an excel formula for tasks (Claude's task matching actually does work pretty well), and an evolution of a search engine. I am unaware of any successful companies that are solely staffed by fancy search engines and next-gen excel formulas. You still need a human in the middle. This whole wave is like when they tried to offshore everyone a decade ago. Their customers left in droves because you couldn't understand the guy on the other side of the conversation, and someone being paid 30 cents an hour or whatever crap those poor people are exposed to isn't going to have any sort of due care or attention to the details.
Execs continue to funnel money and burn everything to the ground. This whole AI thing might actually be a good thing in the long run, in a roundabout way, in that it's going to cause a financial catastrophe. Everyone is going to get so burned by all this overhype and overpromising that they'll be terrified to spend money on stuff like this for quite some time, plus it'll maybe force a day of reckoning where the world finally wakes up as to what the value of a modern executive is (near zero).
Trying to use AI to do my job would honestly only make my job more stressful. It is a very detail-oriented job, and I do not trust AI to do it for me. I'm not willing to risk my job over having it done wrong; at least if I'm replaced I get severance and shit, but if I'm fired I get squat.
Only 29% admit to it. Most of the other 71% have the sense to keep their mouths shut.
There are plenty of AI shills out there, they're just a minority.
The rest are telling that 29% to shut the fuck up.
Seeing the dumbfuckery everywhere in corpo, they don't need any sabotage. Some people still don't know how to write a Jira ticket 10 years after a reorganization, or can't create a git repo on the first try after 10 years in a post. Train an AI on them and you fuck all your data. Most smart people don't want to be the first to go, because they have objectives and they know a new tool fucks the rhythm at first, and AI is blingbling for most use cases for now.
I think i headed off AI at my job.
Back when ChatGPT was still new, the DOO of the 20-person company I worked at sent out an employment contract as a PDF asking us to e-sign. I replied-all asking if it was intentional that there was a 3-year non-compete in the middle of a list of terms under a header stating
' is an at-will state. The following terms of employment are revocable by either party upon notice of termination of employment:'
The company owner replied LOL to the group and that was the last of it. They never did actually end up asking us to sign an employment contract. I have yet to see any AI adoption, besides the office director occasionally responding to my emails based on mistakes she blames on the AI summary.
had an AI training this morning... Asked a very pointed question about how to prevent hallucinations and bad responses, their response was hilarious.
I was wondering the other day whether AI companies were polluting their competitors' AI training sources to make their own AI look better.
when you poison the only well in town, everybody dies.
Not those of us that have filters on our houses
same.
if not sabotage, outright refusing to interact with anything that comes from it.
This is the way
Look what modernity has done to our X-wing pilots