until you have a coworker that loves using AI and produces an ungodly amount of work product in barely any time and now you have to keep up
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
My good friend had a boss that loves AI, and so they used it to produce a strategic roadmap based on an email and a Teams transcript.
AI has its place... GIGO
I'm a line worker in a factory, and I recently managed to give a presentation on "AI" to a group of office workers (it went well!). One of the people there is in regular contact with the C_Os but fortunately is pretty reasonable. His attitude is "We have this problem; what tools do we have to fix it", and so isn't impressed by "AI" yet. The C_Os, alas, insist it's the future. They keep hammering on at him to get everybody to integrate "AI" in their workflows, but they have no idea how to actually do that (let alone what the factory actually does), they just say "We have this tool, use it somehow".
The reasonable manager asked me how I would respond if a C_O said we would get left behind if we don't embrace "AI". I quipped that it's fine to be left behind when everybody else is running towards a cliff. I was pretty proud of that one.
Try giving them each an Allen wrench and telling them to apply it to their daily lives to boost productivity.
That’s a banger line and I’m totally stealing it
I'm so sick of fixing AI slop code, especially because there's no love for people who fix the slop, only for the people who shipped the slop.
Hell I'm sick of fixing slop work from actual people
I am now semi-convinced that half of my co-workers are AI bots due to some of the dumb shit that they say
like literally AI hallucinations and reversals, coming from real people
They have to justify the cost of the consultants they paid to tell them to spend money on it.
The emperor's new clothes in the trillions.
Any boss ramming a tool down their workers' throats without understanding it or validating its usefulness is not a particularly good boss.
There’s bosses, and then there’s directors, and managers, and c-suites. Essentially, the people who don’t do any real fucking work are super impressed by it.
We just had an all hands where they were circlejerking about how incredible “AI” is. Then they started talking about OKRs around using that shit on a regular basis.
On the one hand, I’m more than a little peeved that none of the pointed and cogent concerns that I have raised on personal, professional, hobbyist, sustainability, environmental, public infrastructure, psychological, social, or cultural grounds - backed up with multiple articles and scientific studies that I have provided links to in previous all-hands meetings - have been met with anything more than hand-waving before being simply ignored outright.
On the other hand, I’m just going to make a fucking cron job pointed at a script that hits the LLM API they’re logging usage on, asking it to summarize the contents, intent, capabilities, advantages, and drawbacks of random GitHub repos over a certain SLOC count. There’s a part of me that feels bad for using such a wasteful service in such a wasteful fashion. But there’s another part of me that is more than happy to waste their fucking money on LLM tokens if they’re gonna try to make me waste my time like that.
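Roughly, the script that cron would fire could look something like this. Everything here is a hypothetical sketch: the endpoint URL, model name, env vars, and request shape are placeholders for whatever API your org is actually metering, and the repo list stands in for "random repos over some SLOC count".

```python
# Hypothetical token-burner sketch: one summarization request per cron tick.
# LLM_API_URL / LLM_API_KEY and the request body format are placeholders,
# not any real provider's API.
import json
import os
import random
import urllib.request

# Stand-in for "random GitHub repos over a certain SLOC count"
REPOS = [
    "torvalds/linux",
    "llvm/llvm-project",
    "chromium/chromium",
]

def build_prompt(repo: str) -> str:
    """The long-winded ask described above, for one repo."""
    return (
        f"Summarize the contents, intent, capabilities, advantages, "
        f"and drawbacks of the GitHub repository {repo}."
    )

def burn_tokens(api_url: str, api_key: str) -> None:
    """POST a single summarization request; the response is discarded,
    because the usage (and the bill) is the whole point."""
    body = json.dumps({
        "model": "some-model",  # placeholder
        "prompt": build_prompt(random.choice(REPOS)),
    }).encode()
    req = urllib.request.Request(
        api_url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(req)

if __name__ == "__main__" and "LLM_API_URL" in os.environ:
    # crontab entry (hypothetical): 0 * * * * /usr/bin/python3 burn.py
    burn_tokens(os.environ["LLM_API_URL"], os.environ["LLM_API_KEY"])
```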
If you have to define OKRs to get people to use a tool, perhaps the tool is not a good investment.
Hey man you are preaching to the choir here lol
Bosses aren't oblivious; AI isn't for the workers' benefit. They need the workers to use the AI, so it can improve and begin to replace them.
That's part of how they're oblivious - mass adoption won't actually improve LLMs beyond a certain point, and we're long past it. The tech is fundamentally limited in what it can actually do, and instead of recognizing the limitations to work within them they're pretending we're gonna have AGI.
Our new tech lead loves fucking AI, which lets him refactor our terraform (I was already doing that), write pipelines in gitlab, and lots of other shiny cool things (after many many many attempts, if his commit history is any indication).
Funnily, he won't touch our legacy code. Like, he just answers "that's outside my perimeter" when he's clearly the one who should be helping us handle that shit. Also it's for a mission critical part of our company. But no, outside his perimeter. Gee I wonder why.
I used some AI at work to do some stuff in polars, because I don't really know that library very well.
As a result I have a function that does what I asked for (I wrote tests), but I don't understand it and didn't really learn anything. Not a great trade.
And the only reason they can get away with not charging the training and computation costs is a bunch of rich people essentially gambling a small portion of their generational wealth.
Dilbert manager energy
it's just great at pretending to do something, good enough to trick stupid execs
they are stringing it along so they can get their golden parachutes and bounce.
It's undeniable that AI is great at problems with tight feedback loops, like software engineering.
Most jobs don't have the tight feedback loops that software engineering has
It's undeniable that AI is great at problems with tight feedback loops, like software engineering
I, CandleTiger, do hereby deny that AI is great at software engineering.
it is totally deniable. Because it's simply not true. It's been studied.
One nit: they're good at writing code. Specifically, code that has already been written. Software Engineers and Computer Scientists still need to exist for technology to evolve.
This. Was setting up a new service and it scaffolded all the endpoints from the swagger spec and helped me set up tooling and tests within a few hours. Also helped me research what has happened in the area since my last ms.
Now when adding the business logic I'll be doing most of it myself, as it gets a bit creative about what I'm trying to achieve and tends to forget to check my models, etc.
It's great at generic code, has issues on specifics.
It is pretty bad at things that are "black boxes" that require documentation to analyze. For instance, I was trying to debug an SSL issue with DB2 (IBM database) and ChatGPT and Copilot gave conflicting answers. They frequently gave commands that didn't work, with great confidence of course. I had to keep feeding errors back to it. I even had to remind it that I was working in Linux and not Windows.
FWIW, ChatGPT and Copilot are two of the worst AIs out there for things like this. At many gigs I've had they're outright banned for use because of how garbage they are.
Which ones have you had recommended?
Claude Code, or Claude in general, notably Sonnet 4.5 and Opus 4.5
Gemini is also solid, though for coding I found it lesser than Claude; for heavy inference and reasoning it can be great, and it also supports a larger context window