this post was submitted on 28 Jan 2026
582 points (99.0% liked)

Fuck AI

5502 readers
1311 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
(page 2) 50 comments
[–] gustofwind@lemmy.world -1 points 6 days ago (1 children)

until you have a coworker that loves using AI and produces an ungodly amount of work product in barely any time and now you have to keep up

[–] cmeu@lemmy.world -2 points 6 days ago* (last edited 6 days ago) (3 children)

My good friend had a boss who loves AI, so they used it to produce a strategic roadmap based on an email and a Teams transcript.

AI has its place... GIGO.

[–] Infrapink@thebrainbin.org 121 points 1 week ago (10 children)

I'm a line worker in a factory, and I recently managed to give a presentation on "AI" to a group of office workers (it went well!). One of the people there is in regular contact with the C_Os but fortunately is pretty reasonable. His attitude is "We have this problem; what tools do we have to fix it?", so he isn't impressed by "AI" yet. The C_Os, alas, insist it's the future. They keep hammering at him to get everybody to integrate "AI" into their workflows, but they have no idea how to actually do that (let alone what the factory actually does); they just say "We have this tool, use it somehow".

The reasonable manager asked me how I would respond if a C_O said we would get left behind if we don't embrace "AI". I quipped that it's fine to be left behind when everybody else is running towards a cliff. I was pretty proud of that one.

[–] kayzeekayzee@lemmy.blahaj.zone 47 points 1 week ago

Try giving them each an Allen wrench and telling them to apply it to their daily lives to boost productivity.

[–] FearMeAndDecay@literature.cafe 18 points 1 week ago (1 children)

That’s a banger line and I’m totally stealing it

[–] Infrapink@thebrainbin.org 16 points 1 week ago (1 children)

Hey now, stealing is wrong.

I will give it to you as a gift.

[–] friend_of_satan@lemmy.world 49 points 1 week ago (1 children)

I'm so sick of fixing AI slop code, especially because there's no love for people who fix the slop, only for the people who shipped the slop.

[–] mrgoosmoos@lemmy.ca 19 points 1 week ago (2 children)

Hell, I'm sick of fixing slop work from actual people

I am now semi-convinced that half of my co-workers are AI bots due to some of the dumb shit that they say

like literally AI hallucinations and reversals, coming from real people

[–] Triumph@fedia.io 34 points 1 week ago (1 children)

They have to justify the cost of the consultants they paid to tell them to spend money on it.

[–] pdxfed@lemmy.world 18 points 1 week ago

The emperor's new clothes in the trillions.

[–] nucleative@lemmy.world 30 points 1 week ago (1 children)

Any boss ramming a tool down their workers' throats without understanding it or validating its usefulness is not a particularly good boss.

There’s bosses, and then there’s directors, and managers, and c-suites. Essentially, the people who don’t do any real fucking work are super impressed by it.

[–] gravitas_deficiency@sh.itjust.works 25 points 1 week ago (5 children)

We just had an all hands where they were circlejerking about how incredible “AI” is. Then they started talking about OKRs around using that shit on a regular basis.

On the one hand, I’m more than a little peeved that none of the pointed and cogent concerns that I have raised on personal, professional, hobbyist, sustainability, environmental, public infrastructure, psychological, social, or cultural grounds - backed up with multiple articles and scientific studies that I have provided links to in previous all-hands meetings - have been met with anything more than hand-waving before being simply ignored outright.

On the other hand, I’m just going to make a fucking cron job pointed at a script that hits the LLM API they’re logging usage on, asking it to summarize the contents, intent, capabilities, advantages, and drawbacks of random GitHub repos over a certain SLOC count. There’s a part of me that feels bad for using such a wasteful service in such a wasteful fashion. But there’s another part of me that is more than happy to waste their fucking money on LLM tokens if they’re gonna try to make me waste my time like that.
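A minimal sketch of what that cron-driven token burner might look like. Everything here is an assumption, not the poster's actual setup: the `LLM_API_URL` environment variable, the JSON payload shape, the model name, and the repo list are all made up for illustration.

```python
# Hypothetical token-burning script; run from cron, e.g.:
#   */15 * * * * python3 burn_tokens.py
import json
import os
import random
import urllib.request

# Stand-in repo list; the real script would sample repos over a SLOC threshold.
REPOS = [
    "torvalds/linux",
    "python/cpython",
    "rust-lang/rust",
]

def build_prompt(repo: str) -> str:
    """Compose the summarization request for one repo."""
    return (
        f"Summarize the contents, intent, capabilities, advantages, "
        f"and drawbacks of the GitHub repository {repo}."
    )

def burn_tokens() -> None:
    """Send one metered request to the employer's logged LLM endpoint."""
    repo = random.choice(REPOS)
    payload = json.dumps({
        "model": "some-model",          # assumed model name
        "prompt": build_prompt(repo),
    }).encode()
    req = urllib.request.Request(
        os.environ["LLM_API_URL"],      # assumed endpoint (usage is logged on it)
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)         # fire and forget; cron handles scheduling
```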

[–] acchariya@lemmy.world 15 points 1 week ago (1 children)

If you have to define OKRs to get people to use a tool, perhaps the tool is not a good investment.

Hey man you are preaching to the choir here lol

[–] Rekorse@sh.itjust.works 23 points 1 week ago (1 children)

Bosses aren't oblivious; AI isn't for the workers' benefit. They need the workers to use the AI so it can improve and begin to replace them.

[–] queermunist@lemmy.ml 22 points 1 week ago (4 children)

That's part of how they're oblivious - mass adoption won't actually improve LLMs beyond a certain point, and we're long past it. The tech is fundamentally limited in what it can actually do, and instead of recognizing the limitations and working within them, they're pretending we're gonna have AGI.

[–] fibojoly@sh.itjust.works 21 points 1 week ago

Our new tech lead loves fucking AI, which lets him refactor our Terraform (I was already doing that), write pipelines in GitLab, and do lots of other shiny cool things (after many many many attempts, if his commit history is any indication).

Funnily, he won't touch our legacy code. Like, he just answers "that's outside my perimeter" when he's clearly the one who should be helping us handle that shit. Also it's for a mission critical part of our company. But no, outside his perimeter. Gee I wonder why.

[–] jjjalljs@ttrpg.network 12 points 1 week ago

I used some AI at work to do some stuff in polars, because I don't really know that library very well.

As a result I have a function that does what I asked for (I wrote tests), but I don't understand it and didn't really learn anything. Not a great trade.

[–] Jankatarch@lemmy.world 9 points 1 week ago

And the only reason they can get away with not charging the training and computation costs is a bunch of rich people essentially gambling a small portion of their generational wealth.

[–] Blaster_M@lemmy.world 9 points 1 week ago

Dilbert manager energy

[–] hexagonwin@lemmy.sdf.org 8 points 1 week ago

it's just great at pretending to do something, good enough to trick stupid execs

[–] Tollana1234567@lemmy.today 5 points 1 week ago

they are stringing it along so they can get their golden parachutes and bounce.

[–] despite_velasquez@lemmy.world 3 points 1 week ago (4 children)

It's undeniable that AI is great at problems with tight feedback loops, like software engineering.

Most jobs don't have the tight feedback loops that software engineering has

[–] CandleTiger@programming.dev 24 points 1 week ago

It's undeniable that AI is great at problems with tight feedback loops, like software engineering

I, CandleTiger, do hereby deny that AI is great at software engineering.

[–] vrighter@discuss.tchncs.de 21 points 1 week ago

It is totally deniable, because it's simply not true. It's been studied.

[–] laranis@lemmy.zip 11 points 1 week ago* (last edited 6 days ago) (1 children)

One nit: they're good at writing code. Specifically, code that has already been written. Software Engineers and Computer Scientists still need to exist for technology to evolve.

[–] MirrorGiraffe@piefed.social 3 points 1 week ago (2 children)

This. I was setting up a new service, and it scaffolded all the endpoints from the Swagger spec and helped me set up tooling and tests within a few hours. It also helped me research what has happened in the area since my last ms.

Now, when adding the business logic, I'll be doing most of it myself, as it tends to be a bit creative about what I'm trying to achieve and tends to forget to check my models, etc.

It's great at generic code but has issues with specifics.

[–] SocialMediaRefugee@lemmy.world 9 points 1 week ago (1 children)

It is pretty bad at things that are "black boxes" that require documentation to analyze. For instance, I was trying to debug an SSL issue with DB2 (IBM's database), and ChatGPT and Copilot gave conflicting answers. They frequently gave commands that didn't work, with great confidence of course. I had to keep feeding errors back to them. I even had to remind them that I was working in Linux and not Windows.

[–] AlecSadler@lemmy.dbzer0.com 6 points 1 week ago (1 children)

FWIW, ChatGPT and Copilot are two of the worst AIs out there for things like this. At many gigs I've had, they're outright banned because of how garbage they are.

[–] SocialMediaRefugee@lemmy.world 0 points 6 days ago (1 children)

Which ones have you had recommended?

[–] AlecSadler@lemmy.dbzer0.com 1 points 6 days ago

Claude Code, or Claude in general, notably Sonnet 4.5 and Opus 4.5.

Gemini is also solid; for coding I found it lesser than Claude, but for heavy inference and reasoning it can be great, and it also supports a larger context window.
