this post was submitted on 08 Apr 2026
4 points (100.0% liked)

Programmer Humor

31090 readers
869 users here now

Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code there's also Programming Horror.

founded 2 years ago
top 17 comments
[–] zieg989@programming.dev 4 points 2 weeks ago (1 children)

I would not be surprised if Anthropic actually hired a real developer to make these PRs as a marketing stunt.

[–] BestBouclettes@jlai.lu 4 points 2 weeks ago

Well, if the model detected an issue, and a human tested it to make sure it was real and then fixed it, I think that's an acceptable use of AI tools.

[–] SkunkWorkz@lemmy.world 2 points 2 weeks ago

The ffmpeg team was mad at Google when Google reported a bug that had been found automatically with an AI. Google reported the bug without providing a fix and also gave an ultimatum: it would publicize the bug report after 60 days. That's what pissed off the ffmpeg devs. Not to mention that it was a very obscure bug; ffmpeg didn't decode a video file from a '90s video game correctly.

Anthropic, on the other hand, found a bug and provided a fix. So why would they be mad if the fix is properly written and fixes the bug?

[–] CannonFodder@lemmy.world 1 points 2 weeks ago (1 children)

AI tools can detect potential vulnerabilities and suggest fixes. You can still go in by hand, verify the problem, and carefully apply a fix.

[–] shirasho@feddit.online 1 points 2 weeks ago

AI is actually SUPER good at this, and it's one of the few places I think AI should be used (as one of many tools, ignoring the awful environmental impacts of AI and assuming an on-prem model). AI is also good at detecting code performance issues.

With that said, all of the recommended fixes should be applied by hand.
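As an illustrative aside, the kind of issue such tools routinely flag looks like this: a query built with string formatting, with a parameterized query as the hand-applied fix. This is a minimal Python sketch with hypothetical names, not code from any real report:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # The kind of finding an AI reviewer flags: building SQL with an
    # f-string lets a crafted input rewrite the query (SQL injection).
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_fixed(conn, name):
    # The hand-verified fix: a parameterized query, so the driver
    # treats the input strictly as data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# A classic injection payload: matches every row in the unsafe version...
assert find_user_unsafe(conn, "' OR '1'='1") == [(1,)]
# ...but matches nothing once the query is parameterized.
assert find_user_fixed(conn, "' OR '1'='1") == []
```

The point of the example is the workflow, not the specific bug: the tool can surface the pattern, but a human still confirms it is exploitable and writes the fix.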

[–] railcar@midwest.social 1 points 2 weeks ago (1 children)

It's OK to hate AI slop and recognize the immediate threat to cybersecurity it brings. At least they are trying to mitigate it. There have been no similar actions from the other frontier-model companies. They are deliberately helping open source projects with little funding keep pace.

https://www.anthropic.com/glasswing

[–] sunbeam60@feddit.uk 0 points 2 weeks ago (1 children)

Anthropic right now are the good people.

That probably won’t last. But out of a bad bunch they’re the least bad.

[–] 0xDREADBEEF@programming.dev 1 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

the good people.

You are limiting your own intelligence by thinking companies can be described in those words.

They are not good. They are profit-seeking. Profit-seeking doesn't necessarily mean evil, but it can never mean good. A non-profit whose goal is to improve the community around it, a co-op whose goal is to treat its workers with respect, etc., can all be described as 'good' to varying degrees, but no for-profit entity, especially a publicly traded one, can ever be described as 'good'.

[–] hitmyspot@aussie.zone 1 points 2 weeks ago

Hence their point about being the best of a bad bunch. Remember, the people making decisions are people. A corporation has no soul and only seeks profit. People work for corporations and can make good decisions and be good people, whomever they work for.

There were good people who worked for the Nazis. Unless you think the cleaner of the Nazi headquarters, for instance, cleaned as a way to do evil.

However, I take your point. I just think that's not the point of the discussion here, and it's no different from "both sides are bad" politics. It lacks nuance.

[–] spectrums_coherence@piefed.social 1 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

LLMs are very good at programming when there are a huge number of guardrails around them. For example, exploit testing is a great use case because getting a shell is getting a shell.

They kind of act as a smarter version of the infinite monkeys, one that can try and iterate much more efficiently than a human does.

On the other hand, in tasks that require creativity or architecture, and in projects without guardrails, they tend to do a terrible job, often yielding solutions that are more convoluted than they need to be or just plain incorrect.

I find it is yet another replacement for "pure labor", where the most unintelligent part of programming, i.e. writing the code, is automated away. While I will still write code from scratch when I am trying to learn, I likely will be able to automate some code writing, if I know exactly how to implement it in my head and I have access to plenty of testing to guarantee correctness.

[–] Serinus@lemmy.world 1 points 2 weeks ago

People have trouble with the middle ground. AI is useful in coding. It's not a full replacement. That should be fine, except you've got the AI techbros and CEOs on one end thinking it will replace all labor, and you've got the backlash to that on the other end that wants to constantly talk about how useless it is.

[–] sun_is_ra@sh.itjust.works 0 points 2 weeks ago (1 children)

Maybe he meant the code quality was so good it's like a human wrote it.

After all, if the code is good and follows all the best practices of the project, why reject it just because it was an AI that wrote it? That's racism against machines.

[–] endless_nameless@lemmy.world 0 points 2 weeks ago (2 children)

It's not possible to be racist toward inanimate objects. Computers are not a race. LLMs are not people.

[–] lIlIlIlIlIlIl@lemmy.world 0 points 2 weeks ago (1 children)

It’s possible to leverage the same human quality called “hate,” which underpins racism. It’s the same ugly human behavior. You can call it whatever you want; it’s still ugly.

We have a word for the concept you're thinking of. It's called bigotry. Racism is race-based bigotry. Anti-AI bigotry is reasonable and awesome, and is just called bigotry.

[–] Samsy@lemmy.ml 0 points 2 weeks ago (1 children)

That was rude against my wife-chatbot. Apologize to her, here: https://...

More like http://localhost:8000/wifebot