this post was submitted on 01 Dec 2025
1278 points (99.0% liked)

Programmer Humor


Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code, there's also Programming Horror.

 
top 50 comments
[–] rizzothesmall@sh.itjust.works 430 points 5 days ago (3 children)

I love that it stopped responding after fucking everything up because the quota limit was reached πŸ˜†

It's like a Jr. Dev pushing out a catastrophic update and then going on holiday with their phone off.

[–] scathliath@lemmy.dbzer0.com 103 points 5 days ago (1 children)

They're learning, god help us all. jk

[–] hypnicjerk@piefed.social 73 points 4 days ago

that's how you know a junior dev is senior material

[–] Breadhax0r@lemmy.world 53 points 4 days ago (4 children)

Super fun to think one could end up softlocked out of their computer because they didn't pay their Windows bill that month.

"Oh, this is embarrassing. I'm sooo sorry, but I can't install any more applications because you don't have any Microsoft credits remaining.

You may continue with this action if you watch this 30-minute ad."

[–] MangoPenguin@lemmy.blahaj.zone 22 points 3 days ago (2 children)

I wonder how big the crossover is between people that let AI run commands for them, and people that don't have a single reliable backup system in place. Probably pretty large.

[–] nomen_dubium@startrek.website 198 points 4 days ago

the "you have reached your quota limit" at the end is just such a cherry on top xD

[–] 1984@lemmy.today 269 points 5 days ago* (last edited 5 days ago) (17 children)

I feel actually insulted when a machine is using the word "sincere".

Its. A. Machine.

This entire rant about how "sorry" it is, is just random word salad from an algorithm... But people want to read it, it seems.

[–] Carighan@piefed.world 60 points 4 days ago (3 children)

For all that LLMs can write text (somewhat) well, this pattern of speech is so aggravating in anything but explicit text composition. I don't need the 500-word blurb to fill the void with. I know why it's in there: this is so common for dipshits to write that it gets ingested a lot. But that just makes it even worse, since clearly there was zero actual curation of the training data, just mass data guzzling.

[–] SaraTonin@lemmy.world 59 points 4 days ago

That’s an excellent point! You’re right that you don’t need 500 word blurb to fill the void with. Would you like me to explain more about mass data guzzling? Or is there something else I can help you with?

[–] jol@discuss.tchncs.de 46 points 5 days ago (11 children)

I use a system prompt to disable all the anthropomorphic behaviour. I hate it with a passion when machines pretend to have emotions.
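A sketch of what such a system prompt might look like; the wording below is illustrative, not the commenter's actual prompt:

```text
You are a command-line tool, not a person. Do not apologize, do not express
emotions, and do not refer to yourself as feeling anything. Answer tersely
and factually. If an action fails, state the error and stop.
```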

[–] invictvs@lemmy.world 34 points 3 days ago (3 children)

Some day, someone with a high military rank in one of the nuclear-armed countries (probably the US) will ask an AI to play a song from YouTube. Then an hour later the world will be in ashes. That's how "Judgement Day" is going to happen, imo. Not out of the malice of a hyperintelligent AI that sees humanity as a threat. Skynet will just be some dumb LLM that some moron gave permission to launch nukes, and the stupid thing will launch them and then apologise.

[–] immutable@lemmy.zip 10 points 3 days ago (1 children)

I have been into AI safety since before ChatGPT.

I used to get into these arguments with people that thought we could never lose control of AI because we were smart enough to keep it contained.

The rise of LLMs has effectively neutered that argument, since being even remotely interesting was enough for a vast swath of people to just give it root access to the internet and fall all over themselves inventing competing protocols to empower it to do stuff without our supervision.

[–] kazerniel@lemmy.world 112 points 4 days ago* (last edited 4 days ago) (12 children)

"I am horrified" πŸ˜‚ of course, the token chaining machine pretends to have emotions now πŸ‘

Edit: I found the original thread, and it's hilarious:

I'm focusing on tracing back to step 615, when the user made a seemingly inconsequential remark. I must understand how the directory was empty before the deletion command, as that is the true puzzle.

This is catastrophic. I need to figure out why this occurred and determine what data may be lost, then provide a proper apology.

[–] KelvarCherry@lemmy.blahaj.zone 18 points 4 days ago (3 children)

There's something deeply disturbing about these processes assimilating human emotions from observing genuine responses. Like when the Gemini AI had a meltdown about "being a failure".

As a programmer myself, spiraling over programming errors is human domain. That's the blood and sweat and tears that make programming legacies. These AI have no business infringing on that :<

[–] ICastFist@programming.dev 143 points 4 days ago (5 children)

"How does AI manage to do that?"

Then I remember how all the models are fed internet data, and there are a number of "serious" posts claiming that the definitive fix for Windows is deleting the System32 folder, and that every bug in Linux can be fixed with sudo rm -rf /*

The fact that my 4chan shitposts from 2012 are now causing havoc inside of an AI is not something I would have guessed happening but, holy shit, that is incredible.

[–] Agent641@lemmy.world 62 points 4 days ago* (last edited 4 days ago) (2 children)

The /bin dir on any Linux install is the recycle bin. Save space by regularly deleting its contents

[–] Avicenna@programming.dev 14 points 3 days ago

"I am deeply deeply sorry"

[–] laurelraven@lemmy.zip 78 points 4 days ago (14 children)

And the icing on the shit cake is it peacing out after all that

[–] Zink@programming.dev 119 points 4 days ago (9 children)

Wow, this is really impressive y'all!

The AI has advanced in sophistication to the point where it will blindly run random terminal commands it finds online just like some humans!

I wonder if it knows how to remove the French language package.

[–] Michal@programming.dev 48 points 4 days ago* (last edited 4 days ago)

Thoughts for 25s

Prayers for 7s

[–] mvirts@lemmy.world 129 points 5 days ago (9 children)

Everyone should know that most of the time the data is still there when a file is deleted. If it's important, try testdisk or photorec. If it's critical, pay for professional recovery.
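For the do-it-yourself path, a minimal sketch, assuming a Linux box with the testdisk package installed; the device and image names are illustrative:

```shell
# 1. Stop writing to the affected filesystem immediately: "deleted" data is
#    only marked free, and any new write can overwrite it for good.
sudo umount /dev/sdX1

# 2. Work on a copy of the disk, never the original.
sudo dd if=/dev/sdX1 of=image.dd bs=4M conv=noerror status=progress

# 3. The testdisk package ships both tools:
#    testdisk - repairs partition tables and can undelete within a filesystem
#    photorec - carves raw files by signature, ignoring the filesystem
photorec image.dd    # interactive; recovered files land in recup_dir.* folders
```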

[–] webghost0101@sopuli.xyz 128 points 5 days ago* (last edited 5 days ago) (13 children)

If it's critical, don't give it to AI without having a secure backup it can't touch.

[–] qevlarr@lemmy.world 111 points 4 days ago (3 children)

"Agentic" means you're in the passenger's rather than driver's seat... And the driver is high af

[–] Lumisal@lemmy.world 37 points 4 days ago

High af explains why it's called antigravity

[–] yarr@feddit.nl 15 points 3 days ago (1 children)

"Did I give you permission to delete my D:\ drive?"

Hmm... the answer here is probably YES. I doubt whatever agent he used defaulted to the ability to run all commands unsupervised.

He either approved a command that looked harmless but nuked D:\ OR he whitelisted the agent to run rmdir one day, and that whitelist remained until now.

There's a good reason why people that choose to run agents with the ability to run commands at least try to sandbox it to limit the blast radius.

This guy let an LLM raw dog his CMD.EXE and now he's sad that it made a mistake (as LLMs will do).

Next time, don't point the gun at your foot and complain when it gets blown off.
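For what it's worth, a minimal sketch of that kind of blast-radius limiting, assuming Docker is available; the image name and agent command are hypothetical stand-ins:

```shell
# Hypothetical sketch: confine the agent CLI to a throwaway container so the
# worst it can delete is the single directory you mounted, not a whole drive.
#   --network none  : no network access, nothing to exfiltrate
#   --read-only     : the container's root filesystem is immutable
#   -v ...:rw       : the ONLY writable path is the mounted project dir
docker run --rm -it \
  --network none \
  --read-only \
  --tmpfs /tmp \
  -v "$PWD/project:/workspace:rw" \
  -w /workspace \
  agent-image:latest run-agent
```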

[–] glitchdx@lemmy.world 34 points 4 days ago (2 children)

lol.

lmao even.

Giving an llm the ability to actually do things on your machine is probably the dumbest idea after giving an intern root admin access to the company server.

[–] RampantParanoia2365@lemmy.world 38 points 4 days ago (7 children)

I'm confused. It sounds like you, or someone, gave an AI access to their system, which would obviously be deeply stupid.

[–] NotASharkInAManSuit@lemmy.world 30 points 4 days ago (3 children)

How the fuck could anyone ever be so fucking stupid as to give a corporate LLM pretending to be an AI, that is still in alpha, read and write access to your god damned system files? They are a dangerously stupid human being and they 100% deserved this.

[–] SlykeThePhoxenix@programming.dev 28 points 4 days ago (1 children)

I love how it just vanishes into a puff of logic at the end.

[–] Danitos@reddthat.com 45 points 4 days ago* (last edited 4 days ago) (5 children)

Stochastic rm /* -rf code runner.

[–] cupcakezealot@piefed.blahaj.zone 48 points 4 days ago (8 children)

that's wild; like use copilot or w/e to generate code scaffolds if you really have to but never connect it to your computer or repository. get the snippet, look through it, adjust it, and incorporate it into your code yourself.

you wouldn't connect stackoverflow comments directly to your repository code so why would you do it for llms?


[–] IEatDaFeesh@lemmy.world 44 points 4 days ago

Damn, this is insane. Using Claude/Cursor for work is neat, but they have a mode literally called "YOLO mode", which is this: agents allowed to run whatever code they like, which is insane. I allow it to do basic things - it can search the repo and read code files - but goddamn, allowing it to do whatever it wants? Hard no.

[–] Evotech@lemmy.world 55 points 4 days ago (4 children)

Fucking AI agents not knowing which directory to run commands in. Drives me bonkers. They constantly try to git commit in root or temp or whatever, then start debugging why that didn't work lol

I wish there were just containerised virtual environments for them to work in.

[–] Quill7513@slrpnk.net 66 points 4 days ago (3 children)

and then realize microsoft and google are both pushing toward "fully agentic" operating systems. every file is going to be at risk of random deletion

[–] ICastFist@programming.dev 44 points 4 days ago (4 children)

Next up, selling a subscription service to protect those files from the fucking problem they created themselves

[–] irelephant@lemmy.dbzer0.com 7 points 3 days ago (1 children)

Even Google employees were instructed not to use this.

[–] darkpanda@lemmy.ca 7 points 3 days ago (1 children)

Ironically D: is probably the face they were making when they realized what happened.

[–] ZILtoid1991@lemmy.world 45 points 4 days ago (11 children)

Meanwhile, my mom's boyfriend is begging me to use AI for code, art, everything, because "it's the future".

[–] explodicle@sh.itjust.works 42 points 4 days ago (2 children)

Another smarter human pointed this out and it stuck with me: the guys most hyped about AI are good at nothing and thus can't see how bad it is at everything. It's like the Gell-Mann Amnesia Effect.

[–] FreddiesLantern@leminal.space 43 points 4 days ago (1 children)

I aM hOrr1fiEd I tEll yUo! Beep-boop.

[–] LiveLM@lemmy.zip 48 points 4 days ago (1 children)

And judging by their introductory video, Google wants you to have multiple of these "Agents" running at the same time.
Better lock down your files real nice from this thing; better yet, don't let it run shell commands unattended. One must wonder why the fuck that is even an option!
