this post was submitted on 01 Dec 2025
1281 points (99.0% liked)

Programmer Humor


 
[–] greybeard@feddit.online 4 points 4 days ago (1 children)

The problem (or safety) of LLMs is that they don't learn from that mistake. The first time someone says "What's this Windows folder doing taking up all this space?" and acts on it, they won't make that mistake again. An LLM? It'll keep making the same mistake over and over again.

[–] skisnow@lemmy.ca 1 points 4 days ago (1 children)

I recently had an interaction where it made a really weird comment about a function that didn't make sense, and when I asked it to explain what it meant, it said "let me have another look at the code to see what I meant", and made up something even more nonsensical.

It's clear why it happened as well: when I asked it to explain itself, it had no access to its state of mind when it made the original statement; it has no memory of its own beyond the text the middleware feeds it each time. It was essentially being asked to explain what someone who wrote what it wrote might have been thinking.
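This statelessness is visible in how chat middleware actually talks to a model: every request resends the whole transcript, and that transcript is the model's entire context. A minimal sketch (the function and message shape here are illustrative, not any particular vendor's API):

```python
# Sketch of a stateless chat round-trip: the model keeps nothing between
# calls, so its only "memory" is the transcript the middleware resends.

def build_request(history, user_message):
    """Assemble the full payload for one model call. Note that the model's
    earlier answer is just text in the list, indistinguishable from text
    someone else could have written there."""
    return history + [{"role": "user", "content": user_message}]

history = [
    {"role": "user", "content": "Explain this function."},
    {"role": "assistant", "content": "It frobnicates the widget."},  # the weird claim
]

# Asking "what did you mean?" hands the model only the text above -- none of
# whatever internal state produced the original answer survives the call.
request = build_request(history, "What did you mean by that?")
```

So the follow-up question really is answered by "someone reading what it wrote", not by the author of the original statement.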

[–] greybeard@feddit.online 2 points 4 days ago (1 children)

One of the fun things self-hosted LLMs let you do (the big-tech ones might too) is edit the model's answer, then ask it to justify that answer. It will try its best, because, as you said, its entire state of mind is on the page.
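The trick above amounts to rewriting the last assistant turn in the transcript before the next call. A sketch of the idea (function name and message format are made up for illustration):

```python
# Sketch: swap the model's last answer for text it never produced. Since the
# transcript is the model's only memory, the next call will treat the edited
# text as its own prior statement and defend it.

def inject_edited_answer(history, fabricated_answer):
    """Return a copy of the transcript with the final assistant turn replaced."""
    edited = list(history)
    assert edited[-1]["role"] == "assistant", "last turn must be the model's"
    edited[-1] = {"role": "assistant", "content": fabricated_answer}
    return edited

history = [
    {"role": "user", "content": "Is this loop O(n)?"},
    {"role": "assistant", "content": "Yes, it's linear."},
]

# The model now "remembers" saying the opposite of what it actually said.
gaslit = inject_edited_answer(history, "No, it's O(n^2) because of the inner scan.")
followup = gaslit + [{"role": "user", "content": "Justify that answer."}]
```

From the model's side the edit is undetectable: the fabricated turn arrives in exactly the same form as a genuine one.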

[–] skisnow@lemmy.ca 1 points 4 days ago* (last edited 4 days ago)

One quirk of GitHub Copilot is that, because it lets you choose which model each question goes to, you can gaslight Opus into apologising for something that GPT-4o told you.