this post was submitted on 25 Nov 2025
832 points (98.7% liked)
Programmer Humor
27673 readers
830 users here now
Welcome to Programmer Humor!
This is a place where you can post jokes, memes, humor, etc. related to programming!
For sharing awful code theres also Programming Horror.
Rules
- Keep content in english
- No advertisements
- Posts must be related to programming or programmer topics
founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
I was talking about you, but not /srs, that was an attempt @ satire. I'm dismissing the results by appealing to the fact that there's a process.
Reward is an AI maths term. It's the value according to which the neurons are updated, similar to "loss" or "error", if you've heard those.
Yes this is also possible, it depends on minute details of the training set, which we don't know.
Edit: As I understand, these models are trained in multiple modes, one where they're trying to predict text (supervised learning), but there are also others where it's given a prompt, and the response is sent to another system to be graded i.e. for factual accuracy. It could learn to identify which "training mode" it's in and behave differently. Although, I'm sure the ML guys have already thought of that & tried to prevent it.
I agree, noted this in my comment. Just saying, this isn't evidence either way.
Deferring to authority is fine as long as you don't make assumptions about what happened or didn't happen.
I mean, because it's a risk that's obvious even to me, and it's not my job to think about it all day. I guess they could just be stupid. 🤷
I feel like it is not wise to discard the opinion of a layperson with this reasoning. Sure experts have been working on it as their day job vs. Us just looking at the fruits of their labour. But that doesn't justify the assumption that they are infallible. Don't you agree in our own areas of supposed expertise we are often corrected or get inspiration from supposed laymen simply because we have been too myopic about solving the problem ahead of us?