196
Community Rules
You must post before you leave
Be nice. Assume others have good intent (within reason).
Block or ignore posts, comments, and users that irritate you in some way rather than engaging. Report if they are actually breaking community rules.
Use content warnings and/or mark as NSFW when appropriate. Most posts with content warnings likely need to be marked NSFW.
Most 196 posts are memes, shitposts, cute images, or just recent happenings. There is no real theme, but try to avoid posts that are highly inflammatory, offensive, very low quality, or far off topic.
Bigotry is not allowed. This includes (but is not limited to): Homophobia, Transphobia, Racism, Sexism, Ableism, Classism, or discrimination based on things like Ethnicity, Nationality, Language, or Religion.
Avoid shilling for corporations, posting advertisements, or promoting exploitation of workers.
Proselytization, support, or defense of authoritarianism is not welcome. This includes but is not limited to: imperialism, nationalism, genocide denial, ethnic or racial supremacy, fascism, Nazism, Marxism-Leninism, Maoism, etc.
Avoid AI generated content.
Avoid misinformation.
Avoid incomprehensible posts.
No threats or personal attacks.
No spam.
Moderator Guidelines
- Don’t be mean to users. Be gentle or neutral.
- Most moderator actions which have a modlog message should include your username.
- When in doubt about whether or not a user is problematic, send them a DM.
- Don’t waste time debating/arguing with problematic users.
- Assume the best, but don’t tolerate sealioning/just asking questions/concern trolling.
- Ask another mod to take over cases you struggle with, if you get tired, or when things get personal.
- Ask the other mods for advice when things get complicated.
- Share everything you do in the mod matrix, both so several mods aren't unknowingly handling the same issue and so you can receive feedback on what you intend to do.
- Don't rush mod actions. If a case doesn't need to be handled right away, consider taking a short break before getting to it. This is to say, cool down and make room for feedback.
- Don’t perform too much moderation in the comments, except if you want a verdict to be public or to ask people to dial a convo down/stop. Single comment warnings are okay.
- Send users concise DMs about verdicts that concern them, such as bans, except in cases where it is clear we don't want them at all, such as obvious transphobes. There is, of course, no need to notify someone that they haven't been banned.
- Explain to a user why their behavior is problematic and how it is distressing others rather than engage with whatever they are saying. Ask them to avoid this in the future and send them packing if they do not comply.
- First warn users, then temp ban them, then finally perma ban them when they break the rules or act inappropriately. Skip steps if necessary.
- Use neutral statements like “this statement can be considered transphobic” rather than “you are being transphobic”.
- No large decisions or actions without community input (polls or meta posts, for example).
- Large internal decisions (such as ousting a mod) might require a vote, needing more than 50% of the votes to pass. Also consider asking the community for feedback.
- Remember you are a voluntary moderator. You don’t get paid. Take a break when you need one. Perhaps ask another moderator to step in if necessary.
After being skeptical for a while, I've started using AI pretty heavily for writing code in languages I'm not as confident in (especially JS and SQL), as well as code that can be described briefly but is tedious to write. I think the problem here is "by" - it would be better to say "with".
You don't say that 90% of code was written by code completion plugins, because it takes someone to pick the right item from the list, check the docs to confirm it's right, and so on.
It's the same with AI. I check the "thinking"/planning logs to make sure the logic is right. Sometimes it is and sometimes it isn't, at which point you can write a brief pseudocode outline of what you want to do. Sometimes it starts down the right path and then veers off, at which point you can say "no, go back to this point", and generally that works well.
I'd say this kind of code is maybe 30-50% of what I write, the other 50-70% being more technically complex and in a language I'm more experienced in. So I can't fully believe the 30% figure: some people will waste time by not using it where it would speed them up, and others will use it too much and waste time trying to implement things more complex than it can handle. The latter especially irks me after spending 3½ hours yesterday reviewing a new hire's MR - time they could've spent actually learning the libraries, or I could've spent implementing the whole ticket with some time left over to teach them.
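For concreteness, here's a toy example of the "describable briefly but tedious" category, with the pseudocode brief as a comment up top. The task and all names are hypothetical - my own illustration, not anything from this thread:

```typescript
// Pseudocode brief (the kind of thing I'd hand the model):
//   for each order: group by customer, sum the totals,
//   then drop customers below a minimum spend.
// Trivial to describe, tedious to type out by hand.

interface Order {
  customerId: string;
  total: number;
}

function totalsByCustomer(orders: Order[], minTotal: number): Map<string, number> {
  const sums = new Map<string, number>();
  for (const o of orders) {
    sums.set(o.customerId, (sums.get(o.customerId) ?? 0) + o.total);
  }
  // Drop customers below the threshold, per the brief.
  for (const [id, sum] of sums) {
    if (sum < minTotal) sums.delete(id);
  }
  return sums;
}

console.log(totalsByCustomer(
  [{ customerId: "a", total: 5 }, { customerId: "a", total: 7 }, { customerId: "b", total: 2 }],
  10,
)); // Map(1) { 'a' => 12 }
```

The point isn't that code like this is hard - it's that reviewing the model's version against a brief like that is faster than typing it all yourself.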
Large language models can't think. The "thinking" they spit out to explain the other text they spit out is pure bullshit.
Why do you think I said "thinking"/planning instead of just calling it thinking... The "thinking" stage is actually just planning: the model lists out the facts and then tries to find inconsistencies, patterns, solutions, etc. I think planning is a perfectly reasonable thing to call it, as it matches the distinction between planning and execution in other algorithms, like navigation.
“Thinking” is just an arbitrary process to generate additional prompt tokens. The vendors have realized that people suck at writing prompts, and that their models clearly lack causal or state models of anything; they're simply good at word substitution into a context that is similar enough to the prompt they're given. So a solution to sucky prompt writing, and a way to sell people on the models' capacity (think Full Self-Driving - it's never been full self-driving, but it's marketed that way to make people think it's super capable), is to have the model itself look up better templates within its training data that tend to result in better-looking and better-sounding answers.
The thinking is not thinking. It's fancier probabilistic lookup.
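If "probabilistic lookup" sounds abstract: mechanically, generation is repeated sampling from a next-token probability distribution. This is a deliberately crude sketch with a made-up distribution, not how any real model is implemented:

```typescript
// Sample one token from a (token -> probability) table.
// Real models compute this distribution with a neural network;
// the table below is invented purely for illustration.
function sampleNextToken(dist: Record<string, number>): string {
  let r = Math.random();
  for (const [token, p] of Object.entries(dist)) {
    r -= p;
    if (r <= 0) return token;
  }
  return Object.keys(dist)[0]; // fallback for floating-point drift
}

// Hypothetical distribution after the prompt "The capital of France is".
const nextTokenDist = { " Paris": 0.92, " the": 0.05, " located": 0.03 };
console.log(sampleNextToken(nextTokenDist));
```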
nope
I like this write-up.
It reflects my experience with AI-assisted code generation.
That kind of matches my experience, but some of the negatives they bring up can be fixed by monitoring thinking mode. If the model starts to make assumptions on your behalf, or goes down the wrong path, you can interrupt it and tell it to pursue the correct line without polluting the context.