this post was submitted on 16 Mar 2026
393 points (95.8% liked)

Fuck AI

6318 readers
982 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

Related:

This is in a PR where Shougo, another long-time contributor, communicates entirely in walls of unparseable AI slop text: https://github.com/vim/vim/pull/19413

Thank you for the detailed feedback! I've addressed all the issues:

Thank you for the feedback! I agree that following the Vim 8+ naming convention makes sense.

Thank you for the feedback on naming!

Thanks for the suggestion! After thinking about this more, I believe repeat_set() / repeat_get() is the right choice:

Thank you for the feedback. A brief clarification.

https://hachyderm.io/@AndrewRadev/116176001750596207

@AndrewRadev@hachyderm.io

[–] hperrin@lemmy.ca 207 points 6 days ago (11 children)

I spent literally all day yesterday working on this:

https://sciactive.com/human-contribution-policy/

I’ve started to add it to my projects. Eventually, it will be on all of my projects. I made it so that any project could adopt it, or modify it to their needs. It’s got a thorough and clear definition of what is banned, too, so it should help settle any arguments over pull requests.

Hopefully more projects will outright ban AI generated code (and other AI generated material).

[–] PlutoniumAcid@lemmy.world 39 points 6 days ago (11 children)

I like this approach, but how can it be enforced? Would you have to read every line and listen to a gut feeling?

[–] hperrin@lemmy.ca 98 points 6 days ago (5 children)

Basically the best you can do is continue as normal, and if someone submits something that says it is or obviously is AI, point to this policy and reject it. Just having the policy should be a decent deterrent.

[–] Jankatarch@lemmy.world 24 points 6 days ago* (last edited 6 days ago)

Same mindset as "You don't need a perfect lock to protect your house from thieves, you just need one better than what your neighbors have."

If a vibecoder sees this, they will not bother with obfuscation and will simply move on to the next project.

[–] thethunderwolf@lemmy.dbzer0.com 21 points 5 days ago (1 children)

this is cool

you should make a post about this somewhere here on Lemmy

people should know about it

[–] hperrin@lemmy.ca 15 points 5 days ago

Ok, yeah, I’ll make a post for it.

Feel free to share it anywhere. :)

[–] Bibip@programming.dev 4 points 4 days ago (1 children)

hi, i have strong feelings about the use of genai but i come at it from a very different direction (story writing). it's possible for someone to throw together a 300-page storybook in an afternoon - in the style of lovecraft if they want, or brandon sanderson, or dan brown (dan brown always sounds the same and so we might not even notice). now, the assumption that i have about said 300-pager is that it will be dogshit, but art is subjective and someone out there has been beside themselves pining for it.

but this has always been true. there have always been people churning out trash hoping to turn a buck. the fact that they can do it faster now doesn't change that they're still in the trash market.

so: i keep writing. i know that my projects will be plagiarized by tech companies. i tell myself that my work is "better" than ai slop.

for you, things are different. writing code is a goal-oriented creative endeavor, but the bar for literature is enjoyment, and the bar for code is functionality. with that in mind, i have some questions:

if someone used genai to generate code snippets and they were able to verify the output, what's the problem? they used an ersatz gnome to save them some typing. if generated code is indistinguishable from human code, how does this policy work?

for code that's been flagged as ai generated- and let's assume it's obvious, they left a bunch of GPT comments all over the place- is the code bad because it's genai or is it bad because it doesn't work?

i'm interested to hear your thoughts

[–] hperrin@lemmy.ca 8 points 4 days ago* (last edited 4 days ago)

That’s a very good question, and I appreciate it.

I put a lot of this in the reasoning section of the policy, but basically there are legal, quality, security, and community reasons. Even if the quality and security reasons are solved (as you’re proposing with the “indistinguishable from human code” aspect), there are still legal and community reasons.

Legal

AI generated material is not copyrightable, and therefore licensing restrictions on it cannot be enforced. It’s considered public domain, so putting that code into your code base makes your license much less enforceable.

AI generated material might be too similar to its copyrighted training data, making it actually copyrighted by the original author. We’ve seen OpenAI and Midjourney get sued for regurgitating their training data. It’s not farfetched to think a copyright owner could go after a project for distributing their copyrighted material after an AI regurgitated it.

Community

People have an implicit trust that the maintainers of a project understand the code. When AI generated code is included, that may not be the case, and that implicit trust is broken.

Admittedly, I’ve never seen AI generated code that I couldn’t understand, but it’s reasonable to think that as AI models get bigger and more capable of producing abstract code, their code could become too obscure or abstracted to be sufficiently understood by a project maintainer.

[–] maegul@lemmy.ml 97 points 6 days ago (7 children)

Couldn’t help but notice the casual gendering of Claude to “he” as well.

Someone somewhere made the important observation not long ago that computer assistants tended to be gendered female when more like a secretary (Siri and Alexa) but now that AIs are “intelligent” and powerful … Claude now has to be a male.

Especially weird (and telling?) when it is objectively gender neutral as it’s not human.

[–] TheTechnician27@lemmy.world 64 points 6 days ago* (last edited 6 days ago) (8 children)

Couldn’t help but notice the casual gendering of Claude to “he” as well.

"Claude" is a male given name. If you think it's actually a problem, blame Anthropic for giving their LLM a gendered name. I've never gendered AI assistants, but I'm not going to begrudge people who do when it's in the name (or in the case of old Siri, the voice, which would later be the default rather than only option).

Women named "Claude" exist, but they're staggeringly outnumbered by men to a point where most people don't even know of women named "Claude" – let alone would immediately associate it as masculine.

[–] amino@lemmy.blahaj.zone 26 points 6 days ago (1 children)

it's extremely telling, however, how the marketing has shifted. i don't believe giving the coding plagiarism bot a male name is coincidental. most feminists would probably agree. we've known for decades that chatbots were given female names because they're trying to reenact some tradwife fetish and attract a male audience

[–] maegul@lemmy.ml 16 points 6 days ago

Not blaming anyone, this is social commentary.

But like the neutral “it” is right there.

In a world that’s both charged around gender and pronoun usage, and focused on the nature and value of LLMs … I think it’s weird that there isn’t more common pushback enforcing the non-human neutral, for the simple reason that it’s an objective fact amidst a swampy pool of (mis-)information synthesis.

A little like the Bechdel test, I feel like it’s the casualness and indifference around this gender bias (at least at the moment) that’s interesting and telling.

[–] GrindingGears@lemmy.ca 12 points 5 days ago

Let's not lose focus on the more immediate concern here: that this person is using a human pronoun to describe a computer.

[–] unknownuserunknownlocation@kbin.earth 14 points 6 days ago (1 children)

Let's not over interpret things here. Siri and Alexa are both mainly voice assistants, or at least started out as such. Studies have been conducted that show people trust female voices more than male voices. So the choice of female voices was obvious, and having female names is nothing surprising.

Also, Siri, Alexa and Cortana were seen as "intelligent" at the time, as well (or were supposed to be seen, depending on who you ask).

[–] hayvan@piefed.world 74 points 6 days ago (5 children)

The devs do have my sympathy; they dedicate their time and energy to these projects and start burning out.
The solution obviously shouldn't be drowning them in slop. They should just be slowing down. Vim has been an excellent and functional tool for many years now; it doesn't need more speed.
There are better ways to use LLMs as a productivity tool.

[–] unexposedhazard@discuss.tchncs.de 56 points 6 days ago* (last edited 6 days ago) (1 children)

I see this excuse of burnout every time it comes to LLM use, but I honestly do not buy it. You can't tell me every other dev out there just burnt out at the same time, in sync with the release of LLM coding assistants. If you use LLMs like this, you simply don't care about the project anymore and should move on with your life. It's better for everyone if it gets abandoned by the original dev and forked by ones that care. Sometimes you just gotta let go.

[–] hayvan@piefed.world 18 points 6 days ago

Agreed. They need to take a break at least.

[–] grandma@sh.itjust.works 49 points 5 days ago

AI psychosis

[–] AeonFelis@lemmy.world 10 points 4 days ago

TBH I don't really mind when LLMs are used for code reviews. My main issue[^1] with coding assistants is that the people using them don't verify the code they emit thoroughly (that would be too much work. Remember - reading code is harder than writing it), and thus they often push junk into the codebase and blame the AI for the bad quality when it crashes. But with code reviews there is no such risk, because you still have to read and understand the comments and decide on your own how to resolve them.

[^1]: Quality issue - I'm not talking about the ethical issues here.

Some caveats:

  • It must be disclosed that the comment was generated by AI. Disagreeing with a human reviewer (who's usually the maintainer) and disagreeing with an LLM are very different beasts.
  • If the submitter disagrees with an AI comment, and the reviewer agrees with the model's initial criticism - the reviewer[^2] needs to defend it themselves, not delegate the argument back to the LLM.

[^2]: Regular Open Source etiquette applies, of course. The reviewer is always allowed to reject the PR and ask the submitter to kindly fuck off.

[–] chonglibloodsport@lemmy.world 47 points 6 days ago (7 children)

Shougo is Japanese. I’m guessing he communicates like that because he uses translation rather than trying to communicate in broken English.

[–] SlurpingPus@lemmy.world 13 points 5 days ago

TBF if the reviewer just quoted Claude at me, I would reply with Claude or ChatGPT.

[–] fdnomad@programming.dev 55 points 6 days ago (2 children)

It's such a monumental waste of LLMs to include these slop phrases.

Employee 1 enters a prompt to send a slop mail that is so garbage it is unbearable to read using a brain.

So employee 2 either summarizes the slop mail using an LLM too or skips obtaining the information entirely and just goes straight to answering by prompting the next slop mail.

I wonder if that's by design - to make interacting with slop so painful that human-to-human communication will not happen without a LLM in between anymore.

[–] Mothra@mander.xyz 24 points 6 days ago (2 children)

I originally meant to leave a much shorter comment; apologies.

I can't code to save my life. However I find your observation interesting. The way I see it, AI, no matter where, is eroding human to human interactions. It becomes the middleman for everything.

It's really obvious with personal research. A couple years ago if you wanted to start say, growing tomatoes in your backyard, you would have searched people's comments on a variety of media platforms, would have read a few books or blogs. You would have asked questions to a bunch of people with some experience, left a like or upvote on people posting photos of their tomatoes, you would have used your own judgement to discern what consisted good quality advice and what not.

It would have taken you days. But all that interaction is very rewarding especially for those authoring comments, blogs, books, and photos of their experiences. Because nobody makes something just to be ignored.

Now LLM does all that process for you. In a matter of seconds. And giving no feedback or interaction to anyone whose information was used. It's depressing, but I'm intrigued to see how it plays out.

[–] fdnomad@programming.dev 12 points 6 days ago

I agree. Specifically for your example, I think the transformation has been going on for a while with the aggressive monetization of internet content / the ad industry and the general downfall of Google search. LLMs could be the final nail in the coffin for niche expertise on the broader internet.

I too am curious to see how AI companies will try to overcome the lack of human generated content to train their models on.

[–] user224@lemmy.sdf.org 9 points 6 days ago

Reverse compression: making transmission larger (while still being lossy).

[–] LiveLM@lemmy.zip 20 points 5 days ago

Truly nothing is sacred lmaoooooo

[–] hexagonwin@lemmy.today 29 points 6 days ago (7 children)

wtf. i really like vim. is everyone really using neovim instead and there's no good dev maintaining vim now?

[–] lemonhead2@lemmy.world 10 points 6 days ago

i ❤️vim. used it for some 15 years.

switched to neovim cause of firenvim which allowed me to use neovim in text areas in firefox

[–] peanuts4life@lemmy.blahaj.zone 21 points 6 days ago

I would like to mirror another commenter and mention that Shougo is Japanese and is probably using Claude to communicate.

[–] Brummbaer@pawb.social 24 points 6 days ago (5 children)

I wonder what Bram's stance would have been on AI.

Anyway, looks like it's time to learn emacs.

[–] AVengefulAxolotl@lemmy.world 15 points 6 days ago (1 children)

Having an AI that understands your codebase and can potentially answer an issue (which might not even be a real issue) is great, I think.

The problem I see here is that you have no idea that a bot is answering. Why isn't there a 'shougo-bot' / 'vim-helper-bot' / whatever named bot user for it?

"Talking" to an AI should always be disclosed; everyone feels betrayed whenever they find out that a clanker is on the other side of the channel.

[–] riccardo@lemmy.ml 11 points 6 days ago* (last edited 6 days ago)

I don't think those comments are generated and posted automatically by a bot plugged into their GitHub repo. I think they are generated by the author using an LLM and copy-pasted there - or, if the account is plugged into some LLM, they are at least manually reviewed. The answers to the replied-to comments are posted from ten minutes to some hours later. I don't think they lost their mind to the point of giving unvetted access to their reputable account to an AI that simply posts for them. That said, they could at least strip the obvious/uneasy parts that give off very LLM vibes, specifically those quoted in the OP.

[–] mrmaplebar@fedia.io 15 points 6 days ago

I'm probably more surprised than I should be that so many programmers are so pathetically lonely and delusional.
