this post was submitted on 11 Mar 2026
453 points (98.1% liked)
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
I don't get it. Why would you take a program (or ANYTHING) you created and let some AI shit all over it? I will never.
Am I allowed to have an unpopular narrative here?
There are levels of vibe coding, and it's possible to use AI without vibe coding at all.
If you're very targeted in what you're having the AI do and you carefully review the code, it can be a great tool.
For example, "make this html grid sortable and add a download button that creates a csv file." You know exactly what this does, it's self contained, and it's something you know can just be copied from stack overflow and applied to your code.
That works, and works well.
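To make it concrete, here's roughly what the CSV half of that prompt boils down to. This is just a sketch, not anyone's actual code; the element ids and filename are made-up placeholders:

```typescript
// Sketch: wire a "Download CSV" button to an existing HTML table.
// The #data-grid / #download-csv ids and the filename are placeholders.
function tableToCsv(table: HTMLTableElement): string {
  return Array.from(table.rows)
    .map((row) =>
      Array.from(row.cells)
        // Quote every field and double any embedded quotes (RFC 4180 style).
        .map((cell) => `"${(cell.textContent ?? "").replace(/"/g, '""')}"`)
        .join(",")
    )
    .join("\r\n");
}

document.querySelector("#download-csv")?.addEventListener("click", () => {
  const table = document.querySelector<HTMLTableElement>("#data-grid");
  if (!table) return;
  const blob = new Blob([tableToCsv(table)], { type: "text/csv;charset=utf-8" });
  const url = URL.createObjectURL(blob);
  const link = document.createElement("a");
  link.href = url;
  link.download = "grid-export.csv"; // placeholder filename
  link.click();
  URL.revokeObjectURL(url);
});
```

Something that size you can read top to bottom in a minute and verify completely, which is the whole point.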
"Create an app that..." is vibe coded slop.
Even if this works, you'll be stealing someone else's code without authorship attribution for anything that's a non-trivial algorithm.
Most devs are already doing that. This just saves them the time of doing it for themselves.
The copyright/license issues that come with it, due to the current unregulated nature of AI, are a completely separate issue from the vibe-coded slop allegations.
no. it's one aspect of many. Using slop is ethically wrong AND it produces shitty code with zero innovation and creates technical debt.
It can be useful when an experienced programmer knows how to guide it, although you have to be very intentional or you'll end up wasting your time cleaning up after it.
That being said, I think most people are upset that they're no longer declaring which parts of the code are AI-assisted.
I'm going to assume from the part where they say they were at their lowest that the option they saw in front of them wasn't "code with AI or not" but rather "burn out and don't code, or code with AI". And they chose to make progress using the crutch rather than stop. That's my guess.
Hmm, I mean, that happens to every creator. Writer's block, burnout, etc. I guess it all comes down to what you think is important and what your values are. I usually just walk away and do something else for a while, even a few weeks or months.
most writers don't get growing stacks of bug reports. open source burnout is extremely common, unfortunately.
Like, I agree with you about open source burnout, but it feels weird to make it a dick measuring contest with writers, as a writer myself.
writers are arguably suffering more. not because llms can replace them at all to the degree they can junior programmers, but because the people making the decisions believe they can.
also, i wasn't the one who brought it up :P
LLMs also aren't good at replacing junior programmers, but the people in charge believe that they can do that too.
well they are, in that they produce bad code that has to be vetted thoroughly and they don't know git.
It doesn't take that long to teach a junior dev basic Git, and they can often explain their bad code. Plus, junior devs turn into senior devs, and LLMs don't.
junior devs don't turn into senior devs if they're replaced with llms though :(
You didn't bring it up, but you're the one who implied it was a contest of who suffers more. Your comment was worded in a way that made it sound like they had it worse than writers, when the original commenter was just saying that all creatives experience burnout (not making a comparison).
it wasn't really about suffering more; the point was that it's more out in the open and involves more direct contact with people. i'm sure andy weir had the same issues with the martian since it was written in public.
Good point there, that does sound annoying, and I'm sure I would want to fix the bugs as fast as possible too. But then you're using AI and introducing who knows how many new bugs, ones you won't easily be able to track down since you didn't write the code, so then you're locked into using AI. Personally I would rather have buggy software; nothing is perfect. Open source developers don't owe anyone anything, so if people are being assholes about bugs, that's pretty lame.
yeah that's what's bugging me about all this. "remember the human" is even more important now.
regarding introducing new bugs, both high-profile cases from this past week have involved seasoned developers of tools with extensive test suites who claimed to have tested everything thoroughly. when someone with 30 years of experience says they've tested something, i tend to trust that judgement. but on the other hand we've also seen the cognitive decline heavy llm usage seems to lead to...
Because you can do a lot more with it. Have you ever tried coding? Before AI, if you didn't know how to do something, it was "Ask a question on Stack Overflow, then get told this question had already been asked/answered, then get linked to a loosely related question". Now I can ask AI all my random obscure questions.
I get being cautious around sensitive systems like banking apps and government databases, but why would you hate LLM-generated code this much?
What I don't get is people's inability to cope with their own limitations, or to find their way out of problems without asking a magic box to do everything for them. Yes, I have done some coding. Asking on Stack Overflow wasn't even that bad, and eventually you could find an answer to almost anything there if you knew what you were looking for. Paging through programming books looking for answers was a lot more difficult by comparison. However, both actually taught you things during the process; you made mistakes, learned, etc. The AI is teaching you nothing, it's just doing the work for you. I don't respect that. If you use it, that's your business, but it's not your code and not your product or whatever.
I don't know who those people are. I coded for 20 years before LLMs, and I coped just fine.
Unless you ask it to explain things to you. Which is often required to fix the things that the AI can't get right on its own.
How is it not my code?
An LLM cannot ever "explain" anything to anyone, because it doesn't know anything. How are people still trusting anything these fucking things say?
Right?? It's bizarre to me that otherwise-smart-seeming people will think they can write "explain your reasoning" to the AI and it will explain its reasoning.
Yes, it will write some fluent response that reads as an explanation of its reasoning. But you may not even be talking to the same model that wrote the original text when you get the "explanation".
Because it's right more often than Google? I swear you AI critics aren't actually using AI.
Agreed. Delusional mindsets stuck in 2023. I've never seen more entitled people, punching down on FOSS devs over how they use their free time. "We need high quality, human-coded FOSS programs with ZERO AI slop in them!" "Why no, I've never contributed to an open source project, nor do I know how to code, why do you ask?"
Forks exist, get over it.
In case you missed it, courts have ruled that works produced by AI cannot have copyright, because it was not made by a human.
You can make use of AI-generated code, but you didn't write it. Since you can't copyright it, it's not your code - it's our code, comrade.
Courts have ruled that art that was 100% generated by AI cannot be copyrighted by the AI, because the AI is not a human person.
The same courts have also ruled that works that were assisted by AI but created by a human can be copyrighted by that human.
Thankfully real life is far more nuanced than "fuck ai" allows.
And get the wrong answer. But you don't know it's wrong, because you're not already an expert on the obscure subject.
Before AI, yes you had to learn how to do things. Why is that bad?
No, it's right more often than Google was.
If it was the wrong answer, the projects wouldn't work, now would they?
I'm still learning how to do things, just a lot faster, thanks to this helpful tool. Why is that bad?
I asked plenty of questions on SO and never had a bad experience. But I put quite a bit of work in. You couldn't ask "how do i sort a list in JAVA" and get answers; you had to ask "here's some code I'm writing, and it does X but I think it should do Y because Z, what's going on?" and people gave some really nice answers. (Or you could put "how do sort list java" into a web search and get a fine answer to that; it's not like SO was the only place to ask low-effort questions.)
One of the bad things with AI is it's soooo helpful that when I get questions now it's like "please create a DNS entry for foo.bar.baz" and they're asking because the AI got completely stuck on something simple (like making a request to api.github.com) and wandered up and down and eventually decided on some nonsense course of action and the developer has given up on thinking about anything.
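For scale, here's roughly everything a request to api.github.com involves. This is a sketch assuming Node 18+ or a browser for global fetch; octocat/Hello-World is just standing in as an arbitrary public repo, not anything from the actual incident:

```typescript
// Roughly the whole task: one GET against the public GitHub API.
// The repo path is an arbitrary example; public reads need no auth.
async function fetchRepo(owner: string, repo: string): Promise<void> {
  const res = await fetch(`https://api.github.com/repos/${owner}/${repo}`, {
    headers: { Accept: "application/vnd.github+json" },
  });
  if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);
  const data = await res.json();
  console.log(data.full_name, data.stargazers_count);
}

fetchRepo("octocat", "Hello-World").catch(console.error);
```

If someone can't get that far without the AI steering them into nonsense, the tool isn't saving them from thinking, it's replacing it.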