"Huge rich company responsible for hosting like half of the fucking internet spent the last year pushing code to global-scale production without so much as a review by a senior engineer."
That's how I read that headline.
This is a most excellent place for technology news and articles.
"Huge rich company responsible for hosting like half of the fucking internet spent the last year pushing code to global-scale production without so much as a review by a senior engineer."
That's how I read that headline.
How in the glorious fuck was this not a thing from the start? In a system this big and this critical, all code should be reviewed by cognizant individuals. Anyone who thought an LLM would be perfect and not need code reviews has their head so far up their ass they can see out their pee hole.
If you do this, you signal that the AI isn't ready for production, which limits your sales group's ability to market it. Which is, in reality, the actual case: AI sucks and should never be trusted.
What is AI good at? Creating thousands of lines of code that look plausibly correct in seconds.
What are humans bad at? Reviewing changes containing thousands of lines of plausibly correct code.
This is a great way to force senior devs to take the blame for things. But if the company actually wants to avoid outages, rather than just assign blame for them, changes will need to be small and focused, and the submitter will need to understand them and be able to explain them clearly. Wouldn't it be simpler just to say "No AI"?
If you ask a writer what AI is good for, they'll say it's good for art. But never use it for writing, because it's terrible at it.
If you ask an artist what AI is good for, they'll say it's good for writing. But never use it for art, because it's terrible at it.
AI's greatest feature in the eyes of the Epstein class is the ability to shift responsibility. People will do all kinds of fucked up shit if they can shift the blame to someone else, and AI is the perfect bag holder.
Just ask the school full of little girls in Iran that was likely a target picked by AI using out-of-date information claiming it was a barracks. Why bother confirming the target with current intel from the ground when no one's going to take the blame anyway?
Or I suppose add extra work by walking an AI tool through making small incremental changes.
In my experience, LLMs suck at making smart, small changes. To know how to do that they need to "understand" the entire codebase, and that's expensive.
Yeah, that’s what I mean by extra work. I can make the change myself, or I can argue with Claude Code until it does what I want.
If my job ends up being reviewing AI code spammed at me by vibe coding juniors all day, I’m joining a nunnery.
or hear me out, they can build it themselves so they don’t have to chase hallucinations. as a matter of fact, let’s cut the ai out of the project and leave it to summarizing emails.
This 1000x. You think that senior dev got to that level hoping one day all they'd have to do is evaluate randomly generated code? No! They want to create, build, design, integrate, share. Cut out the useless middle step and get back to the work these professionals have dedicated their careers to.
AI is an assistant, not a replacement. It amazes me that Amazon, Microsoft, Google, and all these "tech leader" companies are going to make the same tech fuckup multiple times.
If only the lessons were painful for them and not just us/the workers.
Wonder what the turnover rate in executives is. I bet it is about 8 years.
Yes, so now when there's a success, it gets attributed to AI. When there's an outage, that's the fault of humans not reviewing correctly. These senior engineers will get fucked in all scenarios.
Precisely. From Cory Doctorow's latest, very insightful essay on AI, where he talks about the promise of AI replacing 9 out of 10 radiologists:
"if the AI misses a tumor, this will be the human radiologist's fault, because they are the 'human in the loop.' It's their signature on the diagnosis."
This is a reverse centaur, and it's a specific kind of reverse-centaur: it's what Dan Davies calls an "accountability sink." The radiologist's job isn't really to oversee the AI's work, it's to take the blame for the AI's mistakes.
Morged.
Couldn't they, I don't know, just go back to people writing the code, and stop using AI to do something it clearly can't handle? Just an idea.
I guess they've invested (thrown) so much money at this thing that they're determined to make it work. Also, I know they've gone insanely deep into debt, and if it doesn't work they're going to lose an eye-watering amount of money; perhaps the bubble bursting will be the catalyst that brings down the entire world economy.
Oh, so yeah, they do have great incentive to make this work, but I don't see it happening. As usual, they fuck up and the rest of us pay the bill. None of the billionaires will suffer anything worse than a loss of face over this. Even if they've broken laws, all they ever get is a small fine and a slap on the back: "Better luck next time, ol' boy!"
I always saw code review as a dissertation defense. Why did you choose to implement the requirement this way? Answers like 'I found a post on Stack Overflow' or 'the AI told me to' would only move the question back one step: why did you choose to accept that answer?
I was a very unpopular reviewer.
Likely, but you did not let poor code pass. That is valuable.
As a senior, I would just keep rejecting them and make the AI find "reasons" why.