Interesting! That's why my ....penis! Exactly! Thanks Mr autocorrect!

That's why the website I was trying to visit wasn't working!
That's why your what wasn't working?
Fine, penis.
Penis
Is there a reason these outages seem to have increased recently?
We've had three years of unnecessary tech layoffs.
Nobody knows how the fuck anything in their technology stack works anymore.
Everyone is just spinning the giant wheel over and over and hoping it doesn't land on bankrupt.
Sell your technology stocks, kids.
From the blog post OP linked in a comment:
We made an unrelated change that caused a similar, longer availability incident two weeks ago on November 18, 2025. In both cases, a deployment to help mitigate a security issue for our customers propagated to our entire network and led to errors for nearly all of our customer base.
It seems that the method they have for propagating new security configurations to their servers is not a gradual or group-based rollout: it pushes certain changes to all servers at once, so uncaught bugs end up hitting everything instead of just an initial test group (see the sketch below).
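For what it's worth, here's a rough Python sketch of what a gradual, health-gated rollout could look like. Everything in it (stage sizes, thresholds, field names) is made up for illustration; it's obviously not Cloudflare's actual deployment pipeline.

```python
import random
import time

# Hypothetical sketch of a staged, canary-style rollout with health checks,
# as opposed to pushing a config change to every server at once. All names,
# stages, and thresholds here are invented for illustration.

ROLLOUT_STAGES = [0.01, 0.05, 0.25, 1.0]  # fraction of the fleet per stage


def healthy(servers) -> bool:
    """Pretend health check: fail if any server on the new config reports errors."""
    return all(s["error_rate"] < 0.01 for s in servers)


def rollout(new_config, fleet):
    deployed = []
    for fraction in ROLLOUT_STAGES:
        batch_size = max(1, int(len(fleet) * fraction))
        for server in fleet[len(deployed):batch_size]:
            server["config"] = new_config
            deployed.append(server)
        time.sleep(1)  # soak time; in reality this would be minutes or hours
        if not healthy(deployed):
            # A bug only hits the current canary group; roll it back and stop.
            for server in deployed:
                server["config"] = server["last_known_good"]
            raise RuntimeError(
                f"rollout halted at {fraction:.0%}, rolled back {len(deployed)} servers"
            )
    return deployed


if __name__ == "__main__":
    fleet = [
        {"id": i, "config": "v1", "last_known_good": "v1",
         "error_rate": random.random() * 0.005}
        for i in range(100)
    ]
    rollout("v2", fleet)
    print("rollout of v2 completed on all servers")
```

The point is just that a bad change gets caught while it's only on 1% or 5% of machines, instead of taking down the whole fleet.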
In particular, the projects outlined below should help contain the impact of these kinds of changes:
Enhanced Rollouts & Versioning: Similar to how we slowly deploy software with strict health validation, data used for rapid threat response and general configuration needs to have the same safety and blast mitigation features. This includes health validation and quick rollback capabilities among other things.
"Fail-Open" Error Handling: As part of the resilience effort, we are replacing the incorrectly applied hard-fail logic across all critical Cloudflare data-plane components. If a configuration file is corrupt or out-of-range (e.g., exceeding feature caps), the system will log the error and default to a known-good state or pass traffic without scoring, rather than dropping requests. Some services will likely give the customer the option to fail open or closed in certain scenarios. This will include drift-prevention capabilities to ensure this is enforced continuously.
This is the actual answer with respect to Cloudflare. Their config system was fucked in November. It's still fucked in December. React's massive CVE just forced them to use it again.
More generally, the issue is companies forcefully accelerating feature development at the cost of stability, likely due to AI. That's how it is at the company I'm at, anyway.
Something they (Cloudflare) said recently about the last big outage is that there's a bug in a part of their system that isn't their own code/product, and the developer of that component isn't fixing it.
Interesting! Thanks for the information.
Without looking into this specific outage, I'd suggest things like deferred maintenance and "cost optimizing" technical staffing are often contributing factors. (At least in my experience)
Lack of NSA funding to run their man-in-the-middle platform that everyone likes.
There is now a blog post from cloudflare on the outage: https://blog.cloudflare.com/5-december-2025-outage/
I like that the headline needs to include the date so people know this is not an article from a few weeks ago.
Stop using it already. The internet was not meant to be centralised.
On the one hand, I 110% agree with you
On the other hand, it's so damn convenient. They cache your shit and they protect you from DDoS attacks, and they do it for free*
*Until you're big enough to warrant extortion from them.
I am pretty sure that 99% of sites would have less downtime from DDoS attacks than from outages like this. I have so many issues with Cloudflare that I don't even know where to begin, from over-caching causing problems to decrypting all traffic. Who the hell thinks that's really a good idea?
We just need IANA to add that new status code. /s

Is this an off season April Fool's joke?
TL;DR: React broke the internet.
Well, that, but also Cloudflare went down because they were trying to fix React's shit.
it's back
Welcome to weekend Spain