this post was submitted on 21 Apr 2026
534 points (99.1% liked)

Technology

[–] friend_of_satan@lemmy.world 16 points 1 day ago (1 children)

Oh no! How did this happen? ...I mean, how exactly did this happen? Is there a tutorial on how other engineers at other companies can replicate this?

[–] postmateDumbass@lemmy.world 1 points 20 hours ago

Just so they can avoid the same mistakes of course. Engineers hate mistakes.

[–] AeonFelis@lemmy.world 19 points 1 day ago (2 children)

That's what happens when you are renting your very skills from a company. You'll hone nothing and you'll be happy.

[–] imjustmsk@lemmy.ml 3 points 23 hours ago

but but ai better, ai future, we pay moni to all companiea Nd buy ai or we will be left without any growth - pleaz buy all ai- ai goof for making woled better place because it makes billionaires richer and they will definitely use that fo donate for charity 

(blinking twice) Elon Musk and Mark Zuckerberg told me to say that; I'm being held at gunpoint.

[–] deathbird@mander.xyz 1 points 1 day ago

Good twist on that one.

[–] Treczoks@lemmy.world 3 points 20 hours ago

Now this company can see which employees can actually still program, and which are just "AI Prompt Engineers".

[–] pyr0ball@lemmy.dbzer0.com 14 points 1 day ago

This is the nightmare scenario for any team that built their whole workflow around a cloud API. No warning, no clear reason, no real support path. Just a Google form and 60 people sitting on their hands.

The uncomfortable truth is that "terms of service" at this scale is just "we can pull the rug whenever." Anthropic isn't unique here either. OpenAI, Google, all of them have the same opaque enforcement problem. It's a big part of why I've been building tools that run on local inference by default. Not because cloud is bad, but because your users shouldn't be one vague policy complaint away from a complete outage.

Local gives you continuity even when the upstream disappears.
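
The "local by default, cloud as optional" idea above can be sketched as a simple backend chain, where a revoked provider degrades into a fallback rather than an outage. Everything here is illustrative: the backends are stubs, not real API clients.

```python
class ProviderRevoked(Exception):
    """Raised when an upstream provider suddenly cuts off access."""

def cloud_complete(prompt: str) -> str:
    # Stand-in for a cloud API whose account was just disabled.
    raise ProviderRevoked("account disabled; see support form")

def local_complete(prompt: str) -> str:
    # Stand-in for a locally hosted open-weight model.
    return f"[local] {prompt}"

def complete(prompt: str, backends) -> str:
    """Try each backend in order; one revoked provider is not a total outage."""
    last_err = None
    for backend in backends:
        try:
            return backend(prompt)
        except ProviderRevoked as err:
            last_err = err  # record the failure and fall through to the next backend
    raise RuntimeError(f"all backends unavailable: {last_err}")

print(complete("summarize this diff", [cloud_complete, local_complete]))
# → [local] summarize this diff
```

The ordering of the list is the whole policy: put local first for privacy, or cloud first for quality, and a ban upstream only changes which branch runs.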

[–] Lettuceeatlettuce@lemmy.ml 35 points 1 day ago (1 children)

Aaaaaand example #99999... Of why tech sovereignty is so important. The moment you start outsourcing your control, you become vulnerable to this exact kind of action by a company.

Everybody got sucked into the cloud "magic" for years, but now we are seeing the monster emerge more and more as proprietary technology enshittifies.

Luckily, there is a boom happening across the FOSS world, more and more people are finally waking up to the principles of software freedom and actual ownership.

May it continue to grow, as the corpos struggle and wither.

[–] naevaTheRat@lemmy.dbzer0.com 3 points 1 day ago (1 children)

I was working as a sadmin (like a sysadmin but more alcoholism) when the ~cloud~ butt became all the rage.

Suddenly nobody wanted to host services on the hypervisor down the road, administered by someone you could ~throttle~ call in a crisis. Nobody wanted to hire a monkey to keep their local tubes clean and run the basic stuff they needed.

Everyone could tell you that once they had your overbuilt shit locked in to their very specific apis and services they had you by the short and curlies and by god were they gonna squeeze for all you were worth.

Alas, nobody cared because initial offerings were cheap and your stupid magento storefront had to be webscale.

Now 6 companies control the internet and everything else is going that way too.

[–] partial_accumen@lemmy.world 4 points 23 hours ago* (last edited 23 hours ago) (1 children)

Everyone could tell you that once they had your overbuilt shit locked in to their very specific apis and services they had you by the short and curlies and by god were they gonna squeeze for all you were worth.

On-prem solutions don't necessarily protect companies from this either though. Anyone staring down the barrel of a Broadcom renewal for on-prem VMware licenses knows this pain.

Broadcom's screwing over of VMware has been the biggest accelerator of migration into the cloud in the last 5 years.

[–] naevaTheRat@lemmy.dbzer0.com 3 points 21 hours ago

There are FOSS hypervisors that are more than adequate for almost everyone's usage. I would not advise anyone to make any single company a critical part of their infrastructure unless you are tightly integrated in a mutually beneficial arrangement.

If you have your own sysadmin then you don't tend to get as fucked, alternatively migrating hypervisor software is a fuckload easier than migrating from a cloud service provider.

[–] Don_alForno@feddit.org 34 points 1 day ago (1 children)

Many commenters were quick to point out that he should never have coupled his company so closely with Claude to begin with, a reasonable critique by itself. However, it's worth noting that the story could have easily been the same if it had instead been Amazon Web Services, Azure, or an authentication provider like Okta.

You are so close, you almost got it!

[–] SuspciousCarrot78@lemmy.world 15 points 1 day ago* (last edited 1 day ago)

https://bannedbyanthropic.com/

I believe the word is capricious. Everything cloud based is at the whim of someone else.

There are ways to mitigate against that, but ultimately if it's not yours...it's not yours.

[–] Kaligalis@lemmy.world 17 points 1 day ago (3 children)

Just continue coding using the natural neural networks in the brains of those 60 employees until the problem has been resolved and/or another AI provider selected. It's not like Claude invented coding. Sure, it's a pretty useful tool. But it is possible to research obscure APIs and develop software manually.

[–] CompactFlax@discuss.tchncs.de 241 points 2 days ago (33 children)

60 employees who can’t be productive without AI?

And this is progress?

[–] audaxdreik@pawb.social 93 points 2 days ago (26 children)

Your point is well-taken, but this is also exactly why AI reliance is dangerous. Anyone who sees this should realize the precarity of relying on products that can just be locked away from you.

[–] plyth@feddit.org 45 points 2 days ago (1 children)

Windows 11, Onedrive, Intel Management Engine, Google accounts, ...

[–] Mountainaire@lemmy.world 4 points 1 day ago

France's government is actively leaving Windows for Linux as you read this. I'm about to follow suit, too.

[–] Jrockwar@feddit.uk 23 points 2 days ago (2 children)

Like Gmail? Google drive? Slack?

I'm not defending AI, but I can come up with >10 products that would absolutely cripple the company I work at if the provider suddenly says "Soz, terms of service violation".

Vendor reliance is dangerous. That doesn't just apply to AI. If the company in OP's message had both Claude and Gemini they'd have been okay, so the problem isn't with AI specifically - the problem is with reliance on services that are critical for workflows, and providers being able to change their mind at a moment's notice.

In any case, leaving aside where the problem is, the idea that 60 employees can't use Natural Intelligence to do their jobs means there's something really wrong with that company...

[–] Telorand@reddthat.com 58 points 2 days ago (6 children)

My company is pivoting hard to Claude for everything, and besides the fact that it's irritating as fuck to use, it has me worried about shenanigans like in this article. For almost 50 years, they've had a "no reliance upon 3rd party platforms for core functions" policy, but since they hired an AI apologist to the C-suite, all that has gone out the window in a matter of months.

Got me thinking I should warm up my resume...

[–] BlameTheAntifa@lemmy.world 44 points 2 days ago (5 children)

Got me thinking I should warm up my resume...

Don’t wait, start now. The job market is a nightmare and finding one that isn’t being consumed by incompetent C-level AI FOMO is getting harder every day. I work on life-saving medical equipment and AI is being pushed on us for things that could literally kill people if not done correctly. Why would anyone spend 30 minutes using AI and risking people’s lives when I can just write it myself in 5 or 10? Madness. Complete, society-scale madness. The people pushing AI have no fucking idea what they are doing or how engineering works. People are going to die.

[–] Tamps@feddit.uk 155 points 2 days ago (2 children)

Just another form of vendor lock-in. If your business model is mostly/entirely dependent on an external party, that should be a well understood risk.

[–] itsathursday@lemmy.world 68 points 2 days ago (3 children)

The only people winning are selling shovels

[–] IrateAnteater@sh.itjust.works 67 points 2 days ago (1 children)

Dude, it's 2026. We don't sell shovels, we sell shovel subscriptions.

[–] underisk@lemmy.ml 36 points 2 days ago (1 children)

Tiered shovel subscriptions.

[–] shirasho@feddit.online 34 points 2 days ago (1 children)

I am responsible for gathering information on AI to determine whether we should use it for our next project. The ask was to use it for a critical process task. Immediately in my head I was like "no, we are not using AI at all", but I obviously need quantifiable data. This is just another thing to add to my list of why using AI for core processes is one of the stupidest things you could ever do.

[–] ulkesh@piefed.social 72 points 2 days ago (2 children)

Or... taps mic... don't fucking rely on AI for your business! Play stupid games, win stupid prizes.

[–] Dionysus@leminal.space 3 points 1 day ago

We're in a period where the tools, agentic systems in this case, are gated by large companies.

This is like if IBM or Cray in the 60s through 90s only allowed rental of mainframes that they owned, and they can cut you off.

That wasn't the case then. But remember the father whose entire Google account was shut down because a pediatrician asked for a photo of his kid's rash, to decide whether it needed an ER visit or just a cream. He lost his phone (Google Fi), his email (Gmail), and all his paperwork backups (Drive) in one stroke. When you don't own the infrastructure, you live at the whims of things you cannot even appeal to.

This is a story about people and companies putting their entire business workflows in the hands of big tech who really don't care about anyone.

So, AI drama aside, the moment your life or business is fully dependent on an unreliable partner, this is what happens.

[–] NotMyOldRedditName@lemmy.world 37 points 2 days ago* (last edited 2 days ago) (5 children)

This has nothing to do with AI.

Don't rely on software or workflows or really anything that you can't easily switch if said company decides to stop doing business with you.

If you do, it better be a strategic partnership where something like this can't happen.

In this case, their workflows should have been AI provider agnostic or had a way to continue functioning if Claude went down.

[–] ulkesh@piefed.social 26 points 2 days ago (3 children)

This definitely has to do with AI. Because CEOs are losing their stupid minds over it. I agree with you in principle, but let's not lose sight of the fact that this specific technology is what CEOs are drooling over. Even in my company I had to tell the owner/CEO, "What problem are you trying to solve with AI?" His response was his mouth being open with a dumb look on his face.

So no business should rely on AI (or, to your point, any software) to the point that it becomes detrimental to their business or workforce should that access be revoked.

[–] LordCrom@lemmy.world 42 points 2 days ago (2 children)

This is true for any company using 3rd party services. I worked for one that used a 3rd party messaging service to send out MFA texts to users. That provider was hacked and went offline, so we couldn't send any MFA codes... and of course, they had no plan B.

In business, always have a backup

[–] Jaysyn@lemmy.world 21 points 2 days ago (1 children)

You're going to see a lot more of this and other forms of fuckery as the VC money dries up.

https://www.wheresyoured.at/four-horsemen-of-the-aipocalypse/

[–] mojofrododojo@lemmy.world 5 points 1 day ago

Yup. When the purveyors finally have to charge what it actually costs, these fanboys will flee quickly.

[–] FaceDeer@fedia.io 26 points 2 days ago (4 children)

Ironically, this is a great case study to illustrate the value of Chinese models. They've released a number of models that are on par with Claude's latest, under "open weight" licenses that would allow you to run them yourselves if you wanted to, or to hire some other third party to provide API access. It wouldn't matter what the original company's "usage policy" is in that case.

There are a couple of Western open models that aren't bad either, but they tend to be aimed at a smaller and simpler use case than Claude.
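
One reason self-hosting or switching hosts is practical: open-weight models are commonly served behind an OpenAI-compatible HTTP API (servers like vLLM and llama.cpp expose one), so the same request works regardless of who runs the box. A minimal sketch of building such a request follows; the host URL and model name are placeholders, not real deployments.

```python
import json
import urllib.request

def chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request.
    The caller decides which host serves it: self-hosted vLLM,
    llama.cpp, or any third party exposing the same API shape."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Placeholder host and model name; point base_url at whatever you self-host.
req = chat_request("http://localhost:8000", "my-open-weight-model", "hello")
print(req.full_url)
# → http://localhost:8000/v1/chat/completions
```

Because only `base_url` and `model` change between deployments, moving off a banned provider becomes a config edit rather than a rewrite.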

[–] EtAl@lemmy.dbzer0.com 11 points 2 days ago (1 children)

What models exactly? And what kind of hardware do you need to run them? Also, are there any GitHub repos that replicate Claude projects?

[–] FaceDeer@fedia.io 10 points 1 day ago

The one currently making the headlines is Kimi K2.6, on the benchmarks it's just short of Opus 4.7. It's a trillion-parameter model so it won't run on desktop computers, but it's something a company could run on reasonably buildable servers for their own use.

For local use, I've been finding Qwen3.6's 35B parameter model to be uncannily good. Gemma4 is also good, that's one of the Western ones. These models won't do the sort of heavy lifting that Opus can do but you don't need that heavy lifting for all tasks.

[–] one_old_coder@piefed.social 44 points 2 days ago* (last edited 2 days ago) (8 children)

60 employees were dead in the water, as reportedly their daily workflows rely on the AI assistant's...

Is that a joke? 60 employees do not know how to do their job? This is not Anthropic's problem.

[–] Ludicrous0251@piefed.zip 32 points 2 days ago (1 children)

Either they didn't pay, they found an exploit, or, more likely, someone at Anthropic was reviewing their conversations. Take note, any business that cares about IP or confidentiality.

[–] PetteriPano@lemmy.world 24 points 2 days ago

I'll bring two theories to the table.

a) they got caught distilling for their own models
b) they re-sold their $200/mo plans as APIs

[–] HugeNerd@lemmy.ca 10 points 2 days ago

Oh my God, my Eliza 2.0 chatbot is blocked. I'm experiencing withdrawals already, my productivity is down 76.8%.

[–] Jankatarch@lemmy.world 10 points 2 days ago

That's one way to save costs.
