this post was submitted on 28 Feb 2026
106 points (97.3% liked)


cross-posted from: https://lemmy.ml/post/43810526

Actions by the president and the Pentagon appeared to drive a wedge between Washington and the tech industry, whose leaders and workers spoke out for the start-up.

Feb. 27, 2026

https://archive.ph/hwHbe

Sam Altman, the chief executive of OpenAI, said in a memo to employees this week that “we have long believed that A.I. should not be used for mass surveillance or autonomous lethal weapons.”

More than 100 employees at Google signed a petition calling on the tech giant to “refuse to comply” with the Pentagon on some uses of artificial intelligence in military operations.

And employees at Amazon, Google and Microsoft urged their leaders in a separate open letter on Thursday to “hold the line” against the Pentagon.

Silicon Valley has rallied behind the A.I. start-up Anthropic, which has been embroiled in a dispute with President Trump and the Pentagon over how its technology may be used for military purposes. Dario Amodei, Anthropic’s chief executive, has said he does not want the company’s A.I. to be used to surveil Americans or in autonomous weapons, saying this could “undermine, rather than defend, democratic values.”

all 15 comments
[–] brucethemoose@lemmy.world 8 points 5 hours ago* (last edited 5 hours ago)

Yeah… Microsoft and Google have a list of employees to fire now.

Trump will back off to some extent, to avoid inflaming stock markets (and his Big Tech friends heavily invested in Anthropic tooling).

Anthropic will fire a few people. OpenAI will raise money somehow.

That’s about it.

[–] gravitas_deficiency@sh.itjust.works 23 points 8 hours ago (1 children)

This was the red line for the techbros? This was a bridge too far? Don’t get me wrong, it’s good that they didn’t fold on this point… but fuck, it would have been nice if they had taken exception to any of the thousands of red lines the regime has crossed up until now.

[–] echodot@feddit.uk 1 points 24 minutes ago* (last edited 23 minutes ago)

They're all invested in each other; a threat to one is a threat to all, and up until now the regime hasn't threatened their investments.

Seriously, there's a graph somewhere showing who's invested in what, and basically it's all just one thing now. I don't know why they maintain the charade of being separate companies.

And the reason they don't want their technology being used to kill people is that they don't trust the administration to keep it to foreign countries in the Middle East, where no one cares what happens. They'll use it in the United States, and everyone will know whose technology is powering the drones.

All that's happening is that financial self-interest and ethics give the same answer in this scenario.

[–] Zwuzelmaus@feddit.org 22 points 8 hours ago* (last edited 8 hours ago) (3 children)

Dario Amodei [...] said he does not want the company’s A.I. to be used to surveil Americans or in autonomous weapons, saying this could “undermine, rather than defend, democratic values.”

This is absolutely reasonable and I support this position.

Sam Altman, the chief executive of OpenAI, said in a memo to employees this week that “we have long believed that A.I. should not be used for mass surveillance or autonomous lethal weapons.”

But I don't trust this guy who shows regularly that he wants to be the ruler of the whole world by means of his own AI.

[–] partofthevoice@lemmy.zip 1 points 18 minutes ago

This stuff is really scary when you think about it. We keep getting closer to a reality where technology can silently monitor your every thought, with analysis and automation becoming ever more efficient, and the only thing stopping it from being used against us is moral standing. Eventually, someone somewhere will be able to build something so trivially that it tips the scales in their favor, provided they lack the moral standing not to do so. Technology is a unique kind of threat, especially given the glorification so often heaped on its innovation. Skepticism could have been applied earlier.

[–] echodot@feddit.uk 1 points 21 minutes ago

Yeah but he doesn't want Trump to have the technology.

[–] RblScmNerfHerder@lemmy.world 2 points 5 hours ago

Srs Ted Faro vibes, though less arrogant.

[–] inari@piefed.zip 13 points 7 hours ago

This is like Alien vs. Predator: whoever wins, we all lose.

[–] Casterial@lemmy.world 6 points 6 hours ago (4 children)

Trump wants to use Grok for all things government, but isn't Grok one of the most biased and worst-performing AIs?

[–] echodot@feddit.uk 1 points 22 minutes ago

It'll fit right in. They're looking to automate their corruption.

[–] Zwuzelmaus@feddit.org 1 points 2 hours ago

but isn't Grok the most biased

ftfy

[–] thejml@sh.itjust.works 3 points 5 hours ago

That first part is likely a large selling point.

[–] ExFed@programming.dev 1 points 4 hours ago

Has "performance" or "merit" meant much to Trump for anything else?

[–] db2@lemmy.world 8 points 8 hours ago (1 children)

How is he not dead yet jfc