this post was submitted on 04 May 2026
76 points (96.3% liked)

top 26 comments
[–] partial_accumen@lemmy.world 2 points 15 hours ago* (last edited 15 hours ago)

This sounds like politicians that don't understand the technology.

Anyone can create an AI model (including gen-AI LLMs). I personally created one for a hobby project, trained exclusively on a series of old public domain novels from the early 1900s. Don't get me wrong, my AI model sucks and only produces barely coherent responses, but it absolutely meets the definition of an AI model.
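For a sense of scale, a toy language model of this kind can be just a few lines. The sketch below is purely illustrative (the corpus, function names, and parameters are invented, not the commenter's actual project): a character-level bigram model that counts which character follows which, then samples from those counts to produce "barely coherent" output.

```python
import random
from collections import defaultdict, Counter

# Tiny stand-in corpus; a hobby model would use public-domain novels instead.
corpus = "it was a dark and stormy night and the rain fell in torrents"

# "Training": count how often each character follows each other character.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(seed="t", length=40, rng=None):
    """Sample one character at a time from the learned bigram counts."""
    rng = rng or random.Random(0)  # fixed seed for repeatable output
    out = [seed]
    for _ in range(length):
        options = counts.get(out[-1])
        if not options:  # dead end: character never seen with a successor
            break
        chars, weights = zip(*options.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

print(generate())
```

Even this trivial counting scheme "meets the definition" in the sense the commenter means: it is trained on data and generates text, just very badly.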

So how would this White House action (if implemented into law) affect me and my model?

  • Would I have to submit my model to a government agency to run it on my local computer?
  • Would it only apply to models deployed for the consumption of others?
  • Private companies that build their own AI models purely for internal business purposes, not for public consumption: would they be obligated to put them through some government process before those models could be used inside the company?
  • Or perhaps not all AI models would need to go through this government approval process, but then what criteria define a model that would versus one that wouldn't?
[–] AverageEarthling@feddit.online 3 points 20 hours ago

so, pay to play. got it.

[–] db2@lemmy.world 63 points 2 days ago (2 children)

Stupid people who can't think will be vetting software that they believe thinks for them. What could possibly go wrong.

[–] pelespirit@sh.itjust.works 32 points 2 days ago (2 children)

The AI also tells them that they're awesome and smart. It's a great match.

[–] db2@lemmy.world 31 points 2 days ago
[–] rezifon@lemmy.world 10 points 2 days ago

You’re absolutely right!

[–] givesomefucks@lemmy.world 17 points 2 days ago

It's worse...

I remember something about them preventing states from regulating them too.

They're gonna say only Grok-level chatbots are "real" because it's constantly tweaked to stay right wing.

They know the type of people who use AI are incredibly gullible and prone to being manipulated.

So they're going to force every American chatbot addict to use chatbots that only reinforce MAGA propaganda.

This isn't trump making these decisions, they're too logical. It's likely Peter Thiel.

[–] mthomson@forum.macaque.social 29 points 2 days ago

Why so they can ensure the model lies in their favor?

[–] Treczoks@lemmy.world 8 points 1 day ago

How about vetting Trump before he posts something?

[–] rslogix89@lemmy.world 18 points 2 days ago* (last edited 2 days ago)

FIFY White House Considers ~~Vetting~~ taking donations from A.I. ~~Models~~ companies Before They Are Released

[–] eager_eagle@lemmy.world 29 points 2 days ago

good, I'll add vetted models to my blocklist

[–] teft@piefed.social 20 points 2 days ago (3 children)
[–] Naich@piefed.world 7 points 2 days ago

Don't forget small government.

[–] Miller@lemmy.world 8 points 2 days ago

Same capitalist market where billionaires receive massive socialistic handouts, bailouts and tax negations from governments.

[–] ZoteTheMighty@lemmy.zip 5 points 2 days ago

Of course it is, companies are free to bribe their way out of these rules.

[–] terabyterex@lemmy.world 11 points 2 days ago* (last edited 2 days ago) (2 children)

this worries me with any tech.

  1. if a smaller company develops a competing product (OpenAI and Anthropic used to be small), will it hinder them and grant access only to the mainstream companies?

  2. how does this affect non-us models?

  3. will they only be approved if they say wonderful things about trump? have you ever asked grok about elon?

[–] Voroxpete@sh.itjust.works 15 points 2 days ago

I really feel like you're actually being too generous to this proposal.

Let's be clear, when this administration says they want to vet new models, what they mean is that they want to turn them into right wing propaganda engines. This is "reprogram ChatGPT to say the 2020 election was stolen, white genocide is real and trans people are all sex predators, or we won't certify it."

[–] XLE@piefed.social 7 points 2 days ago (1 children)

This is some Cold War regulatory capture BS based on a Myth(os), something that didn't happen:

Anthropic did, in its Mythos system card, suggest a model had "broken containment and sent a message" when it A) was instructed to do so and B) did not actually break out of any container.

[–] ryannathans@aussie.zone 1 points 2 days ago

I thought it had to exploit zero days to get out to email researchers?

[–] darthsundhaft@piefed.social 5 points 2 days ago (1 children)

Government finally realizing that bleeding-edge software with no regulation applied to it needs said regulation after all.

Of course, this is more like the government is making sure the models produce the content the government wants the public to know. Very much like China, Russia, et al. Ergo, controlling the narrative.

[–] mPony@kbin.earth 1 points 1 day ago

or expecting grease on their palms

[–] trackball_fetish@lemmy.wtf 3 points 2 days ago (1 children)

Lol good luck with that. Its too late.

[–] partofthevoice@lemmy.zip 2 points 2 days ago* (last edited 2 days ago)

Training costs are still enormous for generative AI. It's possible to moderate it by tracking power consumption, data centers, and known actors. Even DeepSeek 4 still cost about $6M to train. If they want to, they can impose regulation by watching for training runs. Of course, it won't matter what China releases.

[–] itsathursday@lemmy.world 4 points 2 days ago* (last edited 2 days ago) (1 children)

Start with vetting the big arch and other burgers then let’s talk

[–] CameronDev@programming.dev 5 points 2 days ago (1 children)

He's been vetting them directly for years. Daily vettings, directly into his gullet. What more do you want?

[–] baggachipz@sh.itjust.works 1 points 1 day ago

For him to eat as many as it takes to have an immediate fatal heart attack