[–] JigglypuffSeenFromAbove@lemmy.world 18 points 20 hours ago (3 children)

From OpenAI's statement:

We have three main red lines that guide our work with the DoW, which are generally shared by several other frontier labs:

• No use of OpenAI technology for mass domestic surveillance.

• No use of OpenAI technology to direct autonomous weapons systems.

• No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as “social credit”).

It specifically states their AI can't/won't be used for mass surveillance or autonomous weapons. Of course I'm not saying I trust them, but isn't this the same thing Anthropic says they're against? What's the difference here, or what did I miss?

[–] JigglypuffSeenFromAbove@lemmy.world 10 points 1 month ago* (last edited 1 month ago) (1 children)

"Unfortunately you ran out of credits. Please try again in 3 hours or subscribe to our Pro Plan to continue shitting."