this post was submitted on 15 Oct 2025
41 points (95.6% liked)

Technology

Just the other day I asked an AI to create an image showing a diverse group of people, and it didn't include a single Black person. I asked it to rectify that, and it couldn't. This went on a number of times before I gave up. There's still a long way to go.

top 9 comments
[–] cerebralhawks@lemmy.dbzer0.com 43 points 1 month ago (2 children)

TL;DR: A woman missing an arm couldn't get AI image generators to depict an amputee; they apparently didn't know how. Now they do, and the woman says the representation is important.

I guess it couldn't find enough art of amputees to steal to form a basis for drawing them? And so, in reaction to the backlash (such as it was), they gave it more data?

[–] GenderNeutralBro@lemmy.sdf.org 22 points 1 month ago (1 children)

Representation...in AI image generation?

The idea that this is something anyone should want is hard to wrap my head around.

If I could opt out of being deepfake-able, I would.

[–] cerebralhawks@lemmy.dbzer0.com 4 points 1 month ago

That's what the article said. I would opt out as well.

[–] CanadaPlus@lemmy.sdf.org 7 points 1 month ago* (last edited 1 month ago)

Image generation often happens region by region, too, so getting the model not to just continue the arm might be hard.
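
Roughly what that looks like in practice, as a minimal sketch using Hugging Face's diffusers inpainting pipeline (the model ID, file names, and prompt here are illustrative): only the white region of the mask gets repainted, conditioned on all the surrounding pixels, so the surrounding anatomy pulls the model toward just continuing the arm.

```python
# Sketch of region-by-region (inpainting) generation with diffusers.
# Model ID, file names, and prompt are illustrative.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("portrait.png").convert("RGB")  # full source image
mask_image = Image.open("arm_mask.png").convert("RGB")  # white = repaint here

# Only the masked region is regenerated; everything around it is kept
# and conditions the result, which biases the model toward "continuing"
# the arm rather than depicting a limb difference.
result = pipe(
    prompt="a woman with a below-elbow limb difference",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("out.png")
```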

It's annoying that she asked ChatGPT why it was doing that and the article reported the answer uncritically.

[–] irotsoma@piefed.blahaj.zone 18 points 1 month ago

Despite living with one arm, Jess doesn't see herself as disabled, saying the barriers she faces are societal.

Actually, this is what disability is all about. It's not that people can't complete tasks or take care of themselves; it's that society doesn't provide disabled people the same tools it provides so-called "able-bodied" people to complete those tasks.

It's the trope of the single grocery store that everyone goes to, which a person in a wheelchair, otherwise fully capable, can't enter because there's a curb. Suddenly they can't feed themselves. It's not that they're unable to feed themselves; it's that they can't access the food without assistance, and thus are "disabled". As soon as a ramp is installed, they're no longer "disabled", just differently abled.

[–] Eq0@literature.cafe 17 points 1 month ago (1 children)

Inherent bias is going to get worse and worse if we let AI roam free.

[–] SSUPII@sopuli.xyz 3 points 1 month ago* (last edited 1 month ago) (2 children)

I am instead thinking this won't be the case? Bigger models will be able to store more of the less common realities.

[–] Eq0@literature.cafe 10 points 1 month ago

They will, at best, replicate the data sets. They will learn racial discrimination and propagate it.

If you have a deterministic system to rate a CV, for example, you can ensure that no obvious negative racial bias is included. If instead you have an LLM (or other AI), there is no supervision over which data elements are used and how. The only thing we can check is whether the predictions match the (potentially racist) data.
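
To make that concrete, here's a minimal sketch of a deterministic CV scorer (the fields and weights are made up, not a real hiring rubric): every input to the score is explicit, so an auditor can verify that protected attributes never enter it. There is no analogous line to point to inside an LLM.

```python
# Minimal sketch of a deterministic, auditable CV scorer.
# Fields and weights are illustrative.
from dataclasses import dataclass

@dataclass
class CV:
    years_experience: float
    degree_level: int    # 0 = none, 1 = BSc, 2 = MSc, 3 = PhD
    skills_matched: int  # count of required skills present
    name: str            # protected-adjacent: never used in score()
    photo_path: str      # protected-adjacent: never used in score()

WEIGHTS = {"years_experience": 1.0, "degree_level": 2.0, "skills_matched": 3.0}

def score(cv: CV) -> float:
    # Every feature that influences the rating appears on these lines,
    # so you can audit exactly what is (and isn't) used.
    return (WEIGHTS["years_experience"] * cv.years_experience
            + WEIGHTS["degree_level"] * cv.degree_level
            + WEIGHTS["skills_matched"] * cv.skills_matched)

print(score(CV(5.0, 2, 4, "Jess", "jess.png")))  # 21.0
```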

[–] luxyr42@lemmy.dormedas.com 2 points 1 month ago

You may be able to prompt for the less common realities, but by default the model is still going to render "doctor" as a white man.