Machine learning models use training data, usually scraped from the internet, to "teach" the algorithm. This "learning" is essentially a statistical method: it finds clusters of similar patterns and then "assumes" that each pattern represents the "correct" answer. Seems pretty obvious, right?
So, let's say you give a machine learning algorithm a million photos of people, each labeled with the person's current title (doctor, prisoner, Uber driver, etc.). The algorithm groups the photos by title and studies the characteristics of each group to come up with a likely pattern for that role.
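To make that concrete, here is a minimal sketch of the idea, with invented feature vectors standing in for real photos (every number and label below is made up for illustration): group the labeled examples by title, average each group's features into a representative "pattern," and label a new example by whichever pattern it most resembles.

```python
import numpy as np

# Toy stand-ins for photos: each "photo" is a small feature vector.
# All features and labels here are invented purely for illustration.
training_data = [
    (np.array([0.9, 0.1, 0.8]), "doctor"),
    (np.array([0.8, 0.2, 0.9]), "doctor"),
    (np.array([0.1, 0.9, 0.2]), "prisoner"),
    (np.array([0.2, 0.8, 0.1]), "prisoner"),
]

# "Study the characteristics of the group": average each title's
# feature vectors into a single representative pattern (a centroid).
patterns = {}
for title in {label for _, label in training_data}:
    group = [x for x, label in training_data if label == title]
    patterns[title] = np.mean(group, axis=0)

def classify(photo):
    """Label a new photo with the title whose pattern it most resembles."""
    return min(patterns, key=lambda t: np.linalg.norm(photo - patterns[t]))

print(classify(np.array([0.85, 0.15, 0.7])))  # -> "doctor"
```

Real systems use far richer models than this nearest-pattern sketch, but the principle is the same: the "pattern" for each role is whatever the labeled data says it is.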
Now, let's say a group called OpenAI builds such an AI system: you type in a title, and it renders a picture of its own design representing a person in that role. You can even type something like "doctor shaking hands with a prisoner" and it will render the image. Pretty cool, huh!
I suspect it wouldn't surprise you to learn that the doctor is NEVER black and the prisoner is NEVER white. Why would it be otherwise? The training data told the AI that this combination is highly unlikely. Imagine the disappointment of the authors of the DALL-E 2 system (yes, this system exists!) when they discovered the bias in their system.
But wait! Is their system biased? They thought so. In fact, they released it without allowing it to render faces at all until they could "fix" it. What does "fix it" even mean? They want to remove the bias from their system. Sadly, the bias isn't in their system, dear Watson. It's in the data!
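Here is a toy demonstration of that claim, with deliberately skewed, made-up counts: a generator that faithfully samples from the frequencies in its training data reproduces exactly the skew the data contains. Nothing in the code below is "biased"; it just echoes its input.

```python
import random
from collections import Counter

# Invented training data: (role, attribute) pairs with a deliberate skew.
training_pairs = (
    [("doctor", "group A")] * 95 + [("doctor", "group B")] * 5 +
    [("prisoner", "group A")] * 10 + [("prisoner", "group B")] * 90
)

# "Training" here is just recording the empirical frequencies,
# which is what an honest statistical model boils down to.
counts = Counter(training_pairs)

def generate(role, n=1000):
    """Sample attributes for a role in proportion to the training data."""
    options = {attr: c for (r, attr), c in counts.items() if r == role}
    return Counter(random.choices(list(options), weights=list(options.values()), k=n))

# The output skew mirrors the input skew; the algorithm added nothing.
print(generate("doctor"))    # roughly 95% "group A"
print(generate("prisoner"))  # roughly 90% "group B"
```

The sampler is scrupulously fair to its data. If you dislike what comes out, the thing to examine is what went in.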
Now, they could curate the data to reflect how they WANT the world to be. Or we could all take a lesson from the unbiased algorithm: our world is what needs fixing. Maybe it's time to stop hiding the bias and expose it, however uncomfortable that may be.
Making people uncomfortable is...well...uncomfortable. What do we do when we're uncomfortable? If you're like me, you seek to move to a more comfortable state as quickly as possible. Fixing an algorithm is a lot easier than fixing social injustice. Quicker too! So, it's natural to use that approach.
To the developers of DALL-E 2, I say, "Turn on the faces!" Let's live with our discomfort until we can fix it the "right way." It will take longer and a lot more work, but papering over the problem will not help us in the long run.
The creators of said system might argue that by showing the world as we want it to be, they will subtly indoctrinate people into changing their assumptions. I'm sure there is some validity to that argument for fixing the algorithm. But I contend that those who want to see the world the way the repaired AI represents it will relax in the knowledge that all is right, while those who don't like what they see will be motivated to push their agenda all the harder. So let's motivate the right people to push their agenda. Don't fix the algorithm!