BIAS #3


Help! My AI model is discriminatory!

AI systems worldwide show prejudices, or bias, and those prejudices are more harmful than you might think. We don’t always know what an AI system bases its decisions on. Companies know what data the system was trained on, and we can see what comes out of it (the output), but we often don’t know exactly how the system gets from one to the other. We call this the black box. Just think of the possible consequences.

What developments have we seen so far?

No women at Amazon

In 2015, Amazon discovered that its AI recruiting system preferred male candidates for technical jobs. Why? Not because they were better, but because the system had been trained on the CVs submitted to the company over the previous ten years. Most of those came from men, so the AI system concluded that men were better suited to the job.
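To make this concrete, here is a minimal sketch in Python of how a model absorbs bias from historical decisions. The data is entirely made up and this is not Amazon's actual system: the point is only that when the past hiring labels favour men, the trained model does too.

```python
# A minimal sketch (synthetic data, not Amazon's system) of how a
# model picks up bias from historical hiring decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

skill = rng.normal(size=n)              # what *should* drive hiring
is_male = rng.integers(0, 2, size=n)    # 1 = male, 0 = female

# Historical labels: past decisions favoured men independent of skill.
hired = (skill + 1.5 * is_male + rng.normal(size=n) > 1.0).astype(int)

X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, hired)

print("weight on skill: ", model.coef_[0][0])
print("weight on gender:", model.coef_[0][1])
# The large positive weight on gender shows the model has learned
# that being male predicts getting hired.
```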

Worse school results for poorer students

In the UK, students were unable to take their national exams during the COVID-19 pandemic, so an AI system decided their grades instead. The result was that students from wealthy areas received higher grades and students from poor areas received lower ones. This was because the model had been trained on data from previous years, in which students from wealthy areas had tended to perform better.

Imagine working really hard, as someone from a poorer area, and still getting bad grades...

Less care for African Americans

An AI system used by American hospitals concluded that African Americans needed less care.

The system based this conclusion on past care costs. Because those costs were lower for African Americans than for white Americans, it incorrectly assumed that African Americans were healthier than white Americans.
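Here is a minimal sketch of that mechanism: a model is trained to predict cost rather than health need, and the group that historically received less care gets lower scores despite equal need. All numbers and features are invented for illustration; this is not the actual hospital algorithm.

```python
# A minimal sketch (synthetic data) of proxy bias: predicting *cost*
# when what we actually care about is *health need*.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 10_000

group_a = rng.integers(0, 2, size=n)   # 1 = group that historically got less care
need = rng.gamma(2.0, 1.0, size=n)     # true health need, same for both groups

symptoms = need + rng.normal(0, 0.3, n)          # clinical features the model sees
cost = need * np.where(group_a == 1, 0.6, 1.0)   # past costs: lower at equal need

# Group membership often leaks in via correlated features (zip code, etc.).
X = np.column_stack([symptoms, group_a])
model = LinearRegression().fit(X, cost)
score = model.predict(X)

print("mean true need,  A vs B:", need[group_a == 1].mean(), need[group_a == 0].mean())
print("mean risk score, A vs B:", score[group_a == 1].mean(), score[group_a == 0].mean())
# Equal need, but group A gets systematically lower scores,
# and therefore less care.
```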

Black Nazis

The opposite has also happened. Google Gemini was used to generate historical images, and because Google considers diversity important, it had adjusted its system to produce more diverse results. This, too, led to misleading output, such as an image of a Black Nazi soldier.

Positive note

If we are conscious of this bias, we can do something about it. We can have diverse teams work on AI systems, making it harder for stereotypes to creep in. We can also look for more diverse training data, which makes the output more diverse.
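One simple illustration of the "more diverse data" idea is rebalancing: making sure every group is equally represented before a model ever sees the data. The sketch below uses made-up data and plain oversampling; real projects often reach for dedicated tools such as imbalanced-learn or fairlearn.

```python
# A minimal sketch of rebalancing a dataset by oversampling
# the underrepresented group. Data is synthetic.
import numpy as np

rng = np.random.default_rng(2)

# Imbalanced dataset: roughly 90% group 0, 10% group 1.
group = (rng.random(1_000) < 0.1).astype(int)

minority = np.where(group == 1)[0]
majority = np.where(group == 0)[0]

# Draw minority examples with replacement until both groups match.
resampled = rng.choice(minority, size=len(majority), replace=True)
balanced_idx = np.concatenate([majority, resampled])

print("before:", np.bincount(group))
print("after: ", np.bincount(group[balanced_idx]))
```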

Other solutions have also been devised, such as Missjourney, an AI tool that only creates images of professional women: CEOs, scientists, astronauts and engineers.

There is also something called explainable AI: a set of tools and methods that explain how an AI system came to its decision, letting us see where the prejudices lie. This is especially useful for medical diagnoses: doctors can only trust a diagnosis provided by an AI system if the system can also explain how it arrived at it.
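As a small taste of what explainable AI looks like in practice, the sketch below uses one common technique, permutation importance, to ask a model which features its decisions actually depend on. The data and feature names are invented for illustration; well-known explainability toolkits include SHAP and LIME.

```python
# A minimal sketch (synthetic data) of permutation importance:
# shuffle one feature at a time and see how much the model's
# accuracy drops. A big drop means the model relies on that feature.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
n = 2_000

skill = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)
y = (skill + 1.5 * gender + rng.normal(size=n) > 1.0).astype(int)

X = np.column_stack([skill, gender])
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(["skill", "gender"], result.importances_mean):
    print(f"{name}: {importance:.3f}")
# A large importance for "gender" is a red flag: the model is
# leaning on a protected attribute rather than on qualifications.
```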