Algorithmic bias may lead Machine Learning systems to discriminate against minorities. The issue must be tackled now, says Luca Trevisan in the sixth episode of the Think Diverse podcast

We are wary of Artificial Intelligence for all the wrong reasons. The chance of a system taking control of the world, as in a Hollywood movie, is slim, but such systems can still hurt large segments of humankind (e.g. women) or minorities (by ethnicity, sexual orientation and so on) through so-called algorithmic bias.

In The Opinionated Machine, the sixth episode of the THINK DIVERSE podcast series, Luca Trevisan, Full Professor of Computer Science at Bocconi, clarifies how Machine Learning (a type of Artificial Intelligence) can perpetuate societal bias, or prove so ineffective in dealing with minorities as to effectively discriminate against them.

"The use of Machine Learning systems may seem limited for now," Prof. Trevisan warns, "but their rate of adoption is increasing at an exponential rate, and we must tackle the issue as soon as possible. To this end, we need a multidisciplinary effort including computer scientists, mathematicians, political scientists, lawyers, and other scientists."

When we want a Machine Learning system to make decisions, we feed it a set of data (decisions made in the past) and let it calibrate itself (i.e. work out which variables are relevant) until it makes approximately the same decisions.
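To make the calibration idea concrete, here is a minimal sketch in Python. The data, the income feature, and the historical approval rule are all invented for illustration; "learning" is reduced to searching for the decision threshold that best reproduces the past decisions.

```python
# Past decisions: (applicant_income, past_decision).
# The historical rule (invented for illustration): approve when income >= 50.
past = [(30, 0), (45, 0), (50, 1), (55, 1), (70, 1), (40, 0)]

def calibrate(cases):
    """'Calibration' here is just picking the income threshold that
    reproduces the largest share of past decisions."""
    best_t, best_acc = None, -1.0
    for t in range(0, 101):
        acc = sum((x >= t) == bool(y) for x, y in cases) / len(cases)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

t = calibrate(past)
decide = lambda income: int(income >= t)

# The calibrated system now mirrors the historical pattern --
# including any bias baked into those past decisions.
assert all(decide(x) == y for x, y in past)
```

The point of the sketch is that the system has no notion of fairness of its own: it is rewarded purely for agreeing with the historical record, whatever that record contains.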

If past decisions are biased, Professor Trevisan tells host Catherine De Vries, such bias is perpetuated. This may be the case with judges discriminating against people of color in bail decisions, or employers discriminating against women when screening resumes.

Minorities can also be hurt in subtler ways. If they are underrepresented in the training dataset, for example, a facial recognition system can be ineffective with them. But even when they are adequately represented, the optimal calibration for a system may be to be very accurate with the majority, even at the cost of being much less accurate, or completely wrong, with a small minority. "This kind of bias is harder to detect and to fix," Prof. Trevisan says.
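This second, subtler failure mode can also be sketched in a few lines of Python. The groups, group sizes, and decision rules below are invented for illustration: a majority group follows one pattern, a small minority follows the opposite one, and a system that picks the single rule maximizing overall accuracy ends up perfect for the majority and wrong for everyone in the minority.

```python
# 94 majority samples where the true label is (x > 0),
# 5 minority samples where the true label is the opposite, (x < 0).
majority = [(x, int(x > 0), "maj") for x in range(-47, 48) if x != 0]
minority = [(x, int(x < 0), "min") for x in (-2, -1, 1, 2, 3)]
data = majority + minority

def best_single_rule(data):
    """Pick whichever single rule, (x > 0) or (x < 0), has the
    highest accuracy over the whole dataset."""
    acc_pos = sum((x > 0) == bool(y) for x, y, _ in data) / len(data)
    acc_neg = sum((x < 0) == bool(y) for x, y, _ in data) / len(data)
    return (lambda x: int(x > 0)) if acc_pos >= acc_neg else (lambda x: int(x < 0))

rule = best_single_rule(data)

def group_accuracy(group):
    rows = [(x, y) for x, y, g in data if g == group]
    return sum(rule(x) == y for x, y in rows) / len(rows)

print(f"majority accuracy: {group_accuracy('maj'):.2f}")  # 1.00
print(f"minority accuracy: {group_accuracy('min'):.2f}")  # 0.00
```

Overall accuracy here is about 95%, which looks excellent in aggregate; only a per-group breakdown reveals that the minority is misclassified every single time.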

Listen to the episode and follow the series on:

Spotify
Apple Podcasts
Spreaker
Google Podcasts

The Opinionated Machine | Podcast #6

Watch video