
All Business Digest content is reserved exclusively for our subscribers.
We thank you for not sharing it.

Little Find

Sexist or racist algorithms can be corrected

The bad news: algorithms are everywhere, and they’re potentially discriminatory. The good news: despite their “black-box”-style design, the factors that generate unfair bias can be detected.

The technical terms “algorithm” and “artificial intelligence” suggest reassuring scientific objectivity, but do not assume that decisions based on computer programs are any more neutral than human ones. They may be just as flawed. In his recent research, HEC Professor Christophe Pérignon offers a blatant example of sexist bias: an algorithm that judged a prominent developer 20 times more creditworthy than his wife, despite their similar financial credentials. That’s because AI is trained on data selected by humans, and is therefore potentially biased. And the tainted variable is difficult to identify, given the massive amounts of data used for AI – in effect, a black box. Pretty sobering, when you consider the critical role of algorithms in our lives, from facial recognition to CV screening in job applications.

But Pérignon and his fellow researchers have come up with an answer. Using statistical theory, they not only devised a test to determine the fairness of an algorithm, but have also figured out how to detect and weed out the variables within the algorithm that generate unfair bias. Let’s hope banks – and other powers-that-be – adopt it.  
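To give a flavor of what a statistical fairness test can look like, here is a minimal sketch. It is not Pérignon’s actual method (the article does not detail it); it assumes a simple demographic-parity check, comparing loan-approval rates between two groups with a two-proportion z-test. The function name and the example figures are illustrative only.

```python
import math

def parity_gap_test(approved_a, total_a, approved_b, total_b):
    """Two-proportion z-test for a gap in approval rates between two
    demographic groups (a basic demographic-parity check, not the
    authors' own test). Returns (gap, p_value); a small p-value means
    the observed gap is unlikely to be due to chance alone."""
    rate_a = approved_a / total_a
    rate_b = approved_b / total_b
    # Pooled approval rate under the null hypothesis of no group difference
    pooled = (approved_a + approved_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (rate_a - rate_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return rate_a - rate_b, p_value

# Hypothetical example: 800 of 1,000 male applicants approved
# versus 700 of 1,000 female applicants
gap, p = parity_gap_test(800, 1000, 700, 1000)
```

In this illustrative example, the 10-point approval gap yields a p-value far below conventional thresholds, so a simple parity test would flag the algorithm as unfair. The harder problem the researchers tackle is the next step: tracing that unfairness back to the specific input variables that cause it.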

Further reading

“A $%^* Sexist Program: Detecting and Addressing AI Bias”

(By Christophe Pérignon, hec.edu/knowledge, December 2020)

© Copyright Business Digest - All rights reserved

Published by Andrea Davoust
A French/English bilingual journalist with more than 15 years’ experience in the press, multimedia, and publishing.