Understanding how machine learning works is vital
This HBR article not only explains why AI can produce inaccurate decisions, but also details the potential legal implications for businesses relying on it and suggests ways to mitigate the risks.
By now we all know that machine learning (algorithms that adapt their decisions with constantly updated data) doesn’t perform magic, and that AI can lead to biased hiring, medical misdiagnoses, or investment losses. But with this piece in the HBR, AI seems less of a black box and more of a toolbox. The authors (four profs at INSEAD and Harvard, bless their clarity) actually made me understand how the choice of input data and its relation to the environment in which the AI is used, or even incomplete data (say, skin lesion images that do not account for skin colour and cancer prevalence in minority groups), influences outcomes. In fact, it makes the whole process feel almost touch-and-go. Plus they detail the liability issues for businesses offering products or services based on AI, and weigh the different options (e.g. “locking” the software) open to managers. It’s a long read, but a worthwhile one, not just to feel smarter than a machine, but also to ask the right questions when using machine learning.
“When Machine Learning Goes Off the Rails: A Guide to Managing the Risks”
By Boris Babic, I. Glenn Cohen, Theodoros Evgeniou, and Sara Gerke (Harvard Business Review, January 2021).
© Copyright Business Digest - All rights reserved