Book synthesis

4 AI myths debunked

Artificial intelligence inspires many fantasies. Some see it as the savior of humanity, while others view it as a thief of jobs and privacy. Which is it? Scientist Janelle Shane answers bluntly, with wit and intelligence, debunking many of the misconceptions swirling around artificial intelligence.

Artificial intelligence is everywhere: it is present in our hobbies, travels, shopping, social interactions, and, increasingly, our professional sphere. But does it live up to its reputation? Nothing is less certain. Can algorithms be trusted? Or are there areas where you should keep AI on a leash if you value your ethics, your ROI, and your reputation? And where does that leave humans?

Myth no. 1: AI is never wrong (it just sees giraffes everywhere) 

Artificial intelligence technologies have made huge strides. In the past three years, algorithms have humiliated the top Go and poker champions, been approved by the US Food and Drug Administration to make autonomous diagnoses, played a role in more than half of stock trades, and pushed the boundaries of disinformation. With each milestone crossed, the old fears of humans being surpassed by machines resurface. Janelle Shane maintains that these high-performance AI programs are in fact exceptions; the field is still in its infancy. Try asking, for example, Visual Chatbot, an image-description algorithm trained by Virginia Tech researchers, to identify a random image. You might be surprised by its propensity to see giraffes everywhere, even in your living room. And if it gives a correct answer, ask it to explain further, and it's likely to spout nonsense. This may be amusing when it's an algorithm with few consequences, but much less so in serious circumstances. What about a melanoma detector that diagnoses tumors whenever a ruler appears in the photograph?

Is artificial intelligence in fact stupid? No more, no less than the humans who develop it. The top four mistakes humans make with AI: choosing an overly broad problem that requires handling a large amount of information from different sources (such as evaluating a job candidate from an interview video); training the AI with data that is too restricted, badly filtered, or unsuitable (like the data Tesla used for its AI, which mistook a truck for a traffic sign because it had been trained with images of vehicles seen from the rear); formulating the problem badly; and, finally, overestimating the AI's memory. This last point explains why most chatbots break down after a few sentences: they no longer know what they're talking about!

Conclusion no. 1: Define the problem you want to solve with AI very specifically and verify that AI is the best solution. 

 

Myth no. 2: AI doesn’t cheat (but it makes up its own rules) 

Excerpt from Business Digest N°302, December 2019 – January 2020

Published by Françoise Tollet
She spent 12 years in industry, working for Bolloré Technologies, among others. She co-founded Business Digest in 1992 and has been running the company since 1998. She took the Internet plunge in 1996, even before coming on board as part of the BD team.