
Little Find

The fine line between artificial intelligence and artificial stupidity

What do you expect from a chatbot? Do you expect it to operate in the same way as a human being, with good and bad qualities, or do you expect it to react like a machine programmed to solve your problems?

Today, we tend to assume that an AI must imitate humans in order to be accepted, and tests and prizes have been built around that premise: Alan Turing's test, which judges machines on their ability to imitate humans through language, and the Loebner Prize, which rewards chatbots and voice assistants for how convincingly they adopt a human style.

But why restrict the knowledge robots are allowed to accumulate? Why program them to make human-seeming mistakes, or slow them down on complex mathematical formulas, just so they are harder to distinguish from humans? These now-commonplace practices contradict what we actually expect from them: quick, efficient problem-solving intelligence.

Instead, why not treat a program as autonomous in its learning and let each one develop its own language, rather than forcing it to mirror our errors? It may be reassuring to make machines communicate like humans, but doing so is likely to deprive us of an essential source of future wealth. Restricting machines by teaching them to commit our own mistakes tips us over the edge into artificial stupidity.

Learn more: “Humain, trop humain” by Javier Gonzalez (Les Echos, May 2018)


Published by Valentino Di Nardo