Bitvore: AI that keeps its results explainable
In his book Everyday Chaos, author David Weinberger urges us to embrace the chaos created by the overwhelming amount of information available to us today. Chaos made visible — and usable — by technical innovations. Bitvore CDO Greg Bolcer, however, insists that sometimes we have to make sense of the chaos.
Based on an interview with Gregory Alan Bolcer, Chief Data Officer, Bitvore (June 2019)
[highlight_box title="Biography" text="Currently CDO of data analytics company Bitvore, in Irvine, California, Greg Bolcer founded the social gaming app Kerosene and a Match in 2009. He previously worked as VP of Engineering for High Tower Software, and was Chief Software Architect and Founder of Encryptanet. His specialties include Internet-scale technologies, Web protocols, e-commerce and micropayments, encryption and security, network-based workflow, digital rights management and software engineering." img="https://business-digest.eu:443/wp-content/uploads/2019/09/1567519693-c00d81b5dcd3827535f276879c843858-e1567520042863.jpg"]
When Stockton, California, went bankrupt in 2012, Bitvore’s Greg Bolcer wasn’t surprised. The city was in his dead pool. “It’s gallows humor,” Bolcer says today, somewhat apologetically. “We keep a shortlist of cities that we think will go bankrupt in the next year.” Bolcer is not a soothsayer, but CDO of Bitvore, which uses artificial intelligence to scour information sources (news media, press releases, business filings, etc.) to help businesses spot sales opportunities, track trends and identify risk factors: “precision intelligence,” as the company calls it.
Some of Bitvore’s clients — including major ratings agencies, which use Bitvore’s services to follow the financial health of municipalities — were at first skeptical of the Stockton prediction when it arrived several months before the bankruptcy filing. “They will do their own analysis,” Bolcer says of the ratings agencies. “But they also want to know our reasons.”
Looking inside the black box: sometimes “inexplicable models” must be explained
Some AI and machine learning systems are like a black box. They correlate countless pieces of data from multiple sources to make connections and predictions that humans cannot. But the reasons behind the conclusions are not always explained or explainable in human terms — what Weinberger calls “inexplicable models.” “We are getting accustomed to the notion that much of what we do may not have, and does not need, an explanation,” Weinberger claims in his book. “Theories, of course, still have value, but if there’s a way to influence a shopper’s behavior or to cure a genetic disease, we’re not waiting for a theory before we give the lever a pull.”
But Bolcer says his customers do want and need an explanation before pulling any levers. They want to make sense of the chaos. “You need to have humans in the loop if you want to explain to a human what the system did,” Bolcer says. Sometimes AI will offer the right answer but for the wrong reasons. It’s important to know that. And then sometimes AI will give the wrong answer, and human supervision is the best way to catch the mistake.
Can AI make jokes? Catching AI mistakes
Bolcer admits that sometimes we don’t care how AI came up with a solution — for example, with audio or visual content. The website ThisPersonDoesNotExist generates fake but plausible-seeming faces. It doesn’t really matter how or why the AI did it, only that its results are convincing.
Excerpt from Business Digest N°299, September 2019