Capital One Managing Product VP on Building Smarter AI Products
AI Products: Artificial Intelligence vs. General Intelligence
At Capital One, I set technology strategy for Eno, our banking chatbot, as well as for our enterprise email, push notification and SMS capabilities. I got into artificial intelligence (AI) and machine learning (ML) in the 1990s. Back then, there was a lot of theory but not much real-world applicability for AI products. However, things are changing fast.
A Game-Changing Event in AI History
Adequate processing power, memory, algorithm development and big data are all required for true machine learning. In 2012 there was an industry-transforming competition called ImageNet. The winning entry used a technique called deep convolutional neural networks, which cut the image-classification error rate by about 41 percent. That deep-learning result was a watershed moment, and things really took off from there.
Now, this technology is used in many applications such as image recognition, facial recognition and self-driving cars. Basically, in 2012 the theoretical era of artificial intelligence ended, and we moved into the era of practical application.
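To make "deep convolutional neural networks" a little more concrete, here is a toy sketch of the convolution operation such networks stack many layers deep: a small kernel slides across an image and produces a feature map. This is an illustrative example only, not anything from Capital One's systems or the ImageNet-winning architecture; real networks learn thousands of kernels, while this one is hand-picked to detect vertical edges.

```python
def conv2d(image, kernel):
    """Slide `kernel` over `image` (stride 1, no padding) and return
    the map of dot products -- the core operation a convolutional
    layer repeats with many learned kernels."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A tiny "image": dark on the left half, bright on the right half.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]

# A hand-picked vertical-edge detector (a trained CNN learns these).
kernel = [
    [-1, 1],
    [-1, 1],
]

# The feature map responds only where the dark/bright boundary sits.
feature_map = conv2d(image, kernel)
```

Stacking many such layers, each learning its own kernels, is what let the 2012 networks recognize images far better than the hand-engineered features that came before.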
AI & ML: Hype or Reality?
Machine learning is a subset of AI: the set of techniques that teach machines to learn from data on their own. There's a lot of hype around AI. What exists today is narrow AI, specialized algorithms that solve very specific tasks. We're a long way off from artificial general intelligence.
How much of the hype around AI is justified depends on the task. When it comes to image recognition, for instance, machines outperform humans in many cases. Still, an AI assistant that can answer a broad range of questions in a meaningful way is a long way off.
Machines with the capacity for "general understanding" do not yet exist. Furthermore, we don't even know how that understanding arises in humans, and there's an ongoing debate about whether machine processes should mimic human ones at all.
Limitations of AI
Some of the main limitations today are still data volume and processing power. Do we have enough examples to train on? Do we have enough processing power even if we have enough data? And as tasks broaden, you also have to consider whether the architecture is right for learning a large amount of specialized information.
There's a notion that a human needs 10,000 hours to master something. If that's true, you may need to train an AI system on the equivalent of 10,000 human hours per skill. We don't expect humans to know everything, and machines won't know everything either, since humans are the ones training them.