AI Products: Artificial Intelligence vs. General Intelligence

At Capital One, I set technology strategy for Eno (our banking chatbot), as well as for our enterprise email, push notification and SMS capabilities. I got into artificial intelligence (AI) and machine learning (ML) back in the 1990s. Back then, there was a lot of theory but not much real-world applicability for AI products. However, things are changing fast.

A Game-Changing Event in AI History

Adequate processing power, memory, algorithms and big data are all required for true machine learning. In 2012, there was an industry-transforming competition called the ImageNet challenge. The winning entry used a deep learning technique called deep convolutional neural networks, which cut the image classification error rate by roughly 41 percent relative to the runner-up. That was a watershed moment, and things really took off from there.
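The core building block of those winning networks is the convolution operation: a small learned filter slid across an image to produce a feature map. Here is a minimal, illustrative sketch in plain NumPy (the image, the hand-picked edge-detection kernel, and the function names are my own, not taken from any particular system):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as CNN layers compute it)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Element-wise multiply the kernel with one image patch, then sum.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Standard CNN nonlinearity: zero out negative activations."""
    return np.maximum(x, 0)

# A tiny synthetic image with a vertical edge down the middle.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A hand-written vertical-edge detector; in a real CNN these weights are learned.
kernel = np.array([
    [-1, 1],
    [-1, 1],
], dtype=float)

feature_map = relu(conv2d(image, kernel))
print(feature_map)  # activations light up exactly where the edge is
```

A deep CNN stacks many such layers, learning the kernel weights from data instead of writing them by hand — which is what made the 2012 result possible once enough data and compute were available.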

Now, this technology is used in many applications such as image recognition, facial recognition and self-driving cars. Basically, in 2012 the theoretical era of artificial intelligence ended, and we moved into the era of practical application.

AI & ML: Hype or Reality?

Machine learning is a subset of AI. ML is the set of techniques to teach machines how to learn on their own. There’s a lot of hype around AI. Currently, we only have specialized algorithms that solve very specific tasks. We’re a long way off from artificial general intelligence.
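What "teaching machines to learn on their own" means in practice can be shown with the simplest possible case: fitting a single parameter from examples by gradient descent. This is a toy sketch of my own, not any production model — the data and learning rate are made up for illustration:

```python
# Learn w in y ≈ w * x from example pairs, using gradient descent on squared error.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # examples generated by the true rule y = 2x

w = 0.0     # start with no knowledge
lr = 0.01   # learning rate

for _ in range(1000):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # nudge w to reduce the error

print(round(w, 3))  # converges toward 2.0
```

The machine is never told the rule "multiply by 2"; it recovers it from examples. Today's specialized algorithms are elaborate versions of this same loop — which is also why they solve only the specific task their examples describe.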

The hype surrounding AI is very situational. For instance, when it comes to image recognition, machines outperform humans in many cases. Still, the ability to have an artificial intelligence assistant to answer many questions in a meaningful way is a long way off.

Machines do not yet possess the capacity for “general understanding.” Furthermore, we don’t even know how understanding arises in humans. Plus, there’s a debate around whether machine processes should mimic human processes at all.

Limitations to AI

Some of the main limitations today are still data volume and processing power. Do we have enough examples to train on? Do we have enough processing power even if we have enough data? As tasks begin to broaden, you must also consider the right architecture for learning a lot of task-specific information.

There’s a notion that a human needs 10,000 hours to master something. If that’s true, you may need to train an artificial intelligence system for the equivalent of 10,000 human hours per task. We don’t expect humans to know everything, and machines won’t either, since humans are training them.


Click here for Part 2

Click here for Part 3


About the Speaker
Capital One Managing VP
Margaret Mayer is the Managing VP of Messaging and Conversational AI at Capital One - setting the technology strategy for the company's banking chatbot (Eno) and enterprise-level messaging platforms. Over the past two decades, Margaret has held several leadership positions at Capital One - including software engineering, consumer identity and partnership development. Margaret holds a PhD in industrial engineering from Lehigh University and a bachelor's degree from Cornell University. She currently lives in Richmond, VA.
About the Host
Mark Pydynowski is the Senior Director of Product at Experian - helping 300+ businesses protect 60M+ consumers by building enterprise-level products in a white-label format. Prior to Experian, Mark led new product management and business development for Capital One's Digital Commerce B2B team. Prior to Capital One, Mark was the CMO at - a value-based comparison shopping engine for consumer insurance. Before, he was the co-founder and CEO of SOMARK Innovations, Inc., a venture-backed life science company (acquired by Two Oceans Pty Ltd). Mark earned his BSBA from Washington University in St. Louis and currently lives in Austin.