Blog

Gary Marcus Probes AI’s Limitations

Wednesday, December 6, 2017
Paula Klein

Increasingly, academics, scientists, and business leaders are questioning the progress of AI. Gary Marcus, who wears all of these hats as well as those of author and AI contrarian, framed the issues at an MIT IDE seminar on Nov. 15 titled Artificial General Intelligence: Why Aren't We There Yet? Among the central questions raised: What has AI achieved, and what obstacles remain?

Marcus put the brakes on many of the claims, and the hype, surrounding AI and machine learning triumphs. For instance, he said, we “still have a long way to go for automated diagnosis, domestic robots, scene comprehension, and safe, reliable driverless cars.” Eighty percent accuracy rates may be acceptable for advertising or recommendation engines, he noted, but not for medical diagnosis or autonomous vehicles. Many technologies work well in the lab, but not in daily life.

“All-purpose, all-powerful AI systems, capable of catering to our every intellectual need, have been promised for six decades, but thus far have not arrived. What will it take to bring AI to something like human-level intelligence?”

Marcus says that steeper advances in speech and language recognition, as well as in inference and decision-making, are needed. Machine learning has gotten better, but in many situations it is not equivalent, or even close, to human performance.

Probing the ‘AI Bubble’

Marcus isn’t alone in his skepticism. A recent headline in the MIT Technology Review proclaimed that “Progress in AI Isn’t as Impressive as You Might Think.” “There’s no question there have been a number of breakthroughs in recent years,” according to Erik Brynjolfsson, MIT IDE Director and one of the authors of the report. “But it’s also clear we are a long way from artificial general intelligence.”

Brynjolfsson, along with his peers from Stanford University, SRI International, and OpenAI, is creating an AI Index to examine the “AI bubble” that currently exists. Several experts quoted in the Tech Review article point to the huge amounts of data needed to train current AI systems, and to those systems’ inability to generalize across a variety of problems.

From an economic perspective, Brynjolfsson also recently published a research paper raising questions about the lack of productivity gains despite AI technology advances. He writes that “the most impressive capabilities of AI, particularly those based on machine learning, have not yet diffused widely. More importantly, like other general-purpose technologies, their full effects won’t be realized until waves of complementary innovations are developed and implemented.”

Marcus told seminar attendees that “machine learning is hard to debug, revise, and verify. We have no procedures for building complex cognitive systems.” We have big data resources, but not enough accurate, valuable data and abstract knowledge for the complex tasks required. For example: How do machines acquire common sense? “We can use speech recognition in search, but not in conversation.” Similarly, image recognition and natural language can be applied to narrow operations, but not broadly.

The Difficulty of Emulating Human Learning

As a Professor of Psychology and Neural Science at NYU, Marcus approaches AI from a behavioral perspective. He says, for instance, that machine learning carries a huge bias toward the view that everything is learned and nothing is innate, which ignores human instincts and brain biology. For machines to form goals, determine outcomes, and solve problems, algorithms need to emulate human learning processes much more accurately. And that will take some time.

Marcus was CEO and Founder of the machine learning startup Geometric Intelligence, recently acquired by Uber. His books include The Algebraic Mind; Kluge: The Haphazard Evolution of the Human Mind; and the New York Times bestseller Guitar Zero. He is also editor of the recent book The Future of the Brain: Essays by the World's Leading Neuroscientists.