Pedro Domingos: In Search of the Master Algorithm for Machine Learning

Written by Paula Klein


Philosophers have said that to know your present, look to the past, and then imagine the future. This aphorism holds true when it comes to understanding how machine learning works. First, we need to understand the underlying principles of where knowledge comes from and how humans learn. But which principles should we follow? Figuring out the best approach falls to scholars like Pedro Domingos.

Domingos, a professor of computer science at the University of Washington and the author of The Master Algorithm (Basic Books, 2015), said that in the past few decades, five schools of thought have dominated the understanding of machine learning, each with its own master algorithm and each with its own flaws. 

At a recent MIT IDE seminar, he explained these “five tribes of machine learning,” and how they each contribute to the ultimate goal of a unified, “master algorithm” that will combine many parts into a scalable model. In other words, to fully grasp the potential of artificial intelligence, it is necessary to deconstruct human intelligence and learning.

Domingos said he agrees with the sentiment of Yann LeCun, director of AI research at Facebook, that “Most of the knowledge in the future will be extracted by machines and will reside in machines.” But we have a long way to go to perfect that learning, he said.

Studying human learning patterns, according to Domingos, requires a broad, multidisciplinary approach. In fact, for much of the 20th Century, philosophy and logic vied with biology, neuroscience, statistics and psychology to explain how humans learn and how computers can emulate that process.

These five fields have led to distinct paradigms and schools of research that became known as:

  • The Symbolists, rooted in logic and philosophy, who rely on inverse deduction
  • The Connectionists, who study networks of brain neurons and use back-propagation
  • The Evolutionaries, advocates of genetic programming
  • The Bayesians, who use statistics and probabilistic inference
  • The Analogizers, known for psychology-based kernel machines

The emergence of each concept was a major advance, and each outperforms the others by orders of magnitude on the problems it is best suited to, Domingos said. Computer learning is already taking leaps forward, building on these theories for different types of applications. Bayesian thinking, for instance, has led to spam filters that use machine learning to distinguish spam from legitimate mail, while medical diagnoses are often based on analogy-based algorithms.
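To make the Bayesian example concrete, here is a minimal sketch of a naive Bayes spam filter of the kind Domingos alludes to. It illustrates the general technique only; the toy messages and function names are invented for this article and are not code from Domingos or from any production filter.

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, is_spam) pairs; count words per class."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, spam in messages:
        counts[spam].update(text.lower().split())
        totals[spam] += 1
    return counts, totals

def is_spam(text, counts, totals):
    """Pick the class with the higher log-probability (add-one smoothing)."""
    vocab = set(counts[True]) | set(counts[False])
    scores = {}
    for label in (True, False):
        score = math.log(totals[label] / sum(totals.values()))    # prior
        denom = sum(counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / denom)  # likelihood
        scores[label] = score
    return scores[True] > scores[False]

# Invented toy data, for illustration only.
data = [("win money now", True), ("meeting at noon", False),
        ("cheap money offer", True), ("lunch at noon tomorrow", False)]
counts, totals = train(data)
print(is_spam("win cheap money", counts, totals))  # expected: True
```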

One application of symbolic learning is a robotic biologist that learns about yeast and uses inverse deduction to study its growth; robot scientists that use logic and symbolic learning are being tested today as well.

At Google and Microsoft, artificial neural network models are being developed to create deep-learning environments based on some of the work of the Connectionists. Yet deep learning has many challenges, Domingos said, and building a complete network of these artificial neurons remains difficult.
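As an illustration of the Connectionists' core technique, the sketch below trains a tiny one-hidden-layer network on the XOR problem with back-propagation. It is a toy example written for this article, assuming NumPy; it is not the deep-learning systems mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden weights
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: compute hidden activations and the output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back toward the inputs.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= 1.0 * h.T @ d_out;  b2 -= 1.0 * d_out.sum(axis=0)
    W1 -= 1.0 * X.T @ d_h;    b1 -= 1.0 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```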

As his book explains, “in the world’s top research labs and universities, the race is on to invent the ultimate learning algorithm: one capable of discovering any knowledge from data, and doing anything we want, before we even ask.” When will we find it? It’s hard to predict, because scientific progress is not linear. It could happen tomorrow, or it could take many decades.

Watch the full IDE seminar presentation.