A few months ago, Andy McAfee and Erik Brynjolfsson published Machine, Platform, Crowd: Harnessing Our Digital Future – their third book on the impact of the 21st-century digital revolution on the economy and society – following the publication of The Second Machine Age in 2014 and Race Against the Machine in 2011. Brynjolfsson and McAfee are professor and research scientist respectively at MIT’s Sloan School of Management, as well as co-directors of MIT’s Initiative on the Digital Economy.
The book is organized into three sections, each focused on a major trend that’s reshaping the business world: the rapidly expanding capabilities of machines; the emergence of large, asset-light platform companies; and the ability to leverage the knowledge, expertise, and enthusiasm of the crowd. These three trends are combining into a triple revolution, causing companies to rethink the balance between minds and machines; between products and platforms; and between the core and the crowd.
I cannot possibly do justice to all three trends in one blog, so let me summarize the key themes of the Mind and Machine section, which I found to be an excellent explanation of the current state of AI.
The standard partnership
With the advent of ERP systems and the Internet in the 1990s, businesses settled on what McAfee and Brynjolfsson call the standard partnership between people and computers. The machines would handle routine processes, record keeping, and quantitative tasks, leaving more time for people to exercise their judgement, intuition, creativity, and interactions with each other.
Underlying the standard partnership is the belief that human decisions are generally well thought out and rational, and that our judgement and intuition are far superior to those of any computer. But, this isn’t quite the case, as shown in the pioneering research of Princeton Professor Emeritus Daniel Kahneman – for which he was awarded the 2002 Nobel Prize in Economics – and his longtime collaborator, Amos Tversky – who died in 1996.
Their work was explained in Kahneman’s 2011 bestseller Thinking, Fast and Slow. Its central thesis is that our mind is composed of two very different systems of thinking, System 1 and System 2. System 1 is the intuitive, fast and emotional part of our mind. Thoughts come automatically and very quickly to System 1, without us doing anything to make them happen. System 2, on the other hand, is the slower, logical, more deliberate part of the mind. It’s where we evaluate and choose between multiple options, because only System 2 can think of multiple things at once and shift its attention between them.
System 1 typically works by developing a coherent story based on the observations and facts at its disposal. This helps us deal efficiently with the myriad simple situations we encounter in everyday life. Research has shown that the intuitive System 1 is actually more influential in our decisions, choices, and judgements than we generally realize.
But, while enabling us to act quickly, System 1 is prone to mistakes. It tends to be overconfident, creating the impression that the world is simpler and more coherent than it really is. It suppresses complexity and information that might contradict its coherent story, unless System 2 intervenes because it senses that something doesn’t quite feel right. System 1 does better the more expertise we have on a subject; mistakes tend to happen when we operate outside our areas of expertise.
“The 20-year-old standard partnership of minds and machines more often than not places too much emphasis on human judgment, intuition and gut…” write McAfee and Brynjolfsson. “[O]ur fast, effortless System 1 style of reasoning is subject to many different kinds of bias. Even worse, it is unaware when it’s making an error, and it hijacks our rational System 2 to provide a convincing justification for what is actually a snap judgement. The evidence is overwhelming that, whenever the option is available, relying on data and algorithms alone usually leads to better decisions and forecasts than relying on the judgment of even expert humans… Many decisions, judgments, and forecasts now made by humans should be turned over to algorithms.”
But, algorithms are far from perfect. Inaccurate or biased data will lead to inaccurate or biased predictions. Machines lack common sense, that is, the ordinary, pragmatic, comprehensive understanding of the world that we get from all the information we’re constantly taking in. Machines have a deep but narrow view of the world they were designed for. It’s generally a good idea to have a person check the machine’s decisions to make sure they make sense, while being careful not to let our intuitive System 1 override a good but counter-intuitive machine decision.
We know more than we can tell
In March of 2016, AlphaGo – a Go-playing application developed by Google DeepMind – claimed victory against Lee Sedol – one of the world’s top Go players. Go is a much more complex game than chess, with far more possible board positions than there are atoms in the universe. Nobody can explain how the top human players make smart Go moves – not even the players themselves. As one such top player explained, “I’ll see a move and be sure it’s the right one, but won’t be able to tell you exactly how I know. I just see it.”
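To put that claim in perspective, here is a quick back-of-the-envelope check of my own (not from the book): each of the 361 points on a 19×19 Go board can be empty, black, or white, so 3^361 – roughly 10^172 – is a simple upper bound on board configurations, dwarfing the commonly cited estimate of about 10^80 atoms in the observable universe.

```python
# Back-of-the-envelope check (my own illustration, not the authors'):
# each of Go's 19 x 19 = 361 points can be empty, black, or white,
# so 3**361 is an upper bound on the number of board configurations.
from math import log10

points = 19 * 19                 # 361 intersections on a Go board
exponent = points * log10(3)     # log10(3**361), roughly 172

print(f"Upper bound on Go board configurations: ~10^{exponent:.0f}")
print("Estimated atoms in the observable universe: ~10^80")
```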
Playing world-class Go is an example of tacit knowledge, a concept first introduced in the 1950s by scientist and philosopher Michael Polanyi. Explicit knowledge is formal, codified, and can be readily explained to people and captured in a computer program. Tacit knowledge, on the other hand, is the kind of knowledge we are often not aware we have, and is therefore difficult to transfer to another person, let alone to a machine.
“We can know more than we can tell,” noted Polanyi in what’s become known as Polanyi’s paradox. This seeming paradox succinctly captures the fact that we tacitly know a lot about the way the world works, yet aren’t able to explicitly describe this knowledge. Tacit knowledge is best transmitted through personal interactions and practical experiences. Everyday examples include speaking a language, riding a bike, driving a car, and easily recognizing many different objects.
Continue reading the full blog here.