What We Talk About When We Talk About AI

Friday, June 16, 2017
Thomas Davenport

What do we call the collection of technologies that make up what we used to call “artificial intelligence”? This conundrum reminds me of a Raymond Carver short story (and book) called What We Talk About When We Talk About Love. Artificial intelligence (AI) isn’t quite as ambiguous a concept as love, but it’s moving in that direction.

I was prompted to discuss this issue by a conversation with Jeremy Achin, the young CEO of DataRobot. We were preparing for a collaborative presentation at the Open Data Science Conference in Boston a few weeks ago, and I told him I could present on “The Cognitive Company.” Achin, who doesn’t mince words, wrinkled his nose and said he really didn’t like the use of the word “cognitive.” He suggested that IBM had forced the term on the world and that to use it was to promote that company. He also suggested that the field of AI is not actually very close to replicating or surpassing the capabilities of human cognition.

I’m not sure I agree with his IBM claim, although the company does seem to pay top prices for ad space for searches involving “cognitive technology” or “cognitive computing.” Achin is certainly correct that current technologies are not yet worthy of a comparison to the human brain’s capabilities, but then “artificial intelligence” also implicitly makes that comparison.

There is another competitor in this race, and it’s the one that best characterizes DataRobot’s offerings: machine learning. It’s a bit grandiose as well in comparing computer-based learning to human learning, but it’s more specific than “AI” or “cognitive.” The trouble is that several technologies often included in the AI category—rule-based expert systems and “robotic process automation” tools—don’t actually learn or improve their performance over time without human intervention. So I don’t think it’s a good fit as an umbrella term describing all intelligent technologies.

There are also the more generic terms like “machine intelligence” or “smart machines.” For whatever reason—perhaps their high level of generality—these haven’t caught on. Some people also use the term “robotics” as the general term for intelligent machines, but to me anything involving “robot” will always suggest a machine with the ability to manipulate the physical world. That’s why I don’t like the term “robotic process automation”—it has nothing to do with physical robots. For that matter, I am also not a fan of “automation”—we’ve been talking about it for decades, and many of us still seem to have jobs. I prefer “augmentation” in almost every case for the impact of technology on human labor.

What terms have caught on? If we turn to Google Trends, the arbiter of how we use terminology to find out about the world, “artificial intelligence” and “machine learning” are quite dominant compared to any of the alternatives—see this comparison over the last five years, for example. It suggests that “machine learning” is now the popularity winner, followed by “artificial intelligence.” “Cognitive computing,” “cognitive technology,” and “machine intelligence” are hardly visible on the graph at all. Not surprisingly, “automation” is substantially more popular than “augmentation.”

One could suggest that this is all a matter of personal preference, but terminology has consequences. “Machine learning” might well mislead amateurs into expecting that all smart technologies can learn about their environment and improve their performance within it over time. “Cognitive” implies that we won’t have to rely on human brains for much longer. “Automation” tends to instill fear in the hearts of human workers.


Continue reading the full blog, which first appeared on DataInformed on May 29.


Tom Davenport is a Fellow of the MIT Initiative on the Digital Economy, co-founder of the International Institute for Analytics, and an independent senior adviser to Deloitte Analytics. He also is a member of the Data Informed Board of Advisers.