Machine Learning Strides and Limitations: A conversation with Andrew McAfee, Hilary Mason, and Claudia Perlich

June 01, 2020

As AI and machine learning (ML) become more mainstream in business applications and more widely accepted by the public — in everything from ‘smart’ vacuum cleaners and navigation systems to the ubiquitous Alexa and Siri — it’s important to view the strides as well as the shortcomings with a wide lens. MIT IDE Co-Director and Principal Research Scientist Andrew McAfee did just that during a fireside chat at the recent IDE Annual Conference, held virtually on May 20.

McAfee spoke with two “rock stars in the discipline,” Hilary Mason, Data Scientist in Residence at Accel, and Claudia Perlich, Chief Scientist at Dstillery, in a wide-ranging discussion about machine learning’s past, present, and future.

What follows are some key excerpts and takeaways from the conversation.


Andrew McAfee: You’ve been around to watch ML go from a very niche discipline, something that was maybe used to recognize handwritten digits on checks, to this technology that appears to be taking over the economy. What was your ‘a-ha’ moment when you first realized the potential of ML?

Hilary Mason: I’ve had a series of those moments at different times in my career, but the set of problems and places where the technology can actually be impactful has radically expanded over the past 10 years. ML techniques are not new; people have been using them in financial services and for really high-value problems for a very long time. But when I was starting out, we began to answer questions that previously had been out of reach from a technical perspective. For example, instead of coders working only at the very narrow edges of a problem, with huge data domains built through a lot of human labor to classify the data, we started having more generalized approaches to solving problems. That is one of those step-function changes in capability that’s really interesting.

Claudia Perlich: I also saw many changes over the years. For the longest time, data mining, as it was called then, was this niche course. Suddenly, people realized that data mining could span many different application domains: social sciences, medical applications, and even law — and everyone wanted to take the course! I knew all these different applications existed, but my frustration before 2010 had been that we were limited to fun pilots where you try to see how far you can push the technology. I was never convinced that the programs were put to their best use — not because of insufficient data, or even incorrect data, but because there was no technology stack or API where you could just ping the cloud and get the answer. Back then, you had to almost manually re-implement everything from scratch.

Today, ML is a much more relevant value proposition across domains that couldn’t previously afford the technical infrastructure and the skills needed.

In addition, personalization has created new industries where auctions and billions of decisions are made daily, in real time, based on analytics. Now, there’s an economic model — an affordable infrastructure — that didn’t exist before.


AM: What areas still need work?

HM: The commoditization of the underlying platform was hugely enabling. It started happening around 2008 or 2009, but it’s still happening right now. Deploying and monitoring machine learning systems is still something of an open question. It’s not solved. And so we will continue to see this unfold over the next five to 10 years.

CP: We don’t yet have sufficient control systems that understand when a model might be going out of whack, or whether we’re really using it for the right cause. Most ML models are built for a specific use case, and the person who built the model understands its boundaries, but beyond that, generalization becomes questionable at best.

HM: When you’re dealing with data collected from the real world, it will change; the real world changes. Things happen for random reasons; human behavior changes over time. All of that impacts the accuracy of a model and a system. And that’s not even getting into potential adversarial attempts to impact it deliberately.

So, while ML is easier now, there’s still a lot of work to do in order to make sure that the things we make are working accurately, and are repeatable.

On the human side of the discipline, people inside organizations must work together efficiently, use the right features, and have some understanding of the provenance of those features and what they actually mean when they’re creating systems. We do have a fairly long road ahead of us, but that’s not to diminish the incredible progress.
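
Mason’s monitoring point can be made concrete. What follows is a minimal, purely illustrative Python sketch, not a system either speaker describes, of one common control: compare the distribution of live model scores against a training-time reference and raise an alert when the two diverge. The names (reference_scores, live_scores) and the alpha threshold are assumptions for illustration.

    # Drift-monitoring sketch: flag when live data no longer looks like
    # the data the model was trained on. Illustrative only; the names
    # (reference_scores, live_scores) and the alpha threshold are made up.
    import numpy as np
    from scipy.stats import ks_2samp

    def drift_alert(reference, live, alpha=0.01):
        """Two-sample Kolmogorov-Smirnov test: True if the live sample is
        unlikely to come from the same distribution as the reference."""
        _statistic, p_value = ks_2samp(reference, live)
        return p_value < alpha

    # Example: the world changed, so this week's scores have shifted.
    rng = np.random.default_rng(0)
    reference_scores = rng.normal(0.40, 0.10, size=5_000)  # training-time scores
    live_scores = rng.normal(0.55, 0.10, size=1_000)       # drifted live scores

    if drift_alert(reference_scores, live_scores):
        print("Score distribution has drifted; investigate or retrain.")

A test like this catches the “real world changes” failure mode Mason describes; deliberate adversarial manipulation, as she notes, is a harder problem.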


AM: How would you describe ML’s progress and capabilities? What can we compare it to?

HM: Machine learning is the mobile phone of making decisions. The smartphone is a piece of infrastructure that fundamentally changes the way we do even common tasks in our everyday lives or business, but it doesn’t change everything. And the fact that we all walk around holding something like this is also not the end game.

It’s absolutely a big deal. It enables us to do things in ways we couldn’t do them before. Something like Google Maps on a mobile device is revolutionary; we not only look at it when we don’t know where we’re going — which, by the way, has made going new places a completely different experience than it used to be — but we now look at it when we know where we’re going, to figure out the best way to get there given current conditions. That means that two levels of information are available to us instantly, and that changes the whole experience of navigating an unknown place, but also a known place.

CP: It’s a technology that gives you additional information to help you make better decisions. What you do with the information you gather should still be your choice. Just because you build a classifier to detect breast cancer doesn’t mean that the computer is telling the person what the next step should be. It means you are given some additional component that should be integrated into a decision to ultimately achieve a better outcome.

For instance, in radiology, the benefit is not only finding out if there’s cancer in an image, but supporting radiologists with reports so they actually have more time to talk to the patient — work the machine learning can’t do. So classifying images is not the value proposition: it is giving professionals additional information or making their lives and workflows easier.
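
To make this framing concrete, here is a minimal sketch, assuming scikit-learn and its bundled breast-cancer dataset, of a classifier used as decision support: the model hands the professional a probability, and the next step remains a human call. The routing line is an illustrative assumption, not a description of any real clinical system.

    # Decision-support sketch: the model contributes a probability; the
    # clinician decides. Illustrative only; uses scikit-learn's bundled
    # example dataset, and the routing rule below is an assumption.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)  # class 0 = malignant
    X_train, X_test, y_train, _ = train_test_split(X, y, random_state=0)

    model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

    # The output is information to integrate into a decision, not a verdict.
    p_malignant = model.predict_proba(X_test[:1])[0, 0]
    print(f"Model estimate: {p_malignant:.1%} malignant; route to radiologist review")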


AM: In other words, not making very experienced, highly trained and busy people act like clerks for a lot of their working day.

CP: Exactly right.

HM: I like framing this as reducing cognitive drudgery, and the drudgery is not always separable from the core of the work.


AM: What are some caveats as we move forward with adoption?

CP: There has been lots of hesitation in adopting these technologies, and a good amount of care and concern is important. Transparency isn’t the main concern; machines are not much less transparent than people. The bigger concern is the scale at which machines can execute, which means that the potential for large-scale, unintended side effects is so much bigger than with human design. I would always prefer that we think a lot harder about how we make decisions with the things the machines give us, rather than asking how the machines came up with those things in the first place.

If you think about biases in hiring, for example, you can try to forcefully make a model that’s gender-blind, but it’s going to be really, really painful, and I wonder if we should be doing it at all. If you are convinced that you want to hire an equal ratio of men and women, nothing prevents you from doing so with the data you already have. Sometimes, pushing moral questions onto ML is extremely unfair and unproductive. Let’s not focus on getting complete transparency into the machine learning system; let’s focus on greater transparency about the ground rules we’re using to make important decisions: Who should we hire? Who should we let out on parole? Things like that.

At the same time, if I trust the machine, I need to be willing to go along with its recommendations, because, let’s face it, if human cognition were as good as machine learning, we probably wouldn’t need it. The value proposition of machine learning is that it can do statistical things a lot better.
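
Perlich’s hiring example also lends itself to a sketch: if an equal ratio is the ground rule, it can be enforced explicitly at decision time with the scores you already have, rather than by forcing the model itself to be blind. Everything below, candidate records and field names included, is invented for illustration.

    # Illustrative only: apply a hiring ratio as an explicit, transparent
    # ground rule over existing model scores, rather than as a property
    # hidden inside the model. Candidate data and fields are invented.
    def shortlist_equal_ratio(candidates, k_per_group):
        """Take the top-k scored candidates from each group."""
        shortlisted = []
        for group in sorted({c["group"] for c in candidates}):
            in_group = sorted(
                (c for c in candidates if c["group"] == group),
                key=lambda c: c["score"],
                reverse=True,
            )
            shortlisted.extend(in_group[:k_per_group])
        return shortlisted

    pool = [
        {"name": "A", "group": "women", "score": 0.91},
        {"name": "B", "group": "men", "score": 0.88},
        {"name": "C", "group": "women", "score": 0.74},
        {"name": "D", "group": "men", "score": 0.69},
    ]
    print(shortlist_equal_ratio(pool, k_per_group=1))  # one from each group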

HM: There are clearly applications where using ML is really not a great idea, or we need to give people a better understanding of the way those decisions are being made — things like sending someone to jail, or giving someone financial credit. But there are also many applications that don’t have that same kind of impact, and maybe we’re willing to be a little more flexible on how we make those decisions; like, do we think this temperature sensor is giving us a good reading or not?


AM: What are you optimistic about?

HM: I am most optimistic about machines being able to interface with human beings, such as through much better use of natural language, and by looking at our environment and making inferences about what’s going on. This goes a long way toward alleviating that cognitive drudgery and those tedious tasks — in areas like healthcare or education — and frees us to do more meaningful work as human beings. I wish it could free us to be more human and less focused on technology, or less constrained by technology. That’s my vision. I’m also terrified, because these same technologies can be used to manipulate people. We are not spending nearly enough time on the adversarial side of machine learning and the impact that has on decision-making. So I’m both very excited and also have a lot of concerns.

CP: My concern is the extent to which we are becoming less cognizant of what’s going on in the world and more tuned into our own tastes; we can become complacent and adopt a false sense of security about what we think we know.