Professor Daniel Kahneman was awarded a Nobel Prize for his work on the psychology of judgment and decision-making, as well as behavioral economics. In this age of human/machine collaboration and shared learning, IDE Director Erik Brynjolfsson asked Kahneman about the perils, as well as the potential, of machine-based decision-making. The conversation took place at a recent conference, The Future of Work: Capital Markets, Digital Assets, and the Disruption of Labor, in New York City. Some key highlights follow.
Erik Brynjolfsson: We heard today about algorithmic bias and about human biases. You are one of the world's experts on human biases, and you're writing a new book on the topic. Which are the bigger risks: human biases or algorithmic ones?
Daniel Kahneman: It’s pretty obvious that it would be human biases, because you can trace and analyze algorithms.
In the example of sexist hiring, if you use a system that is predictively accurate, you are going to penalize women because, in fact, they are penalized by the organization. The problem is really not the selection; it's the organization. So something has to be done to make the organization less sexist. And then, as part of doing that, you would want to train your algorithm. But you certainly wouldn't want just to train the algorithm and keep the organization as it is.
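A minimal sketch may help make the mechanism Kahneman describes concrete. It assumes a toy setup in which historical hiring outcomes encode a penalty against qualified women; a model that is merely "predictively accurate" on those outcomes will learn and reproduce that penalty. All rates and data here are invented for illustration, not drawn from any study.

```python
import random

random.seed(0)

# Synthetic historical hiring data: candidates are equally qualified on
# average, but the organization historically hired qualified women at a
# lower rate. All rates below are invented assumptions.
def historical_decision(qualified: bool, is_woman: bool) -> bool:
    if not qualified:
        return False
    hire_rate = 0.5 if is_woman else 0.8  # the organizational bias
    return random.random() < hire_rate

candidates = [(random.random() < 0.5, random.random() < 0.5)
              for _ in range(100_000)]
history = [(q, w, historical_decision(q, w)) for q, w in candidates]

# A "predictively accurate" model of these labels simply learns the
# historical hire rate for each group -- and so it learns the bias too.
def learned_rate(is_woman: bool) -> float:
    group = [hired for q, w, hired in history if q and w == is_woman]
    return sum(group) / len(group)

print(f"learned hire rate, qualified men:   {learned_rate(False):.2f}")
print(f"learned hire rate, qualified women: {learned_rate(True):.2f}")
```

With these invented rates, the learned model recovers roughly 0.80 for men and 0.50 for women: it is accurate about the past precisely because it reproduces the organization's bias, which is why fixing the algorithm alone is not enough.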
Brynjolfsson: Your new book, Noise, is about the different kinds of mistakes people make that are distinct from biases. Help us understand that a little bit.

Kahneman: At an insurance company, we measured what is technically called noise, and we did that in the following way: we constructed a series of six completely realistic cases and gave them to 50 of the company's underwriters. We wanted to determine how much variability there was in their funding decisions. We expected differences of 10% to 15%, but in fact they disagreed about 56% of the time. That's a lot of noise.
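A short computation can make a figure like this more tangible. The sketch below is a hypothetical illustration, not the study's actual data or method: it assumes the disagreement measure is the relative difference between two underwriters' judgments of the same case, |a − b| / mean(a, b), and all of the dollar amounts are invented.

```python
from itertools import combinations
from statistics import mean, median

# Hypothetical quotes (in dollars) from five underwriters judging the
# same case -- invented numbers for illustration only.
quotes = [9_500, 16_000, 12_300, 20_100, 13_700]

def relative_difference(a: float, b: float) -> float:
    """Difference between two judgments as a fraction of their average."""
    return abs(a - b) / mean([a, b])

# Compare every pair of underwriters and take the median disagreement.
pairwise = [relative_difference(a, b) for a, b in combinations(quotes, 2)]
noise = median(pairwise)

print(f"Median pairwise disagreement: {noise:.0%}")
```

Even with these made-up numbers, the median disagreement comes out around 31%, already far above the 10% to 15% range executives expected; this gap between expected and actual disagreement is exactly what a noise audit is designed to expose.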
Continue reading the full blog on IDE's Medium publication here.