In recent years, machine learning algorithms have begun to guide many socio-technical systems that affect our lives and our welfare. Such algorithms make recommendations and drive decisions about what we read, whom we befriend, which ads we see, our jobs and job prospects, college admissions, loan applications, and many other important life choices.
Recent research has shown that such algorithms can introduce and amplify bias and discrimination in a number of ways. We also suspect that, depending on how they are designed, such algorithms could instead be used to combat bias and discrimination.
In this panel, we explored how machine learning and algorithmic thinking can reinforce or overcome stereotypes, inequality, and discrimination. We also discussed possible solutions to this dilemma, including whether experimentation itself could offer a way to maximize the benefits of algorithmic thinking while minimizing its risks.
Panelists: David Parkes (Harvard), Alessandro Acquisti (CMU), Catherine Tucker (MIT), Sandy Pentland (MIT), Susan Athey (Stanford)
Moderator: Sinan Aral (MIT)