
New IDE Research to Focus on AI Privacy, Bias, Inequality, and Trust

Written By David Verrill

Artificial Intelligence (AI) is becoming more ubiquitous in business and society every day, and it is also creating new challenges. As machine learning advances and AI applications soar, difficult social issues and implications, including privacy, bias, trust, and control, must be addressed.

How do we turn concern into action? This year, the MIT Initiative on the Digital Economy (IDE) is confronting these topics head-on by launching two new research groups focused on the future of AI: Data-Driven Societies and The Human/AI Interface.

With these new groups, our researchers will put inclusive needs front and center by building a bridge between research and practice to develop real-world solutions. We do not assume that science and technology are neutral. Rather, we assume that power dynamics are affected and created by technology and those who control it. Our researchers are uniquely positioned to uncover critical insights about AI’s influence on the world.

New Groups Lead the Way

The Data-Driven Societies research group will be led by Professor Alex ‘Sandy’ Pentland. Professor Pentland, who also directs MIT Connection Science, is one of the most-cited computational scientists in the world. His group helps develop AI ecosystems in which “all partners — citizens, companies, and government — are winners.” Pentland believes that “data and AI are new primary means of production, along with capital and labor. Working with multinational companies, governments, tax and monetary authorities, and citizen organizations we are building and testing new software and legal architectures that better leverage data and AI.”

Late last year, Pentland joined numerous global leaders to sign a Social Contract for the AI Age, a proposal submitted for discussion at the United Nations’ 75th anniversary celebrations. The Contract recognizes that “without guidelines or directives, the undisciplined use of AI poses risks to the wellbeing of individuals and creates possibilities for economic, political, social, and criminal exploitation.” Those who sign “seek to build a world where all are recognized and valued, and all forms of governance adhere to a set of values and are accountable and transparent.”

The Human/AI Interface research group will be led by IDE Principal Research Scientist Renée Richardson Gosline. Gosline, also a Senior Lecturer in the Management Science group at the MIT Sloan School of Management, leads compelling research that examines Customer Experience (CX) strategy and decision systems.

By extension, the research will explore “when humans trust algorithms and how to develop mutual learning between humans and algorithms for optimal decision-making.” Further, “the symbiosis between humans and AI,” and how their interactions vary by task, will be a focal point. Using experiments and mixed-method approaches, this research group will offer strategic, policy, and behavioral solutions.

These groups expand the IDE’s ongoing AI efforts and collaborations, including membership in the Partnership on AI. It’s all part of the IDE building a core competency in understanding both the promise and peril of AI. While we embrace the promise of AI, we must also address algorithmic bias and the role that trust in AI technology, or the lack of it, plays in inequality.

Please consider this a call to action to our partners and to the tech community.

No doubt you are looking at your systems, data, and products with new eyes. Your employees, clients, friends, and your industry are asking for more. And it’s likely you are asking more of yourself, too.

We seek to partner with a diverse group of organizations and researchers to keep the acceleration of digital transformation on track for the many, not the few, even amid a global pandemic. Our goal is for our research groups to lead the way for our partners, and for those who have been woefully excluded from the digital economy.

The new AI research groups join four other newly created groups at the IDE that seek to apply rigorous analysis to solving the world’s pressing problems. The four groups are: Misinformation and Fake News, led by Professor David Rand; Online Marketplaces, led by Associate Professor John Horton; Tech for Good, led by IDE co-director Andrew McAfee; and Social Networks and Digital Experimentation, led by Associate Professor Dean Eckles. Watch for further descriptions of this work and updates here.

Join us as we embark upon a new slate of research at the MIT Initiative on the Digital Economy.


David Verrill

Executive Director

dverrill@mit.edu