
Probing the Tyranny — and Promise — of Machine Algorithms

November 01, 2016


As computer algorithms proliferate, how will bias be kept in check?

What if machines and AI are subject to the same flaws in decision-making as the humans who design them? In the rush to adopt machine learning and AI algorithms for predictive analysis in all kinds of business applications, unintended consequences and biases are coming to light.

One of the thorniest issues in the field is how, or whether, to control the explosion of these advanced technologies, and their roles in society. These provocative questions were debated at a panel, “The Tyranny of Algorithms?” at the MIT CODE conference last month.

Far from being an esoteric topic for computer scientists to study in isolation, the broader discussion about big data has wide social consequences.

As panel moderator and MIT Professor Sinan Aral told attendees, algorithms are “everywhere: They’re suggesting what we read, what we look at on the Internet, who we date, our jobs, who our friends are, our healthcare, our insurance coverage and our financial interest payments.” And, as he also pointed out, studies disturbingly show these pervasive algorithms may “bias outcomes and reify discrimination.” Aral, who heads the Social Analytics and Large Scale Experimentation research programs of the IDE, said it is therefore critical to examine both the proliferation of these algorithms and their potential impact, positive and negative, on our social welfare.

Growing Concerns

For example, predictive analytics are increasingly used to estimate the risk of violent recidivism among prison inmates, and by police forces to guide resource allocation. Yet tests show that African-American inmates are twice as likely to be misclassified by such tools. In a more subtle case, search ads exclude certain populations, or make false assumptions about consumers based solely on algorithmic data. “Clearly, we need to think and talk about this,” Aral said.
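The disparity Aral cited is the kind of pattern analysts surface by comparing a classifier’s error rates across demographic groups. Below is a minimal, hypothetical Python sketch of that comparison; the function, field names, and records are invented for illustration and are not the code or data behind any study mentioned here.

```python
# A minimal, hypothetical sketch of how a misclassification disparity can be
# measured: compare a risk classifier's false positive rates across groups.
# All records below are invented for illustration only.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, predicted_high_risk, reoffended) tuples."""
    false_pos = defaultdict(int)   # flagged high risk but did not reoffend
    negatives = defaultdict(int)   # everyone who did not reoffend
    for group, predicted_high_risk, reoffended in records:
        if not reoffended:
            negatives[group] += 1
            if predicted_high_risk:
                false_pos[group] += 1
    return {g: false_pos[g] / negatives[g] for g in negatives}

# Hypothetical sample: group B is wrongly flagged "high risk" twice as often
# as group A among people who did not go on to reoffend.
sample = [
    ("A", True, False), ("A", False, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", True, True),
]
print(false_positive_rates(sample))  # {'A': 0.333..., 'B': 0.666...}
```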

Several recent U.S. government reports have voiced concern about improper use of data analytics. In an Oct. 31 letter to the EEOC, the president of The Leadership Conference on Civil and Human Rights wrote:

[The Conference] believes that big data is a civil and human rights issue. Big data can bring greater safety, economic opportunity, and convenience, and at their best, data-driven tools can strengthen the values of equal opportunity and shed light on inequality and discrimination. Big data, used correctly, can also bring more clarity and objectivity to the important decisions that shape people’s lives, such as those made by employers and others in positions of power and responsibility. However, at the same time, big data poses new risks to civil and human rights that may not be addressed by our existing legal and policy frameworks. In the face of rapid technological change, we urge the EEOC to protect and strengthen key civil rights protections in the workplace.

Even AI advocates and leading experts see red flags. At the CODE panel discussion, Harvard University Professor and Dean for Computer Science David Parkes said, “we have to be very careful given the power” AI and data analytics have in fields like HR recruitment and law enforcement. “We can’t reinforce the biases of the data. In criminal justice, it’s widely known that the data is poor,” and misidentifying criminal photos, for instance, is common. Alessandro Acquisti, Professor of Information Technology and Public Policy at Carnegie Mellon University, described employment and hiring test cases in which extraneous personal information was used that should not have been included. Yet humans assisted in these decisions, he said, so where does the bias originate?

For Catherine Tucker, Professor of Management Science and Marketing at MIT Sloan, the biases in social advertising often stem from nuances and subtleties that machines won’t pick up; these, she said, are “the real worry.” In her view, the problem is neither sexist coders nor the data itself.

Nonetheless, discriminatory social media policies, such as Facebook’s ethnic affinity tool, are increasingly problematic.

Sandy Pentland, a member of many international privacy organizations, including the World Economic Forum Big Data and Personal Data initiative, and head of the Big Data research program of the MIT IDE, said that proposals for data transparency and “open algorithms,” which give the public input into what data can be shared or excluded, are positive steps toward reducing bias. “We’re at a point where we could change the social contract to include the public,” he said.

The letter to the EEOC also urged the agency “to take appropriate steps to protect workers from errors in data, flawed assumptions, and uses of data that may result in a discriminatory impact.”

Overlooking Machine Strengths?

But many fears may be overstated, suggested Susan Athey, the Economics of Technology Professor at Stanford Graduate School of Business. In fact, most algorithms do better than people in areas such as hiring decisions, partly because people are more biased, she said. And studies of police practices such as “stop-and-frisk” show that the chance of someone carrying a gun is predicted more accurately from machine data than from human judgment alone. Algorithmic guidelines for police or judges do better than humans, she argued, because “humans simply aren’t objective.”

Athey pointed to “incredible progress by machines to help override human biases. De-biasing people is more difficult than fixing data sets.” Moreover, she reminded attendees that robots and ‘smart’ machines “only do what we tell them to do; they need constraints” to avoid crashes or destruction. That’s why social scientists, business people and economists need to be involved, and “we need to be clear about what we’re asking the machine to do. Machines will drive you off the cliff and will go haywire fast.”

Ultimately, Tucker and other panelists were unconvinced that transparency alone can solve the complex issues raised by machine learning’s potential benefits and challenges, though global policymakers, particularly in the EU, see merit in such plans. Pentland suggested that citizens need to be better educated. But others noted that shifting the burden to the public won’t work when corporate competition to own the algorithms is intensifying.

Athey summed up the “tough decisions” we face, saying that “structural racism can’t be tweaked by algorithms.” Optimistically, she said she hopes that laws governing safety, along with self-monitoring by businesses and the public sector, will lead to beneficial uses of AI technology. Surveillance can be terrifying, she said, but it can also be fair. Correct use of police body cameras, with AI systems reviewing the footage, for example, could uncover and help solve systemic problems.

With reasonable governments and businesses, solid democracy, and fair news media in place, machines can do more good than harm, according to the experts. But that’s a tall order, especially on a global scale. And perhaps the even more difficult task is defining, or dictating, what bias is, and what we want algorithms to do. Theoretically, algorithms themselves could be designed to combat bias and discrimination, depending on how they are coded. For now, however, that design process is still the domain of very nonobjective human societies with very disparate values.
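To make that last point concrete, here is one minimal, hypothetical sketch of a design choice that pushes against bias: reweighting training examples so that no group dominates the fit. The function and data are invented for illustration, and real fairness-aware methods involve far more than this.

```python
# A minimal, hypothetical sketch of one anti-bias design choice: weight each
# training example inversely to its group's frequency, so every group
# contributes equally to the overall loss. Illustrative only; not a complete
# fairness solution.
from collections import Counter

def balanced_sample_weights(groups):
    """Return per-example weights that equalize each group's total weight."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

groups = ["A", "A", "A", "B"]           # hypothetical group labels
print(balanced_sample_weights(groups))  # approx. [0.67, 0.67, 0.67, 2.0]
```

Many learning libraries accept per-example weights of this kind, so the same idea can be applied without changing the underlying model.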

As a recent Harvard Business Review article stated:

Big data, sophisticated computer algorithms, and artificial intelligence are not inherently good or bad, but that doesn’t mean their effects on society are neutral. Their nature depends on how firms employ them, how markets are structured, and whether firms’ incentives are aligned with society’s interests.

 

Watch the panel video.