The Algorithm Let Them Do It; The Algorithm Helped Them Do It?

Despite algorithmic aversion, AI input diffuses users' perceived responsibility for racial bias

June 14, 2022

AI is increasingly involved in high-stakes decision-making, particularly in efforts to combat human bias, because AI is perceived as more neutral and objective. Yet recent research has demonstrated the potential for embedded bias in AI and the risk of amplifying human bias. One second-order effect of human-AI cooperation in decision-making is the complexity it creates around accountability. We raise the question: in the case of an untoward outcome, does AI affect perceptions of accountability and deflect "moral culpability"? Recent research has raised the specter of "moral crumple zones," questioning who is responsible for accidents when agency is distributed in a system and control over an action is mediated through robots (Elish 2019).

We take an empirical approach to this question to understand how perceptions of culpability in high-stakes decisions change when artificially intelligent input sources become collaborative partners. Across several scenarios (courts, hiring, admissions, healthcare), we test whether including an AI system in a decision-making setting, where the result suggests a biased decision, leads participants to attribute less responsibility to the human decision-maker. Our findings suggest that, although participants expect lower algorithmic accuracy in the consequential domains studied, AI input lowers the perceived responsibility of a biased human decision-maker to a level commensurate with that of a human advisor. This suggests that, as AI is incorporated into a growing number of decisions, algorithmic input may inadvertently lower perceived moral culpability among decision-makers.