Understanding Algorithm Aversion: When Do People Abandon AIs When They See Them Err?

June 14, 2022

Abstract

Technological advancements have given consumers the option to choose between services provided by artificial intelligence (AI) and those provided by human beings. Previous research (Dietvorst et al. 2015) has shown that consumers display "algorithm aversion" when they see an AI err. We explore the boundaries of this effect and how it compares with reactions to human-generated errors. In two experiments examining preference for an AI or a human forecaster in a statistical prediction task, we find that the degree of algorithm aversion varies with the size of the error made by the AI or the human forecaster: algorithm aversion disappears as the AI's error becomes smaller or as the human forecaster's error becomes larger. We further find that algorithm aversion is triggered not when an AI performs worse than one's expectation of it, but when the AI's error exceeds the error one expects from a human forecaster. A textual analysis uncovers why subjects prefer an AI or a human forecaster, identifying both individual-level heterogeneity in algorithm aversion and the role of contextual cues. Overall, our findings suggest that firms should continue to improve their AI, as consumers tolerate an AI's imperfections and appreciate its superior performance.