Human-First AI

How do we understand human-AI interaction to shape systems that place humans first, encourage optimal decision-making, and prioritize responsible innovation? As organizations undergo rapid digital transformation, the effective adoption of AI by employees, partners, and customers becomes essential to competitive advantage. It is also critical that we place the welfare of humans first in order to maximize the benefits of human-AI collaboration and reduce harm. We believe that using artificial intelligence in a way that is human-centered, rather than exploitative, will be a true strategic advantage. This group examines human-AI interaction through a behavioral science lens, focusing on the symbiosis between humans and AI and how the balance varies by task. We examine when humans trust algorithms, how to prevent bias in algorithms, and how to develop mutual learning between humans and algorithms for optimal decision-making. Using experiments and mixed-method approaches, the Human-First AI Group will proffer strategic, policy, and behavioral solutions. Renée Richardson Gosline leads this Research Group.