Krzysztof Gajos - Human Cognitive Engagement in Human-AI Collaboration on Decision-Making Tasks

Speaker:
Krzysztof Gajos, Harvard University

Host:
Arvind Satyanarayan, MIT CSAIL

Abstract:
People supported by AI-powered decision support tools frequently overrely on the AI: they accept an AI's suggestion even when that suggestion is wrong. Consequently, most human-AI teams make poorer decisions on average than the AI systems on their own. Our research suggests that people rarely engage analytically with each individual AI recommendation and explanation; instead, they appear to develop general heuristics about whether and when to follow the AI's suggestions. In one study, we showed that interventions applied at decision time to disrupt heuristic reasoning can reduce (but not entirely eliminate) human overreliance on the AI. In a follow-up study that measured incidental learning during human-AI collaboration, we found even more direct evidence that people do not engage cognitively with AI recommendations and explanations. However, our participants made high-quality decisions and demonstrated evidence of cognitive engagement when the human-AI interaction was redesigned so that the AI presented information to support a decision but offered no explicit recommendation. Lastly, our research also points to two shortcomings in how our research community is pursuing the explainable AI research agenda. First, the commonly used evaluation methods rely on proxy tasks that artificially focus people’s attention on the AI models, leading to misleading (overly optimistic) results. Second, by insufficiently examining the sociotechnical contexts, we may be solving problems that are technically the most obvious but not the most valuable to the key stakeholders, which limits their motivation to engage. In short, our research provides several converging strands of evidence that the lack of human cognitive engagement with AI-generated recommendations and explanations is a major factor limiting the effectiveness of AI-powered decision support tools, and it suggests two broad ways of redesigning human-AI interactions to address this challenge.

Bio:
Krzysztof Gajos is a Gordon McKay Professor of Computer Science at the Harvard Paulson School of Engineering and Applied Sciences. Krzysztof’s current interests include (1) principles and applications of intelligent interactive systems; (2) tools and methods for behavioral research at scale (e.g., LabintheWild.org); and (3) design for equity and social justice. He has also made contributions in the areas of accessible computing, creativity support tools, social computing, and health informatics.

Prior to arriving at Harvard, Krzysztof was a postdoctoral researcher at Microsoft Research. He received his Ph.D. from the University of Washington and his M.Eng. and B.Sc. degrees from MIT. From 2013 to 2016, Krzysztof was a co-editor-in-chief of the ACM Transactions on Interactive Intelligent Systems (ACM TiiS); he was the general chair of ACM UIST 2017, and he is currently a program co-chair of the 2022 ACM Conference on Intelligent User Interfaces. His work was recognized with best paper awards at ACM CHI, ACM COMPASS, and ACM IUI. In 2019, he received the Most Impactful Paper Award at ACM IUI for his work on automatically generating personalized user interfaces.

For those who might wish to attend remotely, the seminar will also be on Zoom: https://mit.zoom.us/j/98602138341.

This is a Tim Tickets event. Please bring your MIT ID or scan the QR code on your phone: https://tim-tickets.atlas-apps.mit.edu/FjeDKqRhFixxiys2A.