Artificial Intelligence technologies are increasingly used to aid human decisions and perform autonomous tasks in critical domains. The need to understand AI systems in order to improve them, contest their outputs, develop appropriate trust, and interact with them more effectively has spurred great academic and public interest in Explainable AI (XAI). The technical field of XAI has produced a vast collection of algorithms in recent years. However, explainability is an inherently human-centric property, and the field is starting to embrace human-centered approaches. Human-computer interaction (HCI) research and user experience (UX) design in this area are increasingly important, especially as practitioners begin to leverage XAI algorithms to build XAI applications. In this talk, I will draw on my own research and the broader HCI literature to highlight the central role that human-centered approaches should play in shaping XAI technologies, including driving technical choices by understanding users’ explainability needs, uncovering pitfalls of existing XAI methods, and providing conceptual frameworks for human-compatible XAI.
Q. Vera Liao is a Principal Researcher at Microsoft Research Montréal, where she is part of the FATE (Fairness, Accountability, Transparency, and Ethics in AI) group. Her current research interests are in human-AI interaction, explainable AI, and responsible AI. Prior to joining MSR, she worked at IBM Research and studied at the University of Illinois at Urbana-Champaign and Tsinghua University. Her research has received multiple paper awards at ACM and AAAI venues. She currently serves as Co-Editor-in-Chief of the Springer HCI Book Series, on the Editorial Board of ACM Transactions on Interactive Intelligent Systems (TiiS), as an Editor for CSCW, and as an Area Chair for FAccT 2023.
The talk will also be streamed over Zoom: https://mit.zoom.us/j/96445121768.