HCI Seminar - Michael Terry - Interactive Alignment and the Design of Interactive AI
Speaker: Michael Terry, Google
Host: Arvind Satyanarayan, CSAIL, MIT
Abstract:
AI alignment considers the overall problem of ensuring an AI produces desired outcomes, without undesirable side effects. These goals are not dissimilar to those of researchers developing novel interfaces to the latest AI (e.g., LLMs and generative AI systems). What points of intersection exist between these communities? In this talk, I'll map the concepts of AI alignment onto a basic interaction cycle to highlight three potential focus areas in the design of interactive AI: 1) specification alignment (ensuring users can efficiently and reliably communicate objectives to an AI and verify their correct interpretation), 2) process alignment (supporting users in verifying and optionally controlling the AI’s execution process), and 3) evaluation support (ensuring the user can verify and understand the AI's output). I'll use a set of case studies to illustrate the descriptive and prescriptive value of these concepts, and draw implications for future research.
Bio:
Michael Terry is a Research Scientist at Google, where he co-leads the People and AI Research (PAIR) group. Prior to Google, Michael was an Associate Professor at the University of Waterloo where he co-founded the HCI Research Lab with Ed Lank. His current research focuses on designing new tools and interfaces to AI, which has led to external offerings such as Google's MakerSuite prompt programming environment.
The talk will also be streamed over Zoom: https://mit.zoom.us/j/99976311141.