Developing techniques to allow self-driving cars and other AI-driven systems to explain behaviors and failures.

When humans and autonomous systems share control of a vehicle, there will be some explaining to do. When the autonomous system takes over suddenly, the driver will ask why. When an accident happens in a car that is co-driven by a person and a machine, police officials, insurance companies, and the people who are harmed will want to know who or what is accountable. Control systems in the vehicle should be able to give an accurate, unambiguous accounting of the events.

Explanations will have to be simple enough for users to understand, even when those users are subject to cognitive distractions. At the same time, given the need for legal accountability and technical integrity, these systems will have to support their basic explanations with rigorous and reliable detail. In the case of hybrid human-machine systems, we will want to know how the human and mechanical parts each contributed to final results such as accidents or other unwanted behaviors. The ability to provide coherent explanations of complex behavior is also important in the design and debugging of such systems, and it is essential to give us all confidence in the competence and integrity of our automatic helpers. Moreover, the mechanisms for merging measurements with qualitative models can enable more sophisticated control strategies than are currently feasible.

Our research explores the development of methodology and supporting technology for combining qualitative and semi-quantitative models with measured data to produce concise, understandable symbolic explanations of actions taken by a system built out of many parts (including the human operator).
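To make the idea concrete, here is a minimal illustrative sketch (not the project's actual system) of one way quantitative sensor measurements might be abstracted into qualitative symbols, which rule-based monitors then use to justify a control action with a concise symbolic explanation. All thresholds, symbol names, and rules here are hypothetical.

```python
def qualify(measurements):
    """Map raw quantitative measurements to qualitative symbols.

    The thresholds below are invented for illustration only.
    """
    symbols = set()
    if measurements["gap_m"] < 10:            # distance to lead vehicle
        symbols.add("lead-vehicle-close")
    if measurements["closing_speed_mps"] > 5:  # relative approach speed
        symbols.add("closing-fast")
    if measurements["driver_torque_nm"] < 0.1:  # steering-wheel input
        symbols.add("driver-hands-off")
    return symbols

# Each rule: (qualitative conditions, action taken, human-readable explanation).
RULES = [
    ({"lead-vehicle-close", "closing-fast"}, "emergency-brake",
     "Braked automatically because the lead vehicle was close and closing fast."),
    ({"driver-hands-off"}, "alert-driver",
     "Alerted the driver because no steering input was detected."),
]

def explain(measurements):
    """Return (action, explanation) pairs for every rule whose
    qualitative conditions hold in the current situation."""
    symbols = qualify(measurements)
    return [(action, why) for cond, action, why in RULES if cond <= symbols]

# Example: a close, fast-approaching lead vehicle with an inattentive driver.
for action, why in explain({"gap_m": 7.5, "closing_speed_mps": 6.0,
                            "driver_torque_nm": 0.0}):
    print(action, "->", why)
```

The key design point in this sketch is that the explanation is generated from the same qualitative conditions that triggered the action, so the account of "why" cannot drift out of sync with the control decision itself.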

Publications

L.H. Gilpin, D. Bau, B.Z. Yuan, A. Bajwa, M. Specter, and L. Kagal. Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning. The 5th IEEE International Conference on Data Science and Advanced Analytics (DSAA 2018). [To appear in proceedings].

L.H. Gilpin, J.C. Macbeth, and E. Florentine. Monitoring Scene Understanders with Conceptual Primitive Decomposition and Commonsense Knowledge. The Sixth Annual Conference on Advances in Cognitive Systems (ACS 2018). [To appear in proceedings].

L.H. Gilpin, C. Zaman, D. Olson, and B.Z. Yuan. Simulating Human Explanations of Visual Scene Understanding. Human Robot Interaction (HRI) 2018.

L.H. Gilpin. Reasonableness Monitors. The 23rd AAAI/SIGAI Doctoral Consortium (DC) at AAAI-18.

L.H. Gilpin and B. Yuan. Getting Up to Speed on Vehicle Intelligence. Proceedings of the AAAI Spring Symposium Series, 2017.
