Interpretable Representation Learning for Visual Intelligence

Speaker: Bolei Zhou

Host: Antonio Torralba

Recent progress in deep neural networks for computer vision and machine learning has enabled transformative applications across robotics, healthcare, and security. Despite their strong performance, however, it remains challenging to understand the inner workings of these networks and to explain their output predictions. My thesis work has pioneered several approaches for opening up the “black box” of neural networks used in vision tasks. In this talk, I will first show that objects and other meaningful concepts emerge as a consequence of recognizing scenes. I will then introduce a network dissection approach that automatically identifies these emergent concepts and quantifies their interpretability. Next, I will describe an approach that efficiently explains the output prediction for any given image, shedding light on the decision-making process of the networks and why they succeed or fail. Finally, I will discuss ongoing efforts toward learning efficient and interpretable deep representations for video event understanding, with applications in robotics and medical image analysis.
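
As background for the network dissection step mentioned above, here is a minimal sketch of the core scoring idea, assuming the standard recipe of thresholding a unit's activation map and comparing it to a labeled concept mask by intersection-over-union (IoU). The function name, quantile level, and toy arrays are illustrative assumptions, not the talk's actual implementation.

```python
import numpy as np

def dissect_unit(activation, concept_mask, quantile=0.995):
    """Score how well one convolutional unit aligns with one visual concept.

    activation:   2-D float array, the unit's activation map (upsampled
                  to image resolution).
    concept_mask: 2-D bool array, ground-truth segmentation of a concept
                  (e.g. "lamp") in the same image.
    Returns the IoU between the unit's top-activating region and the
    concept mask; a high IoU suggests the unit acts as a concept detector.
    """
    # Per-unit cutoff; the full method computes this over a whole dataset,
    # not a single image (simplified here for illustration).
    threshold = np.quantile(activation, quantile)
    unit_mask = activation >= threshold             # top-activating pixels
    intersection = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return intersection / union if union > 0 else 0.0

# Toy example: a unit that fires on the top-left corner of the image.
activation = np.zeros((8, 8)); activation[:2, :2] = 1.0
concept_mask = np.zeros((8, 8), dtype=bool); concept_mask[:2, :3] = True
print(f"IoU = {dissect_unit(activation, concept_mask):.2f}")
```

Repeating this score for every unit against a broad dictionary of labeled concepts yields the automatic identification and interpretability quantification described in the abstract.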
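The abstract does not name the prediction-explanation method, so the following is only an assumption: a class-activation-mapping (CAM) style heat map, which weights the final convolutional feature maps by the classifier weights of the predicted class. The helper name and array shapes are hypothetical.

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """CAM-style localization of the evidence for one predicted class.

    features:   (C, H, W) activations of the last conv layer for one image.
    fc_weights: (num_classes, C) weights of the final linear classifier,
                applied after global average pooling of `features`.
    Returns an (H, W) heat map of the regions that contributed most to
    the score of class `class_idx`.
    """
    w = fc_weights[class_idx]                         # (C,) class weights
    cam = np.tensordot(w, features, axes=([0], [0]))  # weighted channel sum
    cam -= cam.min()                                  # rescale to [0, 1]
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example with random features and weights (hypothetical shapes).
rng = np.random.default_rng(0)
features = rng.random((512, 7, 7))    # e.g. last conv layer activations
fc_weights = rng.random((1000, 512))  # e.g. a 1000-way linear classifier
heatmap = class_activation_map(features, fc_weights, class_idx=42)
print(heatmap.shape)  # (7, 7); upsample to image size to overlay
```

Because the heat map is a single weighted sum of activations already computed in the forward pass, this style of explanation is cheap enough to run on any given image, which matches the efficiency claim in the abstract.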

Committee: Antonio Torralba, Aude Oliva, Bill Freeman