Quantifying Interpretability of Deep Learning in Visual Recognition

Speaker:
Bolei Zhou
CSAIL, MIT

Host:
David Bau
CSAIL, MIT

Abstract:
We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of deep convolutional neural networks (CNNs) by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a broad data set of visual concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are given labels across a range of objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that the interpretability of units is equivalent to that of random linear combinations of units, and then we apply our method to compare the latent representations of various networks trained on different supervised and self-supervised tasks. We further analyze the effect of training iterations, compare networks trained with different initializations, examine the impact of network depth and width, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power. The project page is at http://netdissect.csail.mit.edu.
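The alignment scoring the abstract describes can be illustrated with a minimal sketch: a unit's activation maps are thresholded at a fixed top quantile and the resulting binary mask is compared to a concept's pixel-level labels via intersection-over-union. The function name unit_concept_iou, the quantile value, and the toy data below are illustrative assumptions, not the authors' implementation; the full pipeline (activation collection, upsampling, the concept dataset, and the detector threshold) is documented on the project page.

```python
import numpy as np

def unit_concept_iou(activation_maps, concept_masks, quantile=0.995):
    """Score one hidden unit against one visual concept.

    activation_maps : (N, H, W) float array of the unit's activations,
                      assumed already upsampled to the mask resolution.
    concept_masks   : (N, H, W) boolean array of ground-truth concept pixels.
    quantile        : activations above this per-unit quantile are treated as
                      "on" (an assumed value for illustration).
    """
    # Threshold the unit's activations at a single quantile computed over
    # all spatial positions and all images in the probe set.
    threshold = np.quantile(activation_maps, quantile)
    unit_masks = activation_maps > threshold

    # Intersection-over-union between the unit's "on" region and the
    # concept's labeled pixels, accumulated over the whole dataset.
    intersection = np.logical_and(unit_masks, concept_masks).sum()
    union = np.logical_or(unit_masks, concept_masks).sum()
    return intersection / union if union > 0 else 0.0

if __name__ == "__main__":
    # Toy example with random data; in practice the activations would come
    # from an intermediate convolutional layer and the masks from a
    # densely labeled concept dataset.
    rng = np.random.default_rng(0)
    acts = rng.random((10, 112, 112))
    masks = rng.random((10, 112, 112)) > 0.9
    print("IoU score:", unit_concept_iou(acts, masks))
```

A unit would then be labeled with the concept that maximizes this score, provided the score exceeds a chosen cutoff; counting such labeled units per layer gives the interpretability measure compared across networks and training regimes in the talk.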

Biography:
Bolei Zhou is a 5th-year Ph.D. candidate in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, working with Prof. Antonio Torralba. His research is on computer vision and machine learning, with a particular interest in visual scene understanding and network interpretability. He is a recipient of the Facebook Fellowship, the Microsoft Research Asia Fellowship, and the MIT Greater China Fellowship. More details about his research are available on his homepage: http://people.csail.mit.edu/bzhou/.