Domain Adaptation: from Manifold Learning to Deep Learning

Speaker

Andreas Savakis
Rochester Institute of Technology

Host

Bolei Zhou
MIT

Abstract:
Domain Adaptation (DA) aims to adapt a classification engine from a training (source) dataset to a test (target) dataset. The goal is to remedy the loss in classification performance caused by dataset bias, i.e., the variations between the training and test datasets. This seminar presents an overview of domain adaptation methods, from manifold learning to deep learning. Popular DA methods on Grassmann manifolds include Geodesic Subspace Sampling (GSS) and the Geodesic Flow Kernel (GFK). Grassmann learning facilitates compact characterization of each domain by generating linear subspaces and representing them as points on the manifold. I will discuss robust versions of these methods that combine L1-PCA and Grassmann manifolds to improve DA performance across datasets.
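
A minimal sketch (not the speaker's implementation) of the Grassmann view of domains described above, assuming NumPy and SciPy are available: each domain's features are reduced to a d-dimensional PCA subspace, i.e. a point on the Grassmann manifold, and the geodesic distance between the source and target points is computed from their principal angles. The data matrices, the dimension d, and the function names are illustrative placeholders.

```python
import numpy as np
from scipy.linalg import subspace_angles

def pca_basis(X, d):
    """Orthonormal basis (D x d) of the top-d principal subspace of X (n x D)."""
    Xc = X - X.mean(axis=0)
    # Right singular vectors give the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T

def grassmann_distance(X_source, X_target, d=10):
    """Geodesic distance between the source and target PCA subspaces,
    viewed as points on the Grassmann manifold."""
    Ps = pca_basis(X_source, d)
    Pt = pca_basis(X_target, d)
    theta = subspace_angles(Ps, Pt)   # principal angles in [0, pi/2]
    return np.linalg.norm(theta)      # arc-length (geodesic) distance

# Toy example with synthetic features standing in for two domains.
rng = np.random.default_rng(0)
X_source = rng.normal(size=(500, 64))
X_target = X_source + rng.normal(scale=0.5, size=(500, 64))  # shifted domain
print(grassmann_distance(X_source, X_target, d=10))
```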

Deep domain adaptation has received significant attention recently. I will present a new domain adaptation approach for deep learning that utilizes Adaptive Batch Normalization to produce a common feature space between domains. Our method then performs label transfer based on subspace alignment and k-means clustering on the feature manifold, assigning each target cluster the label of the closest source cluster. The proposed manifold-guided label transfer method produces state-of-the-art results for deep adaptation on digit recognition datasets.
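
A hedged sketch of the two stages in the paragraph above, not the authors' implementation, with the subspace-alignment step omitted for brevity. It assumes a PyTorch model containing BatchNorm layers and a hypothetical model.features(x) that returns penultimate-layer features; the data loaders and n_classes are illustrative placeholders.

```python
import numpy as np
import torch
from sklearn.cluster import KMeans

@torch.no_grad()
def adapt_batch_norm(model, target_loader, device="cpu"):
    """Adaptive Batch Normalization: re-estimate BN running statistics on the
    target domain while keeping all learned weights fixed."""
    model.train()  # BN layers update running mean/var in train mode
    for m in model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()
            m.momentum = None  # cumulative average over all target batches
    for x, _ in target_loader:
        model(x.to(device))    # forward passes only; no gradient updates
    model.eval()

@torch.no_grad()
def transfer_labels(model, source_loader, target_features, n_classes, device="cpu"):
    """Cluster target features with k-means and assign each target cluster the
    label of the nearest source class centroid in feature space."""
    feats, labels = [], []
    for x, y in source_loader:
        feats.append(model.features(x.to(device)).cpu().numpy())
        labels.append(y.numpy())
    feats, labels = np.concatenate(feats), np.concatenate(labels)
    centroids = np.stack([feats[labels == c].mean(axis=0) for c in range(n_classes)])

    km = KMeans(n_clusters=n_classes, n_init=10).fit(target_features)
    # Map each target cluster to the closest source class centroid.
    dists = np.linalg.norm(km.cluster_centers_[:, None] - centroids[None], axis=2)
    cluster_to_label = dists.argmin(axis=1)
    return cluster_to_label[km.labels_]   # pseudo-label per target sample
```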

Bio:
Andreas Savakis is Professor of Computer Engineering at Rochester Institute of Technology (RIT) and director of the Real Time Vision and Image Processing Lab. He served as department head of Computer Engineering from 2000 to 2011. He received the B.S. (with Highest Honors) and M.S. degrees in Electrical Engineering from Old Dominion University in Virginia, and the Ph.D. in Electrical and Computer Engineering with a Mathematics minor from North Carolina State University in Raleigh, NC. He was a Senior Research Scientist with Kodak Research Labs before joining RIT. His research interests include domain adaptation, object tracking, expression and activity recognition, change detection, deep learning, and computer vision applications. Prof. Savakis has co-authored over 100 publications and holds 11 U.S. patents. He received the NYSTAR Technology Transfer Award for Economic Impact in 2006, the IEEE Region 1 Award for Outstanding Teaching in 2011, and the Best Paper Award at the International Symposium on Visual Computing (ISVC) in 2013. He is an Associate Editor of the IEEE Transactions on Circuits and Systems for Video Technology and the Journal of Electronic Imaging. He co-organized the first International Workshop on Extreme Imaging (http://extremeimaging.csail.mit.edu/) at ICCV 2015, and is a Guest Editor of the IEEE Transactions on Computational Imaging for a Special Issue on Extreme Imaging.