Deep learning with multiplicative interactions
Speaker: Geoffrey Hinton, University of Toronto and Canadian Institute for Advanced Research
Date: April 20, 2010
Time: 4:00 PM to 5:00 PM
Location: Singleton Auditorium, MIT 46-3002
Host: Prof. Tomaso A. Poggio, McGovern Inst., BCS Dept. & CSAIL
Contact: Kathleen D. Sullivan, 617-253-0551, email@example.com
Relevant URL: http://isquared.mit.edu/
Abstract: Deep networks can be learned efficiently from unlabeled data. The layers of representation are learned one at a time using a simple learning module that has only one layer of latent variables. The values of the latent variables of one module form the data for training the next module. Although deep networks have been quite successful for tasks such as object recognition, information retrieval, and modeling motion capture data, the simple learning modules lack multiplicative interactions, which are very useful for some types of data.
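The layer-wise procedure described above can be sketched in a few lines. This is a minimal illustration, not the speaker's implementation: it assumes the "simple learning module" is a binary restricted Boltzmann machine trained with one step of contrastive divergence (CD-1), the standard choice for this kind of stacking; all sizes and hyperparameters are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=5, lr=0.05):
    """Train one RBM module with CD-1; return its weights and a
    function that maps visible vectors to hidden probabilities."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_vis = np.zeros(n_visible)
    b_hid = np.zeros(n_hidden)
    for _ in range(epochs):
        # Positive phase: hidden probabilities given the data.
        h_pos = sigmoid(data @ W + b_hid)
        h_sample = (rng.random(h_pos.shape) < h_pos).astype(float)
        # Negative phase: one Gibbs step (reconstruct, then re-infer).
        v_neg = sigmoid(h_sample @ W.T + b_vis)
        h_neg = sigmoid(v_neg @ W + b_hid)
        # Contrastive-divergence updates.
        W += lr * (data.T @ h_pos - v_neg.T @ h_neg) / len(data)
        b_vis += lr * (data - v_neg).mean(axis=0)
        b_hid += lr * (h_pos - h_neg).mean(axis=0)
    return W, lambda v: sigmoid(v @ W + b_hid)

# Greedy layer-wise stacking: the latent values of one module
# form the training data for the next module.
data = rng.random((100, 20))             # toy unlabeled data
W1, features1 = train_rbm(data, n_hidden=10)
layer1_out = features1(data)             # latent variables of module 1
W2, features2 = train_rbm(layer1_out, n_hidden=5)
deep_codes = features2(layer1_out)       # representation from module 2
```

Note the absence of labels anywhere in the loop: each module is trained purely to model its input, which is what lets deep networks be built from unlabeled data.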
The talk will show how to introduce multiplicative interactions into the basic learning module in a way that preserves the simple rules for learning and perceptual inference. The new module has a structure that is very similar to the simple cell/complex cell hierarchy that is found in visual cortex. The multiplicative interactions are useful for modeling images, image transformations and different styles of human walking. They can also be used to create generative models of spectrograms. The features learned by these generative models are excellent for phone recognition. This is joint work with Marc'Aurelio Ranzato, Graham Taylor, Roland Memisevic and George Dahl.
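The multiplicative interactions mentioned above can be sketched with a factored three-way ("gated") connection, in which hidden units modulate the relation between an input and an output vector. This is a hypothetical illustration in the spirit of the gated models developed with Memisevic and Taylor, not the exact model from the talk; the weight names `Wxf`, `Wyf`, `Whf` and all sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n_x, n_y, n_h, n_f = 16, 16, 8, 12   # input, output, hidden, factor sizes
Wxf = 0.1 * rng.standard_normal((n_x, n_f))
Wyf = 0.1 * rng.standard_normal((n_y, n_f))
Whf = 0.1 * rng.standard_normal((n_h, n_f))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden_given_pair(x, y):
    """Each factor multiplies a projection of x with a projection of y;
    the elementwise products then drive the hidden units. The product is
    the multiplicative interaction: the effect of x on the hiddens is
    gated by y, and vice versa."""
    factor_products = (x @ Wxf) * (y @ Wyf)   # one product per factor
    return sigmoid(factor_products @ Whf.T)

x = rng.random(n_x)            # e.g. an image patch
y = rng.random(n_y)            # e.g. the transformed patch
h = hidden_given_pair(x, y)    # hiddens encode the transformation
```

The factored form keeps learning simple: each of the three weight matrices sees an ordinary outer-product-style gradient, while the filter-then-multiply-then-pool structure resembles the simple cell/complex cell hierarchy mentioned in the abstract.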
This event is part of the Brains and Machines Seminar Series 2009/2010.