Kernel Methods for Representing Probabilities
Speaker
Arthur Gretton
Gatsby Computational Neuroscience Unit, University College London
Host
Suvrit Sra
MIT EECS
Abstract: I will provide an introduction to kernel tools developed to represent and compare probability distributions, and to generate samples with properties that match a reference sample. I'll begin by defining the Maximum Mean Discrepancy (MMD), which is a measure of similarity between kernel probability representations. The MMD may be defined both as a distance between features, and as an integral probability metric (similar to a Wasserstein distance). Since each kernel defines a distance measure, the kernel can be chosen to optimise performance on a task, such as hypothesis testing. The MMD may also be used as a critic in a generative adversarial network, where it provides a measure of similarity between the generated and reference samples: the generator attempts to reduce the MMD by generating more plausible samples, while the MMD is made more sensitive through adaptive feature/kernel choice. Time permitting, I'll discuss additional applications of kernel distribution representations, such as model criticism and measuring statistical dependence.
Bio: Arthur Gretton is a Professor with the Gatsby Computational Neuroscience Unit, UCL. He received degrees in physics and systems engineering from the Australian National University, and a PhD with Microsoft Research and the Signal Processing Laboratory at the University of Cambridge. He previously worked at the MPI for Biological Cybernetics and at the Machine Learning Department, Carnegie Mellon University.
Arthur's research interests include machine learning, kernel methods, statistical learning theory, nonparametric hypothesis testing, blind source separation, and nonparametric techniques for neural data analysis. He was an associate editor of IEEE Transactions on Pattern Analysis and Machine Intelligence from 2009 to 2013, and has been an Action Editor for JMLR since April 2013. He served on the NIPS Program Committee in 2008 and 2009, as a Senior Area Chair for NIPS in 2018, as an Area Chair for ICML in 2011 and 2012, and on the COLT Program Committee in 2013. Arthur was co-chair of AISTATS in 2016 (with Christian Robert), and co-tutorials chair of ICML in 2018 (with Ruslan Salakhutdinov).