Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates

Speaker

Louis Sharrock
Lancaster University, UK

Host

Justin Solomon
MIT CSAIL

Abstract: In recent years, particle-based variational inference (ParVI) methods such as Stein variational gradient descent (SVGD) have grown in popularity as scalable methods for Bayesian inference. Unfortunately, the properties of such methods invariably depend on hyperparameters such as the learning rate, which must be carefully tuned by the practitioner to ensure convergence to the target measure at a suitable rate. In this work, we introduce a suite of new particle-based methods for scalable Bayesian inference based on coin betting, which are entirely learning-rate-free. We illustrate the performance of our approach on a range of numerical examples, including several high-dimensional models and datasets, demonstrating performance comparable to other ParVI algorithms, with no need to tune a learning rate.
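To make the coin-betting idea concrete, below is a minimal NumPy sketch of an SVGD-style update driven by a COCOB-style coin bettor (Orabona and Tommasi, 2017) in place of a learning rate. The RBF kernel, the fixed bandwidth, the particular betting rule, and the toy Gaussian target are all illustrative assumptions, not the exact algorithm presented in the talk:

import numpy as np

def rbf_kernel(X, h=1.0):
    """RBF kernel matrix K[j, i] = k(x_j, x_i) and its gradients w.r.t. x_j."""
    diffs = X[:, None, :] - X[None, :, :]              # diffs[j, i] = x_j - x_i
    K = np.exp(-np.sum(diffs ** 2, axis=-1) / (2 * h ** 2))
    grad_K = -diffs / h ** 2 * K[:, :, None]           # grad_{x_j} k(x_j, x_i)
    return K, grad_K

def svgd_direction(X, grad_logp):
    """Standard SVGD drift: a score-following term plus a kernel repulsion term."""
    K, grad_K = rbf_kernel(X)
    # phi(x_i) = (1/N) sum_j [ k(x_j, x_i) grad log pi(x_j) + grad_{x_j} k(x_j, x_i) ]
    return (K @ grad_logp(X) + grad_K.sum(axis=0)) / X.shape[0]

def coin_svgd(grad_logp, X0, n_iter=500):
    """SVGD driven by per-particle coin betting instead of a learning rate."""
    X = X0.copy()
    N = X0.shape[0]
    theta = np.zeros_like(X0)      # running sum of drifts c_1 + ... + c_t
    G = np.zeros(N)                # running sum of drift norms
    L = np.full(N, 1e-8)           # running max drift norm (guards division by zero)
    reward = np.zeros(N)           # accumulated betting reward, floored at zero
    for _ in range(n_iter):
        c = svgd_direction(X, grad_logp)
        c_norm = np.linalg.norm(c, axis=-1)
        L = np.maximum(L, c_norm)
        G = G + c_norm
        reward = np.maximum(reward + np.sum(c * (X - X0), axis=-1), 0.0)
        theta = theta + c
        # Bet a fraction of the wealth (L + reward) along the summed past drift.
        X = X0 + theta / (L * (G + L))[:, None] * (L + reward)[:, None]
    return X

# Toy check: a 2D standard Gaussian target, whose score is grad log pi(x) = -x.
rng = np.random.default_rng(0)
X0 = rng.normal(loc=3.0, scale=0.5, size=(100, 2))     # particles start off-target
X = coin_svgd(lambda X: -X, X0)
print(X.mean(axis=0), X.std(axis=0))                   # roughly [0, 0] and [1, 1]

Note that no step size appears anywhere: each particle bets a fraction of its accumulated "wealth" along the running sum of past drifts, so the effective step size adapts automatically as the wealth grows or shrinks.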

Bio: Louis Sharrock is a Senior Research Associate in Statistical Machine Learning at Lancaster University, UK. Prior to this, he obtained a PhD in Statistics at Imperial College London and an MA in Mathematics at the University of Cambridge. His research interests include computational statistics, machine learning, and optimisation, with a particular focus on the development and analysis of scalable methods for inference in complex statistical models.