Statistical inference is traditionally divided into two schools: Bayesian and frequentist. The Bayesian school seeks to estimate the full distribution of an unknown quantity (the posterior) and often relies on sampling-based algorithms such as Markov chain Monte Carlo; the frequentist school seeks to estimate a single "best" value of the unknown quantity and often relies on optimization algorithms. In general, Bayesian inference captures uncertainty more systematically than frequentist inference, yet in a big-data regime it often lags behind in speed. In this project, we apply optimization algorithms to achieve scalable and accurate Bayesian inference. For example, in recent work we developed a framework called "boosting variational inference", which iteratively refines the posterior estimate with optimization techniques, so that one enjoys both the speed of the frequentist side and the full treatment of uncertainty of the Bayesian side. Moreover, it enables a trade-off between statistical accuracy and computational time: the algorithm produces a better estimate the longer it runs.
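To give a flavor of the idea, here is a minimal one-dimensional sketch (not the actual algorithm from our paper): a greedy boosting loop that grows a Gaussian mixture approximation to a bimodal target posterior, where each new component and its mixture weight are chosen by numerically minimizing the KL divergence on a grid. The target density, grid, and optimizer settings are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Illustrative bimodal "posterior" on a 1-D grid (an assumption for this sketch).
xs = np.linspace(-10, 10, 2001)
dx = xs[1] - xs[0]
p = 0.5 * norm.pdf(xs, -2.5, 0.8) + 0.5 * norm.pdf(xs, 2.5, 0.8)

def kl(q):
    """Numerical KL(q || p) on the grid; mask avoids log(0) where q vanishes."""
    mask = q > 1e-12
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])) * dx)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Step 0: best single Gaussian.  KL(q || p) is mode-seeking, so this
# covers only one mode of the target, leaving a large divergence.
res = min((minimize(lambda t: kl(norm.pdf(xs, t[0], np.exp(t[1]))),
                    np.array([m, 0.0]), method="Nelder-Mead")
           for m in (-3.0, 0.0, 3.0)), key=lambda r: r.fun)
q = norm.pdf(xs, res.x[0], np.exp(res.x[1]))
kls = [kl(q)]

# Boosting steps: jointly optimize the new component's mean, log-scale,
# and mixture weight (via a sigmoid so it stays in (0, 1)).
for _ in range(2):
    q_old = q

    def objective(t):
        gamma = sigmoid(t[2])
        new = norm.pdf(xs, t[0], np.exp(t[1]))
        return kl((1.0 - gamma) * q_old + gamma * new)

    res = min((minimize(objective, np.array([m, 0.0, 0.0]), method="Nelder-Mead")
               for m in (-3.0, 0.0, 3.0)), key=lambda r: r.fun)
    gamma = sigmoid(res.x[2])
    q = (1.0 - gamma) * q_old + gamma * norm.pdf(xs, res.x[0], np.exp(res.x[1]))
    kls.append(kl(q))

print("KL after each boosting step:", [round(k, 4) for k in kls])
```

Running the sketch shows the accuracy/time trade-off in miniature: the first component misses one mode, and each additional optimization round drives the KL divergence down, so spending more computation yields a better posterior estimate.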
If you would like to contact us about our work, please scroll down to the People section and reach out to one of the group leads directly through their individual pages.