Our goal is to enable scalable and accurate Bayesian inference for rich probabilistic models by applying optimization techniques. We also aim to understand the connections between the two main approaches to statistical inference: Bayesian and frequentist.

Statistical inference is traditionally divided into two schools: Bayesian and frequentist. The Bayesian school seeks to estimate the distribution of an unknown quantity (i.e., the posterior) and often relies on sampling-based algorithms (e.g., Markov chain Monte Carlo); the frequentist school seeks to estimate a single "best" value of an unknown quantity and often relies on optimization algorithms. In general, Bayesian inference captures uncertainty more systematically than its frequentist counterpart, yet in the big-data regime it often lags behind in speed.

In this project, we apply optimization algorithms to achieve scalable and accurate Bayesian inference. For example, in recent work we developed a framework called "boosting variational inference", which iteratively refines the posterior estimate with optimization techniques, so that one enjoys both the speed of the frequentist side and the full treatment of uncertainty of the Bayesian side. Moreover, it enables a trade-off between statistical accuracy and computational time: the algorithm produces a better estimate when given more running time.
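To convey the flavor of the iterative-refinement idea, here is a toy sketch that greedily adds Gaussian components, one per boosting round, to approximate a bimodal target "posterior" on a 1-D grid. This is not the actual boosting variational inference algorithm: the published framework optimizes a KL-based variational objective, whereas this sketch substitutes a simple least-squares fit on a grid so that the example stays self-contained; the target density, candidate grids, and round count are all illustrative.

```python
import numpy as np

# Toy target "posterior": an (unnormalized) bimodal density on a 1-D grid.
xs = np.linspace(-6.0, 6.0, 400)
dx = xs[1] - xs[0]
target = 0.6 * np.exp(-0.5 * (xs - 2.0) ** 2) \
       + 0.4 * np.exp(-0.5 * (xs + 2.0) ** 2 / 0.5)
target /= (target * dx).sum()  # normalize to integrate to ~1 on the grid

def gaussian(xs, mu, sigma):
    """Normalized Gaussian density evaluated on the grid."""
    return np.exp(-0.5 * ((xs - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

approx = np.zeros_like(xs)  # current mixture approximation (starts empty)
components = []
for t in range(5):  # one new mixture component per boosting round
    # Greedy step: search a small candidate grid for the component
    # (mu, sigma) and mixing weight w that most reduce the squared error
    # between the refined mixture and the target.
    best = None
    for mu in np.linspace(-4.0, 4.0, 33):
        for sigma in (0.5, 0.8, 1.2):
            g = gaussian(xs, mu, sigma)
            for w in np.linspace(0.05, 1.0, 20):
                cand = (1.0 - w) * approx + w * g
                err = ((target - cand) ** 2 * dx).sum()
                if best is None or err < best[0]:
                    best = (err, mu, sigma, w)
    _, mu, sigma, w = best
    approx = (1.0 - w) * approx + w * gaussian(xs, mu, sigma)
    components.append((mu, sigma, w))

l1_error = (np.abs(target - approx) * dx).sum()
print(f"components: {len(components)}, L1 error: {l1_error:.3f}")
```

Each round keeps the previous mixture and blends in one new component, so the approximation can only become richer; running more rounds spends more computation to tighten the fit, which mirrors the accuracy-versus-time trade-off described above.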