Many modern Bayesian models involve infinitely many latent parameters. We seek to develop finite approximations that are more tractable in practice, and to characterize the error they incur.
Models from Bayesian nonparametrics capture data with unknown latent structure by positing infinitely many latent traits in the prior and letting the data determine how many have actually been observed. In practice, however, we cannot tractably learn, or even simulate, a model with infinitely many latent parameters. Modern posterior inference techniques, such as variational Bayesian inference, typically overcome this by using a finite approximation to the infinite model. However, multiple finite approximations often exist for a given model, and it is not known how their errors compare; nor is it yet known how to select the size of the approximation in a principled manner. In this project, we aim to develop, characterize, and compare finite approximations to (normalized) completely random measures, a class of priors in Bayesian nonparametrics that generalizes the popular Dirichlet, beta, and gamma processes. This work will provide researchers with a standardized methodology for generating and analyzing finite approximations in Bayesian nonparametrics, and will be useful in the development of automated inference techniques for probabilistic programming.
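To illustrate what a finite approximation to an infinite model looks like, consider the Dirichlet process, one of the normalized completely random measures mentioned above. Its weights admit the well-known stick-breaking construction, which can be truncated at a finite level K. The sketch below is only an illustrative example, not the methodology this project develops; the function name and the choice of truncation level are ours.

```python
import numpy as np

def truncated_stick_breaking(alpha, K, rng=None):
    """Sample the weights of a K-atom stick-breaking truncation of a
    Dirichlet process with concentration parameter alpha.

    The infinite construction draws v_k ~ Beta(1, alpha) and sets
    w_k = v_k * prod_{j<k} (1 - v_j).  Truncating at K atoms and
    forcing v_K = 1 yields finitely many weights that sum to one.
    """
    rng = np.random.default_rng(rng)
    v = rng.beta(1.0, alpha, size=K)
    v[-1] = 1.0  # absorb all remaining stick mass into the last atom
    # prod_{j<k} (1 - v_j): the stick length remaining before break k
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

weights = truncated_stick_breaking(alpha=2.0, K=50, rng=0)
```

Larger concentration alpha spreads mass over more atoms, so a larger K is needed before the truncation error becomes negligible; quantifying this trade-off for general (normalized) completely random measures is exactly the kind of question this project addresses.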