This project aims to uncover theoretical properties and new applications of perturbation models, a family of probability distributions for high-dimensional structured prediction problems.
Structured prediction problems have been central to machine learning applications in computer vision, natural language and speech processing, bioinformatics, and many other fields. Solving such problems typically involves sampling plausible structured outputs, so a key direction is building models that are easy to sample from. We focus on probability distributions induced by mapping samples from simple distributions into structured configurations, a family that includes perturbation models, variational auto-encoders, and generative adversarial networks. In perturbation models, the mapping is defined via a combinatorial optimization problem that encodes the structural constraints the solution must satisfy. Our goals include a better understanding of the induced distributions and the development of new algorithms and applications for structured output problems and for reinforcement learning with structured action spaces.
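As a minimal illustration of the perturbation-model idea, the Gumbel-max trick is its simplest, unstructured instance: perturb each candidate's score with independent Gumbel noise and return the maximizer, which yields an exact sample from the softmax distribution over the scores. The sketch below (function names and the toy score vector are illustrative, not from this project) replaces a combinatorial solver with a plain argmax over a small finite set; in a genuine perturbation model the argmax would be computed by a structured optimizer such as a shortest-path or matching solver.

```python
import math
import random
from collections import Counter

def gumbel_max_sample(theta, rng):
    """Draw one sample by perturb-and-maximize over a finite score vector.

    theta: unnormalized log-probabilities (scores) of each configuration.
    In a structured setting, the argmax would be a combinatorial solver.
    """
    # Gumbel(0, 1) noise via inverse transform: -log(-log(U)), U ~ Uniform(0, 1).
    perturbed = [t - math.log(-math.log(rng.random())) for t in theta]
    return max(range(len(theta)), key=lambda i: perturbed[i])

rng = random.Random(0)
theta = [1.0, 2.0, 0.5]  # illustrative scores for three configurations
counts = Counter(gumbel_max_sample(theta, rng) for _ in range(20_000))
# The empirical frequency of index i approaches exp(theta[i]) / sum_j exp(theta[j]).
```

The key point for perturbation models is that sampling reduces to repeated optimization: the randomness lives entirely in the noise, so any efficient maximizer over the structured space immediately gives an efficient sampler.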