Structured prediction problems are central to machine learning applications in computer vision, natural language and speech processing, bioinformatics, and many other fields. Solving such problems typically involves sampling plausible structured outputs, so a key direction is building models that are easy to sample from. We focus on probability distributions induced by mapping samples from simple distributions to structured configurations; such models include perturbation models, variational auto-encoders, and generative adversarial networks. In perturbation models, the mapping is defined by a combinatorial optimization problem that encodes the structural constraints the solution must satisfy. Our goals include better understanding the induced distributions and developing new algorithms and applications for structured output problems and for reinforcement learning with structured action spaces.
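To give a flavor of the perturbation-model idea, here is a minimal sketch of the Gumbel-max trick, the simplest instance of perturb-and-MAP: random noise is added to the scores, and the "combinatorial optimization" degenerates to an argmax over a handful of categories. This is an illustrative toy, not the group's actual models; the function name and the score values are our own choices for the example.

```python
import numpy as np

def perturb_and_map(scores, rng, n_samples=10000):
    """Sample outputs by adding i.i.d. Gumbel noise to the scores and
    returning the argmax (the MAP configuration) of each perturbed copy.
    For unstructured categories this reproduces softmax sampling exactly."""
    gumbel = rng.gumbel(size=(n_samples, scores.shape[0]))
    return np.argmax(scores + gumbel, axis=1)

rng = np.random.default_rng(0)
# Scores chosen as log-probabilities, so the induced distribution
# should match [0.5, 0.3, 0.2] (hypothetical example values).
scores = np.log(np.array([0.5, 0.3, 0.2]))
samples = perturb_and_map(scores, rng)
freqs = np.bincount(samples, minlength=3) / len(samples)
```

In a genuine perturbation model the argmax is replaced by a structured MAP solver (e.g. over matchings, trees, or segmentations), and the perturbation enters through the potentials of that combinatorial problem.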
If you would like to contact us about our work, please scroll down to the People section and visit one of the group leads' pages, where you can reach out to them directly.