Navigating Generative Vector Fields: Principled Inference for High-Dimensional Inverse Problems
While deep learning has achieved high-fidelity representations of complex domains, our ability to control these models to solve inverse problems remains limited. We propose a unified framework to fill this "inference gap", framing generative modeling as a navigation problem over high-dimensional vector fields governed by stochastic differential equations. By shifting the focus from learning the prior model to the principled steering of its flows, we develop inference machinery for three distinct classes of inverse tasks. First, we address the mixing bottleneck in physical landscapes by using learned symmetries to accelerate molecular dynamics, bypassing redundant exploration to discover rare states. Second, we introduce thresholded and piecewise-linear guidance to enable rigorous compositional logic in diffusion flows, providing a theoretically grounded alternative to heuristic guidance for satisfying multiple simultaneous constraints. Finally, we derive an alignment objective based on Fenchel-Legendre duality to warp generative manifolds toward task-specific rewards. Together, these contributions advance an inference-centric paradigm for artificial intelligence, transforming high-capacity world models into steerable vehicles for scientific discovery and engineering design.
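To make the "steering a vector field" framing concrete, the following is a minimal toy sketch, not the speaker's method: unadjusted Langevin dynamics whose drift is the sum of a known prior score and a hypothetical guidance gradient. The standard-normal prior, the quadratic guidance potential, and the `target` and `weight` parameters are all illustrative assumptions chosen so the guided dynamics can be checked analytically.

```python
import numpy as np

rng = np.random.default_rng(0)

def prior_score(x):
    # Score of a standard-normal prior: grad log N(0, 1) = -x.
    return -x

def guidance_grad(x, target=3.0, weight=5.0):
    # Hypothetical quadratic guidance pulling samples toward `target`;
    # stands in for a learned constraint or reward gradient.
    return -weight * (x - target)

def guided_langevin(n_steps=2000, step=1e-2, n_samples=1000):
    # Unadjusted Langevin dynamics on the combined score:
    #   dx = [prior_score(x) + guidance_grad(x)] dt + sqrt(2 dt) dW.
    # Steering happens purely through the drift; the prior is untouched.
    x = rng.standard_normal(n_samples)
    for _ in range(n_steps):
        drift = prior_score(x) + guidance_grad(x)
        x = x + step * drift + np.sqrt(2 * step) * rng.standard_normal(n_samples)
    return x

samples = guided_langevin()
```

With these choices the combined score is -6x + 15, so the guided stationary law is N(2.5, 1/6): the sampler concentrates near the target while the prior's curvature still shapes the spread, which is the basic mechanism behind score-based guidance.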