Seminar: Ben Eysenbach: Temporal Representations Enable Generalization and Exploration
Abstract: In the same way that deep computer vision models find structures and patterns in images, how might deep reinforcement learning models find structures and patterns in solutions to control problems? This talk will focus on learning temporal representations, which map high-dimensional observations to compact representations where distances reflect shortest paths. Once learned, these temporal representations encode the value function for certain tasks – learning temporal representations is itself an RL algorithm. In both robotics and reasoning problems, we'll show how such representations capture temporal patterns. Temporal representations also facilitate a form of (temporal) generalization: navigating between pairs of states that are more distant than those seen during training. Finally, we'll share some evidence that agents trained via temporal representations exhibit surprising exploration strategies in both single-agent and multi-agent settings.
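To make the abstract's central idea concrete, here is a minimal sketch (not from the talk; the chain MDP, `phi`, and `shortest_path_lengths` are all illustrative assumptions). On a 5-state chain, a 1-D embedding whose distances equal shortest-path lengths is exactly a temporal representation, and its negated distance is the goal-conditioned value function under a reward of -1 per step:

```python
from collections import deque

def shortest_path_lengths(adj, start):
    """BFS shortest-path (temporal) distances from `start` in an unweighted graph."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Toy chain MDP: 0 - 1 - 2 - 3 - 4 (move left/right one step at a time)
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 4] for i in range(5)}

# A hand-picked temporal representation for this chain: phi(s) = s.
phi = lambda s: float(s)

d = shortest_path_lengths(adj, start=0)
for g in range(5):
    # Embedding distance matches temporal distance ...
    assert abs(phi(g) - phi(0)) == d[g]
    # ... so -distance is the value of reaching goal g (reward -1 per step).
    value = -d[g]
```

In a real instance of this idea the embedding would be learned from high-dimensional observations rather than hand-picked, but the property checked above, that embedding distance tracks shortest-path distance, is the one the abstract describes.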
Bio: Benjamin Eysenbach is an Assistant Professor of Computer Science at Princeton University, where he runs the Princeton Reinforcement Learning Lab. His research focuses on reinforcement learning algorithms: AI methods that learn how to make intelligent decisions from trial and error. His group has developed several successful algorithms and analyses for self-supervised methods, which enable agents to explore and learn without any human supervision. His work has been recognized by an NSF CAREER Award, a Hertz Fellowship, an NSF GRFP Fellowship, and the Alfred Rheinstein Faculty Award. Prior to joining Princeton, he received his PhD in machine learning from Carnegie Mellon University, worked at Google AI, and studied math as an undergraduate at MIT.