EECS Special Seminar: Gabriel Poesia, "Learning Formal Reasoning"

Speaker

Gabriel Poesia
Stanford University

Hosts

Adam Chlipala and Kaiming He
CSAIL

Abstract:
Formal systems, such as type theories, provide general foundations for representing mathematics and computation, with increasing adoption in the formalization of research-level mathematical results, as well as for implementing and verifying critical real-world software. However, their flexibility comes at a cost: most key problems in these systems, like finding proofs, are computationally undecidable. Nonetheless, humans routinely solve novel mathematical problems, write new programs, and prove them correct. Crucially, we leverage our ability to learn, developing increasingly effective heuristics and abstractions for our particular domains of interest as we gain experience. We do so even without any specific goal beyond exploring and making interesting discoveries.
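
To make this concrete, below is a small machine-checked proof in Lean 4, one of the type-theory-based systems the abstract alludes to; the statement (commutativity of addition on natural numbers) is an illustrative example chosen for this announcement, not one taken from the talk.

    -- A machine-checked proof in Lean 4's dependent type theory.
    -- Illustrative statement: addition on natural numbers commutes.
    theorem add_comm' (m n : Nat) : m + n = n + m := by
      induction n with
      | zero => rw [Nat.zero_add, Nat.add_zero]
      | succ n ih => rw [Nat.add_succ, Nat.succ_add, ih]

Finding a proof like this one automatically, for an arbitrary statement, is exactly the kind of problem that is undecidable in general, which is why learned heuristics matter.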

In this talk, I'll present my research addressing fundamental challenges that arise in learning formal reasoning. We'll aim to build systems that self-improve by spending compute, without requiring human examples, much as AlphaZero did for challenging games. First, I'll show how combining reinforcement learning and abstraction learning enables an agent to master sections of the Khan Academy algebra curriculum, and even to reconstruct the human-designed curriculum using its learned abstractions, despite seeing problems in a random order. Then, I'll present my work on open-ended learning for theorem proving, where an agent starts from the axioms alone and learns from self-generated conjectures, bootstrapping its ability to prove human-written theorems despite training only on proofs it found by itself. Along the way, I'll present methods for interfacing symbolic and neural systems, with applications to program generation and verification, and discuss open challenges in developing self-improving reasoning machines.
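
As a rough illustration of the kind of open-ended learning loop described above, here is a minimal sketch in Python; it is not the system from the talk, and every name in it (generate_conjecture, search_for_proof, update) is a hypothetical placeholder.

    # Minimal sketch of an open-ended conjecture-and-prove loop (hypothetical,
    # not the speaker's implementation). The agent proposes its own conjectures,
    # searches for proofs under a compute budget, and trains only on proofs it
    # found itself, analogous to AlphaZero learning from self-play.
    def self_improvement_loop(axioms, generate_conjecture, search_for_proof,
                              update, steps):
        solved = []
        for _ in range(steps):
            conjecture = generate_conjecture(axioms)  # propose a new statement
            proof = search_for_proof(conjecture)      # bounded proof search
            if proof is not None:
                update(conjecture, proof)             # learn from own proof
                solved.append((conjecture, proof))
        return solved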

Bio: 
Gabriel Poesia is a PhD student at Stanford University in the Computation and Cognition Lab. His research centers on learning formal reasoning, combining dependent type theory, language models, reinforcement learning, and intrinsically motivated learning; work at the intersection of these areas was recognized with an oral presentation at NeurIPS 2024. His research has been supported by the Stanford Interdisciplinary Graduate Fellowship.