AI Reasoning at Scale with Search

RSVP here: https://forms.gle/iztowTvkndwa9nBM8

Large Language Models (LLMs) have shown impressive generalization across a wide range of tasks, yet they still struggle with complex reasoning and out-of-distribution problem solving. Rather than simply memorizing patterns from pretraining, we seek LLMs that can innovate: generating novel solutions in unfamiliar domains. In this talk, I present a unified framework for integrating search-based techniques with LLMs to push the boundaries of their reasoning capabilities. By shifting computational effort from training time to inference time, we enable a new paradigm of inference-time scaling, in which search becomes a mechanism for exploration, deliberation, and improvement. Unlike classical search over symbolic states or action spaces, LLM-guided search must operate over open-ended text, requiring novel approaches that are language-centric and model-aware. Through applications in strategy games, code generation, and mathematical problem solving, I will illustrate how these search-augmented methods unlock human-level performance in challenging, unfamiliar environments, paving the way toward more general and superhuman AI systems.
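To make the inference-time-scaling idea concrete, here is a minimal best-of-n sketch: spend extra compute at inference by sampling many candidate answers and letting a verifier pick the best one. The `propose` and `verify` functions below are hypothetical toy stand-ins (a noisy sampler and an exact-match checker), not the methods from the talk.

```python
import random

def propose(problem, rng):
    # Toy stand-in for an LLM: proposes a candidate answer to a simple
    # addition problem, occasionally off by one (a "noisy" sampler).
    a, b = problem
    return a + b + rng.choice([-1, 0, 0, 0, 1])

def verify(problem, answer):
    # Toy verifier: scores a candidate, here by exact match against
    # the ground-truth sum.
    a, b = problem
    return 1.0 if answer == a + b else 0.0

def best_of_n(problem, n, seed=0):
    # Inference-time scaling in its simplest form: draw n candidates
    # (more compute at inference, none at training), then select the
    # one the verifier scores highest.
    rng = random.Random(seed)
    candidates = [propose(problem, rng) for _ in range(n)]
    return max(candidates, key=lambda c: verify(problem, c))

print(best_of_n((2, 3), n=64))
```

Increasing `n` trades compute for reliability; richer variants replace this flat sampling with tree search or iterative refinement over the model's own text outputs.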