Visual Computing Seminar: Scaling World Simulators for Safe Physical Intelligence
Abstract:
AI endows physical robots with the intelligence to perceive, predict, plan, and act in dynamic real-world environments. Despite significant advances, modern autonomous systems remain fragile: they perform reliably in controlled setups but fail to generalize to unstructured or safety-critical scenarios. This poses challenges for real-world deployment, particularly in domains such as self-driving, where handling long-tail edge cases is critical. Existing solutions depend either on expensive, risky real-world data collection or on handcrafted simulators that lack diversity, realism, and scalability. In this talk, I will present my efforts toward scaling world simulators for safe physical intelligence. First, I will describe methods that combine generative models, neural rendering, and physics priors to transform sparse sensor streams into scalable 4D world simulators. Then, I will show how these simulators provide a foundation for safety by enabling closed-loop evaluation, counterfactual generation, and robust policy learning. Together, these world simulators bridge the gap between simulation and reality for generalizable and trustworthy physical agents. Finally, I will discuss how these efforts point toward a broader vision: endowing physical agents with richer spatial intelligence so they can operate safely and reliably in complex real-world environments.
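To make the closed-loop evaluation and counterfactual-generation ideas concrete, the following is a minimal, hypothetical Python sketch, not the speaker's actual system: a stand-in world simulator rolls a driving scene forward under a policy's actions, and perturbed (counterfactual) variants of a nominal cut-in scenario are scored by a simple safety metric. All names here (WorldSimulator, Scenario, policy, closed_loop_collision_rate) and the toy dynamics are illustrative assumptions.

```python
"""Illustrative sketch of closed-loop evaluation with a learned world simulator.

All names and dynamics below are hypothetical placeholders; the systems
described in the talk are not shown.
"""
from dataclasses import dataclass, replace
import random


@dataclass
class Scenario:
    """Compact scenario description the simulator can roll out (assumed parameters)."""
    lead_vehicle_speed: float   # m/s
    cut_in_gap: float           # initial gap to the cutting-in vehicle, in meters
    friction: float             # road friction coefficient


class WorldSimulator:
    """Stand-in for a learned 4D world simulator: maps (state, action) -> next state."""

    def reset(self, scenario: Scenario) -> dict:
        return {"ego_speed": 10.0, "gap": scenario.cut_in_gap, "scenario": scenario}

    def step(self, state: dict, action: float) -> tuple[dict, bool]:
        # Toy dynamics: the gap shrinks while the ego is faster than the lead vehicle.
        sc = state["scenario"]
        ego_speed = max(0.0, state["ego_speed"] + action * sc.friction)
        gap = state["gap"] - (ego_speed - sc.lead_vehicle_speed) * 0.1
        collided = gap <= 0.0
        return {"ego_speed": ego_speed, "gap": gap, "scenario": sc}, collided


def policy(state: dict) -> float:
    """Placeholder driving policy: brake if the gap is small, otherwise hold speed."""
    return -2.0 if state["gap"] < 8.0 else 0.0


def closed_loop_collision_rate(sim: WorldSimulator, base: Scenario, n_rollouts: int = 100) -> float:
    """Evaluate the policy in the loop on counterfactual variants of one scenario."""
    collisions = 0
    for _ in range(n_rollouts):
        # Counterfactual generation: perturb the nominal scenario parameters.
        variant = replace(
            base,
            cut_in_gap=base.cut_in_gap * random.uniform(0.5, 1.5),
            friction=base.friction * random.uniform(0.7, 1.0),
        )
        state = sim.reset(variant)
        for _ in range(50):  # 5-second rollout at 10 Hz
            state, collided = sim.step(state, policy(state))
            if collided:
                collisions += 1
                break
    return collisions / n_rollouts


if __name__ == "__main__":
    rate = closed_loop_collision_rate(WorldSimulator(), Scenario(8.0, 15.0, 0.9))
    print(f"Estimated collision rate on counterfactual cut-in scenarios: {rate:.2%}")
```

The point of the sketch is the structure, not the numbers: a learned simulator takes the place of the hand-coded dynamics, so the same loop evaluates a policy in closed loop on long-tail scenario variants without real-world risk.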