Neuro-Symbolic Learning For Bilevel Robot Planning (Thesis Defense)

Speaker

CSAIL / EECS
Location: 32-G449 (Patil/Kiva)

Abstract: Decision-making in robotics domains is complicated by continuous state and action spaces, long horizons, and sparse feedback. One way to address these challenges is to perform bilevel planning, where decision-making is decomposed into reasoning about “what to do” (task planning) and “how to do it” (continuous optimization). Bilevel planning is powerful, but it requires multiple types of domain-specific abstractions that are often difficult to design by hand. In this defense, I will give an overview of my PhD work on learning these abstractions from data. This work represents the first unified system for learning all the abstractions needed for bilevel planning. In addition to learning to plan, I will also briefly discuss planning to learn, where the robot uses planning to collect additional data that it can use to improve its abstractions. My long-term goal is to create a virtuous cycle where learning improves planning and planning improves learning, leading to a very general library of abstractions and a broadly competent robot.
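
For readers unfamiliar with the setup, the toy sketch below illustrates the two levels the abstract refers to: a symbolic task planner decides "what to do" and a sampling-based refiner decides "how to do it." This is a minimal illustration under assumed names (OPERATORS, task_plan, refine) and a made-up block domain; it is not the system presented in the defense.

import random

# Abstract (symbolic) level: operators over discrete predicates.
OPERATORS = {
    "PickBlock":  {"pre": {"HandEmpty", "BlockOnTable"},
                   "add": {"Holding"}, "del": {"HandEmpty", "BlockOnTable"}},
    "PlaceBlock": {"pre": {"Holding"},
                   "add": {"BlockOnTarget", "HandEmpty"}, "del": {"Holding"}},
}

def task_plan(init, goal):
    """Breadth-first search over abstract states ("what to do")."""
    frontier = [(frozenset(init), [])]
    seen = {frozenset(init)}
    while frontier:
        state, plan = frontier.pop(0)
        if goal <= state:
            return plan
        for name, op in OPERATORS.items():
            if op["pre"] <= state:
                succ = frozenset((state - op["del"]) | op["add"])
                if succ not in seen:
                    seen.add(succ)
                    frontier.append((succ, plan + [name]))
    return None

def refine(abstract_plan, rng):
    """Continuous level ("how to do it"): sample continuous parameters for
    each abstract step and keep ones passing a toy feasibility check."""
    trajectory = []
    for step in abstract_plan:
        for _ in range(100):               # rejection sampling as a stand-in
            pose = rng.uniform(0.0, 1.0)   # e.g. a grasp or placement offset
            if 0.2 < pose < 0.8:           # toy kinematic-feasibility test
                trajectory.append((step, pose))
                break
        else:
            return None                    # refinement failed; would replan
    return trajectory

if __name__ == "__main__":
    rng = random.Random(0)
    plan = task_plan({"HandEmpty", "BlockOnTable"}, {"BlockOnTarget"})
    print("abstract plan:", plan)
    print("refined plan: ", refine(plan, rng))

The abstractions the talk is about learning correspond, in this sketch, to the hand-written predicates, operators, and samplers that make the two levels work together.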