Kai Xiao: Thesis Defense

Speaker

Kai Xiao

Host

Thesis Committee: Aleksander Madry (Supervisor), Costis Daskalakis, and Martin Rinard
Title: Probing, improving, and verifying machine learning model robustness
Abstract: Machine learning models are brittle when faced with distribution shifts, making them hard to rely on in real-world deployment. To improve their reliability, we need to be able to detect and alleviate model brittleness, and to verify that our models meet desired robustness guarantees.

In this thesis, we develop toolkits that help us detect model vulnerabilities and biases. Specifically, we first create a suite of new datasets that disentangle image backgrounds from foregrounds, which we then use to obtain a finer-grained understanding of model reliance on backgrounds. Next, we discuss 3DB, a framework that leverages photorealistic simulation to probe model vulnerabilities to a broader range of distribution shifts. After identifying these vulnerabilities, we present an overview of interventions that can make models more robust to distribution shifts.

Finally, we show how to efficiently and formally verify that our models are robust to one of the best-studied types of distribution shift: adversarial examples.