Towards Interpretable and Trustworthy RL for Healthcare

Speaker

Finale Doshi-Velez
Gordon McKay Professor of Computer Science at the Harvard John A. Paulson School of Engineering and Applied Sciences

Host

Marzyeh Ghassemi
IMES, CSAIL, EECS
Abstract:

Reinforcement learning has the potential to take into account the many factors about a patient and identify a personalized treatment strategy that will lead to better long-term outcomes. In this talk, I will focus on the offline, or batch, setting: we are given a large amount of prior clinical data, and from those interactions, our goal is to propose a better treatment policy. This setting is common in healthcare, where both safety and compliance concerns make it difficult to train a reinforcement learning agent online. However, when we cannot actually execute our proposed actions, we have to be extra careful that our reinforcement learning agent does not hallucinate that bad actions are good ones.
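To make the batch setting concrete, here is a minimal sketch of learning a policy purely from a fixed log of transitions, with no online interaction. All of the names, sizes, and the synthetic "clinical" data are illustrative assumptions, not the methods from the talk.

```python
import numpy as np

# Minimal batch RL sketch: learn only from a fixed log of transitions
# (state, action, reward, next_state). All quantities here are synthetic
# stand-ins for discretized clinical data.
n_states, n_actions, gamma = 10, 4, 0.95
rng = np.random.default_rng(0)
batch = [(rng.integers(n_states), rng.integers(n_actions),
          rng.normal(), rng.integers(n_states))
         for _ in range(5000)]

# Batch Q-learning: repeated sweeps over the logged data, never acting online.
Q = np.zeros((n_states, n_actions))
for _ in range(50):
    for s, a, r, s_next in batch:
        target = r + gamma * Q[s_next].max()
        Q[s, a] += 0.1 * (target - Q[s, a])

policy = Q.argmax(axis=1)  # greedy policy proposed from the batch alone
```

Note that nothing in this loop stops Q from assigning high value to rarely tried actions, which is exactly the hallucination risk described above.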

Toward this goal, I will first discuss how the limitations of batch data can actually be a feature when it comes to interpretability. I will share an offline RL algorithm that takes advantage of the fact that we can only make inferences about alternative treatments where clinicians have actually tried those alternatives, not only producing policies with higher statistical confidence but also policies compact enough for human experts to inspect. Next, I will touch on questions of reward design, taking advantage of the fact that our batch of data was produced by experts. Can we show clinicians what their behaviors appear to be optimizing? Can we identify situations in which the reward a clinician claims to optimize does not match their actions? Can we perform offline RL with all of the qualities above in a way that is robust to reward misspecification, and that takes into account that clinicians are, in general, doing their best? Our work in these areas brings us closer to realizing the potential of RL in healthcare.
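As a rough illustration of how limited batch support can both constrain and compact a policy, the sketch below (continuing the example above) only lets the learned policy recommend actions that clinicians tried often enough. The support threshold is an illustrative assumption, not the algorithm from the talk.

```python
# Continuing the sketch above: only recommend actions with enough clinician
# support in the batch. The threshold of 25 is an arbitrary illustration.
counts = np.zeros((n_states, n_actions))
for s, a, _, _ in batch:
    counts[s, a] += 1

supported = counts >= 25  # treatments the data can actually speak to

# Mask unsupported actions before the argmax; states whose every action is
# unsupported would need a fallback (e.g., defer to the clinician).
Q_masked = np.where(supported, Q, -np.inf)
safe_policy = Q_masked.argmax(axis=1)
```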
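And as a toy version of checking whether a stated reward matches behavior, one could compare the action that is greedy under the claimed reward with the action clinicians most often took. This again continues the sketch above and is an assumed illustration, not the talk's method.

```python
# Continuing the sketch: flag states where the greedy action under the
# *claimed* reward (here, the Q above stands in for one fit to that reward)
# disagrees with the clinicians' most common choice.
modal_action = counts.argmax(axis=1)   # most frequent clinician action
claimed_best = Q.argmax(axis=1)        # greedy action under the claimed reward

visited = counts.sum(axis=1) > 0
mismatch = visited & (modal_action != claimed_best)
print("states where stated reward and observed behavior disagree:",
      np.flatnonzero(mismatch))
```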