Towards Reliable and Trustworthy Machine Learning Methods for Medical Imaging
Host: Adrian Dalca
Abstract: Machine learning (ML) algorithms powering the AI revolution are leading to breakthroughs in medical image analysis. These algorithms are enabling fast and scalable automation of tasks that are expensive for human experts, like image registration and image reconstruction, while also showing promise on more complex, higher-level tasks like diagnosis and prognosis. At the same time, the healthcare arena that AI seeks to disrupt is formidable; healthcare is not only facilitated by domain experts (e.g., doctors and radiologists) who undergo years of training, but is also characterized by a high-stakes setting where safety and trust are critical. There is thus a need for reliable and trustworthy ML in this arena: methods that can interface with humans, perform well under varying conditions, and accept user input and feedback. In this talk, I will present several ML methods we’ve developed across various tasks in medical imaging, organized around three directions towards reliability and trustworthiness: interpretability, robustness, and controllability. In the first part of the talk, I will describe an interpretable, robust, and controllable image registration method, called KeyMorph, which uses a deep neural network to extract corresponding keypoints in a pair of images and subsequently uses those keypoints to solve, in closed form, for the transformation that aligns the images. In the second part of the talk, I will present an inherently interpretable image classification method, called the Nadaraya-Watson Head, which can be seen as a “soft” version of a nearest-neighbors classifier and makes predictions via comparisons with examples in the training dataset. Furthermore, we leverage this model to learn “invariant” representations of images drawn from multiple environments (e.g., hospitals) for robust domain generalization, starting from rigorous, causally informed assumptions about the data-generating process.
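
To make the closed-form registration step concrete, here is a minimal sketch (not the KeyMorph implementation; the keypoints below are synthetic stand-ins for what the detection network would produce, and all names are illustrative). Given corresponding keypoints in the two images, an affine transform can be recovered with a single least-squares solve:

    import numpy as np

    def closed_form_affine(moving_pts, fixed_pts):
        """Solve for the affine transform A mapping moving_pts to
        fixed_pts in the least-squares sense (closed form).

        moving_pts, fixed_pts: (N, D) arrays of corresponding keypoints.
        Returns A: (D, D+1) matrix (linear part plus translation).
        """
        n, d = moving_pts.shape
        # Homogeneous coordinates: append a column of ones.
        X = np.hstack([moving_pts, np.ones((n, 1))])      # (N, D+1)
        # Least-squares solution of X @ A.T ~= fixed_pts.
        A_T, *_ = np.linalg.lstsq(X, fixed_pts, rcond=None)
        return A_T.T                                      # (D, D+1)

    # Toy example: synthetic keypoints standing in for network detections.
    rng = np.random.default_rng(0)
    moving = rng.normal(size=(16, 3))                     # 16 keypoints in 3D
    true_A = np.array([[1.0, 0.1, 0.0,  2.0],
                       [0.0, 0.9, 0.0, -1.0],
                       [0.0, 0.0, 1.1,  0.5]])
    fixed = np.hstack([moving, np.ones((16, 1))]) @ true_A.T

    A = closed_form_affine(moving, fixed)
    print(np.allclose(A, true_A))                         # True: exact recovery

Because the transformation is computed in closed form from detected keypoints, the alignment is interpretable (the keypoints can be inspected) and controllable (they can be edited by a user), which is the property the abstract emphasizes.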
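Likewise, a minimal sketch of a Nadaraya-Watson-style classification head, as a “soft” nearest-neighbors classifier (an illustrative reimplementation of the general idea, not the authors’ code; the feature dimensions and names are assumptions). A query is classified by a softmax-weighted average of training-set labels:

    import torch
    import torch.nn.functional as F

    def nw_head(query_feats, support_feats, support_labels, temperature=1.0):
        """Nadaraya-Watson prediction: a 'soft' nearest-neighbors classifier.

        query_feats:    (B, F) features of the images to classify.
        support_feats:  (N, F) features of (a subset of) the training set.
        support_labels: (N,) integer class labels for the support set.
        Returns (B, C) predicted class probabilities.
        """
        num_classes = int(support_labels.max()) + 1
        one_hot = F.one_hot(support_labels, num_classes).float()  # (N, C)
        # Negative squared Euclidean distance as similarity.
        dists = torch.cdist(query_feats, support_feats) ** 2      # (B, N)
        weights = F.softmax(-dists / temperature, dim=1)          # (B, N)
        # Each prediction is a weighted average of support labels.
        return weights @ one_hot                                  # (B, C)

    feats = torch.randn(128, 64)           # support features (e.g., training set)
    labels = torch.randint(0, 5, (128,))   # 5 classes
    queries = torch.randn(8, 64)
    probs = nw_head(queries, feats, labels)
    print(probs.shape, probs.sum(dim=1))   # (8, 5); rows sum to 1

The interpretability claim follows directly from the form of the prediction: the softmax weights identify which training examples drove each decision, so they can be surfaced to a clinician as evidence.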
Bio: Alan Wang is a PhD candidate at Cornell University in the School of Electrical and Computer Engineering. He is also affiliated with Cornell Tech and the Department of Radiology at Weill Cornell Medical School. Previously, he obtained his B.S. degree in Computer Engineering at the University of Illinois at Urbana-Champaign (UIUC). His research interests include machine learning and computer vision, especially as applied to medical imaging. More specifically, he is interested in bridging the gap between deep learning models and human experts such as doctors and clinicians, which motivates his research on the interpretability, explainability, interactivity, and robustness of deep learning models.