Multi-scale Local Representation Learning for Medical Image Analysis
Speaker: Neel Dey, MIT CSAIL
Host: Polina Golland, MIT CSAIL
Recent medical vision methods address the limitations of scarce
expert annotations and intractable inter-domain variation by
attempting to learn broadly-generalizable self-supervised
representations. However, current work commonly discards multi-scale
spatiotemporal self-similarity and/or embeds images into vector
representations that preclude several neighborhood-dependent
image-to-image tasks. In this talk, we will present methods for
locality-sensitive multi-scale representation learning by developing
patchwise self-supervised losses and regularizations acting on
intermediate activations in convolutional architectures. This style of
approach will be shown to benefit disparate yet classic neuroimage
analysis tasks including longitudinal infant brain segmentation and
multi-modality deformable registration. We will also briefly discuss
the application of these methods to ongoing studies of Down syndrome
and Autism Spectrum Disorder in infancy.
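
To make the idea of patchwise self-supervised losses on intermediate activations concrete, the sketch below shows one common way such an objective can be written: an InfoNCE-style contrastive loss over sampled spatial locations of a convolutional feature map, where corresponding locations across two views of the same image serve as positives and the other sampled locations serve as negatives. This is an illustrative assumption rather than the specific loss presented in the talk; the function name, tensor shapes, patch count, and temperature are hypothetical.

    # Minimal sketch (assumes PyTorch); not the talk's exact formulation.
    import torch
    import torch.nn.functional as F

    def patchwise_nce_loss(feat_a, feat_b, num_patches=256, temperature=0.07):
        # feat_a, feat_b: intermediate activations of shape (B, C, H, W) taken
        # from two views of the same image (e.g., two augmentations or modalities).
        b, c, h, w = feat_a.shape
        num_patches = min(num_patches, h * w)
        # Flatten the spatial grid and sample a shared subset of locations.
        feat_a = feat_a.flatten(2).permute(0, 2, 1)           # (B, H*W, C)
        feat_b = feat_b.flatten(2).permute(0, 2, 1)
        idx = torch.randperm(h * w, device=feat_a.device)[:num_patches]
        q = F.normalize(feat_a[:, idx], dim=-1)               # queries (B, P, C)
        k = F.normalize(feat_b[:, idx], dim=-1)               # keys    (B, P, C)
        # Cosine similarity of every query patch to every key patch in the image.
        logits = torch.bmm(q, k.transpose(1, 2)) / temperature   # (B, P, P)
        # The spatially matching patch is the positive; the rest are negatives.
        targets = torch.arange(num_patches, device=logits.device).expand(b, -1)
        return F.cross_entropy(logits.flatten(0, 1), targets.flatten())

In practice, a loss of this form can be applied to activations at several encoder depths, which is one plausible way to obtain the multi-scale, locality-sensitive representations described above.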