December 13

Three Deep Learning Techniques for 3D Diffusion MRI Image Enhancement
11:30 AM – 12:30 PM, 32-D451

In this talk, we discuss three deep learning techniques to improve the image quality of 3D diffusion MRI images. We first introduce a novel low-memory method that lets us control GPU memory usage during training, allowing us to handle 3-dimensional, high-resolution, multi-channel medical images. Second, we present the first multi-task learning approach to data harmonization, in which we integrate information from multiple acquisitions to improve the predictive performance and learning efficiency of the training procedure. Third, we present an extension of the transposed convolution in which we learn both the offsets of target locations and a blur to interpolate the fractional positions. All three techniques can be applied in other image-related paradigms.
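The abstract does not spell out the low-memory mechanism. As a hedged illustration of the general idea only, not the speaker's method, the PyTorch sketch below caps peak GPU memory with activation checkpointing, recomputing intermediate activations during backpropagation instead of storing them; the network, channel counts, and volume size are all invented.

```python
# Hypothetical sketch: trading compute for GPU memory via activation
# checkpointing, one standard low-memory technique for training on large
# 3D multi-channel volumes. Not the method described in the talk.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

class LowMemory3DNet(nn.Module):
    def __init__(self, channels=32, depth=8, segments=4):
        super().__init__()
        layers = []
        for _ in range(depth):
            layers += [nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU()]
        self.body = nn.Sequential(*layers)
        self.segments = segments  # more segments -> lower memory, more recompute

    def forward(self, x):
        # Activations inside each segment are dropped on the forward pass
        # and recomputed during backprop, bounding peak memory.
        return checkpoint_sequential(self.body, self.segments, x)

net = LowMemory3DNet()
vol = torch.randn(1, 32, 64, 64, 64, requires_grad=True)  # (B, C, D, H, W)
net(vol).mean().backward()
```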

December 05

Generative-Deep Hybrid Models to Decipher Brain Functionality
11:30 AM – 12:30 PM, 32-D507

Clinical neuroscience is a field with all the difficulties that come from high-dimensional data, and none of the advantages that fuel modern-day breakthroughs in computer vision, automated speech recognition, and health informatics. It is a field of unavoidably small datasets, massive patient variability, environmental confounds, and an arguable lack of ground-truth information. It is also a field where classification accuracy plays second fiddle to interpretability, particularly for functional neuroimaging modalities such as EEG and fMRI. As a result of these challenges, deep learning methods have gained little traction in understanding neuropsychiatric disorders.

My lab tackles the challenges of functional data analysis by blending the interpretability of generative models with the representational power of deep learning. This talk will highlight three ongoing projects that span a range of “old school” methodologies and clinical applications. First, I will discuss a joint optimization framework that combines non-negative matrix factorization with artificial neural networks to predict multidimensional clinical severity from resting-state fMRI. Second, I will describe a probabilistic graphical model for epileptic seizure detection using multichannel EEG. The latent variables in this model capture the spatio-temporal spread of a seizure; they are complemented by a nonparametric likelihood based on convolutional neural networks. Finally, I will touch on a very recent initiative to manipulate emotional cues in human speech, as a possible assistive technology for autism. Our approach combines traditional speech analysis, diffeomorphic registration, and highway neural networks.
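As a toy illustration of the ingredients in the first project, the sketch below factorizes non-negative connectivity features with scikit-learn's NMF and regresses multidimensional severity scores from the subject loadings with a small neural network. The talk describes a joint optimization of both pieces; this decoupled, two-stage version on synthetic data is only a caricature.

```python
# Hypothetical two-stage caricature: NMF on resting-state connectivity
# features, then a small neural network regressing clinical severity from
# the subject-specific loadings. The actual work optimizes both jointly.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((100, 4950))  # 100 subjects x vectorized connectivity (non-negative)
y = rng.random((100, 3))     # multidimensional severity (e.g., 3 clinical scales)

nmf = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X)     # subject loadings on 10 network factors

mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(W, y)                # severity prediction from the factor loadings
print(mlp.predict(W[:2]))    # predicted scores for the first two subjects
```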

November 19

Model-based imaging and image-based modelling
11:30 AM – 12:30 PM, 32-D507

My talk will update on three current research topics.

1. Microstructure imaging, which uses mathematical or computational modelling and machine learning to estimate and map microstructural features of tissue. Examples from my group's work include NODDI (Zhang et al., Neuroimage 2012) for brain imaging and VERDICT (Panagiotaki et al., Cancer Research 2014) for cancer imaging. I will describe moves towards a new paradigm combining multi-contrast measurements through sophisticated computational models exploiting machine learning.

2. Data-driven disease progression models (e.g., Fonteijn et al., Neuroimage 2012; Young et al., Brain 2014; Lorenzi et al., Neuroimage 2017), which aim to piece together longitudinal pictures of disease from cross-sectional or short-term longitudinal data sets and thus gain disease understanding, stratification systems, and predictive power. The recent Subtype and Stage Inference (SuStaIn; Young et al., Nature Comms 2018) algorithm extends the idea to identify disease subgroups defined by distinct longitudinal trajectories of change.

3. Image Quality Transfer (Alexander et al., Neuroimage 2017; Tanno et al., MICCAI 2017), which uses machine learning to estimate a high-quality image, e.g. one we would have acquired from a one-off super-powered scanner, from a lower-quality image, e.g. one acquired on a standard hospital scanner.
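At its core, Image Quality Transfer is patch-wise regression from a low-quality image to a high-quality one, and the cited Neuroimage 2017 work used random-forest patch regression. The sketch below shows only that general shape on synthetic stand-in data; patch sizes, forest settings, and the toy mapping are invented.

```python
# Minimal caricature of Image Quality Transfer as patch-wise regression:
# learn a mapping from low-quality patches to the matching high-quality
# patches. Random-forest regression follows the spirit of Alexander et al.
# (Neuroimage 2017); all data, shapes, and settings here are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
lo = rng.random((n, 5 * 5 * 5))                 # flattened 5^3 low-quality patches
hi = lo @ rng.random((125, 27)) + 0.01 * rng.random((n, 27))
# hi: flattened 3^3 high-quality target patches (synthetic stand-ins)

rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(lo, hi)
enhanced = rf.predict(lo[:1]).reshape(3, 3, 3)  # predicted high-quality patch
print(enhanced.shape)
```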

October 29

Imaging Physics meets Machine Learning: AI approaches for Image Reconstruction and Acquisition
11:30 AM – 12:30 PM, 32-D507

Over the past few decades, top-down expert engineering has driven the creative design of tomographic imaging acquisition and reconstruction processes. Image reconstruction is challenging because analytic knowledge of the exact inverse transform may not exist a priori, especially in the presence of sensor non-idealities and noise. Thus, the standard reconstruction approach involves approximating the inverse function with multiple ad hoc stages in a signal processing chain whose composition depends on the details of each acquisition strategy. We present here a unified framework for image reconstruction, AUtomated TransfOrm by Manifold APproximation (AUTOMAP), which recasts image reconstruction as a data-driven, supervised learning task that allows a mapping between sensor and image domain to emerge from an appropriate corpus of training data. Implemented with a deep neural network, AUTOMAP is remarkably flexible in learning reconstruction transforms for a variety of acquisition strategies, utilizing a single network architecture and hyperparameters. We further demonstrate its efficiency in sparsely representing transforms along low-dimensional manifolds, resulting in superior immunity to noise and a reduction in reconstruction artifacts compared with conventional handcrafted reconstruction methods. In this talk we also describe work deploying machine learning for MR image acquisition with AUTOmated pulse SEQuence generation (AUTOSEQ), using both model-based and model-free reinforcement learning approaches to produce canonical (gradient echo) as well as non-intuitive pulse sequences that can perform spatial encoding and slice selection in unknown inhomogeneous B0 fields.
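The published AUTOMAP architecture maps flattened sensor-domain data through fully connected layers into the image domain, then refines the result with convolutional layers. The PyTorch sketch below renders that shape only schematically; layer widths, activations, and the 64x64 grid are illustrative stand-ins, not the published configuration.

```python
# Skeletal AUTOMAP-style reconstruction network: fully connected layers learn
# the sensor-to-image transform from data, convolutional layers refine the
# image. Sizes and activations are illustrative, not the published settings.
import torch
import torch.nn as nn

class AutomapSketch(nn.Module):
    def __init__(self, n=64):
        super().__init__()
        self.n = n
        d = n * n
        self.fc = nn.Sequential(              # data-driven inverse transform
            nn.Linear(2 * d, d), nn.Tanh(),   # 2*d: real+imag sensor samples
            nn.Linear(d, d), nn.Tanh(),
        )
        self.conv = nn.Sequential(            # convolutional refinement
            nn.Conv2d(1, 64, 5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 64, 5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 1, 7, padding=3),
        )

    def forward(self, sensor):                # sensor: (B, 2*n*n), flattened
        img = self.fc(sensor).view(-1, 1, self.n, self.n)
        return self.conv(img)

net = AutomapSketch()
print(net(torch.randn(2, 2 * 64 * 64)).shape)  # -> torch.Size([2, 1, 64, 64])
```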

September 12

Listening to the Sound of Light to Guide Surgeries
11:00 AM – 12:00 PM, 32-D507

Minimally invasive surgeries require complicated maneuvers, delicate hand-eye coordination, and ideally would incorporate “x-ray vision” to see beyond tool tips and underneath tissue prior to making incisions. The Photoacoustic and Ultrasonic Systems Engineering (PULSE) Lab is pioneering this feature, but not with harmful ionizing x-rays. Instead, we use optical fibers for photoacoustic sensing of major structures – like blood vessels and nerves – that are otherwise hidden from a surgeon’s immediate view. Our goal is to eliminate surgical complications caused by accidental injury to these structures. Photoacoustic imaging utilizes light and sound to make images by transmitting laser pulses that illuminate regions of interest, causing thermal expansion and the generation of sound waves that are detectable with conventional ultrasound transducers. In this talk, I will describe our novel light delivery systems that attach to surgical tools to deliver light to surgical sites. I will also introduce how we learn from the physics of sound propagation in tissue to develop acoustic beamforming algorithms that improve image quality, using both state-of-the-art deep learning methods and our newly developed spatial coherence theory. These light delivery and acoustic beamforming methods hold promise for robotic tracking tasks, visualization and visual servoing of surgical tool tips, and assessment of relative distances between the surgical tool and nearby critical structures (e.g., major blood vessels and nerves that if injured will cause severe complications, paralysis, or patient death). Impacted surgeries and procedures include neurosurgery, liver surgery, spinal fusion surgery, hysterectomies, and biopsies.
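For context on what the learned and coherence-based beamformers improve upon, the NumPy sketch below implements conventional delay-and-sum beamforming for photoacoustic data, where delays are one-way (from tissue point to transducer element) because the sound originates at the illuminated tissue itself. The array geometry, sampling rate, and channel data are all synthetic placeholders.

```python
# Hedged sketch of conventional delay-and-sum (DAS) beamforming, the
# baseline that coherence-based and learned beamformers are compared
# against; all geometry and sampling values here are invented.
import numpy as np

c = 1540.0          # speed of sound in soft tissue, m/s
fs = 40e6           # channel-data sampling rate, Hz
n_elem, n_samp = 128, 2048
elem_x = (np.arange(n_elem) - n_elem / 2) * 0.3e-3   # element x-positions, m
rf = np.random.randn(n_elem, n_samp)                  # received channel data

def das_pixel(x, z):
    # One-way delay for photoacoustics: sound travels from (x, z) to each
    # element, so each channel is sampled at its own arrival time, then summed.
    dist = np.sqrt((elem_x - x) ** 2 + z ** 2)
    idx = np.clip(np.round(dist / c * fs).astype(int), 0, n_samp - 1)
    return rf[np.arange(n_elem), idx].sum()

xs = np.linspace(-5e-3, 5e-3, 64)
zs = np.linspace(5e-3, 25e-3, 64)
image = np.array([[das_pixel(x, z) for x in xs] for z in zs])
print(image.shape)  # (64, 64) beamformed image
```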