2024-04-29 13:00:00
2024-04-29 14:00:00
America/New_York
ML-enabled Genetic Analysis of High-Content Phenotypes
Abstract: In my talk, I will discuss new machine learning (ML) approaches for human genetics. First, I will present ML-enhanced genetic analysis of histological traits, where we leverage a novel semantic autoencoder to compress histological images into trait embeddings for GWAS. In an application to multiple tissues from the GTEx dataset, we discover 4 genome-wide significant loci associated with histological changes, each of which we can visualise and interpret thanks to our decoder.

Second, I will introduce a new method combining machine learning and genetic causal inference for risk prediction. A key advantage of this method is that it does not require longitudinal data, which allows for risk prediction of late-onset diseases in large biobanks, where follow-up cases are often limited.

Overall, these contributions demonstrate the transformative power of ML in human genetics. Our approaches enable more nuanced analyses of high-dimensional traits and facilitate biomarker discovery.

Bio: Francesco Paolo Casale studied physics at the University of Naples Federico II, Italy. He received his PhD in statistical genetics from the University of Cambridge and the European Bioinformatics Institute in 2016, where he developed new computational methods for genetic association studies and contributed to landmark international projects such as the final phase of the 1000 Genomes Project and the Blueprint initiative. He conducted his postdoctoral studies at the Microsoft Research New England lab in Boston, working on deep generative models for imaging genetics and automated machine learning. In 2019, he joined insitro, a drug discovery and development company located in the Bay Area, where he led the statistical genetics team, working at the intersection of human genetics, machine learning, and functional genomics to enable target identification and characterization.
Since January 2022, he has been a Principal Investigator in Machine Learning in Biomedicine at the Helmholtz Munich Institute of AI for Health.
D507
Events
April 29, 2024
May 06, 2024
Thesis Defense: Towards Object-based SLAM
Yihao Zhang
MIT MechE
2024-05-06 10:00:00
2024-05-06 11:30:00
America/New_York
Abstract: Simultaneous localization and mapping (SLAM) is a fundamental capability for a robot to perceive its surrounding environment. The research area has developed over more than two decades, from the original sparse landmark-based SLAM to dense SLAM, and there is now a demand for semantic understanding of the environment beyond purely geometric understanding. This thesis makes a number of contributions towards object-based SLAM, in which the map consists of a set of objects with their semantic categories recognized and their poses and shapes estimated. Such a map provides vital object-level semantic and geometric perception for applications such as augmented reality (AR), mixed reality (MR), mobile manipulation, and autonomous driving.

In order to perform object-based SLAM, the sensor measurements have to undergo a series of processes. First, objects are semantically segmented in the sensor measurements, typically by a neural network. As robots are often required to bootstrap from an initial labeled dataset and adapt to environments where labeled data are unavailable, it is important to enable semi-supervised learning that improves performance using the unlabeled data collected by the robot itself. Second, after the objects are segmented, measurements of each object across different viewpoints have to be associated together for downstream processing. Lastly, the robot must be able to extract object pose and shape information from the measurements without access to detailed CAD models of the objects.
This thesis studies these three aspects of object-based SLAM: semi-supervised learning of semantic segmentation in a robotics context, data association for object-based SLAM, and category-level object pose and shape estimation. For category-level object pose and shape estimation, we developed ShapeICP (ICP: iterative closest point), an algorithm that does not use pose-annotated data and generates meshes as the object shape representation. For data association, we developed DAF-SLAM (DAF: data association free) to estimate the associations in the back-end instead of relying on sensor-dependent front-end methods. For semi-supervised learning, we applied temporal semantic consistency, inspired by the photometric consistency technique in traditional SLAM methods. Each contribution is evaluated on experimental datasets, demonstrating improvements over previous techniques.

Committee Members:
John J. Leonard (Advisor), Department of Mechanical Engineering
Faez Ahmed, Department of Mechanical Engineering
Nicholas Roy, Department of Aeronautics and Astronautics
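The abstract names ShapeICP, which builds on iterative closest point (ICP). The thesis's own algorithm is not reproduced here, but the classic point-to-point ICP loop it extends can be sketched; this is a minimal 2-D version using the Kabsch least-squares alignment, with all names and parameters illustrative rather than taken from the thesis:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(src, dst, iters=30):
    """Point-to-point ICP: alternate nearest-neighbour matching and rigid alignment."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every point in cur
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matches = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
    return cur
```

For a small initial misalignment the nearest-neighbour correspondences are correct and the loop converges in a few iterations; the estimated correspondences are exactly the "data association" step that the thesis moves into the back-end.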
32-G882 (https://mit.zoom.us/j/92202523862)
May 07, 2024
Quest | CBMM Seminar Series: Invariance and equivariance in brains and machines
Bruno Olshausen
UC Berkeley
2024-05-07 16:00:00
2024-05-07 17:30:00
America/New_York
Abstract: The goal of building machines that can perceive and act in the world as humans and other animals do has been a focus of AI research efforts for over half a century. Over this same period, neuroscience has sought to achieve a mechanistic understanding of the brain processes underlying perception and action. It stands to reason that these parallel efforts could inform one another. However, recent advances in deep learning and transformers have, for the most part, not translated into new neuroscientific insights; and beyond deriving loose inspiration from neuroscience, AI has mostly pursued its own course, which now deviates strongly from the brain. Here I propose an approach to building both invariant and equivariant representations in vision that is rooted in observations of animal behavior and informed by both neurobiological mechanisms (recurrence, dendritic nonlinearities, phase coding) and mathematical principles (group theory, residue numbers). What emerges from this approach is a neural circuit for factorization that can learn about shapes and their transformations from image data, and a model of the grid-cell system based on high-dimensional encodings of residue numbers. These models provide efficient solutions to long-studied problems and are well suited for implementation in neuromorphic hardware, or as a basis for forming hypotheses about visual cortex and entorhinal cortex.

Bio: Bruno Olshausen is a Professor in the Helen Wills Neuroscience Institute and the School of Optometry at UC Berkeley, with a below-the-line affiliated appointment in EECS. He holds B.S. and M.S. degrees in Electrical Engineering from Stanford University and a Ph.D. in Computation and Neural Systems from the California Institute of Technology. He did his postdoctoral work in the Department of Psychology at Cornell University and at the Center for Biological and Computational Learning at the Massachusetts Institute of Technology.
From 1996 to 2005 he was on the faculty of the Center for Neuroscience at UC Davis, and in 2005 he moved to UC Berkeley. He also directs the Redwood Center for Theoretical Neuroscience, a multidisciplinary research group focused on building mathematical and computational models of brain function (see http://redwood.berkeley.edu).

Olshausen's research focuses on understanding the information processing strategies employed by the visual system for tasks such as object recognition and scene analysis. Computer scientists have long sought to emulate the abilities of the visual system in digital computers, but achieving performance anywhere close to that exhibited by biological vision systems has proven elusive. Dr. Olshausen's approach is based on studying the response properties of neurons in the brain and attempting to construct mathematical models that describe what neurons are doing in terms of a functional theory of vision. The aim of this work is not only to advance our understanding of the brain but also to devise new algorithms for image analysis and recognition based on how brains work.
Singleton Auditorium (46-3002)
June 07, 2024
2024-06-07 09:00:00
2024-06-07 18:00:00
America/New_York
CSAIL + Imagination in Action Symposium 2024
The symposium will showcase the extraordinary and substantive contributions CSAIL research groups have made, and highlight the remarkable impacts of our work.
Kirsch Auditorium