April 26

Communicating gradients for learning using activity dynamics
April 26, 2019, 4:00–5:00 pm, Singleton Auditorium (46-3002)

Abstract: Theoretical and empirical results in the neural networks literature demonstrate that effective learning at a real-world scale requires changes to synaptic weights that approximate the gradient of a global loss function. For neuroscientists, this means that the brain must have mechanisms for communicating loss gradients between regions, either explicitly or implicitly. Here, I describe our research into potential means of communicating loss gradients using the dynamics of activity in a population of neurons. Using a combination of computational modelling and two-photon imaging data, I will present evidence suggesting that the neocortex may encode loss gradients related to motor learning and sensory prediction using the temporal derivative of activity in populations of pyramidal neurons.
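The proposed code has a simple toy version (our illustration, not the speaker's model): if a population's firing rates relax toward a target pattern, the temporal derivative of the activity is proportional to the negative gradient of a squared-error loss on that activity, so downstream readers of the derivative receive a gradient signal.

```python
import numpy as np

rng = np.random.default_rng(0)

target = rng.normal(size=5)   # desired activity pattern (hypothetical)
rate = np.zeros(5)            # current population activity
tau, dt = 50.0, 1.0           # relaxation time constant and step (ms)

rates = [rate.copy()]
for _ in range(200):
    drate = (target - rate) / tau   # leaky relaxation toward the target
    rate = rate + dt * drate
    rates.append(rate.copy())

# The temporal derivative equals the negative gradient of the loss
# 0.5 * ||target - rate||^2, scaled by 1/tau: the activity dynamics
# alone carry the gradient signal.
deriv = (rates[1] - rates[0]) / dt
error_grad = (target - rates[0]) / tau
assert np.allclose(deriv, error_grad)
```

In this caricature no separate gradient channel is needed; any circuit that can read a rate of change can read the gradient.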

April 02

The topology of representation teleportation, regularized Oja's rule, and weight symmetry
April 2, 2019, 4:00–5:00 pm, 46-3002

Abstract: When trained to minimize reconstruction error, a linear autoencoder (LAE) learns the subspace spanned by the top principal directions but cannot learn the principal directions themselves. In this talk, I'll explain how this observation became the focus of a project on representation learning of neurons using single-cell RNA data. I'll then share how this focus led us to a satisfying conversation between numerical analysis, algebraic topology, random matrix theory, deep learning, and computational neuroscience. We'll see that an L2-regularized LAE learns the principal directions as the left singular vectors of the decoder, providing a simple and scalable PCA algorithm related to Oja's rule. We'll use the lens of Morse theory to smoothly parameterize all LAE critical manifolds and the gradient trajectories between them, and see how algebra and probability theory provide principled foundations for ensemble learning in deep networks, while suggesting new algorithms. Finally, we'll come full circle to neuroscience via the "weight transport problem" (Grossberg 1987), proving that L2-regularized LAEs are symmetric at all critical points. This theorem provides local learning rules by which maximizing information flow and minimizing energy expenditure give rise to less-biologically-implausible analogues of backpropagation, which we are excited to explore in vivo and in silico. Joint work with Daniel Kunin, Aleksandrina Goeva, and Cotton Seed.

Project resources: https://github.com/danielkunin/Regularized-Linear-Autoencoders

Short Bio: Jon Bloom is an Institute Scientist at the Stanley Center for Psychiatric Research within the Broad Institute of MIT and Harvard. In 2015, he co-founded the Models, Inference, and Algorithms Initiative and a team (Hail) building distributed systems used throughout academia and industry to uncover the biology of disease. In his youth, Jon did useless math at Harvard and Columbia and learned useful math by rebuilding MIT's Intro to Probability and Statistics as a Moore Instructor and NSF postdoc. These days, he is exuberantly surprised to find the useless math may be useful after all.
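The decoder claim is easy to check numerically. Below is a minimal numpy sketch (our own illustration, not the code from the project repository): train an LAE by gradient descent with an L2 penalty on both encoder and decoder, then compare the decoder's left singular vectors to the principal directions of the data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic centered data with two dominant principal directions.
n, m, k = 8, 500, 2
X = rng.normal(size=(n, m)) * np.array([5, 3, 1, 1, 1, 1, 1, 1])[:, None]
X -= X.mean(axis=1, keepdims=True)
C = X @ X.T / m                          # sample covariance

lam, lr = 0.1, 0.005
W1 = rng.normal(scale=0.1, size=(k, n))  # encoder
W2 = rng.normal(scale=0.1, size=(n, k))  # decoder

# Gradient descent on (1/m)||X - W2 W1 X||^2 + lam(||W1||^2 + ||W2||^2).
for _ in range(20000):
    E = W2 @ W1 @ C - C
    gW2 = 2 * E @ W1.T + 2 * lam * W2
    gW1 = 2 * W2.T @ E + 2 * lam * W1
    W2 -= lr * gW2
    W1 -= lr * gW1

# Left singular vectors of the decoder vs. top principal directions of X:
# the cosines of the principal angles between the two frames approach 1.
# The symmetry theorem also predicts W1 close to W2.T at convergence.
U_dec = np.linalg.svd(W2)[0][:, :k]
U_pca = np.linalg.svd(X)[0][:, :k]
print(np.linalg.svd(U_dec.T @ U_pca, compute_uv=False))
```

Intuitively, the penalty shrinks the GL(k) symmetry of the unregularized LAE down to the orthogonal group, which is what pins the decoder's left singular vectors (and not merely their span) to the principal directions.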

March 22

Brains, Minds + Machines Seminar Series: Probing memory circuits in the primate brain: from single neurons to neural networks
March 22, 2019, 4:00–5:00 pm, 46-3002

Abstract: The brain's memory systems are like time machines for thought: they transport sensory experiences from the past to the present, to guide our current decisions and actions. Memories have been classified into long-term, stored for time intervals of days, months, or years, and short-term, stored for shorter intervals of seconds or minutes. There is a consensus that these two types of memories involve different brain systems and have different underlying mechanisms. In this talk I will present data from different experiments in non-human primates examining brain circuits and mechanisms of both short-term memory and long-term memory.

Biography: Julio Martinez-Trujillo is Professor in the Department of Physiology and Pharmacology and Scientist at the Robarts Research Institute. He holds an Academic Chair in Autism. Prior to joining Western University in 2014, he was Associate Professor in the Department of Physiology and Canada Research Chair in Neuroscience at McGill University.

March 20

CBMM Special Seminar: Self-Learning Systems
March 20, 2019, 4:00–5:00 pm, 34-101

Speaker Biography: Demis is a former child chess prodigy who finished his A-levels two years early before coding the multi-million-selling simulation game Theme Park at age 17. Following graduation from Cambridge University with a Double First in Computer Science, he founded the pioneering videogames company Elixir Studios, producing award-winning games for global publishers such as Vivendi Universal. After a decade of experience leading successful technology startups, Demis returned to academia to complete a PhD in cognitive neuroscience at UCL, followed by postdocs at MIT and Harvard, before founding DeepMind. His research into the neural mechanisms underlying imagination and planning was listed in the top ten scientific breakthroughs of 2007 by the journal Science. Demis is a five-time World Games Champion, a Fellow of the Royal Society of Arts, and the recipient of the Royal Society's Mullard Award and the Royal Academy of Engineering's Silver Medal.

This talk is co-hosted by the Center for Brains, Minds, and Machines (CBMM) and MIT Quest for Intelligence.

March 19

CBMM Special Seminar: The State of Autonomous Driving

President and CEO Mobileye, an Intel company; Senior Vice President, Intel Corporation; Sachs Professor of Computer Science | Hebrew University
March 19, 2019, 4:30–5:30 pm, 10-250

*Please note change of location - this talk will be held in MIT 10-250.*

Speaker Biography: Professor Amnon Shashua is senior vice president at Intel Corporation and president and chief executive officer of Mobileye, an Intel company. He leads Intel's global organization focused on advanced driver-assistance systems (ADAS) and highly autonomous and fully autonomous driving solutions and programs. Professor Shashua holds the Sachs Chair in computer science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. Prof. Shashua received the MARR Prize Honorable Mention in 2001, the Kaye Innovation Award in 2004, and the Landau Award in Exact Sciences in 2005. Since 2010, Prof. Shashua has been the co-founder, Chief Technology Officer, and Chairman of OrCam, an Israeli company that recently launched an assistive product for the visually impaired based on advanced computerized visual interpretation capabilities.

This talk is co-hosted by the Center for Brains, Minds, and Machines (CBMM) and MIT Quest for Intelligence.

November 30

Perceiving what we cannot sense: Insights from 3D vision
November 30, 2018, 4:00–5:00 pm, 46-3002

Abstract: Our sensory systems are unable to directly sense all the aspects of the world we perceive. For example, our perception of the world as three-dimensional (3D) is compelling, but our eyes only detect two-dimensional (2D) projections of our surroundings. Creating accurate and precise 3D percepts is critical for successful interactions with our environment, but how does the brain solve this inverse problem? Using 3D vision in the macaque monkey as the model system, I will show behavioral, neuroimaging, and electrophysiological data that together reveal a hierarchical, cortical pathway specialized for implementing the 2D-to-3D visual transformation. The results of these experiments reveal roles of little-explored brain areas in the dorsal visual pathway, including V3A and CIP, and have broader implications for our understanding of how the brain solves nonlinear optimization problems required to perceive what we cannot sense.
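As a toy instance of such an inverse problem (our illustration, not from the talk): under a pinhole stereo model, depth is never sensed directly but follows from the 2D cue of binocular disparity as depth = focal length × baseline / disparity.

```python
def depth_from_disparity(disparity_px, focal_px=1000.0, baseline_m=0.06):
    """Pinhole stereo model: nearer points cast larger disparities.

    All parameter values here are illustrative, not measured.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(30.0))  # 2.0 m
print(depth_from_disparity(60.0))  # 1.0 m
```

Note that the inverse mapping is ill-conditioned at small disparities (distant points), since depth error grows as the inverse square of disparity, which is one reason precise 3D percepts are a nontrivial inference problem.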

November 20

Quest Symposium on Robust, Interpretable AI
November 20, 2018, 2:00–6:00 pm, Building 46 Atrium and Auditorium

The Quest Symposium on Robust, Interpretable AI will explore the latest techniques for making AI more trustworthy. Join us for posters by MIT students and postdocs, and talks by MIT faculty. Research topics will include attack and defense methods for deep neural networks, visualizations, interpretable modeling, and other methods for revealing deep network behavior, structure, sensitivities, and biases.

October 26

Body-Brain Interface: Neuroanatomical and Functional Insights from the Primate Insular Cortex

Head of Research Group, CIN Functional and Comparative Neuroanatomy, Werner Reichardt Center for Integrative Neuroscience, Max Planck Institute for Biological Cybernetics
October 26, 2018, 4:00–5:00 pm, 46-3002

Abstract: Interoception substantiates embodied feelings and shapes cognitive processes including perceptual awareness. My lab combines architectonics, tract-tracing, electrophysiology, direct electrical stimulation fMRI (DES-fMRI), neural event-triggered fMRI (NET-fMRI), and optogenetics in the macaque monkey in order to examine the neuroanatomical and functional organization of the insular cortex, one of the key central interfaces between bodily and brain states. Our anatomical examination revealed that the insular cortex is organized according to a refined and highly consistent modular Bauplan in which architectonics and hodology perfectly overlap. Hodological and functional examinations suggest that the insula contains a granular-to-dysgranular-to-agranular processing flow in which interoceptive afferents are progressively integrated with self-agency and socially relevant activities from other parts of the brain, until reaching an ultimate representation of instantaneous physiological states in the anterior insula. The anterior insula contains distinct areas, each with specific projections. One of these areas specifically contains the atypical spindle-shaped von Economo neuron (VEN). A relatively high proportion of VENs project to distant preautonomic midbrain regions. Recording and stimulation in the 'VEN area' confirmed the connection with these regions and highlighted prominent functional relations to high-order cortical areas, supporting the idea that the VEN area could serve as a hub for the simultaneous interoceptive shaping of polymodal perceptual experience and high-order regulation of bodily states.

October 19

What are the computational functions of feedback to early visual cortex?
October 19, 2018, 4:00–5:00 pm, 46-3002

Abstract: The existence of feedforward and feedback neural connections between areas in the primate visual cortical hierarchy is well known. While there is a general consensus for how feedforward connections support the sequential stages of visual processing for tasks such as object recognition, the computational functions of feedback for recognition and other tasks are less well understood. I will discuss several proposals for the functions of feedback, including resolving local ambiguity using high-level predictive knowledge, binding information across levels of abstraction in the visual hierarchy, and engaging lower-level "expertise" as the task requires it. Several human behavioral and neuroimaging results will be described that support these proposals: the first showing how the larger spatial context modulates local activity in the visual system, the second how cortical activity in human V1/V2 depends on whether shape information is extracted in the presence or absence of clutter, and the third how tasks requiring the analysis of spatial detail influence responses in foveal cortex.

October 12

Modal-Set Estimation using kNN graphs, and Applications to Clustering
October 12, 2018, 4:00–5:00 pm, 46-3002

Abstract: Estimating the mode or modal-sets (i.e., extrema points or surfaces) of an unknown density from a sample is a basic problem in data analysis. Such estimation is relevant to other problems such as clustering and outlier detection, or can simply serve to identify low-dimensional structures in high-dimensional data (e.g., point-cloud data from medical imaging, astronomy, etc.). Theoretical work on mode estimation has largely concentrated on understanding its statistical difficulty, while less attention has been given to implementable procedures. Thus, theoretical estimators, which are often statistically optimal, are for the most part hard to implement. Furthermore, for more general modal-sets (general extrema of any dimension and shape) much less is known, although various existing procedures (e.g., for manifold denoising or density-ridge estimation) have a similar practical aim. I'll present two related contributions of independent interest: (1) practical estimators of modal-sets, based on particular subgraphs of a k-NN graph, which attain minimax-optimal rates under surprisingly general distributional conditions; (2) high-probability finite-sample rates for k-NN density estimation, which is at the heart of our analysis. Finally, I'll discuss recent successful work towards the deployment of these modal-set estimators for various clustering applications.
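Both ingredients fit in a few lines of numpy (our rough illustration, not the speaker's estimator): a k-NN density estimate, and a mode rule that flags points whose estimated density is not exceeded among their k nearest neighbors.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two 1-D Gaussian clusters with true modes near -3 and +3.
x = np.sort(np.concatenate([rng.normal(-3, 0.5, 200),
                            rng.normal(3, 0.5, 200)]))
n, k, d = len(x), 20, 1

# k-NN density estimate: f(x_i) proportional to k / (n * r_k(x_i)^d),
# where r_k is the distance to the k-th nearest neighbor.
dist = np.abs(x[:, None] - x[None, :])
r_k = np.sort(dist, axis=1)[:, k]
density = k / (n * r_k ** d)

# Flag sample modes: points whose estimated density is not exceeded by
# any of their k nearest neighbors in the k-NN graph.
knn = np.argsort(dist, axis=1)[:, 1:k + 1]
is_mode = np.array([density[i] >= density[knn[i]].max() for i in range(n)])
modes = x[is_mode]
print(modes)  # a few points near -3 and +3
```

Assigning every remaining point to the mode reached by repeatedly hopping to its highest-density neighbor turns this sketch into a simple clustering procedure.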

September 21

Linking the statistics of network activity and network connectivity
September 21, 2018, 4:00–5:00 pm, 46-3002

Abstract: There is an avalanche of new data on the brain's activity, revealing the collective dynamics of vast numbers of neurons. In principle, these collective dynamics can be of almost arbitrarily high dimension, with many independent degrees of freedom, and this may reflect powerful capacities for general computing or information. In practice, datasets reveal a range of outcomes, including collective dynamics of much lower dimension, and this may reflect the structure of tasks or latent variables. For what networks does each case occur? Our contribution to the answer is a new framework that links tractable statistical properties of network connectivity with the dimension of the activity that they produce. I'll describe where we have succeeded, where we have failed, and the many avenues that remain.
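"Dimension of activity" can be made concrete with the participation ratio of the covariance spectrum, PR = (Σλᵢ)² / Σλᵢ², a standard choice in this literature. The sketch below (our illustration, not the speaker's framework) contrasts independent activity with activity driven by a few shared latent variables.

```python
import numpy as np

rng = np.random.default_rng(3)

def participation_ratio(activity):
    """activity: (neurons, timepoints) array.

    PR = (sum of covariance eigenvalues)^2 / (sum of squared eigenvalues):
    n for n equal eigenvalues, near 1 when one mode dominates.
    """
    lam = np.linalg.eigvalsh(np.cov(activity))
    return lam.sum() ** 2 / (lam ** 2).sum()

n, t = 100, 5000
independent = rng.normal(size=(n, t))    # ~n independent degrees of freedom
latent = rng.normal(size=(3, t))         # 3 shared latent variables
low_rank = rng.normal(size=(n, 3)) @ latent

print(participation_ratio(independent))  # near n
print(participation_ratio(low_rank))     # at most 3
```

A framework like the one in the talk would aim to predict numbers like these directly from the statistics of the connectivity, without simulating the activity.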

July 24

CBMM Special Seminar: What information dynamics can tell us about ... brains
July 24, 2018, 11:00 am–12:00 pm, 46-3002

Abstract: The space-time dynamics of interactions in neural systems are often described using the terminology of information processing, or computation, in particular with reference to information being stored, transferred, and modified in these systems. In this talk, we describe an information-theoretic framework, information dynamics, that we have used to quantify each of these operations on information and their dynamics in space and time. Not only does this framework quantitatively align with natural qualitative descriptions of neural information processing, it provides multiple complementary perspectives on how, where, and why a system is exhibiting complexity. We will review the application of this framework in computational neuroscience, describing what it can and indeed has revealed in this domain. First, we discuss examples of characterising behavioural regimes and responses in terms of information processing, including under different neural conditions and around critical states. Next, we show how the space-time dynamics of information storage, transfer, and modification directly reveal how distributed computation is implemented in a system, highlighting information-processing hot-spots and emergent computational structures, and providing evidence for conjectures on neural information processing such as predictive coding theory. Finally, via applications to several models of dynamical networks and human brain images, we demonstrate how information dynamics relates the structure of complex networks to their function, and how it can invert such analysis to infer structure from dynamics.

This event is organized by the CBMM Trainee Leadership Council.
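As a minimal concrete example of one such measure, here is our sketch of a plug-in transfer-entropy estimate for binary time series (not the speaker's tooling; mature implementations such as the JIDT toolkit exist). When y copies x with a one-step lag, information flows from x to y but not back.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)

def transfer_entropy(x, y):
    """Plug-in estimate of TE(X -> Y) = I(Y_t ; X_{t-1} | Y_{t-1}), in bits."""
    triples = Counter(zip(y[1:], x[:-1], y[:-1]))
    n = len(y) - 1
    te = 0.0
    for (yt, xp, yp), c in triples.items():
        p_xyz = c / n
        p_yz = sum(v for (a, b, g), v in triples.items() if a == yt and g == yp) / n
        p_xz = sum(v for (a, b, g), v in triples.items() if b == xp and g == yp) / n
        p_z = sum(v for (a, b, g), v in triples.items() if g == yp) / n
        te += p_xyz * np.log2(p_xyz * p_z / (p_yz * p_xz))
    return te

x = rng.integers(0, 2, 10000)
y = np.roll(x, 1)              # y copies x with a one-step lag
print(transfer_entropy(x, y))  # ~1 bit: x fully predicts the next y
print(transfer_entropy(y, x))  # ~0 bits: no information flows back
```

The asymmetry between the two directions is what makes transfer entropy a directed measure, unlike correlation or mutual information between simultaneous samples.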

July 02

CBMM Special Seminar: Transformative Generative Models
July 2, 2018, 4:00–5:00 pm, Singleton Auditorium

Abstract: Generative models are constantly improving, thanks to recent contributions in adversarial training, unsupervised learning, and autoregressive models. In this talk, I will describe new generative models in computer vision, voice synthesis, and music.

In music, I will describe the first music translation method to produce convincing results (https://arxiv.org/abs/1805.07848).

In voice synthesis, I will discuss the current state of multi-speaker text-to-speech (https://arxiv.org/abs/1802.06984).

May 21

CBMM Special Seminar: Psychophysics of Cephalopod Camouflage: Life as GIM in a GAN
May 21, 2018, 4:00–5:00 pm, 46-3002

Abstract: By a quirk of evolution, camouflaging octopus and cuttlefish report their visual perceptions by modulating their skin color and 3D texture on time scales of seconds or minutes to match their surroundings (they are generative image modelers). Their survival demands that predators perceive them as visual noise, whereas the survival of a predator demands that it detect them as signal, in a feedback loop that played out over millions of years of evolutionary time (the generative adversarial network). Whereas the mechanical and physiological mechanisms of this camouflage have been studied intensively over the last few decades with steady progress, my research group seeks instead to elucidate the computation underlying it. Following the phenomenological tradition of Helmholtz, who discovered the RGB basis of human color perception over one hundred years before its physics and physiology emerged, we couple experimental, computational, and theoretical methods to characterize the input/output transfer function of the eye-to-skin mapping, in the first instance by identifying its fixed points.