September 12

Quest | CBMM Seminar Series: Coding of space and time in cortical structures

Prof. Michael Hasselmo
Director, Center for Systems Neuroscience; Boston University
4:00pm to 5:30pm ET

Abstract: Recordings of neurons in cortical structures in behaving rodents show responses to dimensions of space and time relevant to the encoding and retrieval of spatiotemporal trajectories of behavior in episodic memory. This includes the coding of spatial location by grid cells in entorhinal cortex and place cells in hippocampus, some of which also fire as time cells when a rodent runs on a treadmill (Kraus et al., 2013; 2015; Mau et al., 2018). Trajectory encoding also includes coding of the direction and speed of movement. Speed is coded by both firing rate and the frequency of neuronal rhythmicity (Hinman et al., 2016; Dannenberg et al., 2020), and inactivation of input from the medial septum impairs the spatial selectivity of grid cells, suggesting that rhythmic coding of running speed is important for spatial coding by grid cells (Brandon et al., 2011; Robinson et al., 2023). However, entorhinal neurons code head direction more than movement direction, raising questions about the role of path integration in computing position (Raudies et al., 2015). As a complementary mechanism, allocentric spatial location could be coded by a transformation of egocentric sensory input. Data from our lab show coding of environmental boundaries in egocentric coordinates (Hinman et al., 2019; Alexander et al., 2020) that can be combined with head direction coding for a transformation into allocentric coding of boundaries and spatial location. Thus, a variety of functional neuronal responses could contribute to the coding of time and spatial location.

Bio: Research in the Hasselmo Laboratory concerns the cortical dynamics of memory-guided behavior, including the effects of neuromodulation and theta rhythm oscillations on cortical function. Neurophysiological techniques are used to analyze intrinsic and synaptic properties of cortical circuits in rodents and to explore the effects of modulators on these properties. Computational modeling is used to link these physiological data to memory-guided behavior. Experiments using multiple single-unit recording in behavioral tasks are designed to test predictions of the computational models. Areas of research focus include episodic memory function and theta rhythm dynamics in the entorhinal cortex, prefrontal cortex, and hippocampal formation. This research has implications for understanding the pathology of Alzheimer's disease, schizophrenia, and depression.
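The egocentric-to-allocentric transformation mentioned in the abstract can be illustrated with a minimal sketch. This is an illustration under simplifying assumptions (the function names and the simple planar rotation model are mine, not the Hasselmo lab's model): an egocentric boundary bearing is rotated by the animal's head direction to recover the boundary's allocentric bearing and position.

```python
import numpy as np

def egocentric_to_allocentric(ego_bearing, ego_distance, head_direction, position):
    """Convert an egocentric boundary observation to allocentric coordinates.

    ego_bearing    -- bearing of the boundary point relative to the head (radians)
    ego_distance   -- distance to the boundary point
    head_direction -- allocentric head direction (radians)
    position       -- (x, y) allocentric position of the animal

    Simplified illustration: rotating the egocentric bearing by the head
    direction yields the allocentric bearing of the boundary point.
    """
    allo_bearing = (ego_bearing + head_direction) % (2 * np.pi)
    x, y = position
    boundary_xy = (x + ego_distance * np.cos(allo_bearing),
                   y + ego_distance * np.sin(allo_bearing))
    return allo_bearing, boundary_xy

# A wall segment seen 90 degrees to the left of the head, 20 cm away,
# while the animal faces "east" at the origin:
print(egocentric_to_allocentric(np.pi / 2, 20.0, 0.0, (0.0, 0.0)))
```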

May 18

A Fruitful Reciprocity: The Neuroscience-AI Connection

Prof. Daniel Yamins
Assistant Professor of Psychology and Computer Science, Stanford University
2:00pm to 3:15pm ET

Abstract: The emerging field of NeuroAI has leveraged techniques from artificial intelligence to model brain data. In this talk, I will show that the connection between neuroscience and AI can be fruitful in both directions. Towards "AI driving neuroscience," I will discuss a new candidate universal principle for functional organization in the brain, based on recent advances in self-supervised learning, that explains both fine details and large-scale organizational structure in the visual system, and perhaps beyond. In the direction of "neuroscience guiding AI," I will present a novel cognitively grounded computational theory of perception that generates robust new learning algorithms for real-world scene understanding. Taken together, these ideas illustrate how neural networks optimized to solve cognitively informed tasks provide a unified framework for both understanding the brain and improving AI.

Bio: Dr. Yamins is a cognitive computational neuroscientist at Stanford University, an assistant professor of Psychology and Computer Science, a faculty scholar at the Wu Tsai Neurosciences Institute, and an affiliate of the Stanford Artificial Intelligence Laboratory. His research group focuses on reverse engineering the algorithms of the human brain to learn how our minds work and to build more effective artificial intelligence systems. He is especially interested in how brain circuits for sensory information processing and decision-making arise by optimizing high-performing cortical algorithms for key behavioral tasks. He received his AB and PhD degrees from Harvard University, was a postdoctoral researcher at MIT, and has been a visiting researcher at Princeton University and Los Alamos National Laboratory. He is a recipient of an NSF CAREER Award, the James S. McDonnell Foundation award in Understanding Human Cognition, and a Sloan Research Fellowship. Additionally, he is a Simons Foundation Investigator.

Singleton Auditorium (46-3002)
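As one concrete instance of the kind of self-supervised objective the abstract alludes to (the talk does not specify which objective; this contrastive InfoNCE-style loss is an illustrative assumption on my part), a network is trained so that embeddings of two augmented views of the same image agree while differing from embeddings of other images:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE-style) loss over a batch of paired embeddings.

    z1, z2 -- (batch, dim) embeddings of two augmented views of the same inputs.
    Each row of z1 should be most similar to the matching row of z2.
    """
    # L2-normalize so that dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature          # (batch, batch) similarity matrix
    # The "positive" for row i is column i; all other columns are negatives.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))
noise = 0.05 * rng.normal(size=(8, 32))
print(info_nce_loss(z, z + noise))   # near-identical paired views give a low loss
```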

May 09

Photographic Image Priors in the Era of Machine Learning

Prof. Eero Simoncelli
New York University
4:00pm to 5:15pm ET

Abstract: Inference problems in machine or biological vision generally rely on knowledge of prior probabilities, such as spectral or sparsity models. In recent years, machine learning has provided dramatic improvements on most of these problems using artificial neural networks, which are typically optimized using nonlinear regression to provide direct solutions for each specific task. As such, the prior probabilities are implicit, and intertwined with the tasks for which they are optimized. I'll describe properties of priors implicitly embedded in denoising networks, and describe methods for drawing samples from them. Extensions of these sampling methods enable the use of the implicit prior to solve any deterministic linear inverse problem, with no additional training, thus extending the power of supervised learning for denoising to a much broader set of problems. The method relies on minimal assumptions, exhibits robust convergence over a wide range of parameter choices, and achieves state-of-the-art levels of unsupervised performance for deblurring, super-resolution, and compressive sensing. It can also be used to examine the perceptual implications of physiological information processing.

Bio: Eero received a BA in Physics from Harvard (1984), a Certificate of Advanced Study in Mathematics from the University of Cambridge (1986), and an MS and PhD in Electrical Engineering and Computer Science from MIT (1988/1993). He was an assistant professor in the Computer and Information Science Department at the University of Pennsylvania from 1993 to 1996, and then moved to NYU as an assistant professor of Neural Science and Mathematics (later adding Psychology and, most recently, Data Science). Eero received an NSF CAREER award in 1996, an Alfred P. Sloan Research Fellowship in 1998, and became an Investigator of the Howard Hughes Medical Institute in 2000. He was elected a Fellow of the IEEE in 2008 and an associate member of the Canadian Institute for Advanced Research in 2010. He has received two Outstanding Faculty awards from the NYU GSAS Graduate Student Council (2003/2011), two IEEE Best Journal Article awards (2009/2010) and a Sustained Impact Paper award (2016), an Emmy Award from the Academy of Television Arts and Sciences for a method of measuring the perceptual quality of images (2015), and the Golden Brain Award from the Minerva Foundation for fundamental contributions to visual neuroscience (2017). His group studies the representation and analysis of visual images in biological and machine systems.

This will be an in-person only event. 43 Vassar St., Cambridge, MA 02139
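The link between denoising and implicit priors rests on a classical result (Miyasawa/Tweedie): for Gaussian noise of variance σ², the least-squares denoiser f satisfies f(y) − y = σ² ∇ log p(y), so the denoiser residual points up the gradient of the noise-blurred log prior. Below is a minimal sketch of sampling by iterating that residual; the `denoise` stand-in, the schedule constants, and the step-size details are my simplifying assumptions, not the authors' published algorithm.

```python
import numpy as np

def denoise(y):
    """Stand-in for a trained blind denoiser f(y). Here: shrinkage toward
    zero, the optimal denoiser for a toy zero-mean Gaussian prior."""
    return 0.9 * y

def sample_from_implicit_prior(shape, steps=200, h=0.1, beta=0.5, seed=0):
    """Coarse-to-fine stochastic ascent on the implicit log-prior.

    Each step moves along the denoiser residual f(y) - y, which is
    proportional to the gradient of the log prior, and injects a
    controlled amount of fresh noise so the effective noise level
    (and hence the blur of the prior) shrinks gradually.
    """
    rng = np.random.default_rng(seed)
    y = rng.normal(scale=1.0, size=shape)        # start from pure noise
    for _ in range(steps):
        d = denoise(y) - y                        # proportional to grad log p(y)
        sigma2 = np.mean(d ** 2)                  # estimate current noise variance
        gamma = np.sqrt((1 - beta * h) ** 2 - (1 - h) ** 2) * np.sqrt(sigma2)
        y = y + h * d + gamma * rng.normal(size=shape)
    return y

print(sample_from_implicit_prior((4,)))
```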

May 02

Improving Deep Reinforcement Learning via Quality Diversity, Open-Ended, and AI-Generating Algorithms

Associate Professor, Computer Science, University of British Columbia; Canada CIFAR AI Chair and Faculty Member, Vector Institute; Senior Research Advisor, DeepMind
4:00pm to 5:15pm ET

Abstract: Quality Diversity (QD) algorithms are those that seek to produce a diverse set of high-performing solutions to problems. I will describe them and a number of their positive attributes. I will summarize how they enable robots, after being damaged, to adapt in 1-2 minutes in order to continue performing their mission. I will next describe our QD-based Go-Explore algorithm, which dramatically improves the ability of deep reinforcement learning algorithms to solve previously unsolvable problems wherein reward signals are sparse, meaning that intelligent exploration is required. Go-Explore solved all previously unsolved Atari games, including Montezuma's Revenge and Pitfall, considered by many to be grand challenges of AI research. I will next motivate research into open-ended algorithms, which seek to innovate endlessly, and introduce our POET algorithm, which generates its own training challenges while learning to solve them, automatically creating curricula for robots to learn an expanding set of diverse skills. Finally, I'll argue that an alternate paradigm—AI-generating algorithms (AI-GAs)—may be the fastest path to accomplishing our field's grandest ambition of creating general AI, and describe how QD, open-ended, and unsupervised pre-training algorithms (e.g., our recent work on video pre-training/VPT) will likely be essential ingredients of AI-GAs.

Bio: Jeff Clune is an Associate Professor of computer science at the University of British Columbia and a Canada CIFAR AI Chair at the Vector Institute. Jeff focuses on deep learning, including deep reinforcement learning. Previously he was a research manager at OpenAI, a Senior Research Manager and founding member of Uber AI Labs (formed after Uber acquired a startup he helped lead), the Harris Associate Professor in Computer Science at the University of Wyoming, and a Research Scientist at Cornell University. He received degrees from Michigan State University (PhD, master's) and the University of Michigan (bachelor's). More on Jeff's research can be found at http://www.JeffClune.com or on Twitter (@jeffclune). Since 2015, he has won the Presidential Early Career Award for Scientists and Engineers from the White House, published two papers in Nature and one in PNAS, won an NSF CAREER award, received Outstanding Paper of the Decade and Distinguished Young Investigator awards, and had best paper awards, oral presentations, and invited talks at the top machine learning conferences (NeurIPS, CVPR, ICLR, and ICML). His research is regularly covered in the press, including the New York Times, NPR, NBC, Wired, the BBC, the Economist, Science, Nature, National Geographic, the Atlantic, and New Scientist.
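The core loop of Go-Explore (remember promising states, first return, then explore) can be sketched compactly. This is a paraphrase of the published idea, not the authors' code: the gym-style `env` interface with `get_state`/`set_state` (restoring simulator state, as in the exploration phase of the original paper) and the coarse `cell_of` binning are simplified assumptions.

```python
import random

def cell_of(obs):
    """Map an observation to a coarse 'cell' (Go-Explore downscales game
    frames; here we simply bin a numeric observation vector)."""
    return tuple(int(x) // 10 for x in obs)

def go_explore(env, iterations=1000, explore_steps=100):
    """Exploration phase of Go-Explore: keep an archive of cells, each with
    the highest-scoring simulator state that reached it; repeatedly pick a
    cell, restore the simulator to it, and explore randomly from there."""
    obs = env.reset()
    archive = {cell_of(obs): (env.get_state(), 0.0)}  # cell -> (sim state, score)
    for _ in range(iterations):
        # Go: select an archived cell and return to it by restoring state.
        cell = random.choice(list(archive))
        state, score = archive[cell]
        env.set_state(state)
        # Explore: take random actions, archiving any new or better cells.
        for _ in range(explore_steps):
            obs, reward, done, _ = env.step(env.action_space.sample())
            score += reward
            c = cell_of(obs)
            if c not in archive or score > archive[c][1]:
                archive[c] = (env.get_state(), score)
            if done:
                break
    return archive
```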

April 25

CBMM | Quest Seminar Series: Characterizing complex meaning in the human brain

Prof. Leila Wehbe
Assistant Professor, Machine Learning Department and Neuroscience Institute, Carnegie Mellon University
4:00pm to 5:15pm ET

Abstract: Aligning neural network representations with brain activity measurements is a promising approach for studying the brain. However, it is not always clear what the ability to predict brain activity from neural network representations entails. In this talk, I will describe a line of work that utilizes computational controls (control procedures used after data collection) and other procedures to understand how the brain constructs complex meaning. I will describe experiments aimed at studying the representation of the composed meaning of words during language processing, and the representation of high-level visual semantics during visual scene understanding. These experiments shed new light on meaning representation in language and vision.

Bio: Leila Wehbe is an assistant professor in the Machine Learning Department and the Neuroscience Institute at Carnegie Mellon University. Her work is at the interface of cognitive neuroscience and computer science. It combines naturalistic functional imaging with machine learning, both to improve our understanding of the brain and to find insights for building better artificial systems. Previously, she was a postdoctoral researcher at UC Berkeley, working with Jack Gallant. She obtained her PhD from Carnegie Mellon University, where she worked with Tom Mitchell.

This will be an in-person only event. Singleton Auditorium (46-3002)
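The alignment technique the abstract refers to (predicting brain activity from neural network representations) is typically implemented as a regularized linear encoding model fit per voxel. A minimal sketch under simplifying assumptions follows; the data here are simulated, and real analyses additionally account for the hemodynamic response and cross-validate more carefully:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Simulated stand-ins: network features per stimulus, and voxel responses
# generated as a noisy linear function of those features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))                      # (stimuli, features)
W = rng.normal(size=(64, 20))
Y = X @ W + rng.normal(scale=2.0, size=(500, 20))   # (stimuli, voxels)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, Y_tr)

# Per-voxel prediction performance: correlation between held-out
# measured and predicted responses.
pred = model.predict(X_te)
r = [np.corrcoef(Y_te[:, v], pred[:, v])[0, 1] for v in range(Y.shape[1])]
print(np.round(r, 2))
```

A computational control, in this framing, amounts to residualizing out a confounding feature set before asking whether the features of interest still predict held-out activity.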

April 11

CBMM | Quest Seminar Series - Eleanor Jack Gibson: A Life in Science

Prof. Elizabeth Spelke
Marshall L. Berkman Professor of Psychology, Harvard University
4:00pm to 5:15pm ET

Abstract: More than two decades after her death, Eleanor Gibson may still be the best experimental psychologist ever to work in the developmental cognitive sciences, yet her work appears to have been forgotten, or never learned, by many students and investigators today. Here, drawing on three of Gibson's autobiographies, together with her published research and a few personal recollections, I aim to paint a portrait of her life and science. What is it like to be a gifted and knowledgeable scientist working in a world that systematically excludes people like oneself, both institutionally and socially? What institutional actions support such people, both for their benefit and for the benefit of science and its institutions? In this talk, I focus primarily on Gibson's thinking and research, but her life and science suggest some answers to these questions and some optimism for the future of our fields.

Bio: Elizabeth Spelke is the Marshall L. Berkman Professor of Psychology at Harvard University and an investigator at the NSF-MIT Center for Brains, Minds and Machines. Her laboratory focuses on the sources of uniquely human cognitive capacities, including capacities for formal mathematics, for constructing and using symbols, and for developing comprehensive taxonomies of objects. She probes the sources of these capacities primarily through behavioral research on human infants and preschool children, focusing on the origins and development of their understanding of objects, actions, people, places, number, and geometry. In collaboration with computational cognitive scientists, she aims to test computational models of infants' cognitive capacities. In collaboration with economists, she has begun to take her research from the laboratory to the field, where randomized controlled experiments can serve to evaluate interventions, guided by research in cognitive science, that seek to enhance young children's learning.

This will be an in-person only event. Singleton Auditorium (46-3002)

April 04

Discussion panel on Transformers vs. Humans: The ultimate battle for general intelligence

Ev Fedorenko, Sydney Levine, Josh Tenenbaum, Phillip Isola, Martin Schrimpf
(CBMM, BCS, CSAIL, EECS, MIT)
4:00pm to 5:15pm ET

Panelists: Ev Fedorenko, Sydney Levine, Josh Tenenbaum, Phillip Isola, Martin Schrimpf
Moderator: Tommy Poggio

Abstract: Transformer models have been rapidly gaining popularity as they underlie some of the most advanced deep learning systems to date. Despite their apparent successes, several questions remain unanswered. This discussion panel will compare transformer networks to human intelligence at multiple levels, including whether these models can attain general cognitive abilities, how similar their operations are to computations in the human brain, and their ability to meet or exceed human performance on different tasks.

About the Panelists:
Ev Fedorenko, Middleton CD Associate Professor of Neuroscience, MIT BCS
Sydney Levine, Research Scientist, Allen Institute for AI
Josh Tenenbaum, Professor, MIT BCS
Phillip Isola, Class of 1948 Career Development Professor, MIT EECS
Martin Schrimpf, Research Scientist, MIT Quest / Assistant Professor, EPFL

March 21

Quest | CBMM Seminar Series - Quantifying and Understanding Memorization in Deep Neural Networks
4:00pm to 5:00pm ET

Abstract: Deep learning algorithms are well known to have a propensity for fitting the training data very well and memorizing idiosyncratic properties of the training examples. From a scientific perspective, understanding memorization in deep neural networks sheds light on how those models generalize. From a practical perspective, understanding memorization is crucial for addressing the privacy and security issues that arise when deploying models in real-world applications. In this talk, we present a series of studies centered on quantifying memorization in neural language models. We explain why, in many real-world tasks, memorization is necessary for optimal generalization. We also present quantitative studies on memorization, forgetting, and unlearning in both vision and language models, to better understand the behaviors and implications of memorization in those models.
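One widely used way to quantify memorization (the talk does not specify its exact metric; this Feldman-style leave-one-out definition is an assumption on my part) scores an example by how much including it in training changes the model's accuracy on it:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def memorization_score(X, y, i, n_models=20, seed=0):
    """Leave-one-out memorization of example i:
    Pr[model(x_i) = y_i | i in training set]
      - Pr[model(x_i) = y_i | i held out],
    estimated over models trained on random subsets of the data.
    (Trees stand in for neural networks to keep the sketch fast.)"""
    rng = np.random.default_rng(seed)
    hits_in, hits_out = [], []
    n = len(y)
    for _ in range(n_models):
        subset = rng.random(n) < 0.7             # random ~70% training subset
        for include in (True, False):
            mask = subset.copy()
            mask[i] = include                    # force example i in or out
            clf = DecisionTreeClassifier(random_state=0).fit(X[mask], y[mask])
            (hits_in if include else hits_out).append(clf.predict(X[i:i+1])[0] == y[i])
    return np.mean(hits_in) - np.mean(hits_out)

# A mislabeled outlier tends to score near 1: only models that saw it
# during training can reproduce its idiosyncratic label.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)
y[0] = 1 - y[0]                                  # flip one label
print(memorization_score(X, y, 0))
```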