May 07

Quest | CBMM Seminar Series: Invariance and equivariance in brains and machines
May 7, 2024, 4:00-5:30 PM (America/New_York)

Abstract: The goal of building machines that can perceive and act in the world as humans and other animals do has been a focus of AI research efforts for over half a century. Over this same period, neuroscience has sought to achieve a mechanistic understanding of the brain processes underlying perception and action. It stands to reason that these parallel efforts could inform one another. However, recent advances in deep learning and transformers have, for the most part, not translated into new neuroscientific insights; and other than deriving loose inspiration from neuroscience, AI has mostly pursued its own course, which now deviates strongly from the brain. Here I propose an approach to building both invariant and equivariant representations in vision that is rooted in observations of animal behavior and informed by both neurobiological mechanisms (recurrence, dendritic nonlinearities, phase coding) and mathematical principles (group theory, residue numbers). What emerges from this approach is a neural circuit for factorization that can learn about shapes and their transformations from image data, and a model of the grid-cell system based on high-dimensional encodings of residue numbers. These models provide efficient solutions to long-studied problems that are well suited for implementation in neuromorphic hardware or as a basis for forming hypotheses about visual cortex and entorhinal cortex.

Bio: Bruno Olshausen is a Professor in the Helen Wills Neuroscience Institute and the School of Optometry, and has a below-the-line affiliated appointment in EECS. He holds B.S. and M.S. degrees in Electrical Engineering from Stanford University, and a Ph.D. in Computation and Neural Systems from the California Institute of Technology. He did his postdoctoral work in the Department of Psychology at Cornell University and at the Center for Biological and Computational Learning at the Massachusetts Institute of Technology. From 1996 to 2005 he was on the faculty in the Center for Neuroscience at UC Davis, and in 2005 he moved to UC Berkeley. He also directs the Redwood Center for Theoretical Neuroscience, a multidisciplinary research group focused on building mathematical and computational models of brain function (see http://redwood.berkeley.edu).

Olshausen's research focuses on understanding the information-processing strategies employed by the visual system for tasks such as object recognition and scene analysis. Computer scientists have long sought to emulate the abilities of the visual system in digital computers, but achieving performance anywhere close to that exhibited by biological vision systems has proven elusive. Dr. Olshausen's approach is based on studying the response properties of neurons in the brain and attempting to construct mathematical models that describe what neurons are doing in terms of a functional theory of vision. The aim of this work is not only to advance our understanding of the brain but also to devise new algorithms for image analysis and recognition based on how brains work.

Location: Singleton Auditorium (46-3002)
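The residue-number idea mentioned in the abstract can be illustrated with a toy sketch. In a residue number system, an integer is represented by its remainders modulo a set of pairwise-coprime moduli, and each residue is periodic in the underlying value, loosely analogous to how grid-cell modules with different spatial periods jointly encode position. The moduli and code below are illustrative only, not Olshausen's actual model.

```python
from math import prod

MODULI = (3, 5, 7)  # pairwise coprime; unique range = 3*5*7 = 105

def encode(x):
    """Residue representation of x: remainders modulo each modulus."""
    return tuple(x % m for m in MODULI)

def decode(residues):
    """Recover x from its residues via the Chinese Remainder Theorem."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(a, -1, m) is the modular inverse
    return x % M

pos = 42
code = encode(pos)  # (0, 2, 0)
# Shifting is carry-free: each residue is updated independently.
shifted = tuple((r + 5) % m for r, m in zip(code, MODULI))
assert decode(code) == 42
assert decode(shifted) == 47
```

The carry-free update is the appealing property here: a translation of the encoded value touches each modulus channel independently, which is part of what makes such codes attractive for parallel neural or neuromorphic implementation.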

April 02

Quest | CBMM Seminar Series: The Debate Over “Understanding” in AI’s Large Language Models
April 2, 2024, 4:00-5:30 PM (America/New_York)

Abstract: I will survey a current, heated debate in the AI research community on whether large pre-trained language models can be said to "understand" language (and the physical and social situations language encodes) in any important sense. I will describe arguments that have been made for and against such understanding, and, more generally, will discuss what methods can be used to fairly evaluate understanding and intelligence in AI systems. I will conclude with key questions for the broader sciences of intelligence that have arisen in light of these discussions.

Bio: Melanie Mitchell is a Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction and analogy-making in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her 2009 book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award, and her 2019 book Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus, and Giroux) was shortlisted for the 2023 Cosmos Prize for Scientific Writing.

Location: Singleton Auditorium (46-3002)

March 26

Quest | CBMM Seminar Series: Physical and Social Human-Robot Interaction with the iCub Humanoid
March 26, 2024, 4:00-5:30 PM (America/New_York)

Abstract: The iCub is a humanoid robot designed to support research in embodied AI. At 104 cm tall, the iCub is the size of a five-year-old child. It can crawl on all fours, walk, and sit up to manipulate objects. Its hands have been designed to support sophisticated manipulation skills. The iCub is distributed as Open Source under the GPL licenses (http://www.iCub.org). More than 50 robots have been built so far, and they are available in laboratories across Europe, the US, Korea, Singapore, and Japan. It is one of the few platforms in the world with a sensitive full-body skin for handling physical interaction with the environment, including, potentially, with people. In this talk I report on the work of two research units of the Italian Institute of Technology whose focus is the design of methods to enable natural interaction with the iCub robot.

Bio: Giorgio Metta is the Scientific Director of the Istituto Italiano di Tecnologia (IIT). He holds an MSc cum laude (1994) and a PhD (2000) in electronic engineering, both from the University of Genoa. From 2001 to 2002, Giorgio was a postdoctoral associate at the MIT AI Lab. He was previously with the University of Genoa and, from 2012 to 2019, Professor of Cognitive Robotics at the University of Plymouth (UK). He was a member of the board of directors of euRobotics aisbl, the European reference organization for robotics research. Giorgio served as Vice Scientific Director of IIT from 2016 to 2019. He coordinated IIT's participation in two of the Ministry of Economic Development Competence Centers for Industry 4.0 (ARTES4.0, START4.0). He was one of the three Italian representatives at the 2018 G7 forum on Artificial Intelligence and, more recently, one of the authors of the Italian Strategic Agenda on AI. Giorgio coordinated the development of the iCub robot for more than a decade, making it the de facto reference platform for research in embodied AI. Presently, more than 40 robots have reached laboratories as far afield as Japan, China, Singapore, Germany, Spain, the UK, and the United States. Giorgio's research activities are in the fields of biologically motivated and humanoid robotics, in particular developing humanoid robots that can adapt and learn from experience. He is the author of more than 300 scientific publications and has worked as principal investigator and research scientist in about a dozen international research and industrial projects.

Location: Singleton Auditorium (46-3002)

March 12

Quest | CBMM Seminar Series: Bayes in the age of intelligent machines
March 12, 2024, 4:00-5:30 PM (America/New_York)

Abstract: Recent rapid progress in the creation of artificial intelligence (AI) systems has been driven in large part by innovations in architectures and algorithms for developing large-scale artificial neural networks. As a consequence, it's natural to ask what role abstract principles of intelligence, such as Bayes' rule, might play in developing intelligent machines. In this talk, I will argue that there is a new way in which Bayes can be used in the context of AI, more akin to how it is used in cognitive science: providing an abstract description of how agents should solve certain problems, and hence a tool for understanding their behavior. This new role is motivated in large part by the fact that we have succeeded in creating intelligent systems that we do not fully understand, making the problem for the machine learning researcher more closely parallel that of the cognitive scientist. I will talk about how this perspective can help us think about making machines with better-informed priors about the world, and give us insight into their behavior by directly creating cognitive models of neural networks.

Bio: I am interested in developing mathematical models of higher-level cognition, and in understanding the formal principles that underlie our ability to solve the computational problems we face in everyday life. My current focus is on inductive problems, such as probabilistic reasoning, learning causal relationships, acquiring and using language, and inferring the structure of categories. I try to analyze these aspects of human cognition by comparing human behavior to optimal or "rational" solutions to the underlying computational problems. For inductive problems, this usually means exploring how ideas from artificial intelligence, machine learning, and statistics (particularly Bayesian statistics) connect to human cognition. These interests sometimes lead me into other areas of research, such as nonparametric Bayesian statistics and formal models of cultural evolution.

I am the Director of the Computational Cognitive Science Lab at Princeton University. My friend Brian Christian and I recently wrote a book together about the parallels between the everyday problems that arise in human lives and the problems faced by computers. Algorithms to Live By outlines practical solutions to those problems, as well as a different way to think about rational decision-making. I am also interested in how novel approaches to data collection and analysis, particularly "big data", can change psychological research; see my manifesto and the Center for Data on the Mind.
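The "abstract description" role of Bayes' rule that the abstract describes reduces, at its core, to a single update: posterior ∝ likelihood × prior. A minimal sketch, with a hypothetical two-hypothesis inference problem and illustrative numbers of my own choosing:

```python
# Minimal Bayesian update: posterior is proportional to prior times
# likelihood, normalized over the hypothesis space. The hypotheses and
# probabilities below are illustrative, not from the talk.
def posterior(prior, likelihoods):
    """Return normalized posterior over the hypotheses in `prior`."""
    unnorm = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(unnorm.values())          # normalizing constant P(data)
    return {h: p / z for h, p in unnorm.items()}

# An agent starts indifferent between two candidate sources of some data:
prior = {"source_A": 0.5, "source_B": 0.5}
# Probability each source assigns to the observed data:
likelihoods = {"source_A": 0.02, "source_B": 0.08}

post = posterior(prior, likelihoods)
assert abs(post["source_B"] - 0.8) < 1e-9  # data favors B fourfold
```

The point of the cognitive-science usage is that this computation defines what an ideal agent *should* believe given its prior and the data, which then serves as a yardstick for interpreting the behavior of systems (human or artificial) whose internals we do not fully understand.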

February 14

Quest | CBMM Seminar Series: How fly neurons compute the direction of visual motion
February 14, 2024, 2:00-3:30 PM (America/New_York)

*Due to the forecast weather event for Cambridge, MA on Tuesday, February 13th, this talk will be held on Wednesday, February 14th at 2:00 PM.*

Abstract: Detecting the direction of image motion is important for visual navigation, predator avoidance and prey capture, and is thus essential for the survival of all animals that have eyes. However, the direction of motion is not explicitly represented at the level of the photoreceptors: it must instead be computed by subsequent neural circuits. The exact nature of this process represents a classic example of neural computation and has been a longstanding question in the field. Our results obtained in the fruit fly Drosophila demonstrate that the local direction of motion is computed in two parallel ON and OFF pathways. Within each pathway, a retinotopic array of four direction-selective T4 (ON) and T5 (OFF) cells represents the four Cartesian components of local motion vectors (leftward, rightward, upward, downward). Since none of the presynaptic neurons is directionally selective, direction selectivity first emerges within T4 and T5 cells. Our present research focuses on the cellular and biophysical mechanisms by which the direction of image motion is computed in these neurons.
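The computational problem the abstract poses, extracting direction from non-directional inputs, is classically framed by the Hassenstein-Reichardt correlation detector. The sketch below is that textbook model, not the specific T4/T5 biophysical circuit discussed in the talk: each subunit multiplies a delayed copy of one photoreceptor signal with the undelayed signal from its neighbor, and subtracting the mirror-symmetric subunit yields a direction-selective output.

```python
import numpy as np

def reichardt(left, right, delay=1):
    """Mean direction-selective response to two photoreceptor time series."""
    d_left = np.roll(left, delay)    # delayed copy of the left input
    d_right = np.roll(right, delay)  # delayed copy of the right input
    d_left[:delay] = 0               # discard wrapped-around samples
    d_right[:delay] = 0
    # Correlate each delayed signal with its undelayed neighbor; the
    # difference of the two mirror-symmetric subunits signals direction.
    return np.mean(d_left * right - left * d_right)

t = np.arange(200)
stim = np.sin(2 * np.pi * t / 20)            # drifting grating at one point
rightward = reichardt(stim, np.roll(stim, 5))  # right eye lags: left-to-right motion
leftward = reichardt(np.roll(stim, 5), stim)   # left eye lags: right-to-left motion
assert rightward > 0 > leftward
```

Note the key property the abstract highlights: neither input is direction-selective on its own; directionality emerges only from the nonlinear (multiplicative) interaction between the delayed and undelayed channels.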

February 06

Quest | CBMM Seminar Series: Latent cause inference and mental health
February 6, 2024, 4:00-5:30 PM (America/New_York)

Abstract: No two events are alike. But still, we learn, which means that we implicitly decide which events are similar enough that experience with one can inform us about what to do in another. Starting from early work by Sam Gershman, we have suggested that this relies on parsing incoming information into "clusters" according to inferred hidden (latent) causes. In this talk, I will present this theory and illustrate its breadth in explaining human learning. I will then discuss the relevance of latent cause inference to understanding mental health conditions and their treatment.

Research in the Niv lab focuses on the neural and computational processes underlying reinforcement learning and decision-making. We study the ongoing day-to-day processes by which animals and humans learn from trial and error, without explicit instructions, to predict future events and to act upon the environment so as to maximize reward and minimize punishment. In particular, we are interested in how attention and memory processes interact with reinforcement learning to create representations that allow us to learn to solve new tasks so efficiently. Our emphasis is on model-based experimentation: we use computational models to define precise hypotheses about data, to design experiments, and to analyze results. In particular, we are interested in normative explanations of behavior: models that offer a principled understanding of why our brain mechanisms use the computational algorithms that they do, and in what sense, if at all, these are optimal. In our hands, the main goal of computational models is not to simulate the system, but rather to understand what high-level computations that system is realizing, and what functionality these computations fulfill.

A new focus of the lab is computational cognitive neuropsychiatry. Here our aim is to use the computational toolkit that we have developed for quantifying dynamical behavioral processes in order to better diagnose, understand, and treat psychiatric illnesses such as depression, OCD, schizophrenia, and addiction. This work is done under the auspices of the new Rutgers-Princeton Center for Computational Cognitive Neuropsychiatry.

Location: Building 46
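The "parsing into clusters according to inferred latent causes" idea can be sketched schematically. The toy below is in the spirit of the Gershman-style models the abstract refers to, with a Chinese-restaurant-process prior over causes, but the greedy assignment rule and all parameters are illustrative simplifications of my own, not the lab's actual model.

```python
import numpy as np

def assign_latent_causes(observations, alpha=1.0, sigma=1.0):
    """Greedy MAP assignment of 1-D observations to latent causes.

    Each observation joins the existing cause that best trades off
    popularity (CRP prior, proportional to cluster size) against fit
    (Gaussian likelihood around the cluster mean), or spawns a new
    cause when no existing one fits well enough.
    """
    causes = []   # per-cause [sum, count], giving a running mean
    labels = []
    for x in observations:
        n = len(labels)
        scores = []
        for s, c in causes:
            mean = s / c
            lik = np.exp(-0.5 * ((x - mean) / sigma) ** 2)
            scores.append(c / (n + alpha) * lik)       # CRP prior x likelihood
        # New-cause score: CRP concentration times a crude, fixed
        # prior-predictive likelihood (an illustrative constant).
        scores.append(alpha / (n + alpha) * np.exp(-0.5))
        k = int(np.argmax(scores))
        if k == len(causes):
            causes.append([x, 1])      # spawn a new latent cause
        else:
            causes[k][0] += x          # update the chosen cause's mean
            causes[k][1] += 1
        labels.append(k)
    return labels

# Two well-separated "contexts" get parsed into two latent causes:
obs = [0.1, -0.2, 0.0, 10.1, 9.8, 10.2, 0.05]
labels = assign_latent_causes(obs)
assert labels == [0, 0, 0, 1, 1, 1, 0]
```

The qualitative behavior, not the numbers, is the point: observations that resemble past experience are folded into an existing memory, while surprising observations trigger a new cluster, which is the segmentation-of-experience idea the abstract connects to learning and to mental health.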

December 05

Quest | CBMM Seminar Series: Statistical learning in human sensorimotor control
December 5, 2023, 4:00-5:30 PM (America/New_York)

Abstract: Humans spend a lifetime learning, storing and refining a repertoire of motor memories appropriate for the multitude of tasks we perform. However, it is unknown what principle underlies the way our continuous stream of sensorimotor experience is segmented into separate memories, and how we adapt and use this growing repertoire. I will review our recent work on how humans learn to make skilled movements, focusing on how statistical learning can lead to multi-modal object representations, how we represent the dynamics of objects, the role of context in the expression, updating and creation of motor memories, and how families of objects are learned.

Bio: Daniel Wolpert FMedSci FRS qualified as a medical doctor in 1989. He worked with John Stein and Chris Miall in the Physiology Department of Oxford University, where he received his D.Phil. in 1992. He worked as a postdoctoral fellow in the Department of Brain and Cognitive Sciences at MIT in Mike Jordan's group, and in 1995 joined the Sobell Department of Motor Neuroscience, Institute of Neurology, as a Lecturer. In 2005 he moved to the University of Cambridge, where he was Professor of Engineering (1875) and a fellow of Trinity College, and from 2013 held the Royal Society Noreen Murray Research Professorship in Neurobiology. In 2018 Daniel joined the Zuckerman Mind Brain and Behavior Institute at Columbia University as Professor of Neuroscience, and is vice-chair of the Department of Neuroscience. He retains a part-time position as Director of Research at the Department of Engineering, University of Cambridge.

He was elected a Fellow of the Academy of Medical Sciences in 2004 and a Fellow of the Royal Society in 2012. He was awarded the Royal Society Francis Crick Prize Lecture (2005), the Minerva Foundation Golden Brain Award (2010), and the Royal Society Ferrier Medal (2020), and gave the Fred Kavli Distinguished International Scientist Lecture at the Society for Neuroscience (2009).