March 04

A Theory of Appropriateness with Applications to Generative Artificial Intelligence

Joel Leibo
Senior staff research scientist at Google DeepMind and professor at King's College London
4:00 pm - 5:30 pm ET

Abstract: What is appropriateness? Humans navigate a multi-scale mosaic of interlocking notions of what is appropriate for different situations. We act one way with our friends, another with our family, and yet another in the office. Likewise for AI, appropriate behavior for a comedy-writing assistant is not the same as appropriate behavior for a customer-service representative. What determines which actions are appropriate in which contexts? And what causes these standards to change over time? Since all judgments of AI appropriateness are ultimately made by humans, we need to understand how appropriateness guides human decision making in order to properly evaluate AI decision making and improve it. In this talk, I will present a theory of appropriateness: how it functions in human society, how it may be implemented in the brain, and what it means for responsible deployment of generative AI technology.

Location: TBD

February 11

Aligning deep networks with human vision will require novel neural architectures, data diets and training algorithms

4:00 pm - 5:30 pm ET

Abstract: Recent advances in artificial intelligence have been driven mainly by the rapid scaling of deep neural networks (DNNs), which now contain unprecedented numbers of learnable parameters and are trained on massive datasets covering large portions of the internet. This scaling has enabled DNNs to develop visual competencies that approach human levels. However, even the most sophisticated DNNs still exhibit strange, inscrutable failures that diverge markedly from human-like behavior, a misalignment that seems to worsen as models grow in scale.

In this talk, I will discuss recent work from our group addressing this misalignment through the development of DNNs that mimic human perception by incorporating computational, algorithmic, and representational principles fundamental to natural intelligence. First, I will review our ongoing efforts to characterize human visual strategies in image categorization tasks and to contrast these strategies with those of modern deep nets. I will present initial results suggesting that we must explore novel data regimens and training algorithms for deep nets to learn more human-like visual representations. Second, I will show results suggesting that neural architectures inspired by cortex-like recurrent neural circuits offer a compelling alternative to the prevailing transformers, particularly for tasks requiring visual reasoning beyond simple categorization.

Location: TBD

December 03

November 12

Panel Discussion: Open Questions in Theory of Learning

Moderator: Tomaso Poggio - Panel: Ila Fiete, Philip Isola, Eran Malach, Haim Sompolinsky
4:00 pm - 5:30 pm ET

Abstract: In a society confronting the new age of AI, in which LLMs begin to display aspects of human intelligence, understanding the fundamental theory of deep learning and applying it to real systems is a compelling and urgent need.

This panel will introduce some new, simple foundational results in the theory of supervised learning. It will also discuss open problems in the theory of learning, including problems specific to neuroscience.

Panelists:
Ila Fiete - Professor of Brain and Cognitive Sciences, MIT
Haim Sompolinsky - Professor of Molecular and Cellular Biology and of Physics, Harvard University
Eran Malach - Research Fellow, Kempner Institute at Harvard University
Philip Isola - Associate Professor, EECS, MIT

Location: 46-3002

October 01

September 10

Quest | CBMM Seminar Series - Conveying Tasks to Computers: How Machine Learning Can Help

4:00 pm - 5:30 pm ET

Abstract: It is immensely empowering to delegate information-processing work to machines and have them carry out difficult tasks on our behalf. But programming computers is hard. The traditional approach to this problem is to try to fix people: they should work harder to learn to code. In this talk, I argue that a promising alternative is to meet people partway. Specifically, powerful new approaches to machine learning provide ways to infer intent from disparate signals and could help make it easier for everyone to get computational help with their vexing problems.

Bio: Michael L. Littman, Ph.D., is a Professor of Computer Science at Brown University and Division Director of Information and Intelligent Systems at the National Science Foundation. He studies machine learning and decision making under uncertainty and has earned multiple awards for his teaching and research. Littman has chaired major conferences in AI and machine learning and is a Fellow of both the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery. He was selected by the American Association for the Advancement of Science as a Leadership Fellow for Public Engagement with Science in Artificial Intelligence, has a popular YouTube channel, and appeared in a national TV commercial in 2016. His book, "Code to Joy: Why Everyone Should Learn a Little Programming," was published by MIT Press in October 2023.

Location: 46-3002