December 02

CSAIL Forum with Andrew Lo: Quantamental Investing and Generative AI

Date/time: Tuesday 12:00-1:00 EST, December 2, 2025
Venue: Live stream via Zoom; registration required: https://mit.zoom.us/meeting/register/33v2S1mjSLOQm5HEpCbQBQ
Andrew Lo biography: https://www.csail.mit.edu/person/andrew-lo

Abstract: The convergence of quantitative and fundamental investment styles has been a pipe dream of many asset management companies and hedge funds for decades, but with few if any industrial examples. The rise of generative AI and large language models (LLMs) has dramatically lowered the barriers between these two disparate methods of investing, but several remaining challenges must be overcome before a true rapprochement is realizable. In this talk, Prof. Lo will describe those challenges and map out a process by which quantamental investing may be realized within the next few years.

November 18

CSAIL Forum with Sam Madden

Please join us for the CSAIL Forum with Prof. Sam Madden.

Speaker: Sam Madden, College of Computing Distinguished Professor
Date/time: Tuesday 12:00-1:00 EST, November 18, 2025
Venue: Live stream via Zoom; registration required

Title: How I Learned to Start Querying and Love AI

Abstract: Over the past five decades, the relational database model has proven to be a scalable and adaptable model for querying a variety of structured data, with use cases in analytics, transactions, graphs, streaming, and more. However, most of the world's data is unstructured. Thus, despite their success, the reality is that the vast majority of the world's data has remained beyond the reach of relational systems. The rise of deep learning and generative AI offers an opportunity to change this. These models provide a stunning capability to extract semantic understanding from almost any type of document, including text, images, and video, which can extend the reach of databases to all the world's data. In this talk I explore how these new technologies will transform the way we build database management software, creating new systems that can ingest, store, process, and query all data. Building such systems presents many opportunities and challenges. I focus on three: scalability, correctness, and reliability, and argue that the declarative programming paradigm that has served relational systems so well offers a path forward in the new world of AI data systems as well. To illustrate this, I describe several examples of such declarative AI systems we have built in document and video processing, and provide a set of research challenges and opportunities to guide research in this exciting area going forward.

Bio: Samuel Madden is the College of Computing Distinguished Professor of Computing at MIT. His research interests include databases, distributed computing, and AI systems. Past research projects include learned database systems, the C-Store column-oriented database system, and the CarTel mobile sensor network system. Madden heads the Data Systems Group at MIT and the Data Science and AI Lab (DSAIL), an industry-supported collaboration focused on developing systems that use AI and machine learning. Madden received his Ph.D. from the University of California at Berkeley in 2003, where he worked on the TinyDB system for data collection from sensor networks. He was named one of Technology Review's Top 35 Under 35 in 2005 and an ACM Fellow in 2020, and is the recipient of several awards including the SIGMOD Edgar F. Codd Innovations Award and "test of time" awards from VLDB, SIGMOD, SIGMOBILE, and SenSys. He is the co-founder and Chief Scientist at Cambridge Mobile Telematics, which develops technology to make roads safer and drivers better.
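
As a rough illustration of the declarative paradigm the abstract points to (the operator, corpus, and predicate below are hypothetical, not the systems built by Madden's group), a "semantic filter" can be written like a SQL WHERE clause whose condition is stated in natural language and answered by a model rather than by exact matching:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Iterator

@dataclass
class Doc:
    doc_id: int
    text: str

def keyword_stub(condition: str, doc: Doc) -> bool:
    """Stand-in for an LLM-backed predicate: a real system would ask a model whether the
    document satisfies the natural-language condition; a keyword-overlap check keeps this
    sketch runnable without any model."""
    words = set(doc.text.lower().split())
    return any(w in words for w in condition.lower().split())

def sem_filter(docs: Iterable[Doc], condition: str,
               predicate: Callable[[str, Doc], bool] = keyword_stub) -> Iterator[Doc]:
    """Declarative 'semantic WHERE': keep documents satisfying a natural-language condition."""
    for doc in docs:
        if predicate(condition, doc):
            yield doc

corpus = [Doc(1, "Quarterly report discusses road safety telematics."),
          Doc(2, "Meeting notes about the office holiday party.")]

for d in sem_filter(corpus, "mentions driving or road safety"):
    print(d.doc_id, d.text)
```

The point of the declarative framing is that the query states only what to keep; how the predicate is evaluated (which model, batching, caching, retries) is left to the system, which is where the scalability, correctness, and reliability questions in the talk arise.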

November 04

CSAIL Forum with Alison Gopnik

Alison Gopnik
UC Berkeley
Please join us for the CSAIL Forum with Alison Gopnik, hosted by Daniela Rus.

Speaker: Alison Gopnik, Distinguished Professor of Psychology, Affiliate Professor of Philosophy, and member of the Berkeley Artificial Intelligence Research Group, UC Berkeley
Title: Empowerment Gain as Causal Learning, Causal Learning as Empowerment Gain: A bridge between Bayesian causal hypothesis testing and reinforcement learning
Date/time: Tuesday 12:00-1:00 EST, November 4, 2025
Venue: Live stream via Zoom; registration required

Bio: Alison Gopnik is a leader in cognitive science, particularly the study of learning and development. She was a founder of the field of "theory of mind", an originator of the "theory theory" of cognitive development, and the first to apply Bayesian models to children's learning. She has received the APS Lifetime Achievement, Cattell, and William James Awards, the SRCD Lifetime Achievement Award, the APA Distinguished Scientific Contributions Award, the Bradford Washburn and Carl Sagan Awards for Science Communication, and the Rumelhart Prize for Theoretical Foundations of Cognitive Science. She is a member of the National Academy of Sciences and the American Academy of Arts and Sciences, a fellow of the Cognitive Science Society and the American Association for the Advancement of Science, and a Guggenheim Fellow. She was the 2022-23 President of the Association for Psychological Science. She has six grandchildren. She is the author of over 160 journal articles and several books, including the bestselling and critically acclaimed popular books The Scientist in the Crib (1999), The Philosophical Baby (2009), and The Gardener and the Carpenter (2016). She has written widely about cognitive science and psychology for The Wall Street Journal, The New York Times, The Economist, and The Atlantic, among others. Her TED talk has been viewed more than 5.6 million times. She has frequently appeared on TV, radio, and podcasts including "The Charlie Rose Show", "The Colbert Report", and "The Ezra Klein Show".

Abstract: Learning about the causal structure of the world is a fundamental problem for human cognition, and causal knowledge is central to both intuitive and scientific world models. However, causal models, and especially causal learning, have proved difficult for standard large models using standard techniques of deep learning. In contrast, cognitive scientists have applied advances in our formal understanding of causation in computer science, particularly within the Causal Bayes Net formalism, to understand human causal learning. However, these approaches also face challenges when it comes to learning. In parallel, in the very different tradition of reinforcement learning, researchers have developed the idea of an intrinsic reward signal called "empowerment". An agent is rewarded for maximizing the mutual information between its actions and their outcomes, regardless of the external reward value of those outcomes. In other words, the agent is rewarded if variation in an action systematically leads to parallel variation in an outcome, so that variation in the action predicts variation in the outcome. Empowerment, then, has two dimensions: it involves both controllability and variability. The result is an agent that has maximal control over the maximal part of its environment.

"Empowerment" may be an important bridge between classical Bayesian causal learning and reinforcement learning, and may help to characterize causal learning in humans and enable it in machines. If an agent learns an accurate causal model of the world, it will necessarily increase its empowerment; and, vice versa, increasing empowerment will lead to a more accurate (if implicit) causal model of the world. Empowerment may also explain distinctive empirical features of children's causal learning, as well as providing a more tractable computational account of how that learning is possible.
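
In the reinforcement-learning literature, empowerment is usually formalized as the mutual information I(A; O) between an agent's actions and their outcomes, maximized over the agent's choice of action distribution. The toy sketch below (purely illustrative, not material from the talk) computes this quantity for two action-outcome channels and shows why an agent whose varied actions reliably produce varied outcomes scores higher than one whose outcomes ignore its actions:

```python
import numpy as np

def mutual_information(joint: np.ndarray) -> float:
    """Mutual information (in bits) of a joint probability table p(action, outcome)."""
    pa = joint.sum(axis=1, keepdims=True)   # marginal over actions
    po = joint.sum(axis=0, keepdims=True)   # marginal over outcomes
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pa @ po)[nz])).sum())

# Two toy channels, each with 2 equally likely actions and 2 possible outcomes.
controllable = np.array([[0.5, 0.0],      # action 0 always yields outcome 0
                         [0.0, 0.5]])     # action 1 always yields outcome 1
uncontrollable = np.array([[0.25, 0.25],  # outcomes are independent of the action
                           [0.25, 0.25]])

print(mutual_information(controllable))    # 1.0 bit: full control and full variability
print(mutual_information(uncontrollable))  # 0.0 bits: no empowerment to gain
```

The two dimensions named in the abstract map directly onto this quantity: the outcome must vary (otherwise there is nothing to predict), and that variation must be driven by the action (otherwise the action predicts nothing).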

October 28

CSAIL Forum with Tim Berners-Lee

Sir Tim Berners-Lee
CSAIL Professor Emeritus
CSAIL Forum hosted by Daniela Rus.

Speaker: Tim Berners-Lee, CSAIL Professor Emeritus; Co-founder/CTO, Inrupt; author of This Is for Everyone: The Unfinished Story of the World Wide Web
Date/time: Tuesday 12:00-1:00 EDT, October 28, 2025
Venue: Live stream via Zoom; registration required

Bio: Sir Tim Berners-Lee invented the World Wide Web in 1989 at CERN in Switzerland. Since then, through his work with the World Wide Web Consortium (W3C), the Open Data Institute, the World Wide Web Foundation, the development of the Solid Protocol, and now as CTO and Co-Founder of Inrupt, he has been a tireless advocate for shared standards, open web access for all, and the power of individuals on the web. A firm believer in the positive power of technology, he was named in Time magazine's list of the most important people of the 20th century. He has been the recipient of several honorary degrees and awards, including the Seoul Peace Prize and the Turing Award, widely recognised as akin to the Nobel Prize for computing. He was knighted in 2004 and later appointed to the Order of Merit by Her Majesty Queen Elizabeth II. He is an Emeritus Professor of Computer Science at MIT and Oxford.

Please direct questions to events@csail.mit.edu.

October 21

Please join us for the next CSAIL Forum, featuring Prof. Shafi Goldwasser. CSAIL Forum hosted by Daniela Rus.

Speaker: Shafi Goldwasser, Leighton Family Professor
Date/time: Tuesday 12:00-1:00 EDT, October 21, 2025
Venue: Live stream via Zoom; registration required: https://mit.zoom.us/meeting/register/lMyzkgHoRCektZTWtK3R3w
Bio: https://www.csail.mit.edu/person/shafi-goldwasser

Title: What Cryptography Can Tell Us about AI

Abstract: For decades now, cryptographic tools and models have, at their essence, transformed technology platforms controlled by worst-case adversaries into trustworthy platforms. In this talk I will describe how to use cryptographic tools and cryptographic modeling to build trust in various phases of machine learning pipelines. We will touch on privacy in the training and inference stages, verification protocols for the quality of machine learning models, and robustness in the presence of adversaries. If time permits, I will show how cryptographic assumptions can help characterize the limits and possibilities of AI safety.
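
As one small, concrete example of the kind of cryptographic building block such pipelines can use (a generic sketch, not a protocol from the talk), a hash-based commitment lets a model provider fix a model before it is audited, so a verifier can later check that the model being evaluated is the one originally committed to:

```python
import hashlib, hmac, os, pickle

def commit(model_bytes: bytes) -> tuple[bytes, bytes]:
    """Publish the digest now; reveal (model, nonce) at audit time."""
    nonce = os.urandom(32)
    return hashlib.sha256(nonce + model_bytes).digest(), nonce

def verify(digest: bytes, model_bytes: bytes, nonce: bytes) -> bool:
    """Check that the revealed model and nonce match the earlier commitment."""
    return hmac.compare_digest(hashlib.sha256(nonce + model_bytes).digest(), digest)

weights = pickle.dumps({"layer1": [0.1, -0.3], "layer2": [0.7]})  # stand-in for real model weights
digest, nonce = commit(weights)
# ... later, the provider reveals the model for quality/robustness evaluation ...
assert verify(digest, weights, nonce)  # the auditor confirms it matches the commitment
```

Richer tools in the same spirit, such as zero-knowledge proofs and secure multiparty computation, aim to provide verification and privacy guarantees without revealing the model or the data at all.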

September 16

CSAIL Forum with Josh Tenenbaum

Josh Tenenbaum
Brain & Cognitive Sciences & CSAIL, MIT
Please join us for the next CSAIL Forum, featuring Prof. Josh Tenenbaum. CSAIL Forum hosted by Daniela Rus.

Speaker: Joshua Tenenbaum, Professor of Computational Cognitive Science
Date/time: Tuesday 12:00-1:00 EDT, September 16, 2025
Venue: Live stream via Zoom; registration required
Bio: https://bcs.mit.edu/directory/joshua-b-tenenbaum

Title: Scaling Intelligence the Human Way: Rebuilding the Bridge between AI and Cog Sci

Abstract: Today's leading AI systems have achieved one of the field's oldest dreams and promises: they can take in any language as input and produce reasonable responses – often very much like the responses that a reasonable (and knowledgeable and helpful) person would produce. Yet the processes at work inside these systems, and how they are built, do not (at least obviously) have much in common with the mechanisms or origins of the human mind. What would it take to build a model with something like the input-output behavior of ChatGPT but whose inner workings actually instantiated a theory of human cognition – and even our best current scientific theory? I will discuss several possible routes to this goal, and the challenges and opportunities they present. I will argue that now more than ever is the time for a bidirectional exchange between the fields of AI and cognitive science – fields that grew up together starting in the 1950s, but have followed very different trajectories recently. AI tools and techniques have much to offer cognitive theories, but cognitive science has just as much if not more to offer back to AI. Understanding and using AI tools in a framework guided by foundational thinking in cognitive science represents the best hope to deliver on the theoretical goals, dreams, and promises of both fields.

May 13

CSAIL Forum with Armando Solar-Lezama: Programming the way to better AI

Date/time: Tuesday 12:00-1:00 EDT, May 13, 2025
Registration required: https://mit.zoom.us/meeting/register/TG0DF7tFQp-hS2BXku5ahA

Abstract: For decades, programming was the way through which we told machines what to do, but modern AI techniques promise new ways of creating software directly from data and natural language. Programming, however, has a number of advantages that have enabled us to build reliable, large-scale computing infrastructure. In this presentation, I explain some new approaches to learning from data while preserving some of the benefits of programming, and some of their applications in domains ranging from robotics to computational biology.
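
For readers new to the area, here is a toy sketch of the broader idea of learning programs from data (the grammar and search below are illustrative, not the techniques presented in the talk): enumerate candidate programs and return one consistent with the given input-output examples, so the learned artifact is an inspectable, checkable program rather than opaque weights:

```python
import itertools

LEAVES = ["x", "1", "2", "3"]   # toy grammar: the input x, small constants, +, *
OPS = ["+", "*"]

def expressions(depth: int):
    """Enumerate all expressions in the toy grammar up to the given depth."""
    if depth == 0:
        yield from LEAVES
        return
    yield from expressions(depth - 1)
    for op in OPS:
        for left, right in itertools.product(expressions(depth - 1), repeat=2):
            yield f"({left} {op} {right})"

def synthesize(examples, max_depth=2):
    """Return the first enumerated expression consistent with every (input, output) pair."""
    for expr in expressions(max_depth):
        if all(eval(expr, {"x": x}) == y for x, y in examples):
            return expr
    return None

# Find a program mapping 1 -> 3, 2 -> 5, 3 -> 7 (e.g., 2*x + 1).
print(synthesize([(1, 3), (2, 5), (3, 7)]))
```

Combining this style of symbolic search with data-driven models is one way to keep the reliability benefits of programs while still learning from examples.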

May 06

CSAIL Forum with Manish Raghavan: The role of information diversity in AI systems

Date/time: Tuesday 12:00-1:00 EDT, May 6, 2025
Registration required: https://mit.zoom.us/meeting/register/GP_RXB5BSTy_Ubf3wNJwxQ

Abstract: AI systems often consist of multiple actors or agents with different goals, incentives, and, critically, information. In this talk, we explore the role that heterogeneous information plays. Across decision-making, pricing, and production tasks, we show that social outcomes improve as information diversity increases. We discuss implications for the development, deployment, and use of AI.

Bio: Manish Raghavan is the Drew Houston (2005) Career Development Professor at the MIT Sloan School of Management and the Department of Electrical Engineering and Computer Science. Before that, he was a postdoctoral fellow at the Harvard Center for Research on Computation and Society (CRCS). His research centers on the societal impacts of algorithms and AI.

April 22

CSAIL Forum with Prof. Yoon Kim: Efficient and Expressive Architectures for Language Modeling

Speaker: Yoon Kim, Assistant Professor, CSAIL
Date/time: Tuesday 12:00-1:00 EDT, April 22, 2025
Venue: Live stream via Zoom; registration required

Abstract: Transformers are the dominant architecture for language modeling (and generative AI more broadly). The attention mechanism in Transformers is considered core to the architecture and enables accurate sequence modeling at scale. However, the complexity of attention is quadratic in input length, which makes it difficult to apply Transformers to model long sequences. Moreover, Transformers have theoretical limitations on the class of problems they can solve, which prevents them from modeling certain kinds of phenomena such as state tracking. This talk will describe some recent work on efficient alternatives to Transformers which can overcome these limitations.

Bio: Yoon Kim is an assistant professor at MIT EECS and a principal investigator at CSAIL, where he works on natural language processing and machine learning. He obtained his Ph.D. in computer science from Harvard University.
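
To make the efficiency point concrete, the minimal NumPy sketch below (an illustration of the general idea, not one of the architectures from the talk) shows why removing the softmax lets causal attention be computed with a fixed-size recurrent state: the O(T^2) score matrix is replaced by T updates of a d-by-d state, giving linear time in sequence length and constant memory per step:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 4                       # sequence length, head dimension
Q, K, V = (rng.standard_normal((T, d)) for _ in range(3))

# Quadratic-time causal "attention" without softmax: build the full T x T score matrix.
scores = np.tril(Q @ K.T)         # zero out future positions
out_quadratic = scores @ V

# The same computation as a recurrence over a constant-size state S = sum_i k_i v_i^T.
S = np.zeros((d, d))
out_recurrent = np.zeros((T, d))
for t in range(T):
    S += np.outer(K[t], V[t])     # fold in the current key/value pair
    out_recurrent[t] = Q[t] @ S   # query the accumulated state

print(np.allclose(out_quadratic, out_recurrent))  # True: identical outputs
```

With a softmax, the normalization couples every position to every earlier one and this exact rewriting no longer holds, which is one reason efficient alternatives modify the attention computation itself.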

April 15

CSAIL Forum with Prof. Phillip Isola: The Platonic Representation Hypothesis

Speaker: Phillip Isola, Associate Professor, CSAIL
Date/time: Tuesday 12:00-1:00 EDT, April 15, 2025
Venue: In person in Hewlett (32-G882) in the Stata Center, 32 Vassar Street, and live stream via Zoom; registration required

Abstract: I will argue that representations in different deep nets are converging. First, I will survey examples of convergence in the literature: over time and across multiple domains, the ways by which different neural networks represent data are becoming more aligned. Next, I will demonstrate convergence across data modalities: as vision models and language models get larger, they measure distance between datapoints in increasingly similar ways. I will hypothesize that this convergence is driving toward a shared statistical model of reality, akin to Plato's concept of an ideal reality. We term such a representation the platonic representation and discuss several possible selective pressures toward it. Finally, I'll discuss the implications of these trends, their limitations, and counterexamples to our analysis.

Bio: https://web.mit.edu/phillipi/www/bio.html
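
One standard way to quantify whether two models "measure distance between datapoints" in a similar way is centered kernel alignment (CKA); the sketch below (an illustration, not necessarily the metric used in the talk) compares two embedding matrices computed over the same inputs:

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two representations of the same n inputs.

    X: (n, d1) features from model A; Y: (n, d2) features from model B.
    Returns a value in [0, 1]; higher means the two models induce more similar
    (centered) pairwise similarity structure over the datapoints.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return float(hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")))

rng = np.random.default_rng(0)
base = rng.standard_normal((100, 16))            # stand-in for one model's embeddings of 100 inputs
remixed = base @ rng.standard_normal((16, 32))   # a linear re-mixing of the same structure
unrelated = rng.standard_normal((100, 32))       # embeddings with no shared structure

print(linear_cka(base, remixed))    # high: nearly identical relational structure
print(linear_cka(base, unrelated))  # low: little shared structure
```

Measured in this spirit, the convergence claim is that alignment between independently trained models, and even between vision and language models, tends to rise as the models scale.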