CSAIL Forum



How I Learned to Start Querying and Love AI
Sam Madden

18 November 2025 12 - 1:00pm

Over the past five decades, the relational database model has proven to be a scalable and adaptable model for querying a wide variety of structured data, with use cases in analytics, transactions, graphs, streaming, and more. Yet most of the world’s data is unstructured, and despite their success, relational systems have left the vast majority of it beyond their reach. The rise of deep learning and generative AI offers an opportunity to change this. These models provide a stunning capability to extract semantic understanding from almost any type of document, including text, images, and video, which can extend the reach of databases to all the world’s data. In this talk I explore how these new technologies will transform the way we build database management software, creating new systems that can ingest, store, process, and query all data. Building such systems presents many opportunities and challenges; I focus on three: scalability, correctness, and reliability, and argue that the declarative programming paradigm that has served relational systems so well offers a path forward in the new world of AI data systems as well. To illustrate this, I describe several examples of such declarative AI systems we have built for document and video processing, and present a set of research challenges and opportunities to guide work in this exciting area going forward.
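As a concrete, purely hypothetical illustration of the declarative idea: the user states what to filter or extract from unstructured documents, and an executor decides how to run the model calls. None of the names below come from the speaker's systems; the LLM call is left as a user-supplied function.

# Hypothetical sketch of a declarative pipeline over unstructured documents.
# SemanticFilter, SemanticMap, and run_pipeline are illustrative names only;
# a real system would also reorder, batch, cache, and validate these model calls.
from dataclasses import dataclass
from typing import Callable, Iterable, Iterator

@dataclass
class SemanticFilter:
    predicate: str      # natural-language condition, e.g. "mention a safety recall"

@dataclass
class SemanticMap:
    fields: tuple       # field names to extract, e.g. ("company", "date")

def run_pipeline(docs: Iterable[str], ops: list, llm: Callable[[str], str]) -> Iterator[dict]:
    """Naive executor: apply each declarative operator to each document."""
    for doc in docs:
        record, keep = {}, True
        for op in ops:
            if isinstance(op, SemanticFilter):
                answer = llm(f"Answer yes or no. Does this document {op.predicate}?\n\n{doc}")
                keep = answer.strip().lower().startswith("yes")
                if not keep:
                    break
            elif isinstance(op, SemanticMap):
                record = {field: llm(f"Extract the {field} from this document:\n\n{doc}")
                          for field in op.fields}
        if keep:
            yield record

The point of the declarative framing is that the same pipeline specification could later be executed by a smarter optimizer (cheaper models for filters, batching, caching of repeated calls) without the user rewriting the query.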
 

 

PAST EVENT VIDEOS


CSAIL Forum with Alison Gopnik 
Alison Gopnik

4 November 2025 12 - 1:00pm

Learning about the causal structure of the world is a fundamental problem for human cognition, and causal knowledge is central to both intuitive and scientific world models. However, causal models, and especially causal learning, have proved difficult for large models trained with standard deep learning techniques. In contrast, cognitive scientists have applied advances in our formal understanding of causation in computer science, particularly within the Causal Bayes Net formalism, to understand human causal learning. These approaches, however, also face challenges when it comes to learning. In parallel, in the very different tradition of reinforcement learning, researchers have developed the idea of an intrinsic reward signal called “empowerment”: an agent is rewarded for maximizing the mutual information between its actions and their outcomes, regardless of the external reward value of those outcomes. In other words, the agent is rewarded if variation in an action systematically leads to parallel variation in an outcome, so that variation in the action predicts variation in the outcome. Empowerment, then, has two dimensions: it involves both controllability and variability. The result is an agent that has maximal control over the maximal part of its environment. “Empowerment” may be an important bridge between classical Bayesian causal learning and reinforcement learning, and may help to characterize causal learning in humans and enable it in machines. If an agent learns an accurate causal model of the world it will necessarily increase its empowerment, and, vice versa, increasing empowerment will lead to a more accurate (if implicit) causal model of the world. Empowerment may also explain distinctive empirical features of children’s causal learning, as well as provide a more tractable computational account of how that learning is possible.
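For reference, the standard information-theoretic formalization of empowerment in the reinforcement-learning literature (notation assumed here; the talk may use a different variant) is the channel capacity from an agent's actions A to the resulting states S' in a given state s:

\mathcal{E}(s) \;=\; \max_{p(a)} I(A;\, S' \mid s) \;=\; \max_{p(a)} \sum_{a,\, s'} p(a)\, p(s' \mid s, a)\, \log \frac{p(s' \mid s, a)}{\sum_{a'} p(a')\, p(s' \mid s, a')}

The maximization over action distributions rewards controllability (actions must actually change outcomes), while the mutual-information term rewards variability (different actions must lead to reliably different outcomes), matching the two dimensions described in the abstract.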
 


CSAIL Forum with Tim Berners-Lee
Sir Tim Berners-Lee

28 October 2025 12 - 1:00pm

Sir Tim Berners-Lee invented the World Wide Web in 1989 at CERN in Switzerland. Since then, through his work with the World Wide Web Consortium (W3C), the Open Data Institute, the World Wide Web Foundation, the development of the Solid Protocol, and now as CTO and Co-Founder of Inrupt, he has been a tireless advocate for shared standards, open web access for all, and the power of individuals on the web. A firm believer in the positive power of technology, he was named in Time magazine’s list of the most important people of the 20th century. He has received numerous honorary degrees and awards, including the Seoul Peace Prize and the Turing Award, widely recognised as akin to the Nobel Prize for computing. He was knighted in 2004 and later appointed to the Order of Merit by Her Majesty Queen Elizabeth II. He is an Emeritus Professor of Computer Science at MIT and Oxford.
 


What Cryptography Can Tell Us about AI  
Shafi Goldwasser

21 October 2025 12 - 1:00pm

For decades now, cryptographic tools and models have, in essence, transformed technology platforms controlled by worst-case adversaries into trustworthy platforms. In this talk I will describe how to use cryptographic tools and cryptographic modeling to build trust in the various phases of the machine learning pipeline. We will touch on privacy at the training and inference stages, verification protocols for the quality of machine learning models, and robustness in the presence of adversaries. If time permits, I will show how cryptographic assumptions can help characterize the limits and possibilities of AI safety.
 


Scaling Intelligence the Human Way: Rebuilding the Bridge between AI and Cog Sci 
Joshua Tenenbaum

16 September 2025 12 - 1:00pm

Today's leading AI systems have achieved one of the field's oldest dreams and promises: they can take in any language as input and produce reasonable responses – often very much like the responses that a reasonable (and knowledgeable and helpful) person would produce. Yet the processes at work inside these systems, and how they are built, do not (at least obviously) have much in common with the mechanisms or origins of the human mind. What would it take to build a model with something like the input-output behavior of ChatGPT but whose inner workings actually instantiated a theory of human cognition – even our best current scientific theory? I will discuss several possible routes to this goal, and the challenges and opportunities they present. I will argue that now, more than ever, is the time for a bidirectional exchange between the fields of AI and cognitive science – fields that grew up together starting in the 1950s but have followed very different trajectories recently. AI tools and techniques have much to offer cognitive theories, but cognitive science has just as much, if not more, to offer back to AI. Understanding and using AI tools in a framework guided by foundational thinking in cognitive science represents the best hope to deliver on the theoretical goals, dreams, and promises of both fields.
 


Programming the way to better AI 
Armando Solar-Lezama

13 May 2025 12 - 1:00pm

For decades, programming was the way we told machines what to do, but modern AI techniques promise new ways of creating software directly from data and natural language. Programming, however, has a number of advantages that have enabled us to build reliable, large-scale computing infrastructure. In this presentation, I explain some new approaches that learn from data while preserving some of the benefits of programming, along with some of their applications in domains ranging from robotics to computational biology.
 


The role of information diversity in AI systems 
Manish Raghavan

6 May 2025 12 - 1:00pm

Manish Raghavan is the Drew Houston (2005) Career Development Professor at the MIT Sloan School of Management and the Department of Electrical Engineering and Computer Science. Before that, he was a postdoctoral fellow at the Harvard Center for Research on Computation and Society (CRCS). His research centers on the societal impacts of algorithms and AI.
 


Efficient and Expressive Architectures for Language Modeling 
Yoon Kim
Assistant Professor, CSAIL
22 April 2025 12 - 1:00pm

Transformers are the dominant architecture for language modeling (and generative AI more broadly). The attention mechanism in Transformers is considered core to the architecture and enables accurate sequence modeling at scale. However, the complexity of attention is quadratic in input length, which makes it difficult to apply Transformers to long sequences. Moreover, Transformers have theoretical limitations on the class of problems they can solve, which prevents them from modeling certain kinds of phenomena such as state tracking. This talk will describe some recent work on efficient alternatives to Transformers that can overcome these limitations.
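To make the scaling claim concrete (standard formulation, not specific to the systems in the talk): for a length-n sequence with model dimension d, attention materializes an n-by-n score matrix,

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left( \frac{QK^{\top}}{\sqrt{d}} \right) V, \qquad QK^{\top} \in \mathbb{R}^{n \times n},

so time grows as O(n^2 d) and memory as O(n^2). One common family of efficient alternatives (linear-attention and state-space style models; the specific architectures covered in the talk may differ) replaces this with a fixed-size recurrent state,

h_t = A_t h_{t-1} + B_t x_t, \qquad y_t = C_t h_t,

whose cost is linear in sequence length; the abstract's claim is that suitably designed alternatives can also overcome the expressivity limitations, such as state tracking.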
 


The Platonic Representation Hypothesis 
Phillip Isola
Associate Professor, CSAIL
15 April 2025 12:00 - 1:00pm

I will argue that representations in different deep nets are converging. First, I will survey examples of convergence in the literature: over time and across multiple domains, the ways in which different neural networks represent data are becoming more aligned. Next, I will demonstrate convergence across data modalities: as vision models and language models get larger, they measure distance between datapoints in increasingly similar ways. I will hypothesize that this convergence is driving toward a shared statistical model of reality, akin to Plato's concept of an ideal reality. We term such a representation the platonic representation and discuss several possible selective pressures toward it. Finally, I'll discuss the implications of these trends, their limitations, and counterexamples to our analysis.
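One simple way to quantify the claim that two models "measure distance between datapoints in increasingly similar ways" is to correlate their pairwise-distance matrices over a shared set of inputs. The sketch below is illustrative only and is not the alignment metric used in the talk.

# Illustrative alignment check between two models' embeddings of the same inputs.
# Not the metric from the talk; it simply correlates the models' pairwise distances.
import numpy as np

def pairwise_dists(X):
    # X: (n_points, dim) embedding matrix from one model
    diff = X[:, None, :] - X[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def representation_alignment(X_a, X_b):
    """Pearson correlation between the two models' pairwise distances (upper triangle)."""
    iu = np.triu_indices(X_a.shape[0], k=1)
    d_a = pairwise_dists(X_a)[iu]
    d_b = pairwise_dists(X_b)[iu]
    return np.corrcoef(d_a, d_b)[0, 1]

# Example with random stand-ins for a vision model (512-d) and a language model (768-d)
# embedding the same 100 items; a value near 1 would mean they "agree" on distances.
emb_vision = np.random.randn(100, 512)
emb_language = np.random.randn(100, 768)
print(representation_alignment(emb_vision, emb_language))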
 

