MIT CSAIL

Events


Current Seminar Series

CSAIL Forum
Dertouzos Distinguished Lecture
Hot Topics in Computing
AI@MIT Reading Group
Algorithms and Complexity (A&C) 2025 - 2026
Bioinformatics Seminar 2025
Biomedical Imaging and Analysis 2025 - 2026
Boston IEEE/ACM 2025 - 2026
Brains, Minds and Machines 2025 - 2026
CIS Seminar 2025-2026
CSAIL Security Seminar 2025 - 2026
EECS Special Seminar
Embodied Intelligence 2025-2026
HCI Seminar 2025-2026
ML+Crypto Seminar
ML Tea
Theory of Computation (ToC) 2025 - 2026
Thesis Defense
Previous Seminar Series

November 02, 2025

No events scheduled

November 03, 2025

Rethinking the Control Plane for Heterogeneous Systems

Matt Sinclair
University of Wisconsin-Madison
11:00A
- 12:30P

Location

32-G882
Hewlett
Speaker: Matt Sinclair

Abstract: In recent years, system designers have increasingly been turning to heterogeneous systems to improve performance and energy efficiency. Specialized accelerators are frequently used to improve the efficiency of computations that run inefficiently on conventional, general-purpose processors. As a result, systems ranging from smartphones to datacenters, hyperscalers, and supercomputers are increasingly using large numbers of accelerators (including GPUs) while providing better efficiency than CPU-based solutions. In particular, GPUs are widely used in these systems due to their combination of programmability and efficiency. Traditionally, GPUs are throughput-oriented, focused on data parallelism, and assume synchronization happens at a coarse granularity. However, programmers have begun using these systems for a wider variety of applications which exhibit different characteristics, including latency-sensitivity, mixes of both task and data parallelism, and fine-grained synchronization. Thus, future heterogeneous systems must evolve and make deadline-aware scheduling, more intelligent data movement, efficient fine-grained synchronization, and effective power management first-order design constraints. In the first part of this talk, I will discuss our efforts to apply hardware-software co-design to help future heterogeneous systems overcome these challenges and improve performance, energy efficiency, and scalability. Then, in the second part, I will discuss how the ongoing transition to chiplet-based heterogeneous systems exacerbates these challenges, and how we address them by rethinking the control plane.

Bio: Matt Sinclair is an Assistant Professor in the Computer Sciences Department at the University of Wisconsin-Madison. He is also an Affiliate Faculty in the ECE Department and Teaching Academy at UW-Madison. His research primarily focuses on how to design, program, and optimize future heterogeneous systems. He also designs the tools for future heterogeneous systems, including serving on the gem5 Project Management Committee and the MLCommons Power, HPC, and Science Working Groups. He is a recipient of the DOE Early Career and NSF CAREER awards, and his work has been funded by AMD, the DOE, Google, NSF, and SRC. Matt's research has also been recognized several times, including an ACM Doctoral Dissertation Award nomination, a Qualcomm Innovation Fellowship, the David J. Kuck Outstanding PhD Thesis Award, and an ACM SIGARCH - IEEE Computer Society TCCA Outstanding Dissertation Award Honorable Mention. He is also the current steward for the ISCA Hall of Fame.

New Vulnerabilities from Updating Old Systems

Alex Liu
University of California, San Diego

Part Of

CSAIL Security Seminar 2025 - 2026
12:00P
- 1:00P

Location

32-D463
Stata (Star)
Abstract: Email and text messages (SMS and MMS) are among the oldest and most widely used digital communication systems in the world. Over time, providers have continuously updated these systems to support new features such as email forwarding and rich communication services. Yet, while the feature set has steadily evolved, the underlying security protocols and threat models have largely remained unchanged and have not been revisited. In this talk, I demonstrate with two examples that this discrepancy can result in subtle but powerful vulnerabilities.

The first example studies vulnerabilities introduced by email forwarding, where an email is routed through a forwarding service before reaching its final recipient. We identify a range of security vulnerabilities in this process. We further demonstrate how attackers can exploit them to deliver spoofed emails to major providers (e.g., Gmail and Outlook) and spoof emails as tens of thousands of popular domains. The second example examines email-to-text gateways, services operated by mobile carriers that translate an email into a text message. We show that vulnerabilities in these gateways, combined with vulnerabilities in how phones parse messages, allow an attacker to deliver a text message with a spoofed sender identity of their choosing (e.g., an arbitrary email address, phone number, or short code). The attacks we uncover work across a variety of phones (both Android and iPhone) and carriers (e.g., AT&T, Verizon, T-Mobile, and Google Fi). I end by discussing ongoing and future work on designing effective defense mechanisms.

Zoom info: Meeting ID: 945 5603 5878, Password: 865039
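As background on why forwarding complicates sender authentication (one of the mechanisms this talk touches on), here is a minimal, illustrative sketch of an SPF-style check against toy policy data. The domains, IP ranges, and helper names are hypothetical and chosen for illustration; this is not the attack described in the talk.

```python
import ipaddress

# Toy SPF-like policies: each domain authorizes specific sender networks.
# (Hypothetical data, documentation IP ranges, for illustration only.)
SPF_POLICY = {
    "example.com": ["203.0.113.0/24"],     # example.com's own mail servers
    "forwarder.net": ["198.51.100.0/24"],  # the forwarding service's servers
}

def spf_check(envelope_from_domain: str, connecting_ip: str) -> str:
    """Return 'pass' if the connecting IP is authorized for the domain."""
    allowed = SPF_POLICY.get(envelope_from_domain, [])
    ip = ipaddress.ip_address(connecting_ip)
    if any(ip in ipaddress.ip_network(net) for net in allowed):
        return "pass"
    return "fail"

# Direct delivery: example.com's own server connects, so the check passes.
print(spf_check("example.com", "203.0.113.10"))   # pass

# Forwarded delivery: the forwarder's server connects, but the envelope
# sender still claims example.com, so a strict check fails. Receivers
# therefore often relax or special-case forwarded mail, which is part of
# what makes this pipeline hard to secure.
print(spf_check("example.com", "198.51.100.7"))   # fail
```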

AI4Society Seminar - Angelina Wang - More Discrimination for Fairness

Angelina Wang
Cornell University
4:00P
- 5:00P

Location

45-102
Talk Abstract: As machine learning has proliferated, so too have concerns about fairness and bias. Much of this work defines fairness as the removal of discrimination, an approach that fits neatly within existing methods like constrained optimization and pursuing clearly defined metrics. However, I argue that this framing has gone too far, failing to recognize the realities of our inequitable world, and theorizing in a social vacuum. To build AI systems that work well for everyone, we must discriminate more. In this talk I will share research on three such contexts where we should treat social groups differently: when reasoning, when simulating human participants, and when personalizing chatbot responses. Finally, I will briefly touch on how even forms of oppression themselves should be distinguished, e.g., that racism and sexism are not interchangeable.

Speaker Bio: Angelina Wang is an Assistant Professor in the Department of Information Science at Cornell University and at Cornell Tech. Her research is on responsible AI, with a particular interest in fairness, evaluation, and societal impacts. She has received the NSF GRFP, EECS Rising Stars, Siebel Scholarship, and Microsoft AI & Society Fellowship. She publishes in top journals (PNAS, Nature Machine Intelligence) as well as responsible computing (FAccT, AIES) and machine learning (ICML, ACL, ICCV) venues, winning a best paper award at ACL and orals and spotlights at ICCV and ECCV. Previously she did her postdoc at Stanford University, and received her PhD in computer science from Princeton University and BS from UC Berkeley.

ML Tea: PDDL-Instruct: Enhancing Symbolic Planning Capabilities in LLMs through Logical Chain-of-Thought Instruction Tuning / Incentive-Aware Dynamic Pricing for Constrained Resource Allocation with Strategic Agents

Part Of

ML Tea
4:00P
- 5:00P

Location

32-G882
Speakers: Pulkit Verma and Yan Dai

Bio 1: Pulkit Verma is a Postdoctoral Associate at the Interactive Robotics Group at the Massachusetts Institute of Technology, where he works with Prof. Julie Shah. His research focuses on the safe and reliable behavior of taskable AI agents. He investigates the minimal set of requirements in an AI system that would enable a user to assess and understand the limits of its safe operability. He received his Ph.D. in Computer Science from Arizona State University, where he worked with Prof. Siddharth Srivastava. Before that, he completed his M.Tech. in Computer Science and Engineering at IIT Guwahati with Prof. Pradip K. Das. He was awarded the AAAI/ACM SIGAI Innovative AI Education Award at AAAI's EAAI Symposium in 2025, the Graduate College Completion Fellowship at ASU in 2023, the Post Graduation Scholarship from the Government of India in 2013 and 2014, and the Best Demo Award at the International Conference on Autonomous Agents and Multiagent Systems (AAMAS) in 2022.

Bio 2: Yan Dai is a 2nd-year PhD student in Operations Research, co-advised by Prof. Patrick Jaillet and Prof. Negin Golrezaei. His recent research focuses on tackling EconCS challenges via an online learning toolbox. He is also interested in bandits, reinforcement learning theory, and optimization for deep learning. He belongs to the communities of COLT, ICML, NeurIPS, and ICLR. He has won the Best Paper award at ACM SIGMETRICS 2025.

Abstract 1: Large language models (LLMs) have demonstrated impressive capabilities across diverse tasks, yet their ability to perform structured symbolic planning remains limited, particularly in domains requiring formal representations like the Planning Domain Definition Language (PDDL). In this paper, we present a novel instruction tuning framework designed to enhance LLMs' symbolic planning capabilities through logical chain-of-thought reasoning. Our approach focuses on teaching models to rigorously reason about action applicability, state transitions, and plan validity using explicit logical inference steps. By developing instruction prompts that guide models through the precise logical reasoning required to determine when actions can be applied in a given state, we enable LLMs to self-correct their planning processes through structured reflection. The framework systematically builds verification skills by decomposing the planning process into explicit reasoning chains about precondition satisfaction, effect application, and invariant preservation. Experimental results on multiple planning domains show that our chain-of-thought reasoning based instruction-tuned models are significantly better at planning, achieving planning accuracy of up to 94% on standard benchmarks, representing a 66% absolute improvement over baseline models. This work bridges the gap between the general reasoning capabilities of LLMs and the logical precision required for automated planning, offering a promising direction for developing better AI planning systems.

Abstract 2: Motivated by applications such as cloud platforms allocating GPUs to users or governments deploying mobile health units across competing regions, we study the dynamic allocation of a reusable resource to strategic agents with private valuations. Our objective is to simultaneously (i) maximize social welfare, (ii) satisfy multi-dimensional long-term cost constraints, and (iii) incentivize truthful reporting. We begin by numerically evaluating primal-dual methods widely used in constrained online optimization and find them to be highly fragile in strategic settings: agents can easily manipulate their reports to distort future dual updates for future gain. To address this vulnerability, we develop an incentive-aware framework that makes primal-dual methods robust to strategic behavior. Our design combines epoch-based lazy updates, where dual variables remain fixed within each epoch, with randomized exploration rounds that extract approximately truthful signals for learning. Leveraging carefully designed online learning subroutines for the dual updates, which may be of independent interest, our mechanism achieves $\tilde O(\sqrt T)$ social welfare regret, satisfies all cost constraints, and ensures incentive alignment. This matches the performance of non-strategic allocation approaches while being robust to strategic agents.
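Abstract 1 above centers on checking action applicability (precondition satisfaction) and effect application, the core of plan validation in PDDL-style planning. Below is a minimal STRIPS-style sketch of those two checks in Python, using a hypothetical blocks-world action for illustration; it is not code from the PDDL-Instruct framework.

```python
# Minimal STRIPS-style plan validation: a state is a set of ground facts,
# and an action has preconditions, an add list, and a delete list.
# (Hypothetical toy domain, for illustration only.)
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    preconditions: frozenset
    add_effects: frozenset
    del_effects: frozenset

def applicable(state: set, action: Action) -> bool:
    """Precondition satisfaction: every precondition must hold in the state."""
    return action.preconditions <= state

def apply_action(state: set, action: Action) -> set:
    """Effect application: remove delete effects, then add add effects."""
    return (state - action.del_effects) | action.add_effects

def valid_plan(state: set, plan: list[Action], goal: set) -> bool:
    """Check each step's applicability, then whether the goal holds."""
    for a in plan:
        if not applicable(state, a):
            return False
        state = apply_action(state, a)
    return goal <= state

# Tiny blocks-world example.
unstack_a_b = Action(
    name="unstack(a, b)",
    preconditions=frozenset({"on(a,b)", "clear(a)", "handempty"}),
    add_effects=frozenset({"holding(a)", "clear(b)"}),
    del_effects=frozenset({"on(a,b)", "clear(a)", "handempty"}),
)

init = {"on(a,b)", "clear(a)", "handempty", "ontable(b)"}
print(valid_plan(init, [unstack_a_b], {"holding(a)"}))  # True
```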

November 04, 2025

[ML+Crypto] Why Language Models Hallucinate

Adam Tauman Kalai
OpenAI
10:30A
- 12:30P

Location

32-G575
ML+Crypto Seminar
Title: Why Language Models Hallucinate
Speaker: Adam Tauman Kalai (OpenAI)
Time: Tuesday, November 4, 10:30am-12:30pm
Location: 32-G575

Abstract: Large language models (LLMs) sometimes generate statements that are plausible but factually incorrect, a phenomenon commonly called "hallucination." We argue that these errors are not mysterious failures of architecture or reasoning, but rather predictable consequences of standard training and evaluation incentives. We show (i) that hallucinations can be viewed as classification errors: when pretrained models cannot reliably distinguish a false statement from a true one, they may produce the false option rather than saying "I don't know"; (ii) that optimization of benchmark performance encourages guessing rather than abstaining, since most evaluation metrics penalize expressing uncertainty; and (iii) that a possible mitigation path lies in revising existing benchmarks to reward calibrated abstention, thus realigning incentives in model development.

Joint work with Santosh Vempala (Georgia Tech) and Ofir Nachum & Edwin Zhang (OpenAI).
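Point (ii) of the abstract is essentially an expected-score argument: under a plain accuracy metric, abstaining earns nothing, so even a low-confidence guess dominates. The toy calculation below (my own numbers, not from the paper) makes that incentive concrete and shows how partial credit for abstention changes it.

```python
def expected_score(p, abstain=False, abstain_credit=0.0, wrong_penalty=0.0):
    """Expected score: 1 for a correct answer, -wrong_penalty for a wrong one,
    abstain_credit for answering 'I don't know'."""
    return abstain_credit if abstain else p * 1.0 - (1 - p) * wrong_penalty

p = 0.2  # the model's chance that its best guess is actually correct

# Plain accuracy: guessing earns 0.2 in expectation, abstaining earns 0,
# so a benchmark scored this way always rewards guessing over honesty.
print(expected_score(p))                                    # 0.2
print(expected_score(p, abstain=True))                      # 0.0

# A rule that gives partial credit for calibrated abstention (here 0.3)
# makes abstaining the better choice whenever confidence is below 0.3.
print(expected_score(p, abstain=True, abstain_credit=0.3))  # 0.3
```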

Visual Computing Seminar: TBA

Abe Davis
Cornell University
12:00P
- 1:00P

Location

32-D507
Abstract: TBA

CSAIL Forum with Alison Gopnik

Alison Gopnik
UC Berkeley

Part Of

CSAIL Forum
12:00P
- 1:00P

Location

TBD
Please join us for the CSAIL Forum with Alison Gopnik, hosted by Daniela Rus.

Speaker: Alison Gopnik, Dept. of Psychology, UC Berkeley
Title: Empowerment Gain as Causal Learning, Causal Learning as Empowerment Gain: A bridge between Bayesian causal hypothesis testing and reinforcement learning
Date/time: Tuesday, November 4, 2025, 12:00-1:00pm ET
Venue: Live stream via Zoom; registration required
Bio: https://psychology.berkeley.edu/people/alison-gopnik

Abstract: Learning about the causal structure of the world is a fundamental problem for human cognition, and causal knowledge is central to both intuitive and scientific world models. However, causal models, and especially causal learning, have proved to be difficult for standard large models using standard techniques of deep learning. In contrast, cognitive scientists have applied advances in our formal understanding of causation in computer science, particularly within the Causal Bayes Net formalism, to understand human causal learning. These approaches also face challenges when it comes to learning, however. In parallel, in the very different tradition of reinforcement learning, researchers have developed the idea of an intrinsic reward signal called "empowerment." An agent is rewarded for maximizing the mutual information between its actions and their outcomes, regardless of the external reward value of those outcomes. In other words, the agent is rewarded if variation in an action systematically leads to parallel variation in an outcome, so that variation in the action predicts variation in the outcome. Empowerment thus has two dimensions: it involves both controllability and variability. The result is an agent that has maximal control over the maximal part of its environment. "Empowerment" may be an important bridge between classical Bayesian causal learning and reinforcement learning and may help to characterize causal learning in humans and enable it in machines. If an agent learns an accurate causal model of the world, it will necessarily increase its empowerment, and, vice versa, increasing empowerment will lead to a more accurate (if implicit) causal model of the world. Empowerment may also explain distinctive empirical features of children's causal learning, as well as providing a more tractable computational account of how that learning is possible.
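For reference, the "mutual information between its actions and their outcomes" mentioned in the abstract is usually formalized in the empowerment literature (following Klyubin, Polani, and Nehaniv) as a channel capacity from action sequences to resulting states; the notation here is mine, not the speaker's: $\mathfrak{E}(s) = \max_{p(a^n)} I(A^n ; S' \mid S = s)$, i.e., the agent values states from which its $n$-step actions can maximally influence, and hence predict, what happens next.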

FAST CODE SEMINAR: Agile and evolvable software construction in the era of rapidly evolving hardware accelerator designs

Charith Mendis
University of Illinois at Urbana-Champaign
2:00P
- 3:00P

Location

32-G575
Abstract: Modern AI workloads have become exceedingly abundant and important in the current computing landscape. As a result, there have been numerous software and hardware innovations aimed at accelerating these workloads. However, we observe a subtle disconnect between the software and hardware communities. Most software innovations target well-established hardware platforms such as CPUs (e.g., x86, ARM) and GPUs (e.g., NVIDIA GPUs), while hardware innovations produce plenty of other tensor accelerator designs (e.g., Gemmini, Feather) each year. We asked the question: why isn't the software community using these accelerators, or even evaluating on them? The simple yet undeniable reason is the lack of standardized software tooling compared to CPUs and GPUs. For an architecture to be used, properly designed compiler backends and correctness and performance testing tools should be abundant (e.g., the CUDA ecosystem). In this talk, I will describe how we bridge this gap by automatically generating the necessary software tools for a large class of accelerators through the Accelerator Compiler Toolkit (ACT) ecosystem. Central to ACT is an ISA definition language, TAIDL, that for the first time standardizes the hardware-software interfaces for a large class of accelerators. Departing from the traditional approach of manually constructing test oracles, performance models, or retargetable compiler backends, we instead introduce agile and evolvable methodologies to automatically generate such necessary tooling using both formal methods and machine learning techniques for any TAIDL-defined accelerator interface. I will show how such automation enables rapid software prototyping, making rapidly evolving accelerator designs usable by the software community.

Bio: Charith Mendis is an Assistant Professor in the Siebel School of Computing and Data Science at the University of Illinois at Urbana-Champaign. His broad research interests are at the intersection of compilers, programming languages, and machine learning. He received his Ph.D. and Master's from the Massachusetts Institute of Technology and his B.Sc. from the University of Moratuwa. He is the recipient of the DARPA Young Faculty Award, the NSF CAREER Award, the Google ML and Systems Junior Faculty Award, the Outstanding Advisor Award at UIUC, the William A. Martin Outstanding Master's Thesis Award at MIT, and the university Gold Medal for his B.Sc. He has won numerous paper awards, including a Distinguished Paper Award at POPL, a Best Student Paper Award at the IEEE BigData conference, an honorable mention for the Best Artifact Award at SIGMOD, a Best Paper Award at the ML for Systems workshop at ISCA, and an IEEE Top Picks Honorable Mention.

HCI Seminar - Karthik Ramani - Hands, Bodies, and Machines: Reimagining Making and Learning Through Embodied Interaction

Karthik Ramani
Purdue University

Part Of

HCI Seminar 2025-2026
4:00P
- 5:00P

Location

32-G882
Abstract: Across centuries, humans have learned and created through their hands and bodies, yet our computational tools have often abstracted away that embodied intelligence. My research reconnects the digital and the physical by designing systems where the body itself becomes the interface for authoring, learning, and fabrication.

In the first theme, embodied authoring, projects such as GhostAR, CaptuAR, and VipO transform programming into a physical act: users "program by demonstration" through gestures, movement, and spatial context rather than code. These systems enable intuitive human-robot collaboration and allow non-programmers to rapidly create context-aware AR applications by performing rather than scripting workflows. In the second theme, embodied learning, systems like avaTTAR and PoVRTool leverage augmented and virtual reality to cultivate precision and procedural skill. avaTTAR enables users to master sports skills such as table tennis through digital-twin coaching and real-time, spatially aligned feedback. PoVRTool extends this embodied feedback paradigm to high-precision manufacturing, where users engage with virtual power tools that replicate real-world dynamics, ergonomics, and safety conditions. In the third theme, embodied making, projects like GestuAR, Handymate, and AdapTutAR explore how embodied interfaces can guide complex fabrication and machine-interaction tasks. AdapTutAR investigates tutoring presence in machine workshops, where tasks often require spatially and body-coordinated human-machine interactions. Finally, AgentAR unifies these authoring tools using tool-augmented agents. Together, these projects advance a vision of computing through doing, where making, learning, and programming are not abstract symbolic processes but embodied, spatially grounded experiences that unify mind, body, and machine to democratize skill, creativity, and expertise.

Bio: Karthik Ramani is the Donald W. Feddersen Distinguished Professor of Mechanical Engineering at Purdue University, with additional appointments in Electrical and Computer Engineering and a courtesy role in the College of Education. He leads the Convergence Design Lab, where his research brings AI into the physical world by blending human-centered AI with spatial intelligence to create immersive, real-time solutions for design, manufacturing, sports training, surgery, and hands-on learning. His work spans augmented spatial interactions, symbiotic human-AI collaboration, computational design thinking and prototyping, and scalable upskilling platforms for production. Using the lens of Physical AI, he develops systems that perceive, understand, and act in real environments, extending human capacity through embodied and intuitive interfaces. He has recently published in top-tier venues across computer vision (CVPR, ECCV, ICCV), human-computer interaction (ACM CHI, UIST), and AI (NIPS, ICLR), in addition to leading engineering design journals. He co-founded VizSeek, the world's first commercial shape-based search engine for mechanical parts, and ZeroUI, a CES-awarded robotics startup. His educational innovations include Purdue's Toy Design and Product-Process-Business Model Design courses. He was a visiting professor at Stanford University and Oxford University and a research fellow at PARC (formerly Xerox PARC). He earned his B.Tech from IIT Madras, M.S. from The Ohio State University, and Ph.D. from Stanford, all in Mechanical Engineering. He currently also serves as coach of Purdue's Table Tennis team, where research meets passion in the emerging domain of Athletic AI.

This talk will also be streamed over Zoom: https://mit.zoom.us/j/96684895383.

November 05, 2025

Foundation of Prenatal Risk

Mads Nielsen
University of Copenhagen

Part Of

Biomedical Imaging and Analysis 2025 - 2026
11:00A
- 12:00P

Location

32-370
Abstract: Most pregnant women undergo at least two ultrasound exams as well as physical and biochemical examinations. Nevertheless, many adverse pregnancy outcomes are detected in time only with very low sensitivity: preterm birth 40%, low weight for gestational age 25%, congenital heart disease 40%. Maternal health history as well as ultrasound exams are obvious targets for deep learning-based early detection of adverse outcomes. We develop foundation models for the mother's health history and for ultrasound exams, and show that fine-tuning them yields substantial increases in sensitivity for several adverse outcomes within the time window for intervention. Data are based on national Danish data from more than 700,000 pregnancies.

Cancelled

Interrogating the multi-omic architecture of the exposome and intervention from populations to individuals

This event has been cancelled

11:30A
- 1:00P

When Benchmarks Lie: >50% Error Rates, Misleading Rankings, and Unstable Training [Zoom Talk]

Daniel Kang
University of Illinois at Urbana-Champaign
1:00P
- 2:00P

Location

32-G882
Abstract: "For better or worse, benchmarks shape a field," and this is also true for progress in AI for data systems. Take text-to-SQL: the BIRD leaderboard has ~100 AI agents from groups ranging from Stanford to Google. Can we trust these leaderboards, whether as researchers looking to develop new techniques or as practitioners looking to choose high-performing agents?

Unfortunately, we cannot. We show that text-to-SQL leaderboards, including BIRD and Spider 2.0, have >50% error rates! We further show that these errors result in poor correlation of leaderboard rankings compared to clean data, with rank correlations as low as 0.3. We also show that AI agents, when trained with reinforcement learning, can learn incorrect patterns or collapse, depending on the kind of noise. Finally, we show that on a range of challenging data benchmarks, AI agents still struggle. Our results show the pressing need for high-quality data to push the field forward.

Bio: Daniel is a professor of computer science at UIUC, where he focuses on everything related to AI agents and data. His lab has recently focused on understanding what AI agent benchmarks really measure and how that affects downstream performance, both for those who want to select high-performing agents and for agent developers. Daniel's work is supported by the Google ML and Systems Junior Faculty Award, Bridgewater AIA Labs, and others.

Please reach out to markakis@mit.edu for the Zoom password.
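The "rank correlations as low as 0.3" claim refers to how little a leaderboard ordering computed on noisy ground truth can agree with the ordering on clean labels. The sketch below (toy scores, not the BIRD or Spider data) shows how such a rank correlation between two leaderboards is computed.

```python
# Toy illustration: how much does a leaderboard ranking change when the
# benchmark's labels are partly wrong? (Made-up scores, not data from the talk.)
from scipy.stats import spearmanr

# Accuracy of five hypothetical agents measured against clean vs. noisy labels.
clean_scores = [0.81, 0.78, 0.74, 0.70, 0.62]   # clean ranking: A > B > C > D > E
noisy_scores = [0.66, 0.71, 0.58, 0.73, 0.64]   # label errors reshuffle the order

rho, _ = spearmanr(clean_scores, noisy_scores)
print(f"Spearman rank correlation between the two leaderboards: {rho:.2f}")
```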

Zeroth-order log-concave sampling: Uniform sampling from convex bodies

Yunbum Kook
Georgia Tech

Part Of

Algorithms and Complexity (A&C) 2025 - 2026
4:00P
- 5:00P

Location

32-G575
Abstract: Since the development of the first randomized polynomial-time algorithm for volume computation by Dyer, Frieze, and Kannan in 1989, convex-body sampling has been a central problem at the intersection of algorithms, geometry, and probability. A major milestone came in 1997, when Kannan, Lovász, and Simonovits analyzed the Ball Walk and formulated the influential KLS conjecture. This was extended to log-concave distributions by Lovász and Vempala in 2006, and further accelerated by Cousins and Vempala in 2015 through warm-start generation techniques.

In this talk, I will present new and principled approaches that understand, streamline, and improve these advances. First, I propose a simple variant of the proximal sampler that achieves the query complexity with matched Rényi orders between the initial warmness and output guarantee. Then, I introduce a simple annealing scheme that produces a warm start in q-Rényi divergence. To relay a Rényi warmness across the annealing scheme, I establish hypercontractivity under simultaneous heat flow and translate it into an improved mixing guarantee for the proximal sampler under a logarithmic Sobolev inequality. These results extend naturally to general log-concave distributions accessible via evaluation oracles, incurring additional quadratic queries.

The talk will be based on joint work with Santosh Vempala.
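For readers unfamiliar with the terminology, the q-Rényi divergence that the warm starts above are measured in is the standard quantity $R_q(\mu \| \pi) = \frac{1}{q-1} \log \mathbb{E}_{\pi}\!\left[\left(\frac{d\mu}{d\pi}\right)^{q}\right]$ for $q > 1$, which recovers the KL divergence in the limit $q \to 1$ and upper-bounds it; this is standard background, not a statement specific to the results in the talk.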

November 06, 2025

Google PhD Research & Engineering: Insights from MIT+CSAIL PhD Alumni

12:00P
- 2:00P

Location

32-D463
Star Seminar Room: 32-D463 in CSAIL
Registration is required: https://forms.gle/HvKNKeGdvdGNebJF8

When/Where: November 6th, 12-2pm, in the CSAIL Star Seminar Room (32-D463)

Abstract: Join MIT PhD alumni now working at Google for an insider's look at life as a Googler. Learn about:
  • Day-to-day experiences: Hear firsthand accounts of what it's like to work on research and engineering projects at Google.
  • Cutting-edge research: Discover how Google fosters innovation and pushes the boundaries of technology.
  • Application and interview process: Get valuable tips and insights into successfully navigating the Google hiring process.
This session will feature a presentation followed by a Q&A.

Speaker Bios: Suvinay Subramanian is a Software Engineer at Google, where he works on the architecture and codesign for Google's ML supercomputers, Tensor Processing Units (TPUs). His work has directly impacted innovative architecture and systems features in multiple generations of TPUs, and empowered performant training and serving of Google's research and production AI workloads. He also co-hosts the Computer Architecture Podcast, which spotlights cutting-edge developments in computer architecture and systems. Suvinay received his PhD here at CSAIL under Professor Daniel Sanchez.

Anny Zheng is a Senior Software Engineer of AI & Infrastructure at Google. She is directly engaged in the definition and design of Google's next-generation data center network architecture for ML. She also serves as the Chair of the Optica Optical Communications Technical Group and IEEE Communication Society Standards Liaison of the Optical Networking Technical Committee. Anny received her PhD here at MIT under RLE Professor Vincent Chan.

Using Metareasoning in Heuristic Search - Concurrent Planning and Execution and Task and Motion Planning

Erez Karpas
Technion – Israel Institute of Technology
3:00P
- 4:00P

Location

32-261
Speaker: Erez Karpas

Abstract: Metareasoning is the process of thinking about what to think about. In this talk, I will present two applications of metareasoning to heuristic search. The first addresses concurrent planning and execution, in which time starts ticking as soon as the agent starts planning, and the agent may have to execute an action even before it has a complete plan. The second addresses task and motion planning, where the agent must decide where to invest its computation time in order to find a solution as quickly as possible.

Short bio: Erez Karpas is an Associate Professor at the Faculty of Data and Decision Sciences, Technion – Israel Institute of Technology, where he is also head of the Cognitive Robotics Lab. He is currently a Harrington Faculty Fellow at the Department of Computer Science, The University of Texas at Austin. His main research interests are artificial intelligence and robotics, and specifically automated planning for robots. He was a postdoctoral associate at the Model-based Embedded and Robotics Systems Group at MIT, under the supervision of Prof. Brian Williams. Before that, he was a research fellow and the research coordinator of the Technion-Microsoft Electronic-Commerce Research Center, under Prof. Moshe Tennenholtz. He completed his Ph.D. in 2012 under the supervision of Prof. Carmel Domshlak and Prof. Shaul Markovitch at the Faculty of Industrial Engineering and Management at the Technion – Israel Institute of Technology. He obtained his M.Sc. (2005) and B.Sc. (2001) at the Department of Computer Science, Ben Gurion University.

November 07, 2025

Efficiently Batching Unambiguous Interactive Proofs

Rohan Goyal
MIT

Part Of

CIS Seminar 2025-2026
10:30A
- 12:00P

Location

32-G882
Abstract: We show that if a language $\mathcal{L}$ admits a public-coin unambiguous interactive proof (UIP) with round complexity $\ell$, where $a$ bits are communicated per round, then the batch language $\mathcal{L}^{\otimes k}$, i.e. the set of $k$-tuples of statements all belonging to $\mathcal{L}$, has an unambiguous interactive proof with round complexity $\ell\cdot\mathsf{polylog}(k)$ and per-round communication of $a\cdot \ell\cdot\mathsf{polylog}(k) + \mathsf{poly}(\ell)$ bits, assuming the verifier in the UIP has depth bounded by $\mathsf{polylog}(k)$. Prior to this work, the best known batch UIP for $\mathcal{L}^{\otimes k}$ required communication complexity at least $(\mathsf{poly}(a)\cdot k^{\epsilon} + k) \cdot \ell^{1/\epsilon}$ for any arbitrarily small constant $\epsilon>0$ (Reingold-Rothblum-Rothblum, STOC 2016).

As a corollary of our result, we obtain a doubly efficient proof system, that is, a proof system whose proving overhead is polynomial in the time of the underlying computation, for any language computable in polynomial space and in time at most $n^{O\left(\sqrt{\frac{\log n}{\log\log n}}\right)}$. This expands the state of the art of doubly efficient proof systems: prior to our work, such systems were known for languages computable in polynomial space and in time $n^{({\log n})^\delta}$ for a small $\delta>0$ significantly smaller than $1/2$ (Reingold-Rothblum-Rothblum, STOC 2016).

Based on joint work with Bonnie Berger, Matthew Hong, and Yael Kalai.

November 12, 2025

TBA

Victoria Popic
Broad Clinical Labs

Part Of

Bioinformatics Seminar 2025
11:30A
- 1:00P

Location

32-G575

TBA

Uriya First

Part Of

Algorithms and Complexity (A&C) 2025 - 2026
4:00P
- 5:00P

Location

32-G575

November 13, 2025

Easy Acceleration with Distributed Arrays on the World's Largest Interactive AI Supercomputer

Part Of

Boston IEEE/ACM 2025 - 2026
6:30P
- 8:00P

Location

32-G449
Boston Chapter of the IEEE Computer Society, GBC/ACM, and the MIT Student Chapter of SIAM (Society for Industrial and Applied Mathematics)
7:00 PM, Thursday, 13 November 2025
MIT Room 32-G449 (Kiva) and online via Zoom

Easy Acceleration with Distributed Arrays on the World's Largest Interactive AI Supercomputer
Jeremy Kepner

Please register in advance for this seminar even if you plan to attend in person at https://acm-org.zoom.us/webinar/register/1017607373508/WN_lYs4lxKfSlGkMVq71ibN-g
After registering, you will receive a confirmation email containing information about joining the webinar. Indicate on the registration form if you plan to attend in person. This will help us determine whether the room is close to reaching capacity. We plan to serve light refreshments (probably pizza) before the talk, starting at around 6:30 pm. Letting us know you will come in person will help us determine how much pizza to order. We may make some auxiliary material, such as slides and access to the recording, available after the seminar to people who have registered.

Abstract: High-level programming languages and GPU accelerators are powerful enablers for a wide range of applications. Achieving scalable vertical (within a compute node), horizontal (across compute nodes), and temporal (over different generations of hardware) performance while retaining productivity requires effective abstractions. Distributed arrays are one such abstraction that enables high-level programming to achieve highly scalable performance. Distributed arrays achieve this performance by deriving parallelism from data locality, which naturally leads to high memory bandwidth efficiency. This talk explores distributed array performance on a variety of hardware. Scalable performance is demonstrated within and across CPU cores, CPU nodes, and GPU nodes. The interactive AI supercomputing hardware used spans decades and allows a direct comparison of hardware improvements over this time range.

Bio: Dr. Jeremy Kepner is an MIT Lincoln Laboratory Fellow. He founded the Lincoln Laboratory Supercomputing Center and pioneered the establishment of the Massachusetts Green High Performance Computing Center. He has developed novel big data and parallel computing software used by thousands of scientists and engineers worldwide. He has led several embedded computing efforts, which earned him a 2011 R&D 100 Award. Kepner has chaired the SIAM Data Mining conference, the IEEE Big Data conference, and the IEEE High Performance Extreme Computing conference. Kepner is the author of two bestselling books, Parallel MATLAB for Multicore and Multinode Computers and Graph Algorithms in the Language of Linear Algebra. His peer-reviewed publications include works on abstract algebra, astronomy, astrophysics, cloud computing, cybersecurity, data mining, databases, graph algorithms, health sciences, plasma physics, signal processing, and 3D visualization. In 2014, he received Lincoln Laboratory's Technical Excellence Award. Kepner holds a BA degree in astrophysics from Pomona College and a PhD degree in astrophysics from Princeton University. He is a fellow of the Society for Industrial and Applied Mathematics (SIAM) and is a faculty advisor to the MIT SIAM student group.

Directions to 32-G449 (MIT Stata Center, 32 Vassar Street, Cambridge, MA): Please use the main entrance to the Stata Center at 32 Vassar Street (the entrance closest to Main Street), as those doors will be unlocked. Upon entering, proceed to the elevators, which will be on the right after passing a large set of stairs and a MITAC kiosk. Take the elevator to the 4th floor and turn right, following the hall to an open area; 32-G449 will be on the left.

This joint meeting of the Boston Chapter of the IEEE Computer Society and GBC/ACM will be hybrid (in person and online). Up-to-date information about this and other talks is available online at https://ewh.ieee.org/r1/boston/computer/. You can sign up to receive updated status information about this talk and informational emails about future talks at https://mailman.mit.edu/mailman/listinfo/ieee-cs, our self-administered mailing list.
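The abstract above attributes the scalability of distributed arrays to "deriving parallelism from data locality": each worker owns a block of the array and computes only on that block. The sketch below is a toy, single-machine Python illustration of that idea (block ownership plus a purely local operation and a small reduction); it is not the distributed-array software discussed in the talk.

```python
# Toy distributed-array idea: split a global array into per-worker blocks,
# let each worker compute only on the block it "owns" (data locality),
# then combine the per-block results. Illustration only.
import numpy as np
from multiprocessing import Pool

def local_sum_of_squares(block: np.ndarray) -> float:
    # Purely local work: touches only the owner's block of the array.
    return float(np.sum(block * block))

if __name__ == "__main__":
    n_workers = 4
    global_array = np.arange(1_000_000, dtype=np.float64)

    # Block distribution: worker i owns one contiguous slice.
    blocks = np.array_split(global_array, n_workers)

    with Pool(n_workers) as pool:
        partial = pool.map(local_sum_of_squares, blocks)

    # A small reduction combines the local results; both numbers should match.
    print(sum(partial), float(np.sum(global_array ** 2)))
```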

November 14, 2025

Charles River Crypto Day @ MIT

Jad Silbak (MIT), Saroja Erabelli (NYU), Aparna Gupte (MIT), Jesko Dujmovic (BU and Northeastern)

Part Of

CIS Seminar 2025-2026
9:30A
- 4:00P

Location

32-G882
When: Friday, November 14
Where: MIT Stata Center, Hewlett Room (G-882)
Organizers: Ran Canetti, Henry Corrigan-Gibbs, Yael Kalai, Eran Tromer, Vinod Vaikuntanathan, and Daniel Wichs

Program:
  • 9:30am-10:00am: Coffee/Welcome
  • 10:00am-11:00am: Jad Silbak (MIT), "Extractors for Samplable Distributions with Low Min-Entropy"
  • 11:30am-12:30pm: Saroja Erabelli (NYU), "Shuffling is Universal: Statistical Additive Randomized Encodings for All Functions"
  • 12:30pm-1:30pm: Lunch (provided)
  • 1:30pm-2:30pm: Aparna Gupte (MIT), "Classical Obfuscation of Quantum Circuits via Publicly-Verifiable QFHE"
  • 3:00pm-4:00pm: Jesko Dujmovic (BU and Northeastern), "When Simple Permutations Mix Poorly: Limited Independence Does Not Imply Pseudorandomness"

Abstracts for the hour-long talks:

Title: Extractors for Samplable Distributions with Low Min-Entropy
Abstract: Trevisan and Vadhan (FOCS 2000) introduced the notion of seedless extractors for samplable distributions, that is, deterministic extractors for sources that can be generated by an efficient sampling algorithm. They showed that, under a strong complexity-theoretic (derandomization) hardness assumption, there are extractors for samplable distributions with large min-entropy of $k = (1-\gamma)\cdot n$, for some small constant $\gamma>0$. Recent work by Ball, Goldin, Dachman-Soled, and Mutreja (FOCS 2023) weakened the hardness assumption. However, since the original paper by Trevisan and Vadhan, there has been no improvement in the min-entropy threshold $k$. In this talk, I will survey recent developments on this problem. In particular, I will present a construction for samplable distributions with low min-entropy of $k = n^{1-\gamma}$ for some constant $\gamma>0$, achieving $k < n/2$ (which is a barrier for the construction of Trevisan and Vadhan). Our approach builds on the technique of Trevisan and Vadhan, while introducing new objects and ideas. We introduce and construct two objects: an errorless (seedless) condenser for samplable distributions, and functions that are hard to compute on every samplable distribution with sufficient min-entropy. We use techniques by Shaltiel and Silbak (STOC 2024), as well as additional tools and ideas, to construct the two new objects under hardness assumptions. We then show how to modify the construction of Trevisan and Vadhan, using these new objects, so that the barrier of $k = n/2$ can be bypassed and we can achieve an extractor for samplable distributions with low min-entropy. This is joint work with Marshall Ball and Ronen Shaltiel.

Title: Shuffling is Universal: Statistical Additive Randomized Encodings for All Functions
Abstract: The shuffle model is a widely used abstraction for non-interactive anonymous communication. It allows $n$ parties holding private inputs $x_1,\dots,x_n$ to simultaneously send messages to an evaluator, so that the messages are received in a random order. The evaluator can then compute a joint function $f(x_1,\dots,x_n)$, ideally while learning nothing else about the private inputs. The model has become increasingly popular both in cryptography, as an alternative to non-interactive secure computation in trusted setup models, and even more so in differential privacy, as an intermediate between the high-privacy, little-utility local model and the little-privacy, high-utility central curator model. The main open question in this context is which functions $f$ can be computed in the shuffle model with statistical security. While general feasibility results were obtained using public-key cryptography, the question of statistical security has remained elusive. The common conjecture has been that even relatively simple functions cannot be computed with statistical security in the shuffle model. We refute this conjecture, showing that all functions can be computed in the shuffle model with statistical security. In particular, any differentially private mechanism in the central curator model can also be realized in the shuffle model with essentially the same utility, while the evaluator learns nothing beyond the central model result. This feasibility result is obtained by constructing a statistically secure additive randomized encoding (ARE) for any function. An ARE randomly maps individual inputs to group elements whose sum only reveals the function output. Similarly to other types of randomized encoding of functions, our statistical ARE is efficient for functions in $NC^1$ or $NL$. Alternatively, we get a computationally secure ARE for all polynomial-time functions using a one-way function. More generally, we can convert any (information-theoretic or computational) "garbling scheme" to an ARE with a constant-factor size overhead. Joint work with Nir Bitansky, Rachit Garg, and Yuval Ishai.

Title: Classical Obfuscation of Quantum Circuits via Publicly-Verifiable QFHE
Abstract: A classical obfuscator for quantum circuits is a classical program that, given the classical description of a quantum circuit Q, outputs the classical description of a functionally equivalent quantum circuit Q' that hides as much as possible about Q. Previously, the only known feasibility result for classical obfuscation of quantum circuits (Bartusek and Malavolta, ITCS 2022) was limited to "null" security, which is only meaningful for circuits that always reject. On the other hand, if the obfuscator is allowed to compile the quantum circuit Q into a quantum state |Q'>, there exist feasibility results for obfuscating much more expressive classes of circuits: all pseudo-deterministic quantum circuits (Bartusek, Kitagawa, Nishimaki, and Yamakawa, STOC 2023; Bartusek, Brakerski, and Vaikuntanathan, STOC 2024), and even all unitaries (Huang and Tang, FOCS 2025). We show that (relative to a classical oracle) there exists a classical obfuscator for all pseudo-deterministic quantum circuits. As our main technical step, we give the first construction of a compact quantum fully-homomorphic encryption (QFHE) scheme that supports public verification of (pseudo-deterministic) quantum evaluation, relative to a classical oracle. To construct our QFHE scheme, we improve on an approach introduced by Bartusek, Kitagawa, Nishimaki, and Yamakawa (STOC 2023), which previously required ciphertexts that are both quantum and non-compact due to a heavy use of quantum coset states and their publicly-verifiable properties. As part of our core technical contribution, we introduce new techniques for analyzing coset states that can be generated "on the fly", by proving new cryptographic properties of the one-shot signature scheme of Shmueli and Zhandry (CRYPTO 2025). Our techniques allow us to produce QFHE ciphertexts that are purely classical, compact, and publicly-verifiable. This additionally yields the first classical verification of quantum computation protocol for BQP that simultaneously satisfies blindness and public verifiability.

Title: When Simple Permutations Mix Poorly: Limited Independence Does Not Imply Pseudorandomness
Abstract: Over the past two decades, several works have used (almost) $k$-wise independence as a proxy property for block ciphers, since it guarantees resistance against broad classes of statistical attacks. For example, even the case $k = 2$ already implies security against differential and linear cryptanalysis. Hoory, Magen, Myers, and Rackoff (ICALP '04; TCS '05) formulated an appealing conjecture: if the sequential composition of $T$ independent local randomized permutations is (close to) four-wise independent, then it should also be a pseudorandom permutation. Here, "local" means that each output bit depends on only a constant number of input bits. This conjecture offers a potential strong justification for analyses of block ciphers that establish (almost) $k$-wise independence of this type of construction. In this work, we disprove the conjecture in full generality by presenting an explicit local randomized permutation whose sequential composition is four-wise independent, but not a pseudorandom permutation. Our counterexample in fact extends to $k$-wise independence for any constant $k$.
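For context on the last abstract, a family of permutations $\pi$ of $\{0,1\}^n$ is (almost) $k$-wise independent if, for every $k$ distinct inputs $x_1,\dots,x_k$, the tuple $(\pi(x_1),\dots,\pi(x_k))$ is distributed (close to) the way it would be under a uniformly random permutation; this is standard background terminology, not a definition taken from the talk.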

November 17, 2025

Understanding the Efficacy of Phishing Training in Practice

Grant Ho
UChicago

Part Of

CSAIL Security Seminar 2025 - 2026
12:00P
- 1:00P

Location

32-G449
Kiva
Abstract: As a result of regulation and cyber-insurance mandates, many organizations require their employees to periodically take various forms of cybersecurity training. Despite a long history of research supporting some forms of security training, the practice remains controversial; recent work has questioned its efficacy and highlighted the burden it can impose. This talk will discuss our recent paper that empirically evaluated the efficacy of two ubiquitous forms of enterprise security training: annual cybersecurity awareness training and embedded anti-phishing training exercises. Specifically, our work conducted and analyzed an 8-month randomized controlled experiment involving ten simulated phishing campaigns sent to over 19,500 employees at a large healthcare organization. Our results suggest that commonly deployed anti-phishing training programs are unlikely to offer significant protective value, and our analysis surfaces several challenges that these trainings may inherently face in the wild.

Bio: Grant Ho is an assistant professor in computer science at the University of Chicago. His research focuses on securing enterprises and organizations through data-driven insights and methods. Previously, Grant was a postdoctoral fellow at UC San Diego and received his PhD from UC Berkeley. His work has been recognized by the 2017 Internet Defense Prize and four distinguished/best paper awards across top security conferences such as IEEE S&P and USENIX Security.

Zoom info: Meeting ID: 945 5603 5878, Password: 865039

November 18, 2025

Visual Computing Seminar: Support Function Parameterization of Convex Hulls for Fast Distance Queries

Anh Truong
CSAIL
12:00P
- 1:00P

Location

32-D463
Star Room
Abstract: TBA

CSAIL Forum with Sam Madden

Sam Madden
CSAIL

Part Of

CSAIL Forum
12:00P
- 1:00P

Location

TBD
Please join us for the CSAIL Forum with Prof. Sam Madden.

Speaker: Sam Madden, College of Computing Distinguished Professor
Date/time: Tuesday, November 18, 2025, 12:00-1:00pm ET
Venue: Live stream via Zoom; registration required
Title: How I Learned to Start Querying and Love AI

Abstract: Over the past five decades, the relational database model has proven to be a scalable and adaptable model for querying a variety of structured data, with use cases in analytics, transactions, graphs, streaming, and more. However, most of the world's data is unstructured. Thus, despite their success, the reality is that the vast majority of the world's data has remained beyond the reach of relational systems. The rise of deep learning and generative AI offers an opportunity to change this. These models provide a stunning capability to extract semantic understanding from almost any type of document, including text, images, and video, which can extend the reach of databases to all the world's data. In this talk I explore how these new technologies will transform the way we build database management software, creating new systems that can ingest, store, process, and query all data. Building such systems presents many opportunities and challenges. In this talk I focus on three: scalability, correctness, and reliability, and argue that the declarative programming paradigm that has served relational systems so well offers a path forward in the new world of AI data systems as well. To illustrate this, I describe several examples of such declarative AI systems we have built in document and video processing, and provide a set of research challenges and opportunities to guide research in this exciting area going forward.

Bio: Samuel Madden is the College of Computing Distinguished Professor of Computing at MIT. His research interests include databases, distributed computing, and AI systems. Past research projects include learned database systems, the C-Store column-oriented database system, and the CarTel mobile sensor network system. Madden heads the Data Systems Group at MIT and the Data Science and AI Lab (DSAIL), an industry-supported collaboration focused on developing systems that use AI and machine learning. Madden received his Ph.D. from the University of California at Berkeley in 2003, where he worked on the TinyDB system for data collection from sensor networks. Madden was named one of Technology Review's Top 35 Under 35 in 2005 and an ACM Fellow in 2020, and is the recipient of several awards, including the SIGMOD Edgar F. Codd Innovations Award and "test of time" awards from VLDB, SIGMOD, SIGMOBILE, and SenSys. He is the co-founder and Chief Scientist at Cambridge Mobile Telematics, which develops technology to make roads safer and drivers better.

AI4Society Seminar - Nikhil Garg - Recommendations in High-Stakes Settings

Nikhil Garg
Cornell University
4:00P
- 5:00P

Location

45-102
Talk Abstract: Recommendation and search systems are now used in high-stakes settings, including to help people find jobs, schools, and partners. Building public-interest recommender systems in such settings brings both individual-level challenges (enabling exploration, diversity, data quality) and societal ones (fairness, capacity constraints, algorithmic monoculture). In this talk, I'll discuss our theoretical, empirical, and deployment work in tackling these challenges, including ongoing work on (a) applicant behavior and recommendations for the NYC HS match, (b) a platform to help discharge patients to long-term care facilities, and (c) feed ranking algorithms on Bluesky for research paper recommendations.

Speaker Bio: Nikhil Garg is an Assistant Professor of Operations Research and Information Engineering at Cornell Tech as part of the Jacobs Institute. He uses algorithms, data science, and economics approaches to study democracy, markets, and societal systems at large. Nikhil has received the NSF CAREER award, the INFORMS George Dantzig Dissertation Award, an honorable mention for the ACM SIGecom dissertation award, and paper awards including from CSCW, EAAMO, and CHIL. He received his PhD from Stanford University and has spent considerable time collaborating with government agencies and non-profits. His work has been supported by the NSF, NASA, the Sloan Foundation, and other organizations.

Previous Seminar Series
  • Algorithms and Complexity (A&C) 2024 - 2025
  • Biomedical Imaging and Analysis 2024 - 2025
  • Boston IEEE/ACM 2024 -2025
  • Brains, Minds and Machines 2024 - 2025
  • CIS Seminar 2024 - 2025
  • CSAIL Security Seminar 2024 - 2025
  • Embodied Intelligence 2024-2025
  • Theory of Computation (ToC) 2024 - 2025
  • HCI Seminar Series 2024
  • Theory of Computation (ToC) Seminar 2024
  • Brains, Minds and Machines 2023 - 2024
  • Boston IEEE/ACM Joint Seminar Series 2023 - 2024
  • CIS Seminar Series 2023 - 2024
  • Theory of Computation (ToC) Seminar 2023
  • Biomedical Imaging and Analysis 2023 - 2024
  • Bioinformatics Seminar Series 2023
  • Machine Learning and Health Seminar Series, Fall 2023
  • CSAIL Security Seminar Series 2023 - 2024
  • Algorithms and Complexity Seminar 2023
  • Brains, Minds and Machines Seminar Series 2022 - 2023
  • Biomedical Imaging and Analysis 2022 - 2023
  • Boston IEEE/ACM Joint Seminar Series 2022 - 2023
  • CSAIL Security Seminar Series 2022-2023
  • Cryptography and Information (CIS) Seminar 2022
  • HCI Seminar Series 2022 - 2023
  • CSAIL Security Seminar Series 2020
  • IEEE Computer Society and GBC/ACM 2019-2020
  • Brains, Minds and Machines Seminar Series 2019 - 2020
  • Algorithms and Complexity Seminar 2019-2020
  • Biomedical Imaging and Analysis 2019 - 2020
  • Fast Code Seminar 2019
  • Machine Learning Seminar Series 2019
  • Robotics@MIT Seminar Series 2019
  • CSAIL Security Seminar Series 2019
  • EECS Special Seminar Series 2019
  • Bioinformatics Seminar Series 2019
  • HCI Seminar Series 2019
  • Theory of Computation Seminar (ToC) 2019
  • Cryptography and Information Security (CIS) Seminar 2019
  • CSAIL Alliances Tech Talk 2018 - 2019
  • Programming Languages & Software Engineering Seminar 2018-2019
  • HCI Seminar Series 2018
  • Algorithms & Complexity Seminars 2018-2019
  • Biomedical Imaging and Analysis 2018 - 2019
  • IEEE Computer Society and GBC/ACM 2018-2019
  • Brains, Minds and Machines 2018/2019
  • Machine Learning Seminar Series 2018
  • Theory and Beyond
  • CSAIL Security Seminar 2018/2019
  • Robotics@MIT Seminar Series 2018
  • Bioinformatics Seminar Series 2018
  • Theory of Computation (TOC) 2018
  • Cryptography and Information Seminar (CIS) 2018
  • Brains, Minds and Machines Seminar Series 2017/2018
  • IEEE Computer Society and GBC/ACM 2017/2018
  • Machine Learning Seminar Series
  • CSAIL Security Seminar 2017/2018
  • Algorithms and Complexity Seminar Series 2017/2018
  • Biomedical Imaging and Analysis 2017/2018
  • Brains, Minds and Machines Seminar Series 2017
  • Machine Learning Seminar Series
  • Vision Seminar Series 2017
  • Robotics@MIT Seminar Series 2017
  • Bioinformatics Seminar Series 2017
  • EECS Special Seminar Series 2017
  • Cryptography and Information Seminar (CIS) 2017
  • Theory of Computation (TOC) 2017
  • HCI Seminar Series
  • Biomedical Imaging and Analysis 2016/2017
  • PL/SE Serminar Series 2016/2017
  • Algorithms and Complexity Seminar Series 2016/2017
  • CSAIL Security Seminar 2016/2017
  • Boston IEEE/ACM Joint Seminar Series 2016/2017

