MIT CSAIL

Events


Current Seminar Series

CSAIL Forum
Dertouzos Distinguished Lecture
Hot Topics in Computing
AI@MIT Reading Group
Algorithms and Complexity (A&C) 2025 - 2026
Bioinformatics Seminar 2025
Biomedical Imaging and Analysis 2025 - 2026
Boston IEEE/ACM 2025 - 2026
Brains, Minds and Machines 2025 - 2026
CIS Seminar 2025-2026
CSAIL Security Seminar 2025 - 2026
EECS Special Seminar
Embodied Intelligence 2025-2026
HCI Seminar 2025-2026
ML+Crypto Seminar
ML Tea
Theory of Computation (ToC) 2025 - 2026
Thesis Defense

November 11, 2025

No events scheduled

November 12, 2025

TBA

Victoria Popic
Broad Clinical Labs

Part Of

Bioinformatics Seminar 2025
11:30A - 1:00P

Location

32-G575

Low-Query Locally Testable Codes

Uriya First
University of Haifa

Part Of

Algorithms and Complexity (A&C) 2025 - 2026
4:00P - 5:00P

Location

32-G575
Locally testable codes (LTCs) are a special kind of error-correcting code where the receiver can correctly detect, with high probability, whether the received data was significantly corrupted by reading just a few of its letters (chosen at random according to some distribution). This is useful because decoding very corrupted data may be time-consuming; it is better to ask for retransmission in such a case. However, LTCs are usually motivated by their connection to probabilistically checkable proofs (PCPs).

While good error-correcting codes were known to exist almost since the topic was founded, the existence of good LTCs remained open for a long time. In fact, LTCs were even shown to be constrained: e.g., Ben-Sasson, Goldreich and Sudan showed that there are no good 2-query LTCs over a binary alphabet, where "2-query" means that one is allowed to read only 2 letters from the received word. Surprisingly, Dinur-Evra-Lubotzky-Livne-Mozes and Panteleev-Kalachev (independently) showed recently that good LTCs do exist. In a more recent work with Tali Kaufman, we showed that there are even good 2-query LTCs (with a huge alphabet), and that the DELLM construction can be explained by means of cosystolic expansion (of sheaves). Finally, in a recent (still unpublished) work with Stav Lazarovich, we show that good 2-query LTCs exist for every alphabet of 3 or more letters, so the restriction proved by Ben-Sasson, Goldreich and Sudan is the only one.

I will survey the above works and then explain some of the key ideas of my work with Kaufman, namely, the use of sheaves (a.k.a. local systems) on cell complexes to construct codes with a 2-query tester, and the use of a novel (almost-)local criterion to establish their desired properties.
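
For readers new to local testability, a standard background example (not a result from this talk) is the 3-query test for the Hadamard code, whose codewords are truth tables of linear functions over \mathbb{F}_2^n:

    % BLR linearity test: a classical 3-query local tester for the Hadamard code.
    \[
      \mathsf{Test}(f):\ \text{pick } x, y \in \mathbb{F}_2^n \text{ uniformly at random; accept iff } f(x) + f(y) = f(x+y) \pmod{2}.
    \]
    % Every codeword passes with probability 1, while a word that is far from all
    % codewords is rejected with probability that grows with its distance from the
    % code; that is the local-testability guarantee, here with 3 queries rather than 2.

The talk concerns the much harder question of achieving such guarantees with only 2 queries while keeping good rate and distance.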

November 13, 2025

Easy Acceleration with Distributed Arrays on the World's Largest Interactive AI Supercomputer

Part Of

Boston IEEE/ACM 2025 - 2026
6:30P - 8:00P

Location

32-G449
Boston Chapter of the IEEE Computer Society, GBC/ACM, and MIT Student Chapter of SIAM (Society for Industrial and Applied Mathematics)
7:00 PM, Thursday, 13 November 2025
MIT Room 32-G449 (Kiva) and online via Zoom

Easy Acceleration with Distributed Arrays on the World's Largest Interactive AI Supercomputer --- Jeremy Kepner

Please register in advance for this seminar, even if you plan to attend in person, at https://acm-org.zoom.us/webinar/register/1017607373508/WN_lYs4lxKfSlGkMVq71ibN-g. After registering, you will receive a confirmation email containing information about joining the webinar. Indicate on the registration form if you plan to attend in person; this will help us determine whether the room is close to reaching capacity. We plan to serve light refreshments (probably pizza) before the talk, starting at around 6:30 pm, and letting us know you will come in person will help us determine how much pizza to order. We may make some auxiliary material, such as slides and access to the recording, available after the seminar to people who have registered.

Abstract: High-level programming languages and GPU accelerators are powerful enablers for a wide range of applications. Achieving scalable vertical (within a compute node), horizontal (across compute nodes), and temporal (over different generations of hardware) performance while retaining productivity requires effective abstractions. Distributed arrays are one such abstraction that enables high-level programming to achieve highly scalable performance. Distributed arrays achieve this performance by deriving parallelism from data locality, which naturally leads to high memory bandwidth efficiency. This talk explores distributed array performance on a variety of hardware. Scalable performance is demonstrated within and across CPU cores, CPU nodes, and GPU nodes. The interactive AI supercomputing hardware used spans decades and allows a direct comparison of hardware improvements over this time range.

Bio: Dr. Jeremy Kepner is an MIT Lincoln Laboratory Fellow. He founded the Lincoln Laboratory Supercomputing Center and pioneered the establishment of the Massachusetts Green High Performance Computing Center. He has developed novel big data and parallel computing software used by thousands of scientists and engineers worldwide. He has led several embedded computing efforts, which earned him a 2011 R&D 100 Award. Kepner has chaired the SIAM Data Mining conference, the IEEE Big Data conference, and the IEEE High Performance Extreme Computing conference. Kepner is the author of two bestselling books, Parallel MATLAB for Multicore and Multinode Computers and Graph Algorithms in the Language of Linear Algebra. His peer-reviewed publications include works on abstract algebra, astronomy, astrophysics, cloud computing, cybersecurity, data mining, databases, graph algorithms, health sciences, plasma physics, signal processing, and 3D visualization. In 2014, he received Lincoln Laboratory's Technical Excellence Award. Kepner holds a BA degree in astrophysics from Pomona College and a PhD degree in astrophysics from Princeton University. He is a fellow of the Society for Industrial and Applied Mathematics (SIAM) and is a faculty advisor to the MIT SIAM student group.

Directions to 32-G449 (MIT Stata Center, 32 Vassar Street, Cambridge, MA): Please use the main entrance to the Stata Center at 32 Vassar Street (the entrance closest to Main Street), as those doors will be unlocked. Upon entering, proceed to the elevators, which will be on the right after passing a large set of stairs and a MITAC kiosk. Take the elevator to the 4th floor and turn right, following the hall to an open area; 32-G449 will be on the left.

This joint meeting of the Boston Chapter of the IEEE Computer Society and GBC/ACM will be hybrid (in person and online). Up-to-date information about this and other talks is available online at https://ewh.ieee.org/r1/boston/computer/. You can sign up to receive updated status information about this talk and informational emails about future talks at https://mailman.mit.edu/mailman/listinfo/ieee-cs, our self-administered mailing list.
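
As a generic illustration of the distributed-array abstraction described above (a minimal sketch assuming numpy and mpi4py are installed; it is not the software presented in the talk), each process owns one block of a logical global array, computes on its own block, and communicates only to combine results:

    # Minimal distributed-array sketch. Run under MPI, e.g.: mpiexec -n 4 python <this script>
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    N = 1_000_000                                       # length of the logical global array
    counts = [N // size + (1 if r < N % size else 0) for r in range(size)]
    lo = sum(counts[:rank])                             # this rank owns indices [lo, lo + counts[rank])
    local = np.arange(lo, lo + counts[rank], dtype=np.float64)

    local_sum = local.sum()                             # parallelism from data locality: each rank
                                                        # touches only the block it owns
    global_sum = comm.allreduce(local_sum, op=MPI.SUM)  # communicate only for the global result
    if rank == 0:
        print(global_sum, "expected", N * (N - 1) / 2)

The owner-computes pattern sketched here is what lets distributed-array libraries present mostly serial-looking array code to the user while scaling across cores and nodes.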

November 14, 2025

Charles River Crypto Day @ MIT

Jad Silbak (MIT), Saroja Erabelli (NYU), Aparna Gupte (MIT), Jesko Dujmovic (BU and Northeastern)

Part Of

CIS Seminar 2025-2026
9:30A - 4:00P

Location

32-G882
When: Friday, November 14
Where: MIT Stata Center, Hewlett Room (G-882)
Organizers: Ran Canetti, Henry Corrigan-Gibbs, Yael Kalai, Eran Tromer, Vinod Vaikuntanathan and Daniel Wichs

Program:
9:30am - 10:00am: Coffee/Welcome
10:00am - 11:00am: Jad Silbak (MIT), "Extractors for Samplable Distributions with Low Min-Entropy"
11:30am - 12:30pm: Saroja Erabelli (NYU), "Shuffling is Universal: Statistical Additive Randomized Encodings for All Functions"
12:30pm - 1:30pm: Lunch (provided)
1:30pm - 2:30pm: Aparna Gupte (MIT), "Classical Obfuscation of Quantum Circuits via Publicly-Verifiable QFHE"
3:00pm - 4:00pm: Jesko Dujmovic (BU and Northeastern), "When Simple Permutations Mix Poorly: Limited Independence Does Not Imply Pseudorandomness"

Abstracts for the hour-long talks:

Title: Extractors for Samplable Distributions with Low Min-Entropy
Speaker: Jad Silbak
Abstract: Trevisan and Vadhan (FOCS 2000) introduced the notion of seedless extractors for samplable distributions, that is, deterministic extractors for sources that can be generated by an efficient sampling algorithm. They showed that, under a strong complexity-theoretic (derandomization) hardness assumption, there are extractors for samplable distributions with large min-entropy of k = (1 − γ)·n, for some small constant γ > 0. Recent work by Ball, Goldin, Dachman-Soled and Mutreja (FOCS 2023) weakened the hardness assumption. However, since the original paper by Trevisan and Vadhan, there has been no improvement in the min-entropy threshold k.

In this talk, I will survey recent developments on this problem. In particular, I will present a construction for samplable distributions with low min-entropy of k = n^{1−γ} for some constant γ > 0, achieving k < n/2 (which is a barrier for the construction of Trevisan and Vadhan).

Our approach builds on the technique of Trevisan and Vadhan, while introducing new objects and ideas. We introduce and construct two objects: an errorless (seedless) condenser for samplable distributions, and functions that are hard to compute on every samplable distribution with sufficient min-entropy. We use techniques by Shaltiel and Silbak (STOC 2024), as well as additional tools and ideas, to construct the two new objects under hardness assumptions. We then show how to modify the construction of Trevisan and Vadhan, using these new objects, so that the barrier of k = n/2 can be bypassed, and we can achieve an extractor for samplable distributions with low min-entropy.

This is joint work with Marshall Ball and Ronen Shaltiel.

Title: Shuffling is Universal: Statistical Additive Randomized Encodings for All Functions
Speaker: Saroja Erabelli
Abstract: The shuffle model is a widely used abstraction for non-interactive anonymous communication. It allows $n$ parties holding private inputs $x_1,\dots,x_n$ to simultaneously send messages to an evaluator, so that the messages are received in a random order. The evaluator can then compute a joint function $f(x_1,\dots,x_n)$, ideally while learning nothing else about the private inputs. The model has become increasingly popular both in cryptography, as an alternative to non-interactive secure computation in trusted setup models, and even more so in differential privacy, as an intermediate between the high-privacy, little-utility local model and the little-privacy, high-utility central curator model.

The main open question in this context is which functions $f$ can be computed in the shuffle model with statistical security. While general feasibility results were obtained using public-key cryptography, the question of statistical security has remained elusive. The common conjecture has been that even relatively simple functions cannot be computed with statistical security in the shuffle model. We refute this conjecture, showing that all functions can be computed in the shuffle model with statistical security. In particular, any differentially private mechanism in the central curator model can also be realized in the shuffle model with essentially the same utility, while the evaluator learns nothing beyond the central-model result.

This feasibility result is obtained by constructing a statistically secure additive randomized encoding (ARE) for any function. An ARE randomly maps individual inputs to group elements whose sum only reveals the function output. Similarly to other types of randomized encodings of functions, our statistical ARE is efficient for functions in $NC^1$ or $NL$. Alternatively, we get computationally secure AREs for all polynomial-time functions using a one-way function. More generally, we can convert any (information-theoretic or computational) garbling scheme to an ARE with a constant-factor size overhead.

Joint work with Nir Bitansky, Rachit Garg, and Yuval Ishai.

Title: Classical Obfuscation of Quantum Circuits via Publicly-Verifiable QFHE
Speaker: Aparna Gupte
Abstract: A classical obfuscator for quantum circuits is a classical program that, given the classical description of a quantum circuit Q, outputs the classical description of a functionally equivalent quantum circuit Q' that hides as much as possible about Q. Previously, the only known feasibility result for classical obfuscation of quantum circuits (Bartusek and Malavolta, ITCS 2022) was limited to "null" security, which is only meaningful for circuits that always reject. On the other hand, if the obfuscator is allowed to compile the quantum circuit Q into a quantum state |Q'>, there exist feasibility results for obfuscating much more expressive classes of circuits: all pseudo-deterministic quantum circuits (Bartusek, Kitagawa, Nishimaki and Yamakawa, STOC 2023; Bartusek, Brakerski and Vaikuntanathan, STOC 2024), and even all unitaries (Huang and Tang, FOCS 2025).

We show that (relative to a classical oracle) there exists a classical obfuscator for all pseudo-deterministic quantum circuits. As our main technical step, we give the first construction of a compact quantum fully-homomorphic encryption (QFHE) scheme that supports public verification of (pseudo-deterministic) quantum evaluation, relative to a classical oracle. To construct our QFHE scheme, we improve on an approach introduced by Bartusek, Kitagawa, Nishimaki and Yamakawa (STOC 2023), which previously required ciphertexts that are both quantum and non-compact due to a heavy use of quantum coset states and their publicly-verifiable properties. As part of our core technical contribution, we introduce new techniques for analyzing coset states that can be generated "on the fly", by proving new cryptographic properties of the one-shot signature scheme of Shmueli and Zhandry (CRYPTO 2025). Our techniques allow us to produce QFHE ciphertexts that are purely classical, compact, and publicly verifiable. This additionally yields the first classical verification of quantum computation protocol for BQP that simultaneously satisfies blindness and public verifiability.

Joint work with James Bartusek, Saachi Mutreja and Omri Shmueli.

Title: When Simple Permutations Mix Poorly: Limited Independence Does Not Imply Pseudorandomness
Speaker: Jesko Dujmovic
Abstract: Over the past two decades, several works have used (almost) $k$-wise independence as a proxy property for block ciphers, since it guarantees resistance against broad classes of statistical attacks. For example, even the case $k = 2$ already implies security against differential and linear cryptanalysis. Hoory, Magen, Myers, and Rackoff (ICALP '04; TCS '05) formulated an appealing conjecture: if the sequential composition of $T$ independent local randomized permutations is (close to) four-wise independent, then it should also be a pseudorandom permutation. Here, "local" means that each output bit depends on only a constant number of input bits. This conjecture offers a potential strong justification for analyses of block ciphers that establish (almost) $k$-wise independence of constructions of this type. In this work, we disprove the conjecture in full generality by presenting an explicit local randomized permutation whose sequential composition is four-wise independent, but not a pseudorandom permutation. Our counterexample in fact extends to $k$-wise independence for any constant $k$.

Joint work with Angelos Pelecanos and Stefano Tessaro.
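
As background for the shuffle-model abstraction in the second talk, here is a toy Python sketch (an editorial illustration of the classical additive-shares-plus-shuffle idea for the special case of a sum, not the construction from the talk, which handles all functions):

    import random

    Q = 2**61 - 1    # all arithmetic is modulo a fixed public modulus

    def encode(x, num_shares=3):
        """Split a private input into random additive shares mod Q that sum to x."""
        shares = [random.randrange(Q) for _ in range(num_shares - 1)]
        shares.append((x - sum(shares)) % Q)
        return shares

    def shuffle_and_evaluate(all_shares):
        random.shuffle(all_shares)    # the anonymous shuffler: shares arrive in random order
        return sum(all_shares) % Q    # the evaluator recovers only the sum of the inputs

    inputs = [17, 4, 25, 9]
    messages = [s for x in inputs for s in encode(x)]
    assert shuffle_and_evaluate(messages) == sum(inputs) % Q

Intuitively, once all shares are mixed together the evaluator learns little beyond the total; making that statistically precise, and extending it from sums to arbitrary functions, is what the additive randomized encodings in the talk accomplish.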

Physics Informed Deep Unfolded Full Waveform Inversion for Edema Detection

Yonatan Kvich
Weizmann Institute of Science, Israel

Part Of

Biomedical Imaging and Analysis 2025 - 2026
11:00A - 12:00P

Location

32-D407
Accurate detection of edema is clinically important but remains challenging due to the subtlety of its quantitative indicators. Ultrasound (US) offers a safe, accessible, and cost-effective imaging modality, yet conventional beamforming methods such as B-mode do not directly recover the tissue's physical parameters. In this work, we present a physics-informed deep learning approach that performs inverse reconstruction of tissue properties directly from raw channel data, enabling quantitative estimation of the speed of sound and density. Our method, called Deep-Unfolded Full Waveform Inversion (DUFWI), unfolds the iterative steps of a classical inverse solver into a trainable neural network, preserving physical interpretability while learning efficient update rules from data. We demonstrate results on both simulated datasets and real hardware experiments using a Verasonics US system with phantom setups containing cylindrical rods of known speed of sound, showing substantial improvement over traditional FWI and MB-QRUS in performance and computational demand. The framework can be used for a wide range of inverse US imaging tasks, offering a practical path toward real-time, physics-based diagnostic imaging.
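
To make the "deep unfolding" idea concrete, here is a minimal generic sketch for a toy linear inverse problem (assuming PyTorch is available; this is an editorial illustration of unrolling an iterative solver, not the DUFWI method itself, whose forward model is the wave equation):

    import torch
    import torch.nn as nn

    class UnfoldedSolver(nn.Module):
        """Unroll K gradient steps for y = A x + noise, with learnable step sizes."""
        def __init__(self, num_steps=10):
            super().__init__()
            self.step_sizes = nn.Parameter(torch.full((num_steps,), 0.01))

        def forward(self, A, y):
            x = torch.zeros(A.shape[1])           # start from a zero estimate
            for alpha in self.step_sizes:
                residual = A @ x - y              # the physics enters through A
                x = x - alpha * (A.T @ residual)  # classical gradient step, learned step size
            return x

    # Toy usage; training would fit step_sizes end-to-end on (measurement, ground-truth) pairs.
    torch.manual_seed(0)
    A = torch.randn(30, 10)
    x_true = torch.randn(10)
    y = A @ x_true + 0.01 * torch.randn(30)
    print(float(torch.norm(UnfoldedSolver()(A, y) - x_true)))

In full waveform inversion the corresponding gradient is computed from wave-equation forward and adjoint solves, which is what keeps the unfolded network physically interpretable, as the abstract describes.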

How Cognitive Probes guide development of generative AI by diagnosing intellectual deficits

Isaac Galatzer-Levy
Google DeepMind
3:00P - 4:00P

Location

32-G449
Kiva
Users of Large Language Models increasingly observe a wide range of performance deficits, yet the core mechanistic causes of these errors remain poorly understood and characterized, hampering systematic efforts to address them and achieve artificial general intelligence (AGI). To address this challenge, Dr. Isaac Galatzer-Levy will present a novel framework for comprehensive cognitive evaluation of foundation models, applying principles from human psychometrics to characterize their abilities and deficits. This approach reveals profound performance asymmetries in leading models, such as Gemini and others: while they exhibit superhuman capabilities in verbal and working memory tasks, they show severe deficits in visual-perceptual and intuitive physics domains. These weaknesses are particularly evident in tasks requiring visual reasoning, complex puzzle-solving, and even basic perception, where models often perform at levels far below human norms. Such deficits will limit the ability to advance multimodal and robotics applications of generative AI. The research employs a wide range of testing paradigms, from one-way model evaluation on static benchmarks to interactive social agent evolution in multi-agent simulations. Dr. Galatzer-Levy will conclude by demonstrating how this detailed cognitive profiling can be applied to identify and remediate reasoning errors in AI that are analogous to human cognitive distortions, which can lead to the generation of delusions.

Speaker Bio: Dr. Galatzer-Levy holds a PhD in Clinical Psychology from Columbia University. He is on the research faculty at NYU Grossman School of Medicine in the Department of Psychiatry, where he received postdoctoral training in neuroscience and bioinformatics. He has worked across start-ups and big tech (Meta Reality Labs; Google DeepMind) on applications of psychological constructs to the development of AI models, ranging from large sensor models to foundational GenAI research and development. He holds multiple patents in these areas and has over 100 peer-reviewed publications.

November 17, 2025

Understanding the Efficacy of Phishing Training in Practice

Grant Ho
UChicago

Part Of

CSAIL Security Seminar 2025 - 2026
12:00P - 1:00P

Location

32-G449
Kiva
Abstract: As a result of regulation and cyber-insurance mandates, many organizations require their employees to periodically take various forms of cybersecurity training. Despite a long history of research supporting some forms of security training, the practice remains controversial, and recent work has questioned its efficacy and highlighted the burden it can impose. This talk will discuss our recent paper that empirically evaluated the efficacy of two ubiquitous forms of enterprise security training: annual cybersecurity awareness training and embedded anti-phishing training exercises. Specifically, our work conducted and analyzed an 8-month randomized controlled experiment involving ten simulated phishing campaigns sent to over 19,500 employees at a large healthcare organization. Our results suggest that commonly deployed anti-phishing training programs are unlikely to offer significant protective value, and our analysis surfaces several challenges that these trainings may inherently face in the wild.

Bio: Grant Ho is an assistant professor in computer science at the University of Chicago. His research focuses on securing enterprises and organizations through data-driven insights and methods. Previously, Grant was a postdoctoral fellow at UC San Diego and received his PhD from UC Berkeley. His work has been recognized by the 2017 Internet Defense Prize and four distinguished/best paper awards at top security conferences, such as IEEE S&P and USENIX Security.

Zoom info:
Meeting ID: 945 5603 5878
Password: 865039

November 18, 2025

Visual Computing Seminar: Support Function Parameterization of Convex Hulls for Fast Distance Queries

Anh Truong
CSAIL
12:00P - 1:00P

Location

32-D463
Star Room
Abstract: Convex hulls are ubiquitous in computational geometry, and they are particularly useful for fast distance queries and collision detection. Standard algorithms for computing distances between convex shapes (e.g., GJK) require input shapes to be represented as support functions (a dual representation of convex shapes). However, support functions are generally difficult to obtain. They are only known in closed form for a limited number of simple primitives, such as ellipsoids, cylinders, or cones, through case-by-case analysis. In general, there is no closed-form expression for the support function of an arbitrary shape.

In this talk, we present a variational approach to obtain the global support function of an arbitrary shape, given only a way to sample points from the shape. By characterizing the desired support function as the minimizer of a variational problem over sublinear functions, we bypass the need to solve a typical regression problem. Beyond fast distance queries, our variational formulation also provides an easy way to parameterize continuously deforming convex hulls.
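
For concreteness, a minimal numpy sketch (an illustration of the object being discussed, not the speaker's variational method): the support function of a finite point sample is a maximized dot product, and evaluating it together with the maximizing point is exactly the primitive that GJK-style distance queries consume.

    import numpy as np

    def support(points, direction):
        """Support value h_P(d) = max over x in P of <x, d>, plus the achieving point."""
        scores = points @ direction
        i = int(np.argmax(scores))
        return scores[i], points[i]

    # The support function of a convex hull equals that of the underlying points,
    # so sampling points from a shape already gives its hull's support values.
    pts = np.random.default_rng(0).normal(size=(1000, 3))
    h, s = support(pts, np.array([0.0, 0.0, 1.0]))   # farthest extent along +z
    print(h, s)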

CSAIL Forum with Sam Madden

Sam Madden
CSAIL

Part Of

CSAIL Forum
12:00P - 1:00P

Location

TBD
Please join us for the CSAIL Forum with Prof. Sam Madden.

Speaker: Sam Madden, College of Computing Distinguished Professor
Date/time: Tuesday, November 18, 2025, 12:00-1:00 ET
Venue: Live stream via Zoom (registration required)

Title: How I Learned to Start Querying and Love AI

Abstract: Over the past five decades, the relational database model has proven to be a scalable and adaptable model for querying a variety of structured data, with use cases in analytics, transactions, graphs, streaming, and more. However, most of the world's data is unstructured. Thus, despite their success, the reality is that the vast majority of the world's data has remained beyond the reach of relational systems. The rise of deep learning and generative AI offers an opportunity to change this. These models provide a stunning capability to extract semantic understanding from almost any type of document, including text, images, and video, which can extend the reach of databases to all the world's data. In this talk I explore how these new technologies will transform the way we build database management software, creating new systems that can ingest, store, process, and query all data. Building such systems presents many opportunities and challenges. In this talk I focus on three: scalability, correctness, and reliability, and argue that the declarative programming paradigm that has served relational systems so well offers a path forward in the new world of AI data systems as well. To illustrate this, I describe several examples of such declarative AI systems we have built in document and video processing, and provide a set of research challenges and opportunities to guide research in this exciting area going forward.

Bio: Samuel Madden is the College of Computing Distinguished Professor of Computing at MIT. His research interests include databases, distributed computing, and AI systems. Past research projects include learned database systems, the C-Store column-oriented database system, and the CarTel mobile sensor network system. Madden heads the Data Systems Group at MIT and the Data Science and AI Lab (DSAIL), an industry-supported collaboration focused on developing systems that use AI and machine learning. Madden received his Ph.D. from the University of California at Berkeley in 2003, where he worked on the TinyDB system for data collection from sensor networks. He was named one of Technology Review's Top 35 Under 35 in 2005 and an ACM Fellow in 2020, and is the recipient of several awards including the SIGMOD Edgar F. Codd Innovations Award and "test of time" awards from VLDB, SIGMOD, SIGMOBILE, and SenSys. He is the co-founder and Chief Scientist at Cambridge Mobile Telematics, which develops technology to make roads safer and drivers better.

AI4Society Seminar - Nikhil Garg - Recommendations in High-Stakes Settings

Nikhil Garg
Cornell University
4:00P - 5:00P

Location

45-102
Talk Abstract: Recommendation and search systems are now used in high-stakes settings, including to help people find jobs, schools, and partners. Building public-interest recommender systems in such settings brings both individual-level challenges (enabling exploration, diversity, data quality) and societal challenges (fairness, capacity constraints, algorithmic monoculture). In this talk, I'll discuss our theoretical, empirical, and deployment work in tackling these challenges, including ongoing work on (a) applicant behavior and recommendations for the NYC HS match, (b) a platform to help discharge patients to long-term care facilities, and (c) feed ranking algorithms on Bluesky for research paper recommendations.

Speaker Bio: Nikhil Garg is an Assistant Professor of Operations Research and Information Engineering at Cornell Tech as part of the Jacobs Institute. He uses algorithms, data science, and economics approaches to study democracy, markets, and societal systems at large. Nikhil has received the NSF CAREER award, the INFORMS George Dantzig Dissertation Award, an honorable mention for the ACM SIGecom dissertation award, and paper awards from venues including CSCW, EAAMO, and CHIL. He received his PhD from Stanford University and has spent considerable time collaborating with government agencies and non-profits. His work has been supported by the NSF, NASA, the Sloan Foundation, and other organizations.

November 19, 2025

TBA

Mile Sikic
Genome Institute of Singapore / University of Zagreb

Part Of

Bioinformatics Seminar 2025
11:30A - 1:00P

Location

32-G575
Projected in 32-G575

Databases for AI: The Case for Vector Databases [Zoom Talk]

Jianguo Wang
Purdue University
1:00P - 2:00P

Location

32-G882
Abstract: Vector databases have recently emerged as a hot topic due to the widespread interest in LLMs, where they provide relevant context that enables LLMs to generate more accurate responses. Current vector databases can be broadly categorized into two types: specialized and integrated. Specialized vector databases are explicitly designed for managing vector data, while integrated vector databases support vector search within existing database systems (mostly relational databases). While specialized vector databases are interesting, there is a significant customer base interested in integrated vector databases for various reasons, such as reluctance to move data out, the desire to link vector embeddings with their source data, and the need for advanced vector search capabilities. However, integrated vector databases face challenges in performance and interoperability. In this talk, I will share our recent experience building integrated vector databases within two relational databases: SingleStore (VLDB'24) and PostgreSQL (CIDR'26). I will show how we address performance and interoperability challenges, resulting in more powerful vector databases that support advanced RAGs. I will also present additional challenges in vector databases and our ongoing research to address them. Finally, I will discuss the broader role of database systems in the era of LLMs and how to build future data infrastructure that extends beyond vector databases to better support AI.

Bio: Jianguo Wang is an Assistant Professor of Computer Science at Purdue University. He received his Ph.D. from the University of California, San Diego. His research focuses on database systems for the Cloud and LLMs, with particular emphasis on Disaggregated Databases and Vector Databases. He has worked and interned at Zilliz, Amazon AWS, Microsoft Research, Oracle, and Samsung, contributing to the development of various database systems. He regularly publishes and serves on program committees for premier database conferences, including SIGMOD, VLDB, and ICDE. He also moderated the VLDB 2024 panel on vector databases and was invited to the Dagstuhl Seminar on vector databases. His research has impacted multiple industrial-strength database systems, including Amazon Aurora, Zilliz Milvus, SingleStore, and TigerGraph. His research has been recognized with multiple awards, including the NSF CAREER Award, the ACM SIGMOD Research Highlight Award, the Google ML and Systems Junior Faculty Award, and the IEEE TCDE Rising Star Award.

Please reach out to markakis@mit.edu for the Zoom password.
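
For context on the core operation (a toy sketch for illustration, not code from SingleStore or the PostgreSQL integration described in the talk): the basic query a vector database answers is k-nearest-neighbor search over embeddings, which production systems accelerate with indexes such as HNSW, IVF, and product quantization rather than the brute-force scan below.

    import numpy as np

    def knn(query, embeddings, k=5):
        """Indices of the k embeddings most similar to `query` (cosine similarity)."""
        q = query / np.linalg.norm(query)
        e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        return np.argsort(-(e @ q))[:k]

    rng = np.random.default_rng(1)
    corpus = rng.normal(size=(10_000, 384))    # e.g. 384-dimensional text embeddings
    print(knn(rng.normal(size=384), corpus))   # ids of the 5 most similar vectors

An integrated vector database runs this kind of search inside the relational engine, so the returned ids can be joined directly with the rows that hold the source documents.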

Fast Algorithms for Graph Arboricity and Related Problems

George Zhaoqi Li
Carnegie Mellon

Part Of

Algorithms and Complexity (A&C) 2025 - 2026
4:00P - 5:00P

Location

32-G575
We give an algorithm for finding the arboricity of a weighted, undirected graph, defined as the minimum number of spanning forests that cover all edges of the graph, in \sqrt{n} m^{1+o(1)} time. This improves on the previous best bound of O(nm) for weighted graphs and O(m^{3/2}) for unweighted graphs (Gabow 1995) for this problem. The running time of our algorithm is dominated by a logarithmic number of calls to a directed global minimum cut subroutine; if the running time of the latter problem improves to m^{1+o(1)} (thereby matching the running time of maximum flow), the running time of our arboricity algorithm would improve further to m^{1+o(1)}.

As an application, we also give a new algorithm for computing the entire cut hierarchy (laminar multiway cuts with minimum cut ratio in recursively defined induced subgraphs) in m n^{1+o(1)} time. For the cut hierarchy problem, the previous best bound was O(n^2 m) for weighted graphs and O(n m^{3/2}) for unweighted graphs.

This is joint work with Ruoxu Cen, Henry Fleischmann, Jason Li, and Debmalya Panigrahi.
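
For background on the quantity being computed (standard material, not part of the talk's new results), Nash-Williams' formula characterizes the arboricity of an unweighted graph:

    \[
      \Gamma(G) \;=\; \max_{H \subseteq G,\; |V(H)| \ge 2} \left\lceil \frac{|E(H)|}{|V(H)| - 1} \right\rceil .
    \]
    % Worked example: for the complete graph K_4 the densest subgraph is K_4 itself, so
    % \Gamma(K_4) = \lceil 6/3 \rceil = 2; indeed the six edges of K_4 split into two
    % edge-disjoint spanning paths, e.g. 1-2-3-4 and 3-1-4-2.

The talk's algorithm computes this quantity much faster than enumerating subgraphs, via a logarithmic number of calls to a directed global minimum cut subroutine, as described above.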

November 21, 2025

Seminar: Ben Eysenbach: Temporal Representations Enable Generalization and Exploration

2:00P - 3:00P

Location

32-D463
Star Conference Room in Stata Center
Title: Temporal Representations Enable Generalization and Exploration

Abstract: In the same way that deep computer vision models find structures and patterns in images, how might deep reinforcement learning models find structures and patterns in solutions to control problems? This talk will focus on learning temporal representations, which map high-dimensional observations to compact representations where distances reflect shortest paths. Once learned, these temporal representations encode the value function for certain tasks; learning temporal representations is itself an RL algorithm. In both robotics and reasoning problems, we'll show how such representations capture temporal patterns. Temporal representations also facilitate a form of (temporal) generalization: navigating between pairs of states that are more distant than those seen during training. Finally, we'll share some evidence that agents trained via temporal representations exhibit surprising exploration strategies in both single-agent and multi-agent settings.

Bio: Benjamin Eysenbach is an Assistant Professor of Computer Science at Princeton University, where he runs the Princeton Reinforcement Learning Lab. His research focuses on reinforcement learning algorithms: AI methods that learn how to make intelligent decisions from trial and error. His group has developed several successful algorithms and analyses for self-supervised methods, which enable agents to explore and learn without any human supervision. His work has been recognized by an NSF CAREER Award, a Hertz Fellowship, an NSF GRFP Fellowship, and the Alfred Rheinstein Faculty Award. Prior to joining Princeton, he received his PhD in machine learning from Carnegie Mellon University, worked at Google AI, and studied math as an undergraduate at MIT.

November 24, 2025

Concurrent Trees Supporting Complex Queries

Panagiota Fatourou
University of Crete, Department of Computer Science & Foundation for Research and Technology-Hellas, Institute of Computer Science
1:00P - 2:00P

Location

32-D463
Star
Abstract: Augmenting an existing sequential data structure with extra information to support greater functionality is a widely used technique. For example, search trees are augmented to build sequential data structures like order-statistic trees, interval trees, tango trees, link/cut trees and many others. We study how to design concurrent augmented tree data structures. We present a new, general technique that can augment a lock-free tree to add any new fields to each tree node, provided the new fields' values can be computed from information in the node and its children. This enables the design of lock-free, linearizable analogues of a wide variety of classical augmented data structures.

We apply our technique to a lock-free binary search tree (BST). Our augmented BST supports order-statistic queries in O(h) steps on a tree of height h. The augmentation does not affect the asymptotic running time of the updates. We give an alternative augmentation to improve searches and order-statistic queries to run in O(log |S|) steps, where |S| is the size of the implemented set (with a small increase in the step complexity of updates). As an added bonus, our technique supports arbitrary multi-point queries (such as range queries) with the same time complexity as they would have in the corresponding sequential data structure.

Speaker Bio: P. Fatourou is a Professor of Computer Science at the University of Crete and the Foundation for Research and Technology - Hellas (FORTH). She has worked as a Marie Curie Individual Fellow at Université Paris Cité (UPC), as a visiting Professor at École Polytechnique Fédérale de Lausanne (EPFL), and as a visiting researcher at the University of York, the University of Toronto, and the University of Cyprus. She has been a postdoc at the Max Planck Institut für Informatik (MPII) and at the University of Toronto, and a visiting postdoc at Brown University. Her research interests focus on all aspects of parallel and distributed computing. P. Fatourou is currently serving as the Steering Committee chair of the ACM Symposium on Principles of Distributed Computing (PODC). She has served as the chair (July 2019 - June 2021) and the past chair (July 2021 - June 2023) of the ACM Europe Council. She has been the editor of the Distributed Computing Column of the Bulletin of the European Association for Theoretical Computer Science (BEATCS), and the PC chair of the 20th International Conference on Principles of Distributed Systems (OPODIS) and of the 19th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS). She has served as the general chair of PODC, as an ACM Distinguished Speaker, and as a Featured ACM Member. She is currently the chair of the Evaluation Committee of the ACM Grace Murray Hopper Award. She is on the editorial boards of the Communications of the ACM (Regional Special Section) and of IEEE Transactions on Parallel and Distributed Systems. She has received many distinctions for her research achievements, including two best paper awards at top conferences in her field (PPoPP 2022 and DISC 2024). She has attracted significant funding from national and international sources, and has significant activity in supporting diversity and equity in science, including establishing and acting as the first chair of the Greek ACM-W chapter.
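
For background, here is a minimal sequential sketch (Python) of the classical size augmentation that order-statistic trees use; it shows only the augmented query and none of the lock-free machinery that is the subject of the talk.

    class Node:
        def __init__(self, key):
            self.key, self.left, self.right, self.size = key, None, None, 1

    def size(node):
        return node.size if node else 0

    def insert(node, key):
        if node is None:
            return Node(key)
        if key < node.key:
            node.left = insert(node.left, key)
        else:
            node.right = insert(node.right, key)
        node.size = 1 + size(node.left) + size(node.right)   # maintain the augmentation
        return node

    def select(node, k):
        """Return the k-th smallest key (1-indexed) in O(height) steps."""
        rank = size(node.left) + 1
        if k == rank:
            return node.key
        return select(node.left, k) if k < rank else select(node.right, k - rank)

    root = None
    for x in [42, 7, 19, 3, 88, 61]:
        root = insert(root, x)
    print([select(root, k) for k in range(1, 7)])   # [3, 7, 19, 42, 61, 88]

The difficulty addressed in the talk is keeping fields like the size counter consistent and linearizable while many threads update the tree concurrently and without locks.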

November 25, 2025

Vector Similarity Search: Past, Present, and Future

Themis Palpanas
Université Paris Cité
1:00P - 2:00P

Location

32-G882
Abstract: Very large amounts of high-dimensional vectors are now omnipresent (ranging from traditional multidimensional data to time series and deep embeddings), and the performance requirements (i.e., response time and accuracy) of a variety of applications that need to process and analyze these data have become very stringent and demanding. In the past years, high-dimensional similarity search has been studied in its many flavors: similarity search algorithms for exact and approximate, one-off and progressive query answering; approximate algorithms with and without (deterministic or probabilistic) quality guarantees; solutions for on-disk and in-memory data, static and streaming data; and approaches based on multidimensional space-partitioning and metric trees, random projections and locality-sensitive hashing (LSH), product quantization (PQ) and inverted files, k-nearest neighbor graphs, and optimized linear scans. Surprisingly, work on data-series (or time-series) similarity search has recently been shown to achieve state-of-the-art performance for several variations of the problem, on both time-series and general high-dimensional vector data. In this talk, we will touch upon the different solutions proposed for scalable vector similarity search, present some of the state-of-the-art solutions, describe the role of machine learning in designing solutions in this space, and discuss open research problems.

Bio: Themis Palpanas is an elected Senior Fellow of the French University Institute (IUF), a distinction that recognizes excellence across all academic disciplines, and Distinguished Professor of Computer Science at Université Paris Cité (France), where he is director of the data management group, diNo, and was the founding director of the Data Intelligence Institute of Paris (diiP). He received the BS degree from the National Technical University of Athens, Greece, and the MSc and PhD degrees from the University of Toronto, Canada. He has previously held positions at the University of California at Riverside, the University of Trento, and the IBM T.J. Watson Research Center, and has visited Microsoft Research and the IBM Almaden Research Center. His interests include problems related to data science (big data analytics and machine learning applications). He is the author of 15 patents. He is the recipient of 3 Best Paper awards and the IBM Shared University Research (SUR) Award. His service includes the VLDB Endowment Board of Trustees (2018-2023), Editor-in-Chief for the PVLDB Journal (2024-2025) and the BDR Journal (2016-2021), PC Chair for IEEE BigData 2023 and the ICDE 2023 Industry and Applications Track, General Chair for VLDB 2013, Associate Editor for the TKDE Journal (2014-2020), and Research PC Vice Chair for ICDE 2020.

Please reach out to markakis@mit.edu for the Zoom password.

December 01, 2025

Don’t shout “Bingo!” Understanding (and Addressing) the Shortcomings of Enterprise Threat Detection Products

Adam Bates
University of Illinois at Urbana-Champaign

Part Of

CSAIL Security Seminar 2025 - 2026
12:00P - 1:00P

Location

32-G449
Kiva
Abstract: Update -- we are still awful at preventing data breaches and other cybersecurity incidents. Why do sophisticated (and costly) commercial threat detection products continue to fail? In this talk, I'll describe our efforts to better understand, and even address, these failure points. First, I'll provide evidence that the extraordinarily high false alarm rates observed in Endpoint Detection & Response (EDR) products can be eliminated by examining the history of alert-triggering processes. Second, I'll explain how the metrics used to evaluate threat detection products often paint a deeply misleading picture of organizations' security readiness. I will conclude by discussing how our ongoing work seeks to resolve industry shortcomings by providing more principled foundations for threat detection and assessment.

Bio: Adam Bates is an Associate Professor at the University of Illinois at Urbana-Champaign, where he studies a broad range of topics in computer security. He is best known for his work on data provenance, the practice of examining suspicious activities on computing systems based on their historical context. Fittingly, Adam also appreciates the historical context of computer security research, regularly forcing students in his courses to read James Anderson's 1972 Computer Security Technology Planning Study… both volumes. Adam is the recipient of two distinguished paper awards (S&P'23, ESORICS'22) and was the runner-up for the ACM SIGSAC Dissertation Award. His research has been recognized and supported by NSF SaTC FRONTIER, NSF CISE Research Initiation Initiative (CRII), and NSF CAREER awards, as well as a gift from the VMware University Research Fund.

December 02, 2025

Visual Computing Seminar: One String to Pull Them All: Fast Assembly of Curved Structures from Flat Auxetic Linkages

Akib Zaman
CSAIL
12:00P - 1:00P

Location

32-D463
Star Room
Abstract: We present a computational approach for designing freeform structures that can be rapidly assembled from initially flat configurations by a single string pull. The target structures are decomposed into rigid, spatially varying quad tiles that are optimized to approximate the user-provided surface, forming a flat mechanical linkage. Our algorithm then uses a two-step method to find a physically realizable string path that controls only a subset of tiles to smoothly actuate the structure from the flat to the assembled configuration. We first compute the minimal subset of tiles that must be controlled by the string, considering the geometry of the structure and the interactions among the tiles. We then find a valid string path through these tiles that minimizes friction and that assembles the flat linkage into the target 3D structure upon tightening a single string. The resulting designs can be easily manufactured in the flat configuration with computational fabrication techniques such as 3D printing, CNC milling, or molding; beyond manufacturing, the flat configuration also facilitates storage and transportation. We validate our approach by developing a series of physical prototypes and showcasing various application case studies, ranging from medical devices and space shelters to architectural designs.

December 03, 2025

TBA

Fabian Theis
Helmholtz Munich

Part Of

Bioinformatics Seminar 2025
11:30A - 1:00P

Location

32-G575
Projected in 32-G575

TBA

Soheil Behnezhad
Northeastern

Part Of

Algorithms and Complexity (A&C) 2025 - 2026
4:00P - 5:00P

Location

32-G575

December 04, 2025

Will Artificial Intelligence Be the End of Civilization, or the Beginning?

Henry Lieberman and Christopher Fry

Part Of

Boston IEEE/ACM 2025 - 2026
6:30P - 8:00P

Location

32-G449
Will Artificial Intelligence Be the End of Civilization, or the Beginning?
Henry Lieberman, MIT Computer Science and Artificial Intelligence Lab
Christopher Fry, MIT Media Lab, Sloan, IBM, startups (retired)
https://www.whycantwe.org

Popular press articles whipsaw the public between two starkly different views of Artificial Intelligence. On one hand, AI is presented as a magic genie that can solve all of our problems with superhuman intelligence. On the other hand, it's presented as an unprecedented threat to humanity, with the danger of loss of jobs, loss of privacy, automated discrimination, even some kind of "robot rebellion". No wonder the public is confused. Which is it?

We present a view that is different from both the self-interested promotion of the tech companies and from the pessimism of the social critics. Believe it or not, the biggest value of AI will lie not in simply improving the operations of today's industry and government, but in making it possible to have a more cooperative, less competitive world.

Our view is:
- Optimistic. Mitigating possible dangers of AI in today's society is important. But we don't want to let fear cause us to miss the potential for AI to tackle big problems people now think are intractable: war, poverty, climate, etc.
- Radical. Many tech boosters imagine simply pouring AI into today's economy and electoral politics. We think these systems need to be redesigned from scratch for the AI era. We have two concrete proposals: Makerism (economics) and Reasonocracy (governance).
- Original. Not conventionally Left or Right, though our ideas share some design goals with both sides. Not (yet) heard on mainstream or activist media.

December 09, 2025

Visual Computing Seminar: Addressing the Unexpected - Anomaly Detection and AI Safety

Niv Cohen
NYU
12:00P - 1:00P

Location

TBD
Abstract: While AI models are becoming an ever-increasing part of our lives, our understanding of their behavior in unexpected situations is drifting even further out of reach. This gap poses significant risks to users, model owners, and society at large.

In the first part of the talk, I will overview my research on detecting unexpected phenomena with and within deep learning models; specifically, detecting (i) anomalous samples, (ii) unexpected model behavior, and (iii) unexpected security threats. In the second part of the talk, I will dive into my recent research on a specific type of unexpected security threat: attacks on image watermarks. I will review such attacks and present my recent work toward addressing them. I will conclude with a discussion of future research directions.

Bio: Niv Cohen is a postdoctoral researcher at the School of Computer Science & Engineering at New York University. He received his Ph.D. in Computer Science from the Hebrew University in 2024. His research interests include representation learning, computer vision, and AI safety. He is a recipient of the VATAT Scholarship for Outstanding Postdoctoral Fellows in Data Science and the 2024 Blavatnik Prize for Outstanding Israeli Doctoral Students in Computer Science.

TBA

Ronitt Rubinfeld
CSAIL, EECS

Part Of

Theory of Computation (ToC) 2025 - 2026
4:15P - 5:15P

Location

32-G449
Refreshments at 4:00 PM

December 10, 2025

TBA

Mona Singh
Princeton University

Part Of

Bioinformatics Seminar 2025
11:30A - 1:00P

Location

32-G575