June 02

[Thesis Defense] Personalizing Robot Assistance under Uncertainty about the Human

Date: June 2, 2025
Time: 9:00-11:00 AM ET
Location: 32-155
Zoom: https://mit.zoom.us/j/9731989629

Abstract: Robots have the potential to improve the quality of life by assisting with daily tasks, such as helping older adults and people with disabilities get dressed. But meaningful assistance requires personalization: each person has unique preferences, behaviors, and needs. A central challenge is that robots often operate under uncertainty about the human they are helping. This uncertainty may involve the person's preferences, hidden physical states, or reactions to assistance. If not properly addressed, such uncertainty can lead to ineffective, undesired, or even unsafe outcomes.

This thesis asks: How should a robot behave when it is uncertain about the human? I present a unified framework for uncertainty-aware personalization in human-robot interaction, spanning three core components of robot intelligence: preference learning, state estimation, and motion planning.

1. Preference learning: I introduce the first method that uses response time, a subtle but informative cognitive signal, as implicit feedback. By combining human choices with response times, robots can infer not only what a person prefers but also how strongly they prefer it. This reduces uncertainty and accelerates preference learning.

2. State estimation: To support safe physical assistance when parts of the human body (e.g., the elbow) are occluded, I introduce a state estimator that models uncertainty in learned human dynamics and robot sensing. It constructs a geometric set (e.g., a 3D box) that reliably contains the true hidden human state, enabling safer and more precise robot behavior.

3. Motion planning: When a robot is uncertain about future human motion, it may behave overly conservatively to avoid causing harm, resulting in ineffective assistance. To address this, I propose a relaxed safety formulation that allows the robot to either avoid collisions or make low-impact contact. This approach enables the robot to act more effectively while still maintaining safety under uncertainty.

Together, these contributions lay a foundation for assistive robots that personalize their behavior while adapting to the uncertain and dynamic nature of human needs.

Thesis Supervisor: Julie A. Shah
Committee Members: Julie A. Shah, Dylan Hadfield-Menell, Na (Lina) Li, Aude Billard
Thesis Readers: Vaibhav Unhelkar, Tariq Iqbal
Contact: shenli@mit.edu
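For readers unfamiliar with the response-time idea, here is a toy Bayesian sketch of how choices plus response times can sharpen an estimate of preference strength. It is not the thesis's model: the logistic choice likelihood, the exponential response-time model, and all parameters below are invented purely for illustration.

```python
import numpy as np

# Toy sketch (not the thesis's method): infer how strongly a person prefers
# option A over option B (theta) from (choice, response time) pairs.
# Assumed model: choices are logistic in theta; response times are exponential
# with a rate that grows with |theta| (easier decisions are faster).
def log_likelihood(theta, choices, times):
    p_a = 1.0 / (1.0 + np.exp(-theta))                   # P(choose A)
    ll_choice = np.sum(np.where(choices == 1, np.log(p_a), np.log(1 - p_a)))
    rate = 0.5 + np.abs(theta)                           # faster when preference is strong
    ll_time = np.sum(np.log(rate) - rate * times)
    return ll_choice + ll_time

rng = np.random.default_rng(0)
true_theta = 1.5
choices = (rng.random(30) < 1.0 / (1.0 + np.exp(-true_theta))).astype(int)
times = rng.exponential(1.0 / (0.5 + abs(true_theta)), size=30)

# Grid posterior under a flat prior on theta.
grid = np.linspace(-4, 4, 401)
log_post = np.array([log_likelihood(t, choices, times) for t in grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior mean of theta:", (grid * post).sum())
# Dropping the response-time term (choices only) typically leaves this posterior
# wider: the extra signal is what speeds up preference learning.
```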

May 12

Thesis Defense: Scaling Cooperative Intelligence via Inverse Planning and Probabilistic Programming

Presenter: Tan Zhi-Xuan
Date: May 12, 2025
Time: 11:00 AM - 12:30 PM ET
Please email xuan@mit.edu for the Zoom link.

Abstract: How can we build cooperative machines that model and understand human minds — machines that assist us with our goals, coordinate on shared plans, infer the intentions behind our words, and even learn our norms and values? In this talk, I will introduce a scalable Bayesian approach to building such systems via inverse planning and probabilistic programming. By combining online model-based planners and sequential Monte Carlo inference into a single architecture, Sequential Inverse Plan Search (SIPS), we can infer human goals from actions faster than real time, while scaling to environments with hundreds of possible goals and long planning horizons that have proved intractable for earlier methods. SIPS can additionally make use of large language models (LLMs) as likelihood functions within probabilistic programs, allowing us to build AI assistants and copilots that reliably infer human goals from ambiguous instructions, then provide assistance under uncertainty with much higher success rates than LLMs achieve on their own. By applying this Bayesian approach in many-agent environments, we are also able to design agents that rapidly learn cooperative social norms from others' behavior, achieving mutually beneficial outcomes with orders of magnitude less data than model-free deep RL. I will conclude by charting out how this research program could deliver a new generation of cooperative AI systems grounded in rational AI engineering, while illuminating the computational foundations of human cooperation and addressing fundamental challenges in building human-aligned AI.

Thesis Committee: Vikash Mansinghka, Joshua Tenenbaum, Dylan Hadfield-Menell, Leslie Kaelbling
Location: TBD
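As background on inverse planning (not the SIPS implementation), here is a minimal sketch of Bayesian goal inference from observed actions. It uses exact filtering over a small discrete goal set rather than sequential Monte Carlo, and the 1-D world, noisily rational policy, and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
GOALS = np.arange(-5, 6)            # candidate goals on a 1-D line
ACTIONS = np.array([-1, 0, 1])      # move left, stay, move right

def action_probs(position, goal, beta=2.0):
    # Noisily rational policy: prefer actions that reduce distance to the goal.
    utilities = -np.abs((position + ACTIONS) - goal)
    w = np.exp(beta * (utilities - utilities.max()))
    return w / w.sum()

def infer_goal(positions, actions):
    # Bayesian filtering over goals: start uniform, multiply in the likelihood
    # of each observed action under each candidate goal.
    belief = np.full(len(GOALS), 1.0 / len(GOALS))
    for pos, act in zip(positions, actions):
        likelihood = np.array([action_probs(pos, g)[act + 1] for g in GOALS])
        belief *= likelihood
        belief /= belief.sum()
    return belief

# Simulate an agent whose true goal is +3 and watch the belief concentrate.
true_goal, pos = 3, 0
positions, actions = [], []
for _ in range(6):
    a = rng.choice(ACTIONS, p=action_probs(pos, true_goal))
    positions.append(pos)
    actions.append(a)
    pos += a

belief = infer_goal(positions, actions)
print("most probable goal:", GOALS[belief.argmax()])
```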

May 08

Mark Hamilton's Thesis Defense - Unsupervised Discovery of Structure in Complex Systems

Date: May 8, 2025
Time: 3:00-4:00 PM ET

Abstract: How does the human mind make sense of raw information without being taught how to see or hear? This thesis explores how to build algorithms that can uncover interpretable structure from large collections of data, like images and video, without needing human annotations or labels. First, we will see how to build algorithms that can perform tasks like classifying every pixel of the world, localizing sound, and decoding natural language, just by watching unlabeled videos without any knowledge of text. Second, we will see how these ideas lead us to a new unifying theory of representation learning. In particular, I will show how more than 20 common machine learning methods, such as dimensionality reduction, clustering, contrastive learning, and spectral methods, emerge from a single unified equation. Finally, we will discuss how this unifying theory applies to our ongoing efforts to decode animal communication using large-scale, unsupervised, and interpretable learners. We will conclude with some preliminary analysis of the complex vocalizations of Atlantic Spotted Dolphins.

Location: TBD
Automatic Integration and Differentiation of Probabilistic Programs

Presenter: Alexander Lew
Thesis Supervisors: Vikash K. Mansinghka and Joshua B. Tenenbaum
Date: May 8, 2025
Time: 2:15-3:30 PM ET
Location: Building 46, room 3310
Please contact alexlew@mit.edu for a Zoom link.

Abstract: By automating the error-prone math behind deep learning, systems such as TensorFlow and PyTorch have supercharged machine learning research, empowering hundreds of thousands of practitioners to rapidly explore the design space of neural network architectures and training algorithms. In this talk, I will show how new programming language techniques, especially generalizations of automatic differentiation, make it possible to generalize and extend such systems to support probabilistic models. Our automation is implemented as a suite of composable program transformations for integrating, differentiating, and deriving densities of probabilistic programs. These transformations are rigorously proven sound using new semantic techniques for reasoning about expressive probabilistic programs, and static types are employed to ensure important preconditions for soundness, eliminating large classes of implementation bugs. Providing a further boost, our tools can help users correctly implement fast, low-variance, unbiased estimators of integrals, gradients, and probability densities that are too expensive to compute exactly, enabling orders-of-magnitude speedups in downstream optimization and inference algorithms.

To illustrate the value of these techniques, I’ll show how they have helped us experiment with new architectures that could address key challenges with today’s dominant AI models. In particular, I’ll showcase systems we’ve built for (1) auditable reasoning and learning in relational domains, enabling the detection of thousands of errors across millions of Medicare records, and (2) probabilistic inference over large language models, enabling small open models to outperform GPT-4 on several code generation and constrained generation benchmarks.
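As generic background on the kind of estimator mentioned above (not code from the talk), here is an unbiased importance-sampling estimator of an integral; the integrand and proposal are arbitrary choices for illustration.

```python
import numpy as np

# Unbiased importance sampling: E_q[f(X)/q(X)] equals the integral of f.
rng = np.random.default_rng(0)

def f(x):
    return np.exp(-x ** 2)            # integrand; true integral over R is sqrt(pi)

n = 200_000
x = rng.normal(0.0, 1.0, size=n)                      # proposal q = Normal(0, 1)
q = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)        # proposal density at samples
weights = f(x) / q                                    # unbiased per-sample estimates
print("estimate:", weights.mean(), "(truth:", np.sqrt(np.pi), ")")
print("std. error:", weights.std(ddof=1) / np.sqrt(n))
```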

May 07

Thesis Defense: Theoretical Foundations for Learning in Games and Decision-Making

Presenter: Noah Golowich
Date: May 7, 2025
Time: 2:15-3:15 PM ET

Abstract: As learning algorithms become increasingly capable of acting autonomously, it is important to better understand the behavior that results from their interactions (1) amongst themselves and (2) with their environments. This talk will present work addressing each of these aspects:

(1) A pervasive challenge in multi-agent learning settings, which spans both theory and practice and dates back decades, has been the failure of convergence for iterative algorithms such as gradient descent. Accordingly, a longstanding central question with broad relevance is: how quickly can we compute solution concepts, i.e., equilibria, in multi-agent settings? I will discuss results that address this question at several scales, starting with simpler normal-form games and building up to larger games such as extensive-form games.

(2) To understand how agents can act optimally in dynamic environments, the framework of reinforcement learning (RL) is used. A notorious challenge in RL is partial observability of the environment, which is typically modeled using Partially Observable Markov Decision Processes (POMDPs). Many existing provable guarantees for POMDPs rely on computationally intractable oracles. I will present the first guarantees for end-to-end learning of a near-optimal policy under a simple condition on the environment known as observability.

Location: TBD
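For background on what "computing equilibria in normal-form games" looks like concretely (this is textbook regret matching, not an algorithm from the thesis), here is a sketch whose time-averaged strategies approach the Nash equilibrium of rock-paper-scissors; the game and iteration count are chosen only for illustration.

```python
import numpy as np

# Regret matching in self-play on rock-paper-scissors (two-player zero-sum).
PAYOFF = np.array([[ 0, -1,  1],     # player 1's payoff: rows = P1 action (R, P, S)
                   [ 1,  0, -1],
                   [-1,  1,  0]], dtype=float)

def strategy_from_regret(regret):
    positive = np.maximum(regret, 0.0)
    return positive / positive.sum() if positive.sum() > 0 else np.full(3, 1 / 3)

regret1, regret2 = np.zeros(3), np.zeros(3)
avg1, avg2 = np.zeros(3), np.zeros(3)
T = 20_000
for _ in range(T):
    s1, s2 = strategy_from_regret(regret1), strategy_from_regret(regret2)
    avg1 += s1
    avg2 += s2
    u1 = PAYOFF @ s2          # payoff of each P1 action vs. P2's current mix
    u2 = -PAYOFF.T @ s1       # payoff of each P2 action vs. P1's current mix
    regret1 += u1 - s1 @ u1   # accumulate regret for not having played each action
    regret2 += u2 - s2 @ u2

print("player 1 average strategy:", np.round(avg1 / T, 3))   # approximately uniform
print("player 2 average strategy:", np.round(avg2 / T, 3))
```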

May 02

[Thesis Defense - Ce Jin] Exploiting Additive Structure in Algorithm Design and Fine-Grained Complexity

Date: May 2, 2025
Time: 3:30-5:30 PM ET

Abstract: In this thesis, we investigate the fine-grained complexity of various algorithmic problems with an additive flavor, including 3SUM, Subset Sum, and their close relatives. We explore their connections to various areas, such as graph algorithms, discrete optimization, combinatorial pattern matching, and computational geometry. Our new results include improved algorithms and conditional lower bounds for a wide range of problems, answering multiple open questions from the literature:

- Conditional lower bounds for graph problems: We prove new lower bounds for 4-Cycle Listing and Approximate Distance Oracles conditioned on the 3SUM Hypothesis. As a key intermediate step, we show a fine-grained reduction from 3SUM to the special case of 3SUM where all pairwise sums of input numbers are distinct.

- Combinatorial pattern matching: We design improved algorithms for Text-to-Pattern Hamming Distances, Pattern Matching with Wildcards, and Geometric Pattern Matching, by drawing connections to 3SUM and sparse convolution.

- Knapsack-type problems: We obtain a pseudo-polynomial time algorithm for 0-1 Knapsack with (conditionally) near-optimal dependence on the maximum item weight, an improved approximation scheme for the counting problem #Knapsack, and improved exponential-time algorithms for the total search problem Pigeonhole Equal Subset Sum.

To obtain these results, we employ and develop techniques based on convolution algorithms and their extensions, as well as classic tools from additive combinatorics.

Thesis Committee: Ryan Williams (advisor), Virginia Vassilevska Williams (advisor), and Mohsen Ghaffari
Location: TBD
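For context on the central problem (background only, not a result from the thesis): 3SUM asks whether some triple of input numbers sums to zero, and the classic quadratic-time algorithm below is the baseline against which the 3SUM Hypothesis is stated.

```python
def three_sum(numbers):
    """Return a triple of entries (at distinct positions) summing to zero, or None.

    Classic O(n^2) algorithm: sort, then for each element scan with two pointers.
    """
    a = sorted(numbers)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return (a[i], a[lo], a[hi])
            if s < 0:
                lo += 1          # need a larger sum
            else:
                hi -= 1          # need a smaller sum
    return None

print(three_sum([8, -25, 4, 10, 21, -6, 2]))   # (-25, 4, 21)
print(three_sum([1, 2, 3]))                    # None
```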
(Thesis Defense) Building Intelligence that can Interact with the Physical World

Speaker: Johnson Tsun-Hsuan Wang
Affiliation: MIT EECS (CSAIL)
Date: Friday, May 2, 2025
Time: 9:30-10:30 AM EDT
Location: 32-G449 (Patil/Kiva)
Zoom: https://mit.zoom.us/j/95448197150?pwd=W0uEtKXgUjoXawXp2cGWrIcsFGtGlO.1

Abstract: Recent advances in Artificial Intelligence (AI) have demonstrated remarkable success in parsing, reasoning, and generating digital content across modalities such as natural language, speech, images, videos, and 3D data. However, these breakthroughs have yet to extend meaningfully beyond the digital realm into the physical world. Developing AI for physical interaction poses challenges such as limited grounding, scarce physical data, and high reliability demands in safety-critical settings.

This talk outlines a holistic approach to physical AI through the lenses of data, brain, and body. We begin with data, the foundation of learning, and introduce data-driven and knowledge-driven robot simulation that generates data to improve policy learning and to systematically evaluate and probe existing models. Next, we turn to the brain, focusing on how to bridge the internet-scale knowledge of digital AI with the physical world to improve generalization and interpretability. Finally, we examine the body, the morphological component of intelligence, demonstrating how pre-trained generative models, when integrated with physics-based simulation, can automate the design of robot bodies. Together, this talk explores how digital AI can be extended into the physical world through a comprehensive investigation of data, brain, and body, laying the groundwork for building physical AI.

Committee:
Prof. Daniela Rus, MIT CSAIL (Advisor)
Prof. Sertac Karaman, MIT LIDS
Prof. Wojciech Matusik, MIT CSAIL

February 25

[Thesis Defense] Steering Robots with Inference-Time Interactions

Date: Tuesday, February 25, 2025
Time: 12:00 PM - 1:30 PM ET
Location: 45-792
Zoom: https://mit.zoom.us/j/95052951960

Abstract: Imitation learning has driven the development of generalist policies capable of autonomously solving multiple tasks. However, when a pretrained policy makes errors during deployment, there are limited mechanisms for users to steer its behavior. While collecting additional data for fine-tuning can address such issues, doing so for each downstream use case is inefficient at scale. My research proposes an alternative perspective: framing policy errors as task misspecifications rather than skill deficiencies. By enabling users to specify tasks unambiguously via interactions at inference time, the appropriate skill for a given context can be retrieved without fine-tuning. Specifically, I propose (1) inference-time steering, which leverages human interactions for single-step task specification, and (2) task and motion imitation, which uses symbolic plans for multi-step task specification. These frameworks correct misaligned policy predictions without requiring additional training, maximizing the utility of pretrained models while achieving inference-time user objectives.

Thesis Supervisor: Julie Shah
Committee Members: Leslie Kaelbling, Jacob Andreas, Dorsa Sadigh
Contact: felixw@mit.edu

February 05

Thesis Defense: Designing Hardware Accelerators for Solving Sparse Linear Systems

Presenter: Axel Feldmann
Date: February 5, 2025
Time: 3:00-4:30 PM ET
Zoom: https://mit.zoom.us/j/98122373906 (no password)

Abstract: Solving systems of linear equations with sparse coefficient matrices is a key primitive that sits at the heart of many important numeric algorithms. Because of this primitive's importance, algorithm designers have spent many decades optimizing linear solvers for high-performance hardware. However, despite their efforts, existing hardware has let them down: state-of-the-art linear solvers often utilize less than 1% of available compute throughput on existing architectures such as CPUs and GPUs.

There are many different algorithms used to solve sparse linear systems. These algorithms are diverse and often have very different computational bottlenecks, including low arithmetic intensity, fine-grained parallelism, common control dependences, and sparsity-induced load imbalance.

This thesis studies the problem of designing hardware accelerators for sparse linear solvers. We propose three novel architectures that explore different parts of the design space. First, we introduce Spatula, an architecture designed to accelerate direct solvers. Then, we propose Azul, a hardware accelerator targeted at iterative solvers. Taken together, Spatula and Azul demonstrate significant speedups on both of the main classes of sparse linear solver algorithms. Finally, to show that our techniques are useful for end-to-end applications, we present Ōmeteōtl, an accelerator targeted at applications that use iterative solvers in their inner loop. Ōmeteōtl also shows that the techniques in this thesis generalize to sparse matrix computations beyond linear solvers.
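As background on why iterative sparse solvers are so hard to keep fed on general-purpose hardware (a generic illustration, not code related to Spatula, Azul, or Ōmeteōtl), here is a plain conjugate-gradient solver on a 1-D Poisson matrix: nearly all of its time goes into sparse matrix-vector products, which have very low arithmetic intensity.

```python
import numpy as np
import scipy.sparse as sp

def conjugate_gradient(A, b, tol=1e-8, max_iter=2000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for i in range(max_iter):
        Ap = A @ p                     # sparse matrix-vector product dominates runtime
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # new search direction
        rs_old = rs_new
    return x, i + 1

n = 200
# 1-D Poisson matrix (tridiagonal, symmetric positive definite).
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, iters = conjugate_gradient(A, b)
print("iterations:", iters, "residual norm:", np.linalg.norm(A @ x - b))
```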

January 22

Thesis Defense: Taming Data Movement Overheads in Latency-Critical Cloud Services

Date: January 22, 2025
Time: 3:00-4:30 PM ET

Abstract: Cloud providers are being urged to enhance the efficiency, performance, and reliability of datacenter infrastructures to support applications across many domains with diverse requirements for quality of service. Data movement is a significant source of overhead in today’s servers, and it is particularly critical for recently emerging interactive and real-time cloud applications. In this thesis, I investigate and propose a set of novel approaches to mitigating data movement overheads in general-purpose datacenters, establishing a roadmap toward more efficient and reliable cloud services that are severely bottlenecked by data movement. In particular, I propose, implement, and evaluate three systems for applications in (1) microservices, (2) serverless computing, and (3) real-time cloud-native services, exemplified by virtualized radio access networks (vRAN), which are known to challenge existing cloud infrastructures.

First, we discuss Dagger, a system for mitigating the overheads of remote procedure calls in interactive cloud microservices. Dagger introduces a novel yet practical solution enabling fast, low-latency communication between distributed fine-grained application components. We then present Sabre, a practical and efficient system for mitigating the challenging overhead of cold starts in serverless computing. Sabre relies on emerging tightly-coupled compression accelerators and dramatically reduces the latency of page movement in serverless microVMs without compromising CPU cost. Finally, we build Slingshot, to the best of our knowledge the first infrastructure that enables fault tolerance in real-time cloud-native services such as vRAN. With Slingshot, we make substantial progress toward deploying reliable distributed systems that operate in real time in the general-purpose cloud by addressing the key challenges of fast state migration, real-time fault detection, and low-latency disaggregation.

Thesis Committee: Christina Delimitrou (MIT), Zhiru Zhang (Cornell University), Mohammad Alizadeh (MIT)
Location: TBD

January 07

Risk-bounded Coordination of Human-Robot Teams through Concurrent Intent Recognition and Adaptation

Date: January 7, 2019
Time: 11:00 AM - 12:00 PM ET
Location: 32-D463 (Star Conference Room)

Abstract: In order for humans and robots to collaborate fluidly, robots must be able to (1) recognize their human teammate's intentions, and (2) automatically adapt to those intentions in an intelligent manner. This thesis makes progress in these areas by proposing a framework that solves these two problems (task-level intent recognition and robotic adaptation) concurrently and holistically, using a single model and set of algorithms for both. The result is a mixed-initiative human-robot interaction that achieves the team's goals.

We extend this framework by additionally maintaining a probabilistic belief over the human's intentions. This allows the robot to continuously assess the risk associated with plan execution, and thereby select adaptations that are safe enough, ask uncertainty-reducing questions when appropriate, and provide a proactive early warning of likely failure.

Thesis Committee: Brian Williams, Leslie Kaelbling, Julie Shah, Patrick Winston, Andreas Hofmann
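To make the belief-plus-risk idea concrete, here is a toy sketch (not the thesis's algorithms): the robot maintains a belief over the human's intent, updates it from observed cues, and executes an adaptation only when the expected failure risk stays under a bound, otherwise asking a clarifying question. The intents, observation model, risk table, and threshold are all invented for illustration.

```python
# Toy risk-bounded decision using a belief over human intents.
INTENTS = ["fetch_tool", "handover", "idle"]

# Assumed P(observation | intent) for two observable cues.
OBS_LIKELIHOOD = {
    "reaches_toward_robot": {"fetch_tool": 0.2, "handover": 0.7, "idle": 0.1},
    "looks_away":           {"fetch_tool": 0.3, "handover": 0.1, "idle": 0.6},
}

# Assumed P(plan fails | intent) for one candidate robot adaptation.
FAILURE_RISK = {"fetch_tool": 0.30, "handover": 0.02, "idle": 0.10}
RISK_BOUND = 0.10

def update_belief(belief, observation):
    posterior = {i: belief[i] * OBS_LIKELIHOOD[observation][i] for i in INTENTS}
    total = sum(posterior.values())
    return {i: p / total for i, p in posterior.items()}

def expected_risk(belief):
    return sum(belief[i] * FAILURE_RISK[i] for i in INTENTS)

belief = {i: 1.0 / len(INTENTS) for i in INTENTS}        # uniform prior
for obs in ["reaches_toward_robot", "reaches_toward_robot"]:
    belief = update_belief(belief, obs)

risk = expected_risk(belief)
print("belief:", {i: round(p, 3) for i, p in belief.items()})
print("execute adaptation" if risk <= RISK_BOUND else "ask a clarifying question",
      f"(expected risk {risk:.3f})")
```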

October 19

Reliably arranging objects: a conformant planning approach to robot manipulation

Date: October 19, 2018
Time: 1:00-2:00 PM ET
Location: 32-G449

Abstract: A crucial challenge in robotics is achieving reliable results in spite of sensing and control uncertainty. In this work, we explore the conformant planning approach to reliable robot manipulation. In particular, we tackle the problem of pushing multiple planar objects simultaneously to achieve a specified arrangement without using external sensing. A conformant plan is a sequence of manipulation actions that reliably achieves a goal arrangement in spite of uncertainty in object pose and nondeterministic action outcomes, and without assuming the availability of additional observations. To find conformant plans, we explored two different approaches:

1) Conformant planning through plan improvement. This approach takes a deterministic manipulation plan and augments it by adding fixtures (movable obstacles) to push parts up against. This method uses an optimization-based approach to determine the ideal fixture placement location.

2) Conformant planning by construction. This approach reformulates conformant planning as a belief-state planning problem. A belief state is the set of all possible states of the world, and the objective is to find a sequence of actions that will bring an initial belief state to a goal belief state. To do forward belief-state planning, we created a deterministic belief-state transition model from online physics-based simulations and supervised learning based on offline physics simulations.

This thesis provides insight and develops approaches toward scalable methods for solving challenging planar manipulation problems with multiple objects or concave shape geometry. We show the success of these approaches based on planning times and robustness in real and simulated experiments.

Committee: Tomas Lozano-Perez, Leslie Pack Kaelbling, Sertac Karaman
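For readers new to belief-state planning, here is a minimal sketch of the idea in a toy 1-D world (not the thesis's planner or physics model): the planner searches over sets of possible object positions, and "pushing into a wall" plays the role of a fixture that funnels uncertainty away.

```python
from collections import deque

WALL_MIN, WALL_MAX = 0, 4
ACTIONS = {"left": -1, "right": +1}

def step(state, action):
    # Deterministic, compliant motion: pushing into a wall leaves the object at the wall.
    return min(max(state + ACTIONS[action], WALL_MIN), WALL_MAX)

def belief_step(belief, action):
    # Image of the belief set under the action (a deterministic belief-state transition).
    return frozenset(step(s, action) for s in belief)

def conformant_plan(initial_belief, goal_states):
    # Breadth-first search in belief space for a plan that works for every possible start.
    start = frozenset(initial_belief)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        belief, plan = frontier.popleft()
        if belief <= goal_states:
            return plan
        for action in ACTIONS:
            nxt = belief_step(belief, action)
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

# Object could start at 1, 2, or 3; the goal is to know it ends up at 0.
print(conformant_plan({1, 2, 3}, frozenset({0})))   # e.g. ['left', 'left', 'left']
```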

October 15

Learning from Value Function Intervals for Contact-Aware Robot Controllers

Date: October 15, 2018
Time: 12:30-1:30 PM ET
Location: Seminar Room G449 (Patil/Kiva)

Abstract: The problem of handling contact is central to the task of controlling a walking robot. Robots can only move through the world by exerting forces on their environment, and choosing where, when, and how to touch the world is the fundamental challenge of locomotion. Because the space of possible contacts is a high-dimensional mix of discrete and continuous decisions, it has historically been difficult or impossible to make complex contact decisions online at control rates. Even offline optimization of motions with contact generally suffers from poor local minima, non-convexity, and potentially non-unique solutions. This thesis introduces LVIS (Learned Value Interval Supervision), which circumvents the issue of local minima through global mixed-integer optimization and the issue of non-uniqueness through learning the optimal value function (or cost-to-go) rather than the optimal policy. To avoid the expense of solving the mixed-integer programs to full global optimality, we instead solve them only partially, extracting intervals containing the true cost-to-go from early termination of the branch-and-bound algorithm. These interval samples are used to weakly supervise the training of a neural net which approximates the true cost-to-go. Online, we use that learned cost-to-go as the terminal cost of a one-step model-predictive controller, which we solve via a small mixed-integer optimization. We demonstrate this technique on a simplified humanoid robot model and discuss how it compares to our prior work with the Atlas humanoid robot.
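To illustrate what "weak supervision from value intervals" can look like, here is a toy sketch that is not the LVIS implementation: the true cost-to-go, the interval-generating noise, and the simple quadratic model standing in for the neural net are all invented. The key idea shown is the loss, which penalizes a prediction only when it leaves the supplied [lower, upper] interval.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_cost_to_go(x):
    return x ** 2                        # unknown target, quadratic in the state x

# Simulated "early-terminated branch and bound": we only observe an interval
# [lower, upper] guaranteed to contain the true cost-to-go at each sample.
xs = rng.uniform(-2.0, 2.0, size=200)
lo = true_cost_to_go(xs) - rng.uniform(0.0, 0.5, size=xs.shape)
hi = true_cost_to_go(xs) + rng.uniform(0.0, 0.5, size=xs.shape)

# Model: cost-to-go ~ w2*x^2 + w1*x + w0 (a stand-in for the neural net).
w = np.zeros(3)
features = np.stack([xs ** 2, xs, np.ones_like(xs)], axis=1)

def interval_loss_grad(w):
    pred = features @ w
    below = np.maximum(lo - pred, 0.0)   # prediction fell below the lower bound
    above = np.maximum(pred - hi, 0.0)   # prediction rose above the upper bound
    loss = np.mean(below ** 2 + above ** 2)
    grad = features.T @ (2 * (above - below)) / len(xs)
    return loss, grad

for _ in range(2000):
    loss, grad = interval_loss_grad(w)
    w -= 0.05 * grad

print("loss:", loss, "weights:", w)      # weights land near [1, 0, 0]
```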

September 18

Type System for Resource Bounds with Type-Preserving Compilation and Its Application to Ethereum Smart Contracts

Date: September 18, 2018
Time: 10:00 AM - 12:00 PM ET
Location: 32-G882
Thesis Supervisor: Adam Chlipala
Committee Members: Armando Solar-Lezama, Michael Carbin

Abstract: This thesis studies the problem of statically bounding the resource usage of computer programs, from programs written in high-level languages to those in assembly languages. Resource usage is an aspect of programs not covered by conventional software-verification techniques, which focus mostly on functional correctness; but it is important because when resource usage exceeds the programmer's expectation by a large amount, user experience can be disrupted and large fees (such as cloud-service fees) can be charged. I designed TiML, a new typed functional programming language whose types contain resource bounds; when a TiML program passes the typechecking phase, upper bounds on its resource usage are guaranteed. TiML uses indexed types to express sizes of data structures and upper bounds on the running time of functions, and refinement kinds to constrain these indices, expressing data-structure invariants and pre/post-conditions. TiML's distinguishing characteristic is support for highly automated time-bound verification applicable to data structures with nontrivial invariants. Type and index inference are supported to lower the annotation burden, and, furthermore, big-O complexity can be inferred from recurrences generated during typechecking by a recurrence solver based on heuristic pattern matching.

I also designed a typed assembly language with resource bounds, and a type-preserving compiler that compiles well-typed TiML programs into well-typed assembly programs conforming to the same bounds. Typechecking at the assembly level reestablishes the soundness of the bounds, and the types can serve as resource-usage certificates for the assembly programs.

I used Ethereum smart contracts as a real-world application of the techniques developed in this thesis. The assembly language I designed, TiEVM, is a typed version of the Ethereum Virtual Machine (EVM) bytecode language. I will demonstrate that TiML can be used as a new language to write smart contracts, and that the generated TiEVM code is equipped with types proving that its resource usage -- "gas" in Ethereum terminology -- is bounded.
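To give a feel for what "heuristic pattern matching on recurrences" might mean, here is a toy solver for two common recurrence shapes. It is not TiML's recurrence solver; the (a, b, f_degree) representation and the handled cases are invented for this sketch.

```python
import math

# Toy recurrence solver recognizing two shapes and reporting a big-O bound:
#   T(n) = a * T(n / b) + Theta(n^f_degree)  with b > 1   (divide and conquer)
#   T(n) = T(n - 1) + Theta(n^f_degree)                   (linear recursion)
def solve_divide_and_conquer(a, b, f_degree):
    # Master-theorem style case analysis on the critical exponent log_b(a).
    critical = math.log(a, b)
    if f_degree < critical - 1e-9:
        return f"O(n^{critical:g})"
    if abs(f_degree - critical) <= 1e-9:
        return f"O(n^{f_degree:g} log n)"
    return f"O(n^{f_degree:g})"

def solve_linear(f_degree):
    # T(n) = T(n-1) + Theta(n^d)  =>  O(n^(d+1)).
    return f"O(n^{f_degree + 1:g})"

print(solve_divide_and_conquer(2, 2, 1))   # merge sort:    T(n)=2T(n/2)+n -> O(n^1 log n)
print(solve_divide_and_conquer(1, 2, 0))   # binary search: T(n)=T(n/2)+1  -> O(n^0 log n), i.e. O(log n)
print(solve_linear(0))                     # list traversal: T(n)=T(n-1)+1 -> O(n^1)
```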

September 17

Thesis Defense: Architectural Support to Unlock Fine-grained Parallelism

Date: September 17, 2018
Time: 3:00-4:00 PM ET
Location: 32-G449 (Patil/Kiva)

Abstract: Current multicores suffer from two major limitations: they can only exploit a fraction of the parallelism available in applications, and they are very hard to program. This is because they are limited to programs with coarse-grained tasks that synchronize infrequently. However, many applications have abundant parallelism when divided into small tasks (of a few tens to hundreds of instructions each). Current systems cannot exploit this fine-grained parallelism because synchronization and task management overheads overwhelm the benefits of parallelism.

In this talk I will describe three techniques that tackle the scalability and programmability issues of current multicores. First, Swarm is a parallel architecture that makes fine-grained parallelism practical by leveraging order as a general synchronization primitive. Swarm programs consist of tasks with programmer-specified order constraints. Swarm hardware provides support for fine-grained task management, and executes tasks speculatively and out of order to scale. Second, Fractal extends Swarm to harness nested speculative parallelism, which is crucial to scale large, complex applications and to compose parallel speculative algorithms. Third, Amalgam makes more efficient use of speculation resources by splitting and merging tasks to create fixed-size units of speculative work. Amalgam can improve performance and reduce implementation costs.

Together, these techniques unlock abundant fine-grained parallelism in applications from a broad set of domains, including graph analytics, databases, machine learning, and discrete-event simulation. At 256 cores, our system is 40x-512x faster than a single-core system and outperforms state-of-the-art software-only parallel algorithms by one to two orders of magnitude. Besides achieving near-linear scalability, the resulting programs are almost as simple as their sequential counterparts, as they do not use explicit synchronization.
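To illustrate the programming model of timestamp-ordered tasks (a sequential sketch of the semantics, not the Swarm hardware or its software stack), here is single-source shortest paths written as tasks that carry timestamps and spawn child tasks; the graph is invented for the example.

```python
import heapq

def ordered_sssp(graph, source):
    """Single-source shortest paths expressed as timestamp-ordered tasks.

    Each task is (timestamp, node); the timestamp is the path length at which
    the node is visited. Executing tasks in timestamp order gives Dijkstra-like
    behavior; ordered-speculation hardware would run such tasks in parallel
    while preserving the same order semantics.
    """
    dist = {}
    tasks = [(0, source)]               # priority queue ordered by timestamp
    while tasks:
        ts, node = heapq.heappop(tasks)
        if node in dist:                # an earlier-timestamp task already visited it
            continue
        dist[node] = ts
        for neighbor, weight in graph.get(node, []):
            if neighbor not in dist:
                heapq.heappush(tasks, (ts + weight, neighbor))   # spawn child task
    return dist

graph = {
    "a": [("b", 2), ("c", 5)],
    "b": [("c", 1), ("d", 4)],
    "c": [("d", 1)],
}
print(ordered_sssp(graph, "a"))         # {'a': 0, 'b': 2, 'c': 3, 'd': 4}
```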

August 16

Thesis Defense: Below P vs NP: Fine-Grained Hardness for Big Data Problems

Date: August 16, 2018
Time: 10:00-11:00 AM ET
Location: 32-D463 (STAR)

Abstract: The theory of NP-hardness has been remarkably successful in identifying problems that are unlikely to be solvable in polynomial time. However, many other important problems do have polynomial-time algorithms, but large exponents in their runtime bounds can make them inefficient in practice. For example, quadratic-time algorithms, although practical on moderately sized inputs, can become inefficient on big data problems that involve gigabytes or more of data. Although no sub-quadratic time algorithms are known for many data analysis problems, any evidence of quadratic-time hardness has remained elusive. In this thesis we present hardness results for several text analysis and machine learning tasks:

* Lower bounds for edit distance, regular expression matching, and other pattern matching and string processing problems.

* Lower bounds for empirical risk minimization, such as kernel support vector machines and other kernel machine learning problems.

All of these problems have polynomial-time algorithms, but despite an extensive amount of research, no near-linear time algorithms have been found. We show that, under a natural complexity-theoretic conjecture, such algorithms do not exist. We also show how these lower bounds have inspired the development of efficient algorithms for some variants of these problems.
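As background (generic textbook code, not taken from the thesis), here is the classic quadratic-time edit-distance dynamic program; it is exactly the kind of quadratic algorithm whose conditional optimality results like these address.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic program; Theta(len(a) * len(b)) time."""
    n, m = len(a), len(b)
    prev = list(range(m + 1))              # distances for the empty prefix of a
    for i in range(1, n + 1):
        curr = [i] + [0] * m
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # delete from a
                          curr[j - 1] + 1,     # insert into a
                          prev[j - 1] + cost)  # substitute (or match)
        prev = curr
    return prev[m]

print(edit_distance("kitten", "sitting"))  # 3
```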

July 18

Efficient and Egalitarian Consensus

Date: July 18, 2018
Time: 4:00-6:00 PM ET
Location: Seminar Room G882 (Hewlett Room)

Abstract: The decentralized cryptocurrency Bitcoin popularized the notion of permissionless consensus. Bitcoin's solution, now known as Nakamoto consensus, is to build a proof-of-work (PoW) chain and treat the longest PoW chain as the consensus decision. However, this elegant solution does have limitations. In this talk, I will present techniques to improve consensus protocols on multiple fronts.

A primary drawback of Nakamoto consensus is its long latency to confirm transactions. In contrast, classic permissioned Byzantine consensus commits decisions instantly. I will present Solida, a permissionless consensus protocol based on reconfigurable Byzantine consensus. I will then present an improved protocol for synchronous and authenticated Byzantine consensus, which tolerates 1/2 malicious participants and runs in an expected 4 rounds (compared to an expected 24 rounds for the best prior work). Finally, Bitcoin's hash-based PoW has raised concerns about fairness and energy consumption. I will describe my work on bandwidth-hard functions and proofs of space to address these concerns.
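For readers unfamiliar with Nakamoto consensus, here is a toy illustration of hash-based proof of work and the longest-chain rule (generic background, not code from the thesis; the difficulty target and payloads are arbitrary).

```python
import hashlib

DIFFICULTY = 2 ** 240   # a block hash must fall below this target (1 in 2^16 tries)

def block_hash(prev_hash: str, payload: str, nonce: int) -> int:
    data = f"{prev_hash}|{payload}|{nonce}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def mine_block(prev_hash: str, payload: str) -> dict:
    nonce = 0
    while block_hash(prev_hash, payload, nonce) >= DIFFICULTY:
        nonce += 1              # brute-force search is the "work" in proof of work
    return {"prev": prev_hash, "payload": payload, "nonce": nonce}

def extend(chain, payload):
    prev = "0" * 64 if not chain else format(
        block_hash(chain[-1]["prev"], chain[-1]["payload"], chain[-1]["nonce"]), "064x")
    chain.append(mine_block(prev, payload))
    return chain

def choose_chain(chains):
    # Nakamoto's rule: among valid chains, adopt the longest one.
    return max(chains, key=len)

chain_a = extend([], "tx set A1")
chain_b = extend(extend([], "tx set B1"), "tx set B2")
print("adopted chain has", len(choose_chain([chain_a, chain_b])), "blocks")  # 2
```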

July 16

Thesis Defense: Prashant Vasudevan: Fine-Grained Cryptography

Date: July 16, 2018
Time: 2:00-3:15 PM ET
Location: Patil/Kiva G449

Abstract: Fine-grained cryptography is the study of cryptographic objects that are required to be secure only against adversaries that are moderately more powerful than the honest parties. This weakening in security requirements opens up possibilities for meaningful cryptographic constructions in various settings using hardness assumptions that are considerably weaker than those used in standard cryptography. In this thesis, we study these possibilities in two different settings:

1. First, we construct several unconditionally secure cryptographic primitives that are computable by and secure against constant-depth circuits. Under a reasonable complexity-theoretic assumption, we do the same for log-depth circuits.

2. Second, we present functions that are hard to compute on average for algorithms running in some fixed polynomial time, assuming widely-conjectured worst-case hardness of certain problems from the study of fine-grained complexity. We also construct a proof-of-work protocol based on this hardness and certain structural properties of our functions.

July 02

Improving Web Applications with Fine-grained Data Flows

Presenter: Ravi Netravali
Date: July 2, 2018
Time: 2:00-3:00 PM ET
Location: Star (32-D463)

Abstract: Web applications are integral to today’s society, hosting a variety of services ranging from banking and e-commerce to mapping and social media. To support these rich services, web applications have evolved into complex distributed systems, making critical tasks such as performance optimization and debugging difficult.

In this talk, I will describe how we can address this growing complexity by efficiently identifying and analyzing the fine-grained, distributed data flows in web applications. Tracking data flows at the granularity of individual pieces of program state, like JavaScript variables on the client side and key/value pairs in storage systems on the server side, provides invaluable insights into the low-level behavior of complex web services. This information enables a variety of systems with new, more powerful performance optimizations and debugging techniques. I will describe two such systems that we have built. The first is Polaris, a web page load optimizer that identifies data dependencies between web objects to improve browser request scheduling and reduce page load times by 34%-59%. I will then discuss Vesper, the first system to accurately measure how quickly web pages become interactive for users. Vesper uses fine-grained data flows to automatically identify a page's interactive state and reduce page time-to-interactivity by 32%.

Bio: Ravi Netravali is a Ph.D. student at MIT, advised by Professors Hari Balakrishnan and James Mickens. He will join the UCLA Computer Science Department as an assistant professor in the fall of 2019. His research interests are in computer systems and networks, with a recent focus on building practical systems to improve the performance and debugging of large-scale, distributed web applications. He is a recipient of the 2017 Qualcomm Innovation Fellowship, and shared the Internet Research Task Force's Applied Networking Research Prize in 2018.
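To illustrate how knowing data dependencies between page objects can improve request scheduling (a generic sketch in the spirit of the description above, not Polaris itself; the dependency graph is invented, whereas Polaris discovers such edges by tracking fine-grained data flows), here is a topological scheduler that groups independent objects into concurrent fetch rounds.

```python
from graphlib import TopologicalSorter   # Python 3.9+

# Invented dependency graph: each object maps to the objects it depends on.
dependencies = {
    "app.js":    {"index.html"},
    "style.css": {"index.html"},
    "data.json": {"app.js"},
    "chart.js":  {"app.js", "style.css"},
}

ts = TopologicalSorter(dependencies)
ts.prepare()
round_num = 0
while ts.is_active():
    ready = list(ts.get_ready())         # objects whose dependencies are all fetched
    round_num += 1
    print(f"round {round_num}: fetch concurrently -> {sorted(ready)}")
    ts.done(*ready)                      # mark them fetched, unlocking dependents
```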

June 18

Connections between Circuit Analysis Problems and Circuit Lower Bounds

Date: June 18, 2018
Time: 3:00-4:00 PM ET
Location: 32-D463 (Star)

Abstract: A circuit analysis problem takes a Boolean function f as input (where f is represented either as a logical circuit or as a truth table) and determines some interesting property of f. Examples of circuit analysis problems include Circuit Satisfiability, Circuit Composition, and the Minimum Circuit Size Problem (MCSP). A circuit lower bound presents an interesting function f and shows that no "easy" family of logical circuits can compute f correctly on all inputs, for some definition of "easy". Lower bounds are infamously hard to prove, but are of significant interest for understanding computation.

In this thesis, we derive new connections between circuit analysis problems and circuit lower bounds, and use them to prove new lower bounds for various well-studied circuit classes. We show how faster algorithms for Circuit Satisfiability can imply non-uniform lower bounds for functions in NP and related classes. We prove that MCSP cannot be NP-hard under "local" gadget reductions of the kind that appear in textbooks, and that if MCSP were proved NP-hard in the usual (polynomial-time reduction) sense, then we would also obtain longstanding lower bounds in other regimes. We also prove that natural versions of the Circuit Composition problem do not have small circuits that are constructible in very small (logarithmic) space.
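As background on what a circuit analysis problem is (generic illustration, not code or results from the thesis), here is a brute-force Circuit Satisfiability checker over a small gate-list circuit; the example circuit and wire names are invented, and the gap between this 2^n enumeration and faster SAT algorithms is exactly the kind of gap the connections above exploit.

```python
from itertools import product

def evaluate(circuit, assignment):
    # circuit: list of (output_wire, op, input_wires), gates in topological order.
    values = dict(assignment)
    for out, op, ins in circuit:
        if op == "AND":
            values[out] = values[ins[0]] and values[ins[1]]
        elif op == "OR":
            values[out] = values[ins[0]] or values[ins[1]]
        elif op == "NOT":
            values[out] = not values[ins[0]]
    return values

def circuit_sat(circuit, inputs, output):
    # Brute force: try all 2^n input assignments.
    for bits in product([False, True], repeat=len(inputs)):
        assignment = dict(zip(inputs, bits))
        if evaluate(circuit, assignment)[output]:
            return assignment           # satisfying assignment found
    return None                         # the circuit computes the all-zeros function

# (x AND y) OR (NOT x): satisfiable, e.g. with x = False.
circuit = [("g1", "AND", ("x", "y")),
           ("g2", "NOT", ("x",)),
           ("out", "OR", ("g1", "g2"))]
print(circuit_sat(circuit, ["x", "y"], "out"))
```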