Add to Calendar
2025-02-05 15:00:00
2025-02-05 16:30:00
America/New_York
Thesis Defense: Designing Hardware Accelerators for Solving Sparse Linear Systems - Axel Feldmann
Solving systems of linear equations with sparse coefficient matrices is a key primitive that sits at the heart of many important numeric algorithms. Because of this primitive's importance, algorithm designers have spent many decades optimizing linear solvers for high-performance hardware. Despite their efforts, however, existing hardware has let them down: state-of-the-art linear solvers often utilize <1% of available compute throughput on existing architectures such as CPUs and GPUs.

There are many different algorithms used to solve sparse linear systems. These algorithms are diverse and often have very different computational bottlenecks, including low arithmetic intensity, fine-grained parallelism, common control dependences, and sparsity-induced load imbalance.

This thesis studies the problem of designing hardware accelerators for sparse linear solvers. We propose three novel architectures that explore different parts of the design space. First, we introduce Spatula, an architecture designed to accelerate direct solvers. Then, we propose Azul, a hardware accelerator targeted at iterative solvers. Taken together, Spatula and Azul demonstrate significant speedups on both of the main classes of sparse linear solver algorithms. Finally, to show that our techniques are useful for end-to-end applications, we present Ōmeteōtl, an accelerator targeted at applications that use iterative solvers in their inner loop. Ōmeteōtl also shows that the techniques in this thesis generalize to sparse matrix computations beyond linear solvers.

https://mit.zoom.us/j/98122373906 (no password)
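The low utilization mentioned above comes from the structure of the solver kernels themselves. As a rough illustration (a pure-Python sketch for intuition, not anything from the thesis), the heart of an iterative solver such as conjugate gradient is a sparse matrix-vector product (SpMV) plus a few vector operations, which perform only a couple of flops per memory access:

```python
# Illustrative sketch: CSR SpMV and conjugate gradient in pure Python.
# The inner loop does ~2 flops per nonzero loaded -- low arithmetic intensity.

def spmv(indptr, indices, data, x):
    """CSR sparse matrix-vector product y = A @ x."""
    n = len(indptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

def conjugate_gradient(indptr, indices, data, b, iters=50, tol=1e-10):
    """Solve A x = b for a symmetric positive-definite A in CSR form."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x, with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = spmv(indptr, indices, data, p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

On large matrices this loop is memory-bound: each nonzero is read once per iteration and used for a single multiply-add, which is one reason general-purpose CPUs and GPUs sit mostly idle on such workloads.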
TBD
February 05
January 22
Thesis Defense: Taming Data Movement Overheads in Latency-Critical Cloud Services
Nikita Lazarev
CSAIL
Add to Calendar
2025-01-22 15:00:00
2025-01-22 16:30:00
America/New_York
Thesis Defense: Taming Data Movement Overheads in Latency-Critical Cloud Services
Cloud providers are under pressure to enhance the efficiency, performance, and reliability of datacenter infrastructure to support applications across many domains with diverse quality-of-service requirements. Data movement is a significant source of overhead in today's servers, and it is particularly critical for the recently emerging class of interactive and realtime cloud applications. In this thesis, I investigate and propose a set of novel approaches to mitigating data movement overheads in general-purpose datacenters. These approaches establish a roadmap towards more efficient and reliable cloud services that are currently severely bottlenecked by data movement. In particular, I propose, implement, and evaluate three systems for applications in (1) microservices, (2) serverless computing, and (3) realtime cloud-native services, using virtualized radio access networks (vRAN), which are known to challenge existing cloud infrastructures, as an example of the latter.

First, we discuss Dagger, a system for mitigating the overheads of remote procedure calls in interactive cloud microservices. Dagger introduces a novel yet practical solution enabling fast, low-latency communication between distributed fine-grained application components. We then present Sabre, a practical and efficient system for mitigating the challenging overhead of cold starts in serverless computing. Sabre relies on emerging tightly-coupled compression accelerators to dramatically reduce the latency of page movement in serverless microVMs without increasing CPU cost. Finally, we build Slingshot, to the best of our knowledge the first infrastructure that enables fault tolerance in realtime cloud-native services such as vRAN. With Slingshot, we make substantial progress towards deploying reliable distributed systems that operate in realtime in the general-purpose cloud, by addressing the key challenges of fast state migration, realtime fault detection, and low-latency disaggregation.

Thesis Committee: Christina Delimitrou (MIT), Zhiru Zhang (Cornell University), Mohammad Alizadeh (MIT)
TBD
January 07
Risk-bounded Coordination of Human-Robot Teams through Concurrent Intent Recognition and Adaptation
MIT CSAIL
Add to Calendar
2019-01-07 11:00:00
2019-01-07 12:00:00
America/New_York
Risk-bounded Coordination of Human-Robot Teams through Concurrent Intent Recognition and Adaptation
In order for humans and robots to collaborate fluidly, robots must be able to (1) recognize their human teammate's intentions, and (2) automatically adapt to those intentions in an intelligent manner. This thesis makes progress in these areas by proposing a framework that solves these two problems (task-level intent recognition and robotic adaptation) concurrently and holistically, using a single model and a single set of algorithms for both. The result is a mixed-initiative human-robot interaction that achieves the team's goals.

We extend this framework by additionally maintaining a probabilistic belief over the human's intentions. This allows the robot to continuously assess the risk associated with plan execution, and thereby select adaptations that are safe enough, ask uncertainty-reducing questions when appropriate, and provide proactive early warning of likely failure.

Thesis Committee: Brian Williams, Leslie Kaelbling, Julie Shah, Patrick Winston, Andreas Hofmann
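The probabilistic belief over intentions described above can be pictured with a toy Bayes filter (the intent names, likelihoods, and failure probabilities below are invented for illustration; this is not the thesis's model):

```python
# Toy sketch: maintain P(intent | observations) and use it to bound risk.

def update_belief(belief, likelihoods):
    """One Bayes update: posterior[i] is proportional to belief[i] * P(obs | intent i)."""
    posterior = {i: belief[i] * likelihoods[i] for i in belief}
    z = sum(posterior.values())
    return {i: p / z for i, p in posterior.items()}

def execution_risk(belief, failure_prob):
    """Risk of the current plan = sum over intents of P(intent) * P(failure | intent)."""
    return sum(belief[i] * failure_prob[i] for i in belief)
```

For example, starting from a uniform belief over two hypothetical intents and observing evidence that favors one of them nine-to-one, the posterior concentrates accordingly, and the plan's risk can be compared against a bound before execution.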
32-D463 (Star Conference Room)
October 19
Add to Calendar
2018-10-19 13:00:00
2018-10-19 14:00:00
America/New_York
Reliably arranging objects: a conformant planning approach to robot manipulation
A crucial challenge in robotics is achieving reliable results in spite of sensing and control uncertainty. In this work, we explore the conformant planning approach to reliable robot manipulation. In particular, we tackle the problem of pushing multiple planar objects simultaneously to achieve a specified arrangement without using external sensing. A conformant plan is a sequence of manipulation actions that reliably achieves a goal arrangement in spite of uncertainty in object pose and nondeterministic action outcomes, and without assuming the availability of additional observations. To find conformant plans, we explored two different approaches:

1) Conformant planning through plan improvement. This approach takes a deterministic manipulation plan and augments it by adding fixtures (movable obstacles) to push parts up against. It uses an optimization-based approach to determine the ideal fixture placement.

2) Conformant planning by construction. This approach reformulates conformant planning as a belief-state planning problem. A belief state is the set of all possible states of the world, and the objective is to find a sequence of actions that will bring an initial belief state to a goal belief state. To do forward belief-state planning, we created a deterministic belief-state transition model, using on-line physics-based simulation and supervised learning from off-line physics simulations.

This thesis provides insight and develops approaches toward scalable methods for solving challenging planar manipulation problems with multiple objects or concave shape geometry. We show the success of these approaches based on planning times and robustness in real and simulated experiments.

Committee: Tomas Lozano-Perez, Leslie Pack Kaelbling, Sertac Karaman
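The "planning by construction" formulation can be sketched in a few lines (a toy with hand-made states and a deterministic transition table, not the thesis's physics-based model): a belief is a set of possible world states, and search proceeds over belief states until every state in the belief lies in the goal set.

```python
# Toy sketch: BFS over belief states to find a conformant action sequence.

from collections import deque

def plan(initial_belief, goal_states, actions, transition):
    """Return an action sequence that maps every state in initial_belief
    into goal_states, regardless of which state is the true one."""
    start = frozenset(initial_belief)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        belief, seq = frontier.popleft()
        if belief <= goal_states:
            return seq
        for a in actions:
            nxt = frozenset(transition(s, a) for s in belief)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, seq + [a]))
    return None
```

A one-dimensional example: positions 0..3 with a wall at 3 acting like a fixture. Pushing right three times funnels any of the uncertain poses {0, 1, 2} against the wall, so the plan succeeds without any sensing.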
32-G449
October 15
Learning from Value Function Intervals for Contact-Aware Robot Controllers
Robin Deits
MIT CSAIL
Add to Calendar
2018-10-15 12:30:00
2018-10-15 13:30:00
America/New_York
Learning from Value Function Intervals for Contact-Aware Robot Controllers
Abstract: The problem of handling contact is central to the task of controlling a walking robot. Robots can only move through the world by exerting forces on their environment, and choosing where, when, and how to touch the world is the fundamental challenge of locomotion. Because the space of possible contacts is a high-dimensional mix of discrete and continuous decisions, it has historically been difficult or impossible to make complex contact decisions online at control rates. Even offline optimization of motions with contact generally suffers from poor local minima, non-convexity, and potentially non-unique solutions. This thesis introduces LVIS (Learned Value Interval Supervision), which circumvents the issue of local minima through global mixed-integer optimization, and the issue of non-uniqueness by learning the optimal value function (or cost-to-go) rather than the optimal policy. To avoid the expense of solving the mixed-integer programs to full global optimality, we instead solve them only partially, extracting intervals containing the true cost-to-go from early termination of the branch-and-bound algorithm. These interval samples are used to weakly supervise the training of a neural net that approximates the true cost-to-go. Online, we use that learned cost-to-go as the terminal cost of a one-step model-predictive controller, which we solve via a small mixed-integer optimization. We demonstrate this technique on a simplified humanoid robot model and discuss how it compares to our prior work with the Atlas humanoid robot.
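The interval-supervision idea can be sketched as a loss that penalizes a prediction only when it falls outside the [lower, upper] bracket returned by early-terminated branch-and-bound (a schematic of the supervision signal only; the thesis trains a neural network with such interval data):

```python
# Schematic sketch: weak supervision from cost-to-go intervals.

def interval_loss(pred, lower, upper):
    """Zero whenever pred lies inside [lower, upper]; quadratic penalty outside."""
    if pred < lower:
        return (lower - pred) ** 2
    if pred > upper:
        return (pred - upper) ** 2
    return 0.0
```

Because any value inside the bracket incurs no loss, the learner is never forced to commit to one of several equally optimal (non-unique) solutions, which is the point of supervising with intervals rather than exact values.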
Seminar Room G449 (Patil/Kiva)
September 18
Add to Calendar
2018-09-18 10:00:00
2018-09-18 12:00:00
America/New_York
Type System for Resource Bounds with Type-Preserving Compilation and Its Application for Ethereum Smart Contracts
Thesis Supervisor: Adam Chlipala
Committee Members: Armando Solar-Lezama, Michael Carbin

Abstract: This thesis studies the problem of statically bounding the resource usage of computer programs, from programs written in high-level languages to those in assembly languages. Resource usage is an aspect of programs not covered by conventional software-verification techniques, which focus mostly on functional correctness; but it is important because when resource usage exceeds the programmer's expectation by a large amount, user experience can be disrupted and large fees (such as cloud-service fees) can be charged. I designed TiML, a new typed functional programming language whose types contain resource bounds; when a TiML program passes the typechecking phase, upper bounds on its resource usage are guaranteed. TiML uses indexed types to express sizes of data structures and upper bounds on the running time of functions, and refinement kinds to constrain these indices, expressing data-structure invariants and pre/post-conditions. TiML's distinguishing characteristic is support for highly automated time-bound verification applicable to data structures with nontrivial invariants. Type and index inference are supported to lower the annotation burden, and, furthermore, big-O complexity can be inferred from recurrences generated during typechecking, by a recurrence solver based on heuristic pattern matching.

I also designed a typed assembly language with resource bounds, and a type-preserving compiler that compiles well-typed TiML programs into well-typed assembly programs conforming to the same bounds. Typechecking at the assembly level reestablishes the soundness of the bounds, and the types can serve as resource-usage certificates for the assembly programs.

I used Ethereum smart contracts as a real-world application of the techniques developed in this thesis. The assembly language I designed, TiEVM, is a typed version of the Ethereum Virtual Machine (EVM) bytecode language. I will demonstrate that TiML can be used as a new language to write smart contracts, and that the generated TiEVM code is equipped with types proving that its resource usage -- "gas" in Ethereum terminology -- is bounded.
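TiML's guarantees are established statically by the type system. Purely as a loose dynamic analogue (a toy of our own construction, not TiML or the EVM), one can picture an evaluator that charges one unit of cost per evaluation step and enforces a declared bound, much as the EVM meters gas at runtime:

```python
# Toy sketch: a metered evaluator that charges 1 unit per expression node.
# Expressions are ints or nested ("add", a, b) tuples (a made-up mini-language).

def metered_eval(expr, fuel):
    """Evaluate expr, charging one step per node visited.
    Returns (value, steps_used); raises RuntimeError if fuel is exceeded."""
    steps = 0
    def go(e):
        nonlocal steps
        steps += 1
        if steps > fuel:
            raise RuntimeError("resource bound exceeded")
        if isinstance(e, int):
            return e
        op, a, b = e
        assert op == "add"
        return go(a) + go(b)
    return go(expr), steps
```

The contrast with TiML is the point: the static approach proves before execution that no run can exceed the bound, whereas metering (as here, or as in EVM gas) only aborts a run after the bound is hit.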
32-G882
September 17
Add to Calendar
2018-09-17 15:00:00
2018-09-17 16:00:00
America/New_York
Thesis Defense: Architectural Support to Unlock Fine-grained Parallelism
Abstract: Current multicores suffer from two major limitations: they can only exploit a fraction of the parallelism available in applications, and they are very hard to program. This is because they are limited to programs with coarse-grained tasks that synchronize infrequently. However, many applications have abundant parallelism when divided into small tasks (of a few tens to hundreds of instructions each). Current systems cannot exploit this fine-grained parallelism because synchronization and task management overheads overwhelm the benefits of parallelism.

In this talk I will describe three techniques that tackle the scalability and programmability issues of current multicores. First, Swarm is a parallel architecture that makes fine-grained parallelism practical by leveraging order as a general synchronization primitive. Swarm programs consist of tasks with programmer-specified order constraints. Swarm hardware provides support for fine-grained task management, and executes tasks speculatively and out of order to scale. Second, Fractal extends Swarm to harness nested speculative parallelism, which is crucial to scale large, complex applications and to compose parallel speculative algorithms. Third, Amalgam makes more efficient use of speculation resources by splitting and merging tasks to create fixed-size units of speculative work. Amalgam can improve performance and reduce implementation costs.

Together, these techniques unlock abundant fine-grained parallelism in applications from a broad set of domains, including graph analytics, databases, machine learning, and discrete-event simulation. At 256 cores, our system is 40x--512x faster than a single-core system and outperforms state-of-the-art software-only parallel algorithms by one to two orders of magnitude. Besides achieving near-linear scalability, the resulting programs are almost as simple as their sequential counterparts, as they do not use explicit synchronization.
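The "order as a synchronization primitive" programming model can be illustrated with a toy sequential scheduler (illustrative only; Swarm executes such tasks speculatively in parallel, in hardware): tasks carry programmer-specified timestamps, may spawn children, and commit in timestamp order.

```python
# Toy sketch of timestamp-ordered tasks: a priority queue executes tasks
# in timestamp order; each task may spawn children at later timestamps.

import heapq

def run_ordered(initial_tasks):
    """initial_tasks: list of (timestamp, fn). Each fn(spawn) may create
    children via spawn(timestamp, fn). Returns timestamps in execution order."""
    seq = 0
    heap = []
    def spawn(ts, fn):
        nonlocal seq
        heapq.heappush(heap, (ts, seq, fn))  # seq breaks timestamp ties
        seq += 1
    for ts, fn in initial_tasks:
        spawn(ts, fn)
    order = []
    while heap:
        ts, _, fn = heapq.heappop(heap)
        order.append(ts)
        fn(spawn)
    return order
```

Here the programmer never writes a lock or barrier; the order constraints alone determine a correct execution, which is what lets hardware run the tasks speculatively out of order and still commit them in order.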
32-G449 (Patil/Kiva)
August 16
Thesis Defense: Below P vs NP: Fine-Grained Hardness for Big Data Problems
Arturs Backurs
CSAIL MIT
Add to Calendar
2018-08-16 10:00:00
2018-08-16 11:00:00
America/New_York
Thesis Defense: Below P vs NP: Fine-Grained Hardness for Big Data Problems
Abstract: The theory of NP-hardness has been remarkably successful in identifying problems that are unlikely to be solvable in polynomial time. However, many other important problems do have polynomial-time algorithms, but large exponents in their runtime bounds can make them inefficient in practice. For example, quadratic-time algorithms, although practical on moderately sized inputs, can become inefficient on big data problems that involve gigabytes or more of data. Although for many data analysis problems no sub-quadratic time algorithms are known, evidence of quadratic-time hardness has remained elusive. In this thesis we present hardness results for several text analysis and machine learning tasks:

* Lower bounds for edit distance, regular expression matching, and other pattern matching and string processing problems.
* Lower bounds for empirical risk minimization, such as kernel support vector machines and other kernel machine learning problems.

All of these problems have polynomial-time algorithms, but despite an extensive amount of research, no near-linear time algorithms have been found. We show that, under a natural complexity-theoretic conjecture, such algorithms do not exist. We also show how these lower bounds have inspired the development of efficient algorithms for some variants of these problems.
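For edit distance, the quadratic-time algorithm in question is the textbook dynamic program below; the thesis's lower bound says that no strongly subquadratic algorithm can replace it unless the Strong Exponential Time Hypothesis fails.

```python
# The classic O(mn) edit distance DP, with a rolling row to use O(n) space.

def edit_distance(a, b):
    m, n = len(a), len(b)
    # dp[j] holds the distance between a[:i] and b[:j] for the current row i
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev_diag, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                           # delete from a
                        dp[j - 1] + 1,                       # insert into a
                        prev_diag + (a[i - 1] != b[j - 1]))  # substitute/match
            prev_diag = cur
    return dp[n]
```

The inner loop touches all m*n cells, which is exactly the quadratic behavior that becomes prohibitive on gigabyte-scale strings.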
32-D463 (STAR)
July 18
Efficient and Egalitarian Consensus
Ling Ren
MIT CSAIL
Add to Calendar
2018-07-18 16:00:00
2018-07-18 18:00:00
America/New_York
Efficient and Egalitarian Consensus
Abstract: The decentralized cryptocurrency Bitcoin popularized the notion of permissionless consensus. Bitcoin's solution, now known as Nakamoto consensus, is to build a proof-of-work (PoW) chain and treat the longest PoW chain as the record of consensus decisions. However, this elegant solution does have limitations. In this talk, I will present techniques to improve consensus protocols on multiple fronts.

A primary drawback of Nakamoto consensus is its long latency to confirm transactions. In contrast, classic permissioned Byzantine consensus commits decisions instantly. I will present Solida, a permissionless consensus protocol based on reconfigurable Byzantine consensus. I will then present an improved protocol for synchronous and authenticated Byzantine consensus, which tolerates 1/2 malicious participants and runs in expected 4 rounds (compared to expected 24 rounds for the best prior work). Finally, Bitcoin's hash-based PoW has raised concerns about fairness and energy consumption. I will describe my work on bandwidth-hard functions and proofs of space to address these concerns.
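The longest-chain rule at the core of Nakamoto consensus fits in a few lines (a schematic with made-up tip identifiers, not Bitcoin's actual data structures; "longest" here means most accumulated proof-of-work, with an arbitrary deterministic tie-break):

```python
# Toy sketch of the fork-choice rule: among known chain tips, adopt the one
# with the most accumulated work; break ties by smallest tip id.

def best_tip(tips):
    """tips: list of (tip_id, accumulated_work) pairs."""
    return min(tips, key=lambda t: (-t[1], t[0]))[0]
```

A node re-runs this rule whenever it hears of a new block, which is why confirmation is slow: a transaction is only trusted once enough work is buried on top of it that a competing tip is unlikely to overtake.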
Seminar Room G882 (Hewlett Room)
July 16
Thesis Defense: Prashant Vasudevan: Fine-Grained Cryptography
Prashant Vasudevan
Add to Calendar
2018-07-16 14:00:00
2018-07-16 15:15:00
America/New_York
Thesis Defense: Prashant Vasudevan: Fine-Grained Cryptography
Abstract: Fine-grained cryptography is the study of cryptographic objects that are required to be secure only against adversaries that are moderately more powerful than the honest parties. This weakening in security requirements opens up possibilities for meaningful cryptographic constructions in various settings using hardness assumptions that are considerably weaker than those used in standard cryptography. In this thesis, we study these possibilities in two different settings:

1. First, we construct several unconditionally secure cryptographic primitives that are computable by and secure against constant-depth circuits. Under a reasonable complexity-theoretic assumption, we do the same for log-depth circuits.

2. Second, we present functions that are hard to compute on average for algorithms running in some fixed polynomial time, assuming widely-conjectured worst-case hardness of certain problems from the study of fine-grained complexity. We also construct a proof-of-work protocol based on this hardness and certain structural properties of our functions.
Patil/Kiva G449
July 02
Improving Web Applications with Fine-grained Data Flows
Ravi Netravali
MIT, CSAIL
Add to Calendar
2018-07-02 14:00:00
2018-07-02 15:00:00
America/New_York
Improving Web Applications with Fine-grained Data Flows
Abstract: Web applications are integral to today's society, hosting a variety of services ranging from banking and e-commerce to mapping and social media. To support these rich services, web applications have evolved into complex distributed systems, making critical tasks such as performance optimization and debugging difficult.

In this talk, I will describe how we can address this growing complexity by efficiently identifying and analyzing the fine-grained, distributed data flows in web applications. Tracking data flows at the granularity of individual pieces of program state, like JavaScript variables on the client side and key/value pairs in storage systems on the server side, provides invaluable insights into the low-level behavior of complex web services. This information enables a variety of systems with new, more powerful performance optimizations and debugging techniques. I will describe two such systems that we have built. The first is Polaris, a web page load optimizer that identifies data dependencies between web objects to improve browser request scheduling and reduce page load times by 34%-59%. I will then discuss Vesper, the first system to accurately measure how quickly web pages become interactive for users. Vesper uses fine-grained data flows to automatically identify a page's interactive state and reduce page time-to-interactivity by 32%.

Bio: Ravi Netravali is a Ph.D. student at MIT, advised by Professors Hari Balakrishnan and James Mickens. He will join the UCLA Computer Science Department as an assistant professor in the fall of 2019. His research interests are in computer systems and networks, with a recent focus on building practical systems to improve the performance and debugging of large-scale, distributed web applications. He is a recipient of the 2017 Qualcomm Innovation Fellowship, and shared the Internet Research Task Force's Applied Networking Research Prize in 2018.
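The scheduling idea behind Polaris can be pictured with a toy dependency graph (hypothetical object names; this is generic topological ordering, not Polaris's actual fine-grained tracking): once the data dependencies between page objects are known, objects can be fetched as soon as their inputs are ready instead of being discovered late during parsing.

```python
# Toy sketch: Kahn's algorithm orders page objects so that every object
# is loaded only after everything it depends on.

def load_order(objects, deps):
    """deps: dict mapping an object to the set of objects it depends on."""
    remaining = {o: set(deps.get(o, ())) for o in objects}
    order = []
    ready = sorted(o for o, d in remaining.items() if not d)
    while ready:
        o = ready.pop(0)
        order.append(o)
        newly = []
        for other, d in remaining.items():
            if o in d:
                d.remove(o)
                if not d:
                    newly.append(other)
        ready.extend(sorted(newly))
    if len(order) != len(objects):
        raise ValueError("cyclic dependencies")
    return order
```

In a real browser the win comes from issuing independent fetches concurrently; the topological order is what guarantees that doing so never violates a data dependency.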
Star (32-D463)
June 18
Connections between Circuit Analysis Problems and Circuit Lower Bounds
Cody Murray
MIT CSAIL
Add to Calendar
2018-06-18 15:00:00
2018-06-18 16:00:00
America/New_York
Connections between Circuit Analysis Problems and Circuit Lower Bounds
Abstract: A circuit analysis problem takes a Boolean function f as input (where f is represented either as a logical circuit or as a truth table) and determines some interesting property of f. Examples of circuit analysis problems include Circuit Satisfiability, Circuit Composition, and the Minimum Circuit Size Problem (MCSP). A circuit lower bound presents an interesting function f and shows that no "easy" family of logical circuits can compute f correctly on all inputs, for some definition of "easy". Lower bounds are infamously hard to prove, but are of significant interest for understanding computation.

In this thesis, we derive new connections between circuit analysis problems and circuit lower bounds, and use them to prove new lower bounds for various well-studied circuit classes. We show how faster algorithms for Circuit Satisfiability can imply non-uniform lower bounds for functions in NP and related classes. We prove that MCSP cannot be NP-hard under the "local" gadget reductions of the kind that appear in textbooks, and that if MCSP were proved NP-hard in the usual (polynomial-time reduction) sense, longstanding lower bounds in other regimes would follow. We also prove that natural versions of the Circuit Composition problem do not have small circuits that are constructible in very small (logarithmic) space.
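The baseline for Circuit Satisfiability is exhaustive search over all 2^n inputs, sketched below (with the circuit abstracted as a black-box Boolean function); the connections described above show that even algorithms beating this brute force by a modest margin already imply circuit lower bounds.

```python
# Brute-force Circuit-SAT: try every assignment of the n input bits.

from itertools import product

def circuit_sat(n_inputs, circuit):
    """circuit: any function from an n-bit tuple to a truthy/falsy value.
    Returns True iff some assignment satisfies it."""
    return any(bool(circuit(bits)) for bits in product((0, 1), repeat=n_inputs))
```

For example, an algorithm running in time 2^n / n^10 on circuits of polynomial size, while useless in practice, would already be enough to trigger the lower-bound consequences of this style of result.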
32-D463 (Star)
High-resolution Tactile Sensing for Robotic Perception
Wenzhen Yuan
Department of Mechanical Engineering, MIT
Add to Calendar
2018-06-18 10:00:00
2018-06-18 11:00:00
America/New_York
High-resolution Tactile Sensing for Robotic Perception
Abstract: Why is it so difficult for present-day robots to act intelligently in real-world environments? A major challenge lies in the lack of adequate tactile sensing technologies. Robots need tactile sensing to understand the physical environment and to detect contact states during manipulation. A recently developed high-resolution tactile sensor, GelSight, which measures detailed information about the geometry and traction field of the contact surface, shows substantial potential for extending the applications of robot tactile sensing. The major questions are: (1) What physical information is available from the high-resolution sensor? (2) How can the robot interpret and use this information?

This thesis aims at addressing the two questions above. On the one hand, tactile feedback helps robots with manipulation tasks: I investigate various techniques for detecting incipient slip and full slip during contact with objects. On the other hand, tactile sensing also helps a robot to better understand the physical environment, i.e., exploration tasks: I will present my work on using tactile sensing to estimate the hardness of arbitrary objects and on making a robot autonomously explore the comprehensive properties of common clothing. I also show our work on unsupervised exploration of latent properties of fabrics through cross-modal learning with vision and touch.
32-G449
May 21
Modern Challenges in Distribution Testing
Gautam Kamath
CSAIL MIT
Add to Calendar
2018-05-21 11:00:00
2018-05-21 12:00:00
America/New_York
Modern Challenges in Distribution Testing
Abstract: Given samples from an unknown distribution p, does it satisfy some hypothesis? This question has received enormous attention in statistics, information theory, and theoretical computer science. Classically, hypothesis testing has been studied in the asymptotic setting. However, a flurry of recent work has focused on the non-asymptotic setting, determining the sample complexity required to achieve non-trivial error rates. In my thesis, I develop methods for hypothesis testing in a number of settings of modern interest, with a focus on sample and computational efficiency. Settings considered include testing against composite hypotheses, testing with multivariate data, testing on sensitive data, and testing when we have stronger access to the underlying distribution. I will discuss our results as well as interesting directions for future study.
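A classical example of a non-asymptotic tester in this literature is the collision-based uniformity tester, sketched below as background (a standard textbook construction, not the thesis's own estimators): a distribution far from uniform has noticeably more pairwise sample collisions than the uniform distribution's 1/n rate.

```python
# Toy sketch: the fraction of colliding sample pairs estimates sum_x p(x)^2,
# which equals 1/n exactly when p is uniform over a domain of size n.

from itertools import combinations

def collision_rate(samples):
    """Fraction of unordered sample pairs that take the same value."""
    pairs = list(combinations(samples, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

def looks_uniform(samples, domain_size, slack=1.5):
    """Accept if the empirical collision rate is at most slack/domain_size."""
    return collision_rate(samples) <= slack / domain_size
```

The sample-complexity question the abstract refers to is exactly how many samples such a statistic needs before its accept/reject decision is reliable at a given distance from uniformity.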
32-D463 (Star)
May 18
"Building and Controlling Fluidically Actuated Soft Robots: From Open Loop to Model-based Control"
Robert Kevin Katzschmann
CSAIL
Add to Calendar
2018-05-18 14:30:00
2018-05-18 15:30:00
America/New_York
"Building and Controlling Fluidically Actuated Soft Robots: From Open Loop to Model-based Control"
Abstract: This thesis describes the creation and control of soft robots made of deformable elastomer materials and powered by fluidics. We embed soft fluidic actuators into self-contained soft robotic systems, such as fish for underwater exploration or soft arms for dynamic manipulation. We present models describing the physical characteristics of these continuously deformable and fully soft robots, and then leverage these models for motion planning and closed-loop feedback control in order to realize quasi-static manipulation, dynamic arm motions, and dynamic interactions with an environment.

The design and fabrication techniques for our soft robots include the development of soft actuator morphologies, soft casting techniques, and closed-circuit pneumatic and hydraulic powering methods. With a modular design approach, we combine these soft actuator morphologies into robotic systems. We create a robotic fish for underwater locomotion, as well as multi-finger hands and multi-segment arms for use in object manipulation and interaction with an environment. The robotic fish uses a soft hydraulic actuator as its deformable tail to perform open-loop controlled swimming motions through cyclic undulation. The swimming movement of the robotic fish is enabled by a custom-made displacement pump and a custom-made buoyancy control unit, all embedded within a fish-like underwater system. The fish robot receives high-level control commands via acoustic signals to move in marine environments.

The control of the multi-segment arms is enabled by models describing their geometry, kinematics, impedance, and dynamics. We use these models for quasi-static and dynamic closed-loop control of the soft arms. The quasi-static controllers work in combination with the kinematic models and geometric motion planners to enable the soft arms to move in confined spaces and to autonomously perform object grasping. Leveraging the models for impedance and dynamics, we also demonstrate dynamic arm motions and end-effector interactions of the arm with an environment. Our dynamic model allows control techniques developed for rigid robots to be applied to the dynamic control of soft robots. The resulting model-based closed-loop controllers enable dynamic curvature tracking as well as surface following in Cartesian space.

Committee:
Daniela Rus (Advisor), Andrew and Erna Viterbi Professor of EECS, rus@csail.mit.edu
Russ Tedrake (Chair), Toyota Professor of EECS, Aero/Astro, MechE, russt@mit.edu
Anette Hosoi, Neil and Jane Pappalardo Professor of Mechanical Engineering, peko@mit.edu
John Leonard, Samuel C. Collins Professor of Mechanical and Ocean Engineering, jleonard@mit.edu
Seminar Room G449 (Patil/Kiva)
May 17
Why Do Approximate Algorithms Work Well in Machine Learning?
Ludwig Schmidt
CSAIL & EECS
Add to Calendar
2018-05-17 15:00:00
2018-05-17 16:00:00
America/New_York
Why Do Approximate Algorithms Work Well in Machine Learning?
Abstract: Many success stories in machine learning share an intriguing algorithmic phenomenon: while the core algorithmic problems might seem costly to solve or even intractable at first, simple heuristics or approximation algorithms often perform surprisingly well in practice. Common examples include optimizing over non-convex functions or non-convex sets. Even in convex problems, we often settle for sub-optimal solutions returned by stochastic gradient descent. And in nearest neighbor search, a variety of algorithms work remarkably well considering the “curse of dimensionality”.

In this thesis, we study this phenomenon in the context of three algorithmic problems that appear widely in the data sciences: constrained optimization (what non-convex sets can we optimize over?), unconstrained optimization (why don’t we compute high-accuracy ERM solutions?), and nearest neighbor search (can the LSH framework explain the empirical state of the art?). The common theme is that the computational hardness of many algorithmic problems appears only below the inherent noise floor of the overall statistical problem.
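The LSH framework mentioned above can be illustrated with the standard random-hyperplane construction for cosine similarity (shown as background, not as the thesis's contribution): nearby vectors agree on most hyperplane sign bits, so hash buckets group approximate neighbors.

```python
# Toy sketch: random-hyperplane LSH. Each bit records which side of a
# random hyperplane a vector falls on; similar directions share most bits.

import random

def make_hash(dim, n_bits, seed=0):
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    def h(v):
        return tuple(int(sum(p_i * v_i for p_i, v_i in zip(p, v)) >= 0)
                     for p in planes)
    return h
```

Because each bit depends only on the sign of a dot product, the signature is invariant to positive rescaling of the vector, and the probability that two vectors agree on a bit is a simple function of the angle between them; that trade-off between collision probability and query time is exactly what the thesis interrogates.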
32-D463 (Star)
Thesis Defense: Large-Scale Probabilistic Aerial Reconstruction
Randi Cabezas
MIT CSAIL
Add to Calendar
2018-05-17 13:00:00
2018-05-17 14:00:00
America/New_York
Thesis Defense: Large-Scale Probabilistic Aerial Reconstruction
Abstract: While much emphasis has been placed on large-scale 3D scene reconstruction from a single data source such as images or distance sensors, models that jointly utilize multiple data types remain largely unexplored. In this work, we will present a Bayesian formulation of scene reconstruction from multi-modal data, as well as two critical components that enable large-scale reconstructions with adaptive resolution and high-level scene understanding with meaningful prior-probability distributions.

Our first contribution is to formulate the 3D reconstruction problem within the Bayesian framework. We develop an integrated probabilistic model that allows us to naturally represent uncertainty and to fuse complementary information provided by different sensor modalities (imagery and LiDAR). Maximum-a-posteriori inference within this model leverages GPGPUs for efficient likelihood evaluations. Our dense reconstructions (triangular meshes with texture information) are feasible with fewer observations of a given modality, by relying on the other modalities, without sacrificing quality.

Secondly, to enable large-scale reconstructions, our formulation supports adaptive resolution in both appearance and geometry. This choice is motivated by the need for a representation that can adjust to wide variability in data quality and availability. By coupling edge transformations within a reversible-jump MCMC framework, we allow changes in the number of triangles and in mesh connectivity. We demonstrate that these data-driven updates lead to more accurate representations while reducing modeling assumptions and using fewer triangles.

Lastly, to enable high-level scene understanding, we include a categorization of reconstruction elements in our formulation. This scene-specific classification of triangles is estimated from semantic annotations (which are noisy and incomplete) and other scene features (e.g., geometry and appearance). The categorization provides a class-specific prior-probability distribution, thus helping to obtain more accurate and interpretable representations by regularizing the reconstruction. Collectively, these models enable complex reasoning about urban scenes by fusing all available data across modalities, a crucial necessity for future autonomous agents and large-scale augmented-reality applications.

Committee: John W. Fisher, Polina Golland, and John J. Leonard
32-G449 (Kiva)
May 16
Protecting User Data in Large-Scale Web Services
Frank Wang
MIT CSAIL
Add to Calendar
2018-05-16 15:30:00
2018-05-16 16:30:00
America/New_York
Protecting User Data in Large-Scale Web Services
Abstract: Web services like Google, Facebook, and Dropbox are now an essential part of people’s lives. In order to provide value to users, these services collect, store, and analyze large amounts of their users’ sensitive data. However, once a user provides her information to the web service, she loses control over how the application manipulates that data. For example, a user cannot control where the application forwards her data. Even if the service wanted to allow users to define access controls, it is unclear how these access controls should be expressed and enforced. Not only is it difficult to develop such secure access control mechanisms, but it is also difficult to ensure these mechanisms are practical. My research addresses these concerns. More specifically, it focuses on building practical, secure mechanisms for protecting user data in large-scale, distributed web services.

Bio: Frank Wang is a Ph.D. student at MIT CSAIL, advised by Nickolai Zeldovich and James Mickens. He completed his undergraduate studies at Stanford University, focusing on applied cryptography. He runs the MIT security seminar and co-founded a summer program for early-stage security companies called Cybersecurity Factory.

Committee: Nickolai Zeldovich, James Mickens, Vinod Vaikuntanathan
32-G882
Justin Holmgren: Securing Computation on Untrusted Platforms
Justin Holmgren
Add to Calendar
2018-05-16 12:30:00
2018-05-16 14:30:00
America/New_York
Justin Holmgren: Securing Computation on Untrusted Platforms
Abstract: In today's networked world, weak devices increasingly rely on remote servers both to store data and to perform costly computations. Unfortunately, these servers may be easily hackable or otherwise untrustworthy. Therefore, without assuming honest behavior on the server's part, we would like to guarantee two basic security objectives:

1. (Correctness) It is possible to verify the correctness of the server's computations much more efficiently than by re-executing the computation.
2. (Privacy) A server learns nothing about the computation it performs, other than (perhaps) the output.

I will present recent results that achieve both these goals for arbitrary computations, and I will conclude with a discussion of open problems and future directions.

Thesis Committee: Ran Canetti, Shafi Goldwasser, and Vinod Vaikuntanathan
Patil/Kiva G449
May 14
Improving Clinical Decisions Using Correspondence Within and Across Electronic Health Records
Jen Gong
MIT - CSAIL
Add to Calendar
2018-05-14 12:00:00
2018-05-14 13:00:00
America/New_York
Improving Clinical Decisions Using Correspondence Within and Across Electronic Health Records
Abstract: Electronic Health Record (EHR) adoption and large-scale retrospective analyses of health care data are part of a broader conversation about health care quality and cost in the United States. Clinical decision-making aids are one method of helping to improve the quality and lower the cost of care. In this thesis, we present three methods of leveraging correspondences across elements in health care records to aid clinicians in making care decisions. We focus on the critical care environment, where patient state can change rapidly and many care decisions must be made in short periods of time.

First, we introduce a method to leverage correspondences between structured fields from two different EHR systems and a shared space of clinical concepts encoded in an existing domain ontology. We use these correspondences to enable the transfer of machine learning models across different or evolving EHR systems. Second, we introduce a method to learn correspondences between structured health record data and topic distributions of clinical notes written by care team members. Finally, we present a method to characterize care processes by learning correspondences between observations of patient state and actions taken by care team members.

Bio: Jen Gong is a Ph.D. candidate in the Data Driven Inference Group at MIT, supervised by John Guttag. Her research focuses on the application of machine learning to healthcare. She is interested in how different modalities of health care data (e.g., structured health record data, clinical notes, physiological time series) and auxiliary sources (e.g., data from similar patient populations, expert-encoded ontologies) can be leveraged to improve clinical decision-making aids. Prior to MIT, Jen received an A.B. in Applied Mathematics from Harvard College.

Committee: John Guttag, Collin Stultz, Jenna Wiens
Star (32-D463)