
## Algorithms Group

We devise new mathematical tools to tackle the increasingly difficult and important problems we pose to computers.


### 20 Group Results


The MIT Center for Deployable Machine Learning (CDML) works towards creating AI systems that are robust, reliable and safe for real-world deployment.

Our interests span quantum complexity theory, barriers to solving P versus NP, theoretical computer science with a focus on probabilistically checkable proofs (PCP), pseudo-randomness, coding theory, and algorithms.

Our lab focuses on designing algorithms to gain biological insights from advances in automated data collection and the subsequent large data sets drawn from them.

Our group’s goal is to create new mathematical models, based on microscopic connectivity and functional data, that explain how neural tissue computes.

This community is interested in understanding and affecting the interaction between computing systems and society through engineering, computer science and public policy research, education, and public engagement.

We are investigating decentralized technologies that effect social change.

We aim to develop the science of autonomy toward a future with robots and AI systems integrated into everyday life, supporting people with cognitive and physical tasks.

Our group studies geometric problems in computer graphics, computer vision, machine learning, optimization, and other disciplines.

We are an interdisciplinary group of researchers blending approaches from human-computer interaction, social computing, databases, and information management.

Our projects are centered around the problems of navigation and mapping for autonomous mobile robots operating in underwater and terrestrial environments.

We develop techniques for designing, implementing, and reasoning about multiprocessor algorithms, in particular concurrent data structures for multicore machines and the mathematical foundations of the computation models that govern their behavior.

Our research interests center around the capabilities and limits of quantum computers, and computational complexity theory more generally.

We investigate the technologies that support scalable high-performance computing, including hardware, software, and theory.

The Systems CoR is focused on building and investigating large-scale software systems that power modern computers, phones, data centers, and networks, including operating systems, the Internet, wireless networks, databases, and other software infrastructure.

The goal of the Theory of Computation CoR is to study the fundamental strengths and limits of computation as well as how these interact with mathematics, computer science, and other disciplines.

We work on a wide range of problems in distributed computing theory. We study algorithms and lower bounds for typical problems that arise in distributed systems, such as resource allocation, implementing shared-memory abstractions, and reliable communication.

This CoR takes a unified approach to cover the full range of research areas required for success in artificial intelligence, including hardware, foundations, software systems, and applications.

Led by Web inventor and Director Tim Berners-Lee and CEO Jeff Jaffe, the W3C focuses on leading the World Wide Web to its full potential by developing standards, protocols, and guidelines that ensure the long-term growth of the Web.

### 28 Project Results

We aim to develop a systematic framework for robots to build models of the world and to use these to make effective and safe choices of actions to take in complex scenarios.

Our goal is to develop an adaptive storage manager for analytical database workloads in a distributed setting. It works by partitioning datasets across a cluster and incrementally refining data partitioning as queries are run.
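The core idea of incremental repartitioning can be illustrated with a toy sketch: track how often each partition is queried and split the hot ones, so frequently accessed key ranges get progressively finer partitions. This is a hypothetical illustration of the general technique, assuming a simple range-partitioned integer key space; the class and method names are illustrative, not the project's actual API.

```python
class AdaptivePartitioner:
    """Toy adaptive range partitioner: splits partitions that get hot."""

    def __init__(self, lo, hi, max_hot=3):
        # Start with one partition covering the whole key range [lo, hi).
        self.bounds = [lo, hi]
        self.hits = {0: 0}      # query count per partition index
        self.max_hot = max_hot  # split a partition after this many hits

    def _find(self, key):
        # Locate the partition whose range contains key.
        for i in range(len(self.bounds) - 1):
            if self.bounds[i] <= key < self.bounds[i + 1]:
                return i
        raise KeyError(key)

    def query(self, key):
        # Record the access; once a partition is "hot", split it at its
        # midpoint so future queries touch a narrower key range.
        i = self._find(key)
        self.hits[i] += 1
        if self.hits[i] >= self.max_hot and self.bounds[i + 1] - self.bounds[i] > 1:
            mid = (self.bounds[i] + self.bounds[i + 1]) // 2
            self.bounds.insert(i + 1, mid)
            # Reset counters after the split (partition indices shift).
            self.hits = {j: 0 for j in range(len(self.bounds) - 1)}
        return i
```

A real system would also migrate data between nodes when boundaries move and weigh that cost against the expected query speedup; the sketch only captures the "refine as queries arrive" feedback loop.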

The project concerns algorithmic solutions for writing fast code.

Aurum is a data discovery system that works at large scale, helping people find relevant data.

We study the fundamentals of Bayesian optimization and develop efficient Bayesian optimization methods for global optimization of expensive black-box functions originating from a range of applications.
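The standard Bayesian optimization loop fits a probabilistic surrogate (commonly a Gaussian process) to the evaluations seen so far, then picks the next point by maximizing an acquisition function such as expected improvement. The following is a minimal, self-contained sketch of that loop in NumPy, assuming a 1-D search interval and a grid-based acquisition maximization; it illustrates the general method, not the group's specific algorithms.

```python
import math
import numpy as np

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel between 1-D point arrays a and b.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Gaussian-process regression: posterior mean/std at test points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)  # k(x, x) = 1 for this kernel
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI acquisition for minimization.
    z = (best - mu) / sigma
    Phi = np.array([0.5 * (1.0 + math.erf(zi / math.sqrt(2))) for zi in z])
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    return (best - mu) * Phi + sigma * phi

def bayes_opt(f, lo=0.0, hi=1.0, n_init=3, n_iter=10, seed=0):
    # A few random evaluations, then iterate: fit GP, maximize EI, evaluate.
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, n_init)
    y = np.array([f(x) for x in X])
    grid = np.linspace(lo, hi, 201)
    for _ in range(n_iter):
        mu, sigma = gp_posterior(X, y, grid)
        x_next = grid[int(np.argmax(expected_improvement(mu, sigma, y.min())))]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    i = int(np.argmin(y))
    return X[i], y[i]
```

The point of the surrogate is sample efficiency: each step spends cheap GP math to decide where the next expensive evaluation of `f` is most worthwhile.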

Traffic is not just a nuisance for drivers: it’s also a public health hazard and bad news for the economy.

BlueDBM is an architecture of computer clusters consisting of fast distributed flash storage and in-storage accelerators, which often outperforms larger and more expensive clusters in applications such as graph analytics.

This project aims to design parallel algorithms for shared-memory machines that are efficient both in theory and also in practice.

We plan to develop a suite of graph compression and reordering techniques as part of the Ligra parallel graph processing framework to reduce space usage and improve performance of graph algorithms.
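One standard technique in this space is difference (delta) encoding of sorted adjacency lists combined with a variable-length byte code, which shrinks graphs whose neighbor IDs cluster together. A hypothetical sketch, not Ligra's actual encoder:

```python
def encode_varint(n):
    # Encode a non-negative int as little-endian base-128 bytes,
    # using the high bit of each byte as a continuation flag.
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

def compress_adjacency(neighbors):
    # Sort the neighbor IDs, then store the gap from the previous ID
    # (starting at 0, so the first gap is the first ID itself).
    out = bytearray()
    prev = 0
    for v in sorted(neighbors):
        out += encode_varint(v - prev)
        prev = v
    return bytes(out)

def decompress_adjacency(data):
    # Invert the gap + varint encoding.
    out, cur, shift, acc = [], 0, 0, 0
    for b in data:
        acc |= (b & 0x7F) << shift
        shift += 7
        if not b & 0x80:
            cur += acc
            out.append(cur)
            acc, shift = 0, 0
    return out
```

Because gaps between sorted neighbors are small, most entries fit in one byte even when the raw vertex IDs need four; reordering techniques push in the same direction by relabeling vertices so that neighbors get nearby IDs.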

Our goal is to design novel data compression techniques to accelerate popular machine learning algorithms in Big Data and streaming settings.

Data scientists universally report that they spend at least 80% of their time finding data sets of interest, accessing them, cleaning them and assembling them into a unified whole.

Historically, DBMSs in the warehouse space partitioned their data across a shared-nothing cluster.

The conventional wisdom described in all textbooks for performing database design is never followed in practice.

Wikipedia is one of the most widely accessed encyclopedia sites in the world, including by scientists. Our project aims to investigate just how far Wikipedia’s influence goes in shaping science.

We aim to understand theory and applications of diversity-inducing probabilities (and, more generally, "negative dependence") in machine learning, and develop fast algorithms based on their mathematical properties.

Developing state-of-the-art tools that process 3D surfaces and volumes

We are designing new parallel algorithms, optimizations, and frameworks for clustering large-scale graph and geometric data.

We're developing a flexible, high-performance storage architecture for database-backed applications, based on a dynamic set of developer-specified queries that Soup automatically optimizes.

Linking probability with geometry to improve the theory and practice of machine learning

Gerrymandering is a direct threat to our democracy, undermining founding principles like equal protection under the law and eroding public confidence in elections.

Printable Hydraulics allows fluid-actuated robots to be automatically fabricated using 3D printers.

We plan to develop a programming abstraction to enable programmers to write efficient parallel programs to process dynamic graphs.

### 37 People Results


### 15 News Results

Through innovation in software and hardware, researchers move to reduce the financial and environmental costs of modern artificial intelligence.

Storage tool adapts to what its datasets’ users want to search.

Research aims to make it easier for self-driving cars, robotics, and other applications to understand the 3D world.

MIT system “learns” how to optimally allocate workloads across thousands of servers to cut costs, save energy.

Speakers — all women — discuss everything from gravitational waves to robot nurses

CSAIL’s "RoCycle" system uses in-hand sensors to detect if an object is paper, metal or plastic.

New architecture promises to cut in half the energy and physical space required to store and manage user data.

Last week MIT’s Institute for Foundations of Data Science (MIFODS) held an interdisciplinary workshop aimed at tackling the underlying theory behind deep learning. Led by MIT professor Aleksander Madry, the event focused on a number of research discussions at the intersection of math, statistics, and theoretical computer science.

CSAIL’s approach uses algorithms and “2.5-D” sketches to let computers visualize images from any perspective

Harini Suresh, a PhD student at MIT CSAIL, studies how to make machine learning algorithms more understandable and less biased.

CSAIL's NanoMap system enables drones to avoid obstacles while flying at 20 miles per hour, by more deeply integrating sensing and control.

This week it was announced that MIT professors and CSAIL principal investigators Shafi Goldwasser, Silvio Micali, Ronald Rivest, and former MIT professor Adi Shamir won this year’s BBVA Foundation Frontiers of Knowledge Awards in the Information and Communication Technologies category for their work in cryptography.

This week the Association for Computing Machinery presented CSAIL principal investigator Daniel Jackson with the 2017 ACM SIGSOFT Outstanding Research Award for his pioneering work in software engineering. (This fall he also received the ACM SIGSOFT Impact Paper Award for his research method for finding bugs in code.) An EECS professor and associate director of CSAIL, Jackson was given the Outstanding Research Award for his “foundational contributions to software modeling, the creation of the modeling language Alloy, and the development of a widely used tool supporting model verification.”

CSAIL researchers recently helped develop "STEALTH," a system that uses artificial intelligence to combat tax evasion by corporations.

When a power company wants to build a new wind farm, it generally hires a consultant to make wind speed measurements at the proposed site for eight to 12 months. Those measurements are correlated with historical data and used to assess the site’s power-generation capacity.

This month CSAIL researchers will present a new statistical technique that yields better wind-speed predictions than existing techniques do — even when it uses only three months’ worth of data. That could save power companies time and money, particularly in the evaluation of sites for offshore wind farms, where maintaining measurement stations is particularly costly.