We focus on finding novel approaches to improve the performance of modern computer systems without unduly increasing the complexity faced by application developers, compiler writers, or computer architects.
(This project is no longer active.) The T-1000, a prototype system of a thousand realistic processors embedded throughout an ensemble of interconnected FPGAs, seeks to demonstrate the scalability of timestamp-based cache coherence protocols on distributed shared memory systems.
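The blurb names timestamp-based coherence without saying what it is. Below is a minimal, hypothetical sketch of the lease-based idea behind protocols in this family (in the style of Tardis); the class names, the lease length, and the single-line model are our own simplifications, not the T-1000's actual protocol.

```python
# Sketch of timestamp-based (lease-style) cache coherence.
# Each cache line carries a write timestamp (wts) and a read lease (rts);
# each core keeps a logical program timestamp (pts). Reads extend the
# lease instead of tracking sharers; writes logically jump past the
# lease, so no invalidation messages are needed. Names are illustrative.

LEASE = 10  # how far a read extends a line's lease, in logical time

class Line:
    def __init__(self, value=0):
        self.value = value
        self.wts = 0   # logical time of the last write
        self.rts = 0   # end of the current read lease

class Core:
    def __init__(self):
        self.pts = 0   # this core's logical program timestamp

    def load(self, line):
        # A read is ordered after the last write...
        self.pts = max(self.pts, line.wts)
        # ...and reserves the line up to pts + LEASE.
        line.rts = max(line.rts, self.pts + LEASE)
        return line.value

    def store(self, line, value):
        # A write must be ordered after every leased read, so it
        # simply advances past the lease instead of invalidating.
        self.pts = max(self.pts, line.rts + 1)
        line.wts = line.rts = self.pts
        line.value = value

a = Line()
reader, writer = Core(), Core()
reader.load(a)        # leases the line up to logical time 10
writer.store(a, 42)   # jumps to logical time 11; no invalidation traffic
print(a.wts, reader.pts, writer.pts)  # -> 11 0 11
```

The point of the design, and the reason it is attractive at thousand-processor scale, is that ordering is enforced by comparing timestamps locally rather than by broadcasting invalidations across the machine.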
Alloy is a language for describing structures and a tool for exploring them. It has been used in a wide range of applications, from finding holes in security mechanisms to designing telephone switching networks. Hundreds of projects have used Alloy for design analysis, verification, simulation, and as a backend for many other kinds of analysis and synthesis tools; it is currently taught in courses worldwide.
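To give a flavor of what such structure exploration means, here is a toy, hand-rolled version of Alloy's "small scope" idea in Python: exhaustively check a conjecture over every structure up to a small bound and report a counterexample if one exists. (Alloy itself expresses models in its own relational language and compiles the check to SAT; this brute-force sketch only illustrates the idea.)

```python
# Small-scope analysis by brute force: enumerate all binary relations
# on a tiny universe and search for a counterexample to a conjecture.

from itertools import combinations

def relations(universe):
    """Yield every binary relation on the universe."""
    pairs = [(a, b) for a in universe for b in universe]
    for k in range(len(pairs) + 1):
        for subset in combinations(pairs, k):
            yield set(subset)

def symmetric(r):
    return all((b, a) in r for (a, b) in r)

def transitive(r):
    return all((a, d) in r for (a, b) in r for (c, d) in r if b == c)

def reflexive(r, universe):
    return all((x, x) in r for x in universe)

# Conjecture to check: every symmetric, transitive relation is reflexive.
universe = [0, 1]
for r in relations(universe):
    if symmetric(r) and transitive(r) and not reflexive(r, universe):
        print("counterexample:", r)  # the empty relation already refutes it
        break
```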
Self-driving cars are likely to be safer, on average, than human-driven cars. But they may fail in new and catastrophic ways that a human driver could prevent. This project is designing a new architecture for a highly dependable self-driving car.
The Arabic language is spoken by hundreds of millions of people around the world. Arabic presents a variety of challenges for speech and language processing technologies. In our group, we have several research topics examining Arabic, including dialect identification, speech recognition, machine translation, and language processing.
To design synthetic organs that function autonomously, we will need to engineer artificial tissue homeostasis. Controlling the size of these artificial tissues will require engineering two major mechanisms.
Automatic speech recognition (ASR) has been a grand-challenge machine learning problem for decades. Our ongoing research in this area examines the use of deep learning models for distant and noisy recording conditions and for multilingual and low-resource scenarios.
BlueDBM is a cluster architecture built around fast distributed flash storage and in-storage accelerators; such clusters often outperform larger, more expensive ones in applications such as graph analytics.
We aim to study how computer-supported roleplaying can change the social perspectives of digital media users. Such media could take the form of videogames, VR systems, training software, and other types of interactive narrative technology.
Knitting is the new 3D printing. It has become popular again with the widespread availability of patterns and templates and the rise of the maker movement. Lower-cost industrial knitting machines are starting to emerge, but we are still missing the corresponding design tools. Our goal is to fill this gap.
This project takes a mixed-methods approach, combining qualitative methods (interviews and coding) with computational (AI) ones, to understand the relationships between social identities, cultural values, and virtual identity technologies (e.g., online profiles and avatars).
Déjà Vu is a new platform for end-user development of apps with rich functionality. It features a novel theory of modularity for binding concepts; an extensive library of reusable concepts; and a WYSIWYG tool for specifying bindings and customizing visual layout.
Last week MIT’s Institute for Foundations of Data Science (MIFODS) held an interdisciplinary workshop aimed at tackling the underlying theory behind deep learning. Led by MIT professor Aleksander Madry, the event focused on a number of research discussions at the intersection of math, statistics, and theoretical computer science.
Last week CSAIL hosted the second installment of its “Hot Topics in Computing” speaker series, a monthly forum where computing experts hold discussions with community members on various topics in the computer science field.
Neural networks, which learn to perform computational tasks by analyzing huge sets of training data, have been responsible for the most impressive recent advances in artificial intelligence, including speech-recognition and automatic-translation systems.
We live in the age of big data, but most of that data is “sparse.” Imagine, for instance, a massive table that mapped all of Amazon’s customers against all of its products, with a “1” for each product a given customer bought and a “0” otherwise. The table would be mostly zeroes.
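To make the sparsity point concrete, here is a minimal sketch (with invented, toy numbers) of why sparse formats store only the coordinates of the nonzero entries rather than the whole table:

```python
# Sparse storage sketch: instead of a dense customers-by-products table
# that is mostly zeros, keep only the (customer, product) pairs that
# are actually ones. All numbers below are toy values for illustration.

n_customers, n_products = 1_000_000, 100_000

# Dense representation: one cell per customer-product pair.
dense_cells = n_customers * n_products          # 10**11 entries, mostly zero

# Sparse (coordinate-style) representation: one pair per purchase.
purchases = {(0, 42), (0, 7), (3, 42)}          # hypothetical purchases
sparse_cells = len(purchases)                   # 3 entries

def bought(customer, product):
    return 1 if (customer, product) in purchases else 0

print(bought(0, 42), bought(1, 42))   # -> 1 0
print(dense_cells // sparse_cells)    # dense cells per stored entry
```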
In a traditional computer, a microprocessor is mounted on a “package,” a small circuit board with a grid of electrical leads on its bottom. The package snaps into the computer’s motherboard, and data travels between the processor and the computer’s main memory bank through the leads.
Most modern websites store data in databases, and since database queries are relatively slow, most sites also maintain so-called cache servers, which store the results of common queries for faster access. A data center for a major web service such as Google or Facebook might have as many as 1,000 servers dedicated just to caching.
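The caching pattern itself is simple to sketch. Below is a minimal in-process stand-in (the SQL string and the 100 ms latency are invented for illustration; production sites use dedicated cache servers such as memcached rather than an in-process dictionary):

```python
# Minimal sketch of a query cache in front of a slow database:
# look up the result first, fall back to the database on a miss,
# and evict the least recently used entry when full.

from collections import OrderedDict
import time

def slow_db_query(sql):
    time.sleep(0.1)               # stand-in for a real database round trip
    return f"rows for {sql!r}"

class QueryCache:
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.entries = OrderedDict()   # insertion order doubles as LRU order

    def get(self, sql):
        if sql in self.entries:
            self.entries.move_to_end(sql)     # mark as recently used
            return self.entries[sql]          # fast path: cache hit
        result = slow_db_query(sql)           # slow path: ask the database
        self.entries[sql] = result
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        return result

cache = QueryCache()
cache.get("SELECT * FROM posts LIMIT 10")   # ~100 ms: goes to the database
cache.get("SELECT * FROM posts LIMIT 10")   # microseconds: served from cache
```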
When organic chemists identify a useful chemical compound — a new drug, for instance — it’s up to chemical engineers to determine how to mass-produce it. There could be 100 different sequences of reactions that yield the same end product. But some of them use cheaper reagents and lower temperatures than others, and perhaps most importantly, some are much easier to run continuously, with technicians occasionally topping up reagents in different reaction chambers.
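Framed as an optimization problem, route selection looks roughly like the hypothetical sketch below; the routes, costs, and scoring weights are entirely invented, and real systems model reaction networks in far more detail than a lookup table.

```python
# Hypothetical sketch of comparing candidate synthesis routes.
# Each route is summarized by reagent cost, peak temperature, and
# whether it can run continuously; lower scores are better.

routes = {
    # name: (reagent cost in $/kg, max temperature in °C, continuous?)
    "route_a": (120.0, 180, False),
    "route_b": (180.0, 90, True),
    "route_c": (95.0, 150, False),
}

def score(cost, temp, continuous):
    # Continuous operation earns a large discount because technicians
    # only need to top up reagents occasionally.
    return cost + 0.5 * temp - (100 if continuous else 0)

best = min(routes, key=lambda name: score(*routes[name]))
print(best)   # route_b: pricier reagents, but cooler and continuous
```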
Communicating through computers has become an extension of our daily reality. But as speaking via screens has become commonplace, our exchanges are losing inflection, body language, and empathy. Danielle Olson ’14, a first-year PhD student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), believes we can make digital information-sharing more natural and interpersonal, by creating immersive media to better understand each other’s feelings and backgrounds.
The butt of jokes as recently as 10 years ago, automatic speech recognition is now on the verge of becoming people’s chief means of interacting with their principal computing devices. In anticipation of the age of voice-controlled electronics, MIT researchers have built a low-power chip specialized for automatic speech recognition. Whereas a cellphone running speech-recognition software might require about 1 watt of power, the new chip requires between 0.2 and 10 milliwatts, depending on the number of words it has to recognize.
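The quoted numbers imply a power reduction of roughly two to nearly four orders of magnitude; a quick back-of-the-envelope check:

```python
# Power comparison using only the figures quoted in the article.
phone_sw = 1.0                         # watts: speech recognition in software
chip_low, chip_high = 0.2e-3, 10e-3    # watts: the new chip's reported range

print(phone_sw / chip_high)   # ~100x less power at the 10 mW end
print(phone_sw / chip_low)    # ~5000x less power at the 0.2 mW end
```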
Speech recognition systems, such as those that convert speech to text on cellphones, are generally the result of machine learning. A computer pores through thousands or even millions of audio files and their transcriptions, and learns which acoustic features correspond to which typed words. But transcribing recordings is costly, time-consuming work, which has limited speech recognition to a small subset of languages spoken in wealthy nations.