This CoR brings together researchers at CSAIL working across a broad swath of application domains. These domains pose novel and challenging machine learning problems in service of science, social science, and computer science.
This CoR aims to develop AI technology that synthesizes symbolic reasoning, probabilistic reasoning for dealing with uncertainty in the world, and statistical methods for extracting and exploiting regularities in the world. The goal is an integrated picture of intelligence informed by both computational insights and cognitive science.
This community is interested in understanding and affecting the interaction between computing systems and society through engineering, computer science and public policy research, education, and public engagement.
We use visualization as a petri dish to study intelligence augmentation: how can computational representations and software systems help amplify our cognition and creativity while respecting our agency?
Alloy is a language for describing structures and a tool for exploring them. It has been used in a wide range of applications, from finding holes in security mechanisms to designing telephone switching networks. Hundreds of projects have used Alloy for design analysis, verification, simulation, and as a backend for many other kinds of analysis and synthesis tools, and it is currently taught in courses worldwide.
Automatic speech recognition (ASR) has been a grand-challenge machine learning problem for decades. Our ongoing research in this area examines the use of deep learning models for distant and noisy recording conditions, multilingual settings, and low-resource scenarios.
We aim to study how computer-supported roleplaying can change the social perspectives of digital media users. Such media could take the form of videogames, VR systems, training software, and other types of interactive narrative technology.
We take a mixed-methods approach, combining qualitative methods (interviews and coding) with computational (AI) techniques, to understand the relationships between social identities, cultural values, and virtual identity technologies (e.g., online profiles and avatars).
The Robot Compiler allows non-engineering users to rapidly fabricate customized robots, facilitating the proliferation of robots in everyday life. It thereby marks an important step towards realizing the personal robots that have captured the imagination for decades.
Humans process vast quantities of unannotated speech, manage to learn phonetic inventories, word boundaries, and other linguistic structure, and can use these abilities to acquire new words. Why can't ASR technology have similar capabilities? Our goal in this research project is to build speech technology using unannotated speech corpora.
Neural networks, which learn to perform computational tasks by analyzing huge sets of training data, have been responsible for the most impressive recent advances in artificial intelligence, including speech-recognition and automatic-translation systems.
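To make that one-sentence description concrete, here is a minimal, self-contained sketch of a neural network learning from data: a tiny two-layer network trained by gradient descent to compute XOR. The task, network size, and learning rate are illustrative choices only, not drawn from any system described here.

    import numpy as np

    # A toy two-layer network learning XOR by gradient descent: an
    # illustration of "learning from training data," not a CSAIL system.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden-layer weights
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output-layer weights
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        h = np.tanh(X @ W1 + b1)        # hidden activations
        p = sigmoid(h @ W2 + b2)        # predicted probabilities
        # Backpropagate the squared-error gradient through both layers.
        dp = (p - y) * p * (1 - p)
        dW2, db2 = h.T @ dp, dp.sum(0)
        dh = (dp @ W2.T) * (1 - h ** 2)
        dW1, db1 = X.T @ dh, dh.sum(0)
        for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
            param -= 0.1 * grad         # gradient-descent update

    print(np.round(p, 2))  # should approach [[0], [1], [1], [0]]

Scaled up by many orders of magnitude in data and parameters, this same learn-by-gradient-descent recipe underlies the speech-recognition and automatic-translation advances mentioned above.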
Hyper-connectivity has changed the way we communicate, wait, and productively use our time. Even in a world of 5G wireless and “instant” messaging, there are countless moments throughout the day when we’re waiting for messages, texts, and Snapchats to refresh. But our frustrations with waiting a few extra seconds for our emails to push through don’t mean we have to simply stand by.
The butt of jokes as recently as 10 years ago, automatic speech recognition is now on the verge of becoming people’s chief means of interacting with their principal computing devices. In anticipation of the age of voice-controlled electronics, MIT researchers have built a low-power chip specialized for automatic speech recognition. Whereas a cellphone running speech-recognition software might require about 1 watt of power, the new chip requires between 0.2 and 10 milliwatts, depending on the number of words it has to recognize.
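For a rough sense of scale, the figures quoted above imply a power reduction of roughly two to nearly four orders of magnitude; the short calculation below just makes those ratios explicit (the wattages are the article's, the variable names are ours).

    # Back-of-the-envelope power ratios from the figures quoted above.
    cellphone_watts = 1.0     # software ASR on a phone: about 1 W
    chip_high_watts = 10e-3   # chip at 10 mW (larger vocabulary)
    chip_low_watts = 0.2e-3   # chip at 0.2 mW (smaller vocabulary)

    print(f"larger vocabulary: {cellphone_watts / chip_high_watts:.0f}x less power")   # 100x
    print(f"smaller vocabulary: {cellphone_watts / chip_low_watts:.0f}x less power")   # 5000x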
Every language has its own collection of phonemes, the basic phonetic units from which spoken words are composed. Depending on how you count, English has somewhere between 35 and 45. Knowing a language’s phonemes can make it much easier for automated systems to learn to interpret speech.

In the 2015 volume of Transactions of the Association for Computational Linguistics, CSAIL researchers describe a new machine-learning system that, like several systems before it, can learn to distinguish spoken words. But unlike its predecessors, it can also learn to distinguish lower-level phonetic units, such as syllables and phonemes.