We investigate language in different contexts: from how it is learned, to how it is grounded in visual perception, all the way to how machines can readily interact with humans.
Our researchers create state-of-the-art systems to better recognize objects, people, scenes, behaviors and more, with applications in health care, gaming, and tagging systems.
The shared mission of Visual Computing is to connect images and computation, spanning topics such as image and video generation and analysis, photography, human perception, touch, applied geometry, and more.
We aim to develop a systematic framework for robots to build models of the world and to use those models to choose effective, safe actions in complex scenarios.
Automatic speech recognition (ASR) has been a grand challenge machine learning problem for decades. Our ongoing research in this area examines the use of deep learning models for distant and noisy recording conditions, and for multilingual and low-resource scenarios.
EQ-Radio can infer a person’s emotions using wireless signals. It transmits an RF signal and analyzes its reflections off a person’s body to recognize their emotional state (happy, sad, etc.).
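The published EQ-Radio work recovers heartbeat and breathing patterns from the reflected RF signal and classifies emotion from features of those physiological signals. The following is a minimal, hypothetical Python sketch of just the final classification step, assuming inter-beat intervals have already been extracted from the reflections; the features, labels, and SVM classifier are illustrative placeholders, not EQ-Radio’s actual pipeline.

```python
# Hypothetical sketch only: assumes inter-beat intervals (IBIs) have already
# been recovered from the RF reflections. Feature choices, labels, and the
# SVM are illustrative, not EQ-Radio's published method.
import numpy as np
from sklearn.svm import SVC

def ibi_features(ibis_ms):
    """Toy physiological features from a sequence of inter-beat intervals (ms)."""
    ibis = np.asarray(ibis_ms, dtype=float)
    return [ibis.mean(), ibis.std(), np.abs(np.diff(ibis)).mean()]

# Tiny made-up training set: IBI sequences labeled with emotional states.
X_train = [ibi_features(seq) for seq in ([820, 815, 830, 825],
                                         [640, 700, 610, 690],
                                         [810, 805, 820, 815],
                                         [650, 705, 620, 695])]
y_train = ["sad", "happy", "sad", "happy"]  # arbitrary toy labels

clf = SVC(kernel="rbf").fit(X_train, y_train)
print(clf.predict([ibi_features([800, 810, 805, 820])]))
```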
We are developing a general framework that enforces privacy transparently, enabling different kinds of machine learning to be developed that are automatically privacy-preserving.
Our research seeks to discover best practices for using avatars to enhance performance, engagement, and STEM identity development for diverse public middle and high school computer science students. As research sites, we run workshops in which students learn computer science in fun, relevant ways and develop self-images as computer scientists.
The Imagination, Computation, and Expression Laboratory at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has released a new video game called Grayscale, which is designed to sensitize players to problems of sexism, sexual harassment, and sexual assault in the workplace.
This week it was announced that MIT professors and CSAIL principal investigators Shafi Goldwasser, Silvio Micali, Ronald Rivest, and former MIT professor Adi Shamir won this year’s BBVA Foundation Frontiers of Knowledge Award in the Information and Communication Technologies category for their work in cryptography.
Neural networks, which learn to perform computational tasks by analyzing huge sets of training data, have been responsible for the most impressive recent advances in artificial intelligence, including speech-recognition and automatic-translation systems.
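As a concrete illustration of what “learning from training data” means (a toy example, not the large-scale speech or translation systems mentioned above), the sketch below fits a tiny PyTorch network to the XOR function by repeatedly adjusting its weights to reduce prediction error.

```python
# Minimal illustration of a neural network learning from examples (PyTorch).
# Unrelated to the systems in the article; it just shows the shared
# train-on-data loop: predict, measure error, adjust weights, repeat.
import torch
import torch.nn as nn

# Training data: the XOR function, a classic task a linear model cannot fit.
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

model = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # how wrong are the current predictions?
    loss.backward()               # compute gradients of the error
    optimizer.step()              # nudge the weights to reduce it

print(model(X).detach().round())  # should approximate [[0], [1], [1], [0]]
```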
Communicating through computers has become an extension of our daily reality. But as speaking via screens has become commonplace, our exchanges are losing inflection, body language, and empathy. Danielle Olson ’14, a first-year PhD student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), believes we can make digital information-sharing more natural and interpersonal, by creating immersive media to better understand each other’s feelings and backgrounds.
In recent years, a host of Hollywood blockbusters — including “The Fast and the Furious 7,” “Jurassic World,” and “The Wolf of Wall Street” — have included aerial tracking shots provided by drone helicopters outfitted with cameras. Those shots required separate operators for the drones and the cameras, and careful planning to avoid collisions. But a team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and ETH Zurich hopes to make drone cinematography more accessible, simple, and reliable.
Every language has its own collection of phonemes, or the basic phonetic units from which spoken words are composed. Depending on how you count, English has somewhere between 35 and 45. Knowing a language’s phonemes can make it much easier for automated systems to learn to interpret speech. In the 2015 volume of Transactions of the Association for Computational Linguistics, CSAIL researchers describe a new machine-learning system that, like several systems before it, can learn to distinguish spoken words. But unlike its predecessors, it can also learn to distinguish lower-level phonetic units, such as syllables and phonemes.
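For context, the conventional way to relate words and phonemes is a hand-built pronunciation lexicon; the sketch below uses a few CMUdict-style ARPAbet entries to show what such a decomposition looks like. The CSAIL system described above is notable precisely because it learns comparable sub-word structure from audio without being given such a dictionary; the tiny lexicon and helper function here are purely illustrative.

```python
# Illustrative only: a tiny hand-written pronunciation lexicon in the style
# of CMUdict's ARPAbet transcriptions, showing how English words decompose
# into phonemes. (The system in the article learns such units from speech
# rather than looking them up in a dictionary.)
LEXICON = {
    "cat":    ["K", "AE1", "T"],
    "dog":    ["D", "AO1", "G"],
    "speech": ["S", "P", "IY1", "CH"],
}

def phonemes(word):
    """Return the phoneme sequence for a word, if the lexicon knows it."""
    return LEXICON.get(word.lower())

for w in ["cat", "speech"]:
    print(w, "->", " ".join(phonemes(w)))
```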