The shared mission of Visual Computing is to connect images and computation, spanning image and video generation and analysis, photography, human perception, touch, applied geometry, and more.
Automatic speech recognition (ASR) has been a grand-challenge machine learning problem for decades. Our ongoing research in this area examines the use of deep learning models for distant and noisy recording conditions, as well as multilingual and low-resource scenarios.
To achieve high-quality photo lighting in challenging environments, our prototype camera dynamically reconstructs a 3D scene model and directs a motor-controlled flash head at nearby walls and ceilings for soft indirect illumination.
Knitting is the new 3D printing. It has become popular again with the widespread availability of patterns and templates, together with the maker movement. Lower-cost industrial knitting machines are starting to emerge, but we are still missing the corresponding design tools. Our goal is to fill this gap.
We take a mixed-methods approach, combining qualitative methods (interviews and coding) with computational (AI) methods, to understand the relationships between social identities, cultural values, and virtual identity technologies (e.g., online profiles and avatars).
Our goal is to build a system that predicts where people are looking in images. Given an image and the location of a head, our approach follows the gaze of the person and identifies the object being looked at.
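As an illustration, gaze following of this kind can be sketched as scoring candidate objects by how well the direction from the head to each object aligns with the estimated gaze direction. Everything below (the function name, the cosine-alignment heuristic, and the toy coordinates) is a hypothetical sketch, not the project's actual model:

```python
import math

def follow_gaze(head_xy, gaze_dir, objects):
    """Pick the object best aligned with the gaze ray (hypothetical sketch).

    head_xy  -- (x, y) head location in image coordinates
    gaze_dir -- (dx, dy) estimated gaze direction (need not be unit length)
    objects  -- dict mapping object name -> (x, y) candidate center
    """
    gx, gy = gaze_dir
    gnorm = math.hypot(gx, gy)
    best_name, best_cos = None, -1.0
    for name, (ox, oy) in objects.items():
        # Vector from the head to the candidate object.
        vx, vy = ox - head_xy[0], oy - head_xy[1]
        vnorm = math.hypot(vx, vy)
        if vnorm == 0 or gnorm == 0:
            continue
        # Cosine similarity between gaze direction and head-to-object vector.
        cos = (gx * vx + gy * vy) / (gnorm * vnorm)
        if cos > best_cos:
            best_name, best_cos = name, cos
    return best_name

# Toy example: a head at the origin looking to the right should pick
# the object lying along the positive x-axis.
looked_at = follow_gaze((0, 0), (1, 0), {"cup": (5, 0.5), "lamp": (0, 5)})
```

A real system would, of course, estimate the gaze direction from image pixels and score image regions rather than labeled points; this sketch only captures the geometric core of "follow the gaze, pick the best-aligned target."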
Neural networks, which learn to perform computational tasks by analyzing huge sets of training data, have been responsible for the most impressive recent advances in artificial intelligence, including speech-recognition and automatic-translation systems.
Communicating through computers has become an extension of our daily reality. But as speaking via screens has become commonplace, our exchanges are losing inflection, body language, and empathy. Danielle Olson ’14, a first-year PhD student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), believes we can make digital information-sharing more natural and interpersonal by creating immersive media that helps us better understand each other’s feelings and backgrounds.
In recent years, a host of Hollywood blockbusters — including “The Fast and the Furious 7,” “Jurassic World,” and “The Wolf of Wall Street” — have included aerial tracking shots provided by drone helicopters outfitted with cameras. Those shots required separate operators for the drones and the cameras, and careful planning to avoid collisions. But a team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and ETH Zurich hopes to make drone cinematography more accessible, simple, and reliable.
The butt of jokes as recently as 10 years ago, automatic speech recognition is now on the verge of becoming people’s chief means of interacting with their principal computing devices. In anticipation of the age of voice-controlled electronics, MIT researchers have built a low-power chip specialized for automatic speech recognition. Whereas a cellphone running speech-recognition software might require about 1 watt of power, the new chip requires between 0.2 and 10 milliwatts, depending on the number of words it has to recognize.
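As a back-of-the-envelope check on the quoted figures, the gap between roughly 1 watt for phone software and 0.2–10 milliwatts for the chip works out to a power reduction of about 100x to 5,000x. The little helper below is a hypothetical sketch of that arithmetic, not anything from the chip itself:

```python
def power_reduction(software_watts, chip_milliwatts):
    """Factor by which the chip's power draw undercuts the software baseline.

    Converts the software figure to milliwatts, then divides.
    (Hypothetical helper for the arithmetic only.)
    """
    return software_watts * 1000.0 / chip_milliwatts

# Using the figures quoted above: ~1 W software vs. 0.2-10 mW chip.
best_case = power_reduction(1.0, 0.2)    # lightest workload
worst_case = power_reduction(1.0, 10.0)  # heaviest workload
```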
Every language has its own collection of phonemes, or the basic phonetic units from which spoken words are composed. Depending on how you count, English has somewhere between 35 and 45. Knowing a language’s phonemes can make it much easier for automated systems to learn to interpret speech. In the 2015 volume of Transactions of the Association for Computational Linguistics, CSAIL researchers describe a new machine-learning system that, like several systems before it, can learn to distinguish spoken words. But unlike its predecessors, it can also learn to distinguish lower-level phonetic units, such as syllables and phonemes.