News

Articles

Cinematography on the fly

In recent years, a host of Hollywood blockbusters — including “The Fast and the Furious 7,” “Jurassic World,” and “The Wolf of Wall Street” — have included aerial tracking shots provided by drone helicopters outfitted with cameras. Those shots required separate operators for the drones and the cameras, and careful planning to avoid collisions. But a team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and ETH Zurich hopes to make drone cinematography more accessible, simple, and reliable.

Voice control everywhere

The butt of jokes as recently as 10 years ago, automatic speech recognition is now on the verge of becoming people’s chief means of interacting with their principal computing devices. In anticipation of the age of voice-controlled electronics, MIT researchers have built a low-power chip specialized for automatic speech recognition. Whereas a cellphone running speech-recognition software might require about 1 watt of power, the new chip requires between 0.2 and 10 milliwatts, depending on the number of words it has to recognize.
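
Taking those figures at face value, a quick back-of-the-envelope comparison (the numbers are the ones quoted above; the exact savings depend on vocabulary size):

    # Figures quoted above: ~1 W for phone software, 0.2-10 mW for the chip.
    phone_software_watts = 1.0
    chip_low_watts, chip_high_watts = 0.2e-3, 10e-3

    # How many times less power the chip draws, at each end of its range.
    print(f"best case:  {phone_software_watts / chip_low_watts:,.0f}x less power")   # 5,000x
    print(f"worst case: {phone_software_watts / chip_high_watts:,.0f}x less power")  # 100x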

Learning words from pictures

Speech recognition systems, such as those that convert speech to text on cellphones, are generally the result of machine learning. A computer pores through thousands or even millions of audio files and their transcriptions, and learns which acoustic features correspond to which typed words. But transcribing recordings is costly, time-consuming work, which has limited speech recognition to a small subset of languages spoken in wealthy nations.
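
As a deliberately tiny sketch of that supervised recipe, the Python below pairs audio clips with their transcriptions, extracts a placeholder acoustic feature, and uses a nearest-mean rule in place of the statistical models real recognizers train; every name and number here is illustrative, not from the article:

    import numpy as np

    # Toy corpus: (waveform, transcription) pairs. Real systems use thousands
    # or millions of transcribed recordings, which is the expensive part.
    rng = np.random.default_rng(0)
    corpus = [(rng.standard_normal(16000), "yes"), (rng.standard_normal(16000) * 3, "no")]

    def acoustic_features(waveform, frame=400):
        # Placeholder feature: per-frame log energy (real systems use MFCCs, spectrograms, ...).
        frames = waveform[: len(waveform) // frame * frame].reshape(-1, frame)
        return np.log(np.mean(frames ** 2, axis=1) + 1e-8)

    # "Training": remember the average feature value seen for each transcribed word.
    model = {word: acoustic_features(w).mean() for w, word in corpus}

    # "Recognition": pick the word whose remembered feature is closest to the input's.
    def recognize(waveform):
        feat = acoustic_features(waveform).mean()
        return min(model, key=lambda word: abs(model[word] - feat))

    print(recognize(rng.standard_normal(16000) * 3))  # expected: "no"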

Learning spoken language

Every language has its own collection of phonemes, or the basic phonetic units from which spoken words are composed. Depending on how you count, English has somewhere between 35 and 45. Knowing a language’s phonemes can make it much easier for automated systems to learn to interpret speech. In the 2015 volume of Transactions of the Association for Computational Linguistics, CSAIL researchers describe a new machine-learning system that, like several systems before it, can learn to distinguish spoken words. But unlike its predecessors, it can also learn to distinguish lower-level phonetic units, such as syllables and phonemes.
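
For concreteness, a small illustration (hand-written, ARPAbet-style, not output of the CSAIL system) of the word, syllable, and phoneme hierarchy the system learns to pick out of raw speech:

    # Hand-written decompositions of two English words into syllables and phonemes.
    lexicon = {
        "water":  {"syllables": ["wa", "ter"],  "phonemes": ["W", "AO", "T", "ER"]},
        "supper": {"syllables": ["sup", "per"], "phonemes": ["S", "AH", "P", "ER"]},
    }

    for word, units in lexicon.items():
        print(f'{word}: {"-".join(units["syllables"])} | {" ".join(units["phonemes"])}')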

Videos

More efficient lidar sensing for self-driving cars

If you see a self-driving car out in the wild, you might notice a giant spinning cylinder on top of its roof. That’s a lidar sensor, and it works by sending out pulses of infrared light and measuring the time it takes for them to bounce off objects. This creates a map of 3D points that serves as a snapshot of the car’s surroundings.
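
A minimal sketch of that time-of-flight idea, assuming an idealized sensor that reports each pulse’s direction and round-trip time (function and variable names are illustrative):

    import math

    SPEED_OF_LIGHT = 299_792_458.0  # meters per second

    def distance_from_round_trip(seconds: float) -> float:
        # The pulse travels out and back, so the one-way distance is half the round trip.
        return SPEED_OF_LIGHT * seconds / 2.0

    def to_3d_point(azimuth_rad: float, elevation_rad: float, round_trip_s: float):
        # Combine the spinning sensor's pulse direction with the measured time
        # to get one point of the 3D snapshot of the surroundings.
        r = distance_from_round_trip(round_trip_s)
        x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
        y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
        z = r * math.sin(elevation_rad)
        return (x, y, z)

    # A pulse fired straight ahead that returns after 200 ns hit something about 30 m away.
    print(to_3d_point(0.0, 0.0, 200e-9))  # ≈ (29.98, 0.0, 0.0)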

Giving robots a sense of touch

Eight years ago, Ted Adelson’s research group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new sensor technology, called GelSight, that uses physical contact with an object to provide a remarkably detailed 3-D map of its surface. Now, by mounting GelSight sensors on the grippers of robotic arms, two MIT teams have given robots greater sensitivity and dexterity. The researchers presented their work in two papers at the International Conference on Robotics and Automation last week.

Talks