This week the Association for Computing Machinery presented CSAIL principal investigator Daniel Jackson with the 2017 ACM SIGSOFT Outstanding Research Award for his pioneering work in software engineering. (This fall he also received the ACM SIGSOFT Impact Paper Award for his research method for finding bugs in code.) An EECS professor and associate director of CSAIL, Jackson was given the Outstanding Research Award for his “foundational contributions to software modeling, the creation of the modeling language Alloy, and the development of a widely used tool supporting model verification.”
Hyper-connectivity has changed the way we communicate, wait, and productively use our time. Even in a world of 5G wireless and “instant” messaging, there are countless moments throughout the day when we’re waiting for messages, texts, and Snapchats to refresh. But our frustration with waiting a few extra seconds for our emails to push through doesn’t mean we have to simply stand by.
The butt of jokes as recently as 10 years ago, automatic speech recognition is now on the verge of becoming people’s chief means of interacting with their principal computing devices. In anticipation of the age of voice-controlled electronics, MIT researchers have built a low-power chip specialized for automatic speech recognition. Whereas a cellphone running speech-recognition software might require about 1 watt of power, the new chip requires between 0.2 and 10 milliwatts, depending on the number of words it has to recognize.
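To put those figures in perspective, here is a rough back-of-the-envelope comparison of the quoted power numbers. The battery capacity used below is an illustrative assumption, not a figure from the article.

```python
# Rough comparison of the power figures quoted above.
# The 10 Wh battery capacity is an illustrative assumption, not from the article.

PHONE_SOFTWARE_W = 1.0   # ~1 W for speech recognition running in software
CHIP_MIN_W = 0.2e-3      # 0.2 mW, small vocabulary
CHIP_MAX_W = 10e-3       # 10 mW, larger vocabulary

print(f"Power reduction: {PHONE_SOFTWARE_W / CHIP_MAX_W:.0f}x to "
      f"{PHONE_SOFTWARE_W / CHIP_MIN_W:.0f}x")

# Hypothetical always-on listening budget from a 10 Wh battery (assumption):
BATTERY_WH = 10.0
for watts, label in [(PHONE_SOFTWARE_W, "software"), (CHIP_MAX_W, "chip, worst case")]:
    print(f"{label}: ~{BATTERY_WH / watts:.0f} hours of continuous listening")
```

Even at its highest power draw, the chip is roughly 100 times more efficient than the software baseline, which is what makes always-on voice interfaces plausible on battery-powered devices.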
Speech recognition systems, such as those that convert speech to text on cellphones, are generally the result of machine learning. A computer pores through thousands or even millions of audio files and their transcriptions, and learns which acoustic features correspond to which typed words. But transcribing recordings is costly, time-consuming work, which has limited speech recognition to a small subset of languages spoken in wealthy nations.
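The sketch below illustrates the supervised setup just described, with acoustic feature vectors paired with transcribed word labels. It is purely illustrative: real recognizers use far richer models over audio frames, and the "signatures" and data here are synthetic assumptions.

```python
# Minimal sketch of supervised speech recognition: feature vectors paired with
# transcribed labels train a classifier. Data and "acoustic signatures" are
# synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
VOCAB = ["yes", "no", "stop"]

# Pretend each utterance is summarized by a 13-dim feature vector (e.g., MFCCs),
# with each word clustered around its own made-up acoustic signature.
signatures = rng.normal(size=(len(VOCAB), 13))
X = np.vstack([signatures[i] + 0.1 * rng.normal(size=(200, 13))
               for i in range(len(VOCAB))])
y = np.repeat(np.arange(len(VOCAB)), 200)   # the labels: the costly transcriptions

model = LogisticRegression(max_iter=1000).fit(X, y)
test = signatures[1:2] + 0.1 * rng.normal(size=(1, 13))
print("Predicted word:", VOCAB[model.predict(test)[0]])
```

The expensive ingredient is the label array `y`: every training example needs a human-produced transcription, which is exactly the bottleneck the article describes for less widely spoken languages.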
Every language has its own collection of phonemes, or the basic phonetic units from which spoken words are composed. Depending on how you count, English has somewhere between 35 and 45. Knowing a language’s phonemes can make it much easier for automated systems to learn to interpret speech. In the 2015 volume of Transactions of the Association for Computational Linguistics, CSAIL researchers describe a new machine-learning system that, like several systems before it, can learn to distinguish spoken words. But unlike its predecessors, it can also learn to distinguish lower-level phonetic units, such as syllables and phonemes.
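The researchers' actual model is considerably more sophisticated, but the sketch below conveys the general idea of discovering phoneme-like units without any transcriptions: recurring patterns in the acoustic frames are grouped into categories that can then stand in for phonetic units. All data here is synthetic.

```python
# Illustrative sketch of unsupervised discovery of sub-word units: cluster
# acoustic frames into recurring categories with no transcriptions at all.
# This is not the paper's method; the data is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
N_TRUE_UNITS = 5   # pretend the language has 5 phoneme-like units

# Synthetic short audio frames: each true unit yields features near its centroid.
centroids = rng.normal(size=(N_TRUE_UNITS, 13))
frames = np.vstack([c + 0.05 * rng.normal(size=(400, 13)) for c in centroids])

# Cluster the unlabeled frames; each cluster plays the role of a learned
# phonetic unit, and a word becomes a sequence of cluster labels.
units = KMeans(n_clusters=N_TRUE_UNITS, n_init=10, random_state=0).fit(frames)
print("Learned unit labels for the first 10 frames:", units.labels_[:10])
```

Because no transcriptions are required, this kind of approach is attractive for the many languages where labeled audio is scarce.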
When a power company wants to build a new wind farm, it generally hires a consultant to make wind speed measurements at the proposed site for eight to 12 months. Those measurements are correlated with historical data and used to assess the site’s power-generation capacity. This month CSAIL researchers will present a new statistical technique that yields better wind-speed predictions than existing techniques do — even when it uses only three months’ worth of data. That could save power companies time and money, particularly in the evaluation of sites for offshore wind farms, where maintaining measurement stations is particularly costly.
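The CSAIL technique itself is a richer statistical model, but the sketch below shows the conventional "measure-correlate-predict" idea it builds on: fit a relationship between a short on-site measurement campaign and a nearby long-term reference station, then extend the site's record from the station's history. All numbers below are synthetic assumptions.

```python
# Sketch of the standard measure-correlate-predict baseline (not the CSAIL
# method): regress a few months of on-site measurements against a long-term
# reference station, then estimate the site's long-term wind resource.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

# 10 years of daily wind speeds at a hypothetical long-term reference station.
reference = 8 + 2 * rng.standard_normal(3650)
# The candidate site tracks the reference with its own offset, scale, and noise.
site_true = 1.5 + 0.9 * reference + 0.5 * rng.standard_normal(3650)

# Only about 3 months of overlapping on-site measurements are available.
overlap = slice(-90, None)
model = LinearRegression().fit(reference[overlap, None], site_true[overlap])

# Predict the site's long-term record from the reference station's full history.
site_est = model.predict(reference[:, None])
print(f"Estimated long-term mean wind speed: {site_est.mean():.2f} m/s "
      f"(true: {site_true.mean():.2f} m/s)")
```

The shorter the on-site campaign, the noisier this extrapolation becomes, which is why a technique that delivers reliable estimates from only three months of data translates directly into saved time and money.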