Automatic speech recognition (ASR) has been a grand-challenge machine learning problem for decades. Our ongoing research in this area examines the use of deep learning models for distant and noisy recording conditions, as well as for multilingual and low-resource scenarios.
All humans process vast quantities of unannotated speech, manage to learn phonetic inventories, word boundaries, and so on, and can use these abilities to acquire new words. Why can't ASR technology have similar capabilities? Our goal in this research project is to build speech technology using unannotated speech corpora.
Last week MIT’s Institute for Foundations of Data Science (MIFODS) held an interdisciplinary workshop aimed at tackling the underlying theory behind deep learning. Led by MIT professor Aleksander Madry, the event focused on a number of research discussions at the intersection of math, statistics, and theoretical computer science.
This week it was announced that MIT professors and CSAIL principal investigators Shafi Goldwasser, Silvio Micali, Ronald Rivest, and former MIT professor Adi Shamir won this year’s BBVA Foundation Frontiers of Knowledge Awards in the Information and Communication Technologies category for their work in cryptography.
Neural networks, which learn to perform computational tasks by analyzing huge sets of training data, have been responsible for the most impressive recent advances in artificial intelligence, including speech-recognition and automatic-translation systems.
Artificial intelligence (AI) in the form of “neural networks” is increasingly used in technologies like self-driving cars, enabling them to see and recognize objects. Such systems could even help with tasks like identifying explosives in airport security lines.
This week the Association for Computing Machinery presented CSAIL principal investigator Daniel Jackson with the 2017 ACM SIGSOFT Outstanding Research Award for his pioneering work in software engineering. (This fall he also received the ACM SIGSOFT Impact Paper Award for his research on a method for finding bugs in code.) An EECS professor and associate director of CSAIL, Jackson was given the Outstanding Research Award for his “foundational contributions to software modeling, the creation of the modeling language Alloy, and the development of a widely used tool supporting model verification.”
Speech recognition systems, such as those that convert speech to text on cellphones, are generally the result of machine learning. A computer pores through thousands or even millions of audio files and their transcriptions, and learns which acoustic features correspond to which typed words. But transcribing recordings is costly, time-consuming work, which has limited speech recognition to a small subset of languages spoken in wealthy nations.
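To make that training recipe concrete, here is a minimal sketch of the supervised pipeline described above: extract standard acoustic features (MFCCs) from transcribed clips and fit a classifier that maps each feature frame to a word label. The file names and labels are hypothetical placeholders, and real systems train sequence models on far larger corpora; this is only an illustration.

```python
# Minimal sketch of supervised speech recognition training (illustrative only).
# File names and labels below are hypothetical; real systems use much larger
# transcribed corpora and sequence models rather than a frame-level classifier.
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression

def frame_features(wav_path, sr=16000, n_mfcc=13):
    """Load audio and return one MFCC vector per analysis frame."""
    signal, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T  # shape: (num_frames, n_mfcc)

# Hypothetical training data: short clips, each paired with a single word label.
corpus = [("yes_001.wav", "yes"), ("no_001.wav", "no"), ("yes_002.wav", "yes")]

X, y = [], []
for path, word in corpus:
    feats = frame_features(path)
    X.append(feats)
    y.extend([word] * len(feats))  # every frame inherits its clip's label

# Learn which acoustic features correspond to which typed words.
clf = LogisticRegression(max_iter=1000).fit(np.vstack(X), y)

# At test time, predict a label per frame and take a majority vote for the clip.
test_frames = frame_features("unknown_001.wav")
votes = clf.predict(test_frames)
print(max(set(votes), key=list(votes).count))
```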
Every language has its own collection of phonemes, or the basic phonetic units from which spoken words are composed. Depending on how you count, English has somewhere between 35 and 45. Knowing a language’s phonemes can make it much easier for automated systems to learn to interpret speech. In the 2015 volume of Transactions of the Association for Computational Linguistics, CSAIL researchers describe a new machine-learning system that, like several systems before it, can learn to distinguish spoken words. But unlike its predecessors, it can also learn to distinguish lower-level phonetic units, such as syllables and phonemes.
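The sketch below is not the model from the TACL paper; it only illustrates, under simplifying assumptions, one generic way phoneme-like categories can be discovered from untranscribed audio, by clustering acoustic feature frames so that recurring sound classes share a cluster ID. The file name and cluster count are hypothetical choices.

```python
# Illustrative sketch of unsupervised phone-like unit discovery via clustering
# (a generic stand-in, not the published model). File name is hypothetical.
import librosa
import numpy as np
from sklearn.cluster import KMeans

signal, sr = librosa.load("untranscribed_speech.wav", sr=16000)
frames = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13).T  # (num_frames, 13)

# Cluster frames into 40 categories -- roughly the size of an English phoneme
# inventory -- using no transcriptions at all.
kmeans = KMeans(n_clusters=40, n_init=10, random_state=0).fit(frames)
labels = kmeans.labels_

# Collapse runs of identical labels into a segment-level "pseudo-phone" sequence.
segments = [labels[0]] + [b for a, b in zip(labels, labels[1:]) if b != a]
print(segments[:30])
```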
When a power company wants to build a new wind farm, it generally hires a consultant to make wind speed measurements at the proposed site for eight to 12 months. Those measurements are correlated with historical data and used to assess the site’s power-generation capacity. This month CSAIL researchers will present a new statistical technique that yields better wind-speed predictions than existing techniques do — even when it uses only three months’ worth of data. That could save power companies time and money, particularly in the evaluation of sites for offshore wind farms, where maintaining measurement stations is particularly costly.
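For context, the sketch below shows the conventional baseline that such site assessments build on, often called measure-correlate-predict: regress short-term on-site measurements against concurrent readings from a long-term reference station, then project the station's history onto the site. It is not the new CSAIL technique, and the data arrays are synthetic placeholders.

```python
# Sketch of a conventional measure-correlate-predict baseline (not the new
# CSAIL technique). All wind-speed arrays below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical data: ~3 months of hourly wind speeds (m/s) at a long-term
# reference station and at the proposed site, measured over the same period.
ref_concurrent = rng.gamma(shape=2.0, scale=4.0, size=90 * 24)
site_concurrent = 0.8 * ref_concurrent + rng.normal(0, 0.5, ref_concurrent.size)

# Fit the site-vs-reference relationship on the concurrent period.
model = LinearRegression().fit(ref_concurrent.reshape(-1, 1), site_concurrent)

# Apply the fit to many years of reference-station history to estimate the
# site's long-term wind resource.
ref_history = rng.gamma(shape=2.0, scale=4.0, size=10 * 365 * 24)
site_history_est = model.predict(ref_history.reshape(-1, 1))

print(f"Estimated long-term mean site wind speed: {site_history_est.mean():.2f} m/s")
```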