This community is interested in understanding and affecting the interaction between computing systems and society through engineering, computer science and public policy research, education, and public engagement.
We design software for high performance computing, develop algorithms for numerical linear algebra, and research random matrix theory and its applications.
We seek to develop techniques for securing tomorrow's global information infrastructure by exploring theoretical foundations, near-term practical applications, and long-range speculative research.
We are an interdisciplinary group of researchers blending approaches from human-computer interaction, social computing, databases, and information management.
We investigate language in different contexts: from how it is learned, to how it is grounded in visual perception, all the way to how machines can readily interact with humans.
Our mission is to work with policy makers and cybersecurity technologists to increase the trustworthiness and effectiveness of interconnected digital systems.
We work on a wide range of problems in distributed computing theory. We study algorithms and lower bounds for typical problems that arise in distributed systems---like resource allocation, implementing shared memory abstractions, and reliable communication.
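As a toy illustration of one such abstraction (a sketch of the majority-quorum idea behind classic register emulations such as ABD, not code from the group): any two majorities of a replica set intersect, so a read that contacts a majority always sees the latest majority-acknowledged write.

    # Sketch: emulating a shared read/write register over message passing.
    # Illustrative only; real protocols (e.g., ABD) also handle concurrency
    # and message loss.
    REPLICAS = ["r1", "r2", "r3", "r4", "r5"]
    MAJORITY = len(REPLICAS) // 2 + 1
    store = {r: (0, None) for r in REPLICAS}   # replica -> (timestamp, value)

    def write(value, ts, quorum):
        assert len(quorum) >= MAJORITY
        for r in quorum:
            store[r] = (ts, value)

    def read(quorum):
        assert len(quorum) >= MAJORITY
        # The read quorum intersects every write quorum, so the highest
        # timestamp seen is the latest completed write.
        return max(store[r] for r in quorum)

    write("A", ts=1, quorum=["r1", "r2", "r3"])
    write("B", ts=2, quorum=["r3", "r4", "r5"])
    print(read(quorum=["r1", "r2", "r4"]))     # (2, 'B') -- latest write is visible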
This CoR takes a unified approach to cover the full range of research areas required for success in artificial intelligence, including hardware, foundations, software systems, and applications.
Led by Web inventor and Director Tim Berners-Lee and CEO Jeff Jaffe, the W3C focuses on leading the World Wide Web to its full potential by developing standards, protocols, and guidelines that ensure the long-term growth of the Web.
Alloy is a language for describing structures and a tool for exploring them. It has been used in a wide range of applications, from finding holes in security mechanisms to designing telephone switching networks. Hundreds of projects have used Alloy for design analysis, verification, simulation, and as a backend for many other kinds of analysis and synthesis tools, and it is currently taught in courses worldwide.
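Alloy models are written in Alloy's own modeling language; as a rough Python analogue of the Analyzer's small-scope, exhaustive style of checking (an illustrative sketch, not Alloy's implementation), the snippet below enumerates every binary relation on three atoms and finds a counterexample to a plausible-sounding claim.

    from itertools import chain, combinations

    ATOMS = range(3)
    PAIRS = [(a, b) for a in ATOMS for b in ATOMS]

    def subsets(xs):
        return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

    def symmetric(r):
        return all((b, a) in r for (a, b) in r)

    def transitive(r):
        return all((a, c) in r for (a, b) in r for (b2, c) in r if b2 == b)

    def reflexive(r):
        return all((a, a) in r for a in ATOMS)

    # Claim to check: "every symmetric, transitive relation is reflexive."
    for rel in map(set, subsets(PAIRS)):
        if symmetric(rel) and transitive(rel) and not reflexive(rel):
            print("counterexample:", sorted(rel))  # e.g., the empty relation
            break

Alloy performs this kind of bounded search symbolically, via a SAT solver, over far richer structures than small binary relations.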
Automatic speech recognition (ASR) has been a grand-challenge machine learning problem for decades. Our ongoing research in this area examines the use of deep learning models for distant and noisy recording conditions, as well as multilingual and low-resource scenarios.
To further parallelize co-prime-sampling-based sparse sensing, we introduce Diophantine equations over different algebraic structures to build generalized lattice arrays.
Exploiting a close relationship with the generalized Chinese Remainder Theorem, we explore the geometric properties of the remainder code space, a special lattice space.
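As a minimal worked example of the underlying mechanism (illustrative, not the project's code): with co-prime sampler periods, an unknown integer index smaller than their product is uniquely determined by its residues, which is exactly the Chinese Remainder Theorem reconstruction that co-prime sensing exploits.

    def crt(remainders, moduli):
        # Reconstruct x mod prod(moduli) from x mod m_i, for pairwise
        # co-prime m_i. Uses modular inverses via pow(..., -1, m) (Python 3.8+).
        x, m = 0, 1
        for r, mi in zip(remainders, moduli):
            t = ((r - x) * pow(m, -1, mi)) % mi   # solve x + m*t = r (mod mi)
            x += m * t
            m *= mi
        return x % m

    # Two co-prime samplers with periods 5 and 7 observe an unknown
    # frequency index f < 35 only through its residues.
    f = 23
    residues = [f % 5, f % 7]       # what the samplers "see": [3, 2]
    print(crt(residues, [5, 7]))    # 23 -- uniquely recovered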
The creation of low-power circuits capable of speech recognition and speaker verification will enable spoken interaction on a wide variety of devices in the era of the Internet of Things.
Differential privacy is the most commonly used approach to preserving privacy in decentralized optimization. In that setting, however, a trade-off between accuracy (and even efficiency) and privacy is unavoidable. In this project, we explore distributed numerical optimization schemes that incorporate lightweight cryptographic information sharing, and we consider the effect of network topology on the convergence rate.
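For contrast, a toy sketch of that differential-privacy baseline (assumed setup: three nodes on a ring with quadratic local objectives; this is not the project's cryptographic scheme):

    import random

    # Decentralized gradient descent with noisy sharing: node i minimizes
    # (x - targets[i])^2 but discloses only a perturbed estimate to its
    # ring neighbors, so privacy comes at the cost of accuracy.
    targets = [1.0, 3.0, 8.0]
    n = len(targets)
    x = [0.0] * n                   # local estimates
    eta, sigma, rounds = 0.1, 0.05, 200

    for _ in range(rounds):
        shared = [xi + random.gauss(0.0, sigma) for xi in x]   # noisy disclosure
        for i in range(n):
            left, right = shared[(i - 1) % n], shared[(i + 1) % n]
            mixed = (shared[i] + left + right) / 3             # ring averaging
            grad = 2 * (x[i] - targets[i])                     # local gradient
            x[i] = mixed - eta * grad

    print(x)  # estimates cluster near mean(targets) = 4.0, up to noise and bias

Raising sigma strengthens privacy but visibly degrades the final estimates, which is the trade-off the cryptographic approach aims to avoid.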
The Robot Compiler allows non-engineering users to rapidly fabricate customized robots, facilitating the proliferation of robots in everyday life. It thereby marks an important step towards the realization of personal robots that have captured imaginations for decades.
Starling is a scalable query execution engine built on cloud function services that computes at a fine granularity, making it easier to match compute resources to workload demand.
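A hypothetical sketch of that execution model, with a local thread pool standing in for cloud function invocations (names and workload invented for illustration):

    from concurrent.futures import ThreadPoolExecutor

    # A query fans out into many small per-partition tasks, so parallelism
    # (and cost) tracks the query's actual demand rather than a fixed
    # cluster size. (Sketch only; not Starling's code.)
    partitions = [list(range(i, i + 100)) for i in range(0, 1000, 100)]

    def scan_and_filter(part):      # one "function invocation" per partition
        return [v for v in part if v % 7 == 0]

    with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
        partials = pool.map(scan_and_filter, partitions)

    result = sorted(v for chunk in partials for v in chunk)
    print(len(result), result[:5])  # 143 matching rows across 10 partitions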
Last week MIT’s Institute for Foundations of Data Science (MIFODS) held an interdisciplinary workshop aimed at tackling the underlying theory behind deep learning. Led by MIT professor Aleksander Madry, the event focused on a number of research discussions at the intersection of math, statistics, and theoretical computer science.
This week it was announced that MIT professors and CSAIL principal investigators Shafi Goldwasser, Silvio Micali, Ronald Rivest, and former MIT professor Adi Shamir won this year’s BBVA Foundation Frontiers of Knowledge Awards in the Information and Communication Technologies category for their work in cryptography.
Neural networks, which learn to perform computational tasks by analyzing huge sets of training data, have been responsible for the most impressive recent advances in artificial intelligence, including speech-recognition and automatic-translation systems.
Hyper-connectivity has changed the way we communicate, wait, and productively use our time. Even in a world of 5G wireless and “instant” messaging, there are countless moments throughout the day when we’re waiting for messages, texts, and Snapchats to refresh. But our frustration with waiting a few extra seconds for our emails to push through doesn’t mean we have to simply stand by.
The butt of jokes as little as 10 years ago, automatic speech recognition is now on the verge of becoming people’s chief means of interacting with their principal computing devices. In anticipation of the age of voice-controlled electronics, MIT researchers have built a low-power chip specialized for automatic speech recognition. Whereas a cellphone running speech-recognition software might require about 1 watt of power, the new chip requires between 0.2 and 10 milliwatts, depending on the number of words it has to recognize.
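The arithmetic behind that comparison, using only the figures quoted above:

    phone_watts = 1.0
    chip_watts_low, chip_watts_high = 0.2e-3, 10e-3   # 0.2 to 10 milliwatts

    print(phone_watts / chip_watts_high)   # ~100x less power (worst case)
    print(phone_watts / chip_watts_low)    # ~5000x less power (best case)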
Every language has its own collection of phonemes, or the basic phonetic units from which spoken words are composed. Depending on how you count, English has somewhere between 35 and 45. Knowing a language’s phonemes can make it much easier for automated systems to learn to interpret speech. In the 2015 volume of Transactions of the Association for Computational Linguistics, CSAIL researchers describe a new machine-learning system that, like several systems before it, can learn to distinguish spoken words. But unlike its predecessors, it can also learn to distinguish lower-level phonetic units, such as syllables and phonemes.