This community is interested in understanding and affecting the interaction between computing systems and society through engineering, computer science and public policy research, education, and public engagement.
We develop techniques for designing, implementing, and reasoning about multiprocessor algorithms, in particular concurrent data structures for multicore machines, as well as the mathematical foundations of the computation models that govern their behavior.
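To make "concurrent data structures for multicore machines" concrete, here is a minimal sketch of one classic example, a Treiber stack, written in Go. This is an illustration of the general technique (lock-free updates via compare-and-swap), not code from the group's own work; the names `Stack` and `exercise` are invented for this sketch.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// node is one cell of the lock-free stack.
type node struct {
	val  int
	next *node
}

// Stack is a Treiber stack, a classic lock-free concurrent data
// structure: every update goes through a compare-and-swap (CAS)
// on the head pointer, so no thread ever blocks holding a lock.
type Stack struct {
	head atomic.Pointer[node]
}

// Push retries until its CAS installs the new node as the head.
func (s *Stack) Push(v int) {
	n := &node{val: v}
	for {
		old := s.head.Load()
		n.next = old
		if s.head.CompareAndSwap(old, n) {
			return
		}
	}
}

// Pop removes the head node, returning ok=false on an empty stack.
func (s *Stack) Pop() (int, bool) {
	for {
		old := s.head.Load()
		if old == nil {
			return 0, false
		}
		if s.head.CompareAndSwap(old, old.next) {
			return old.val, true
		}
	}
}

// exercise hammers the stack from several goroutines, then drains
// it and returns how many pushed values survived.
func exercise(goroutines, perGoroutine int) int {
	var s Stack
	var wg sync.WaitGroup
	for i := 0; i < goroutines; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < perGoroutine; j++ {
				s.Push(j)
			}
		}()
	}
	wg.Wait()
	count := 0
	for {
		if _, ok := s.Pop(); !ok {
			return count
		}
		count++
	}
}

func main() {
	fmt.Println(exercise(4, 100)) // prints 400: no pushes are lost under contention
}
```

One reason formal reasoning matters here: in garbage-collected Go this CAS loop is safe, but the same algorithm in a manually managed language must also contend with the ABA problem, where a recycled pointer makes a stale CAS succeed.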
Automatic speech recognition (ASR) has been a grand challenge machine learning problem for decades. Our ongoing research in this area examines the use of deep learning models for distant and noisy recording conditions and for multilingual and low-resource scenarios.
We develop algorithms, systems, and software architectures for automatically reconstructing accurate representations of neural tissue, such as the nanometer-scale morphology of neurons and their synaptic connections in the mammalian cortex.
All humans process vast quantities of unannotated speech and manage to learn phonetic inventories, word boundaries, etc., and can use these abilities to acquire new words. Why can't ASR technology have similar capabilities? Our goal in this research project is to build speech technology using unannotated speech corpora.
The goal of this project is to develop and test a wearable ultrasonic echolocation aid for people who are blind and visually impaired. We combine concepts from engineering, acoustic physics, and neuroscience to make echolocation accessible as a research tool and mobility aid.
Our goal is to build a system that predicts where people are looking in images. Given an image and the location of a head, our approach follows the gaze of the person and identifies the object being looked at.
The confluence of medicine and artificial intelligence stands to create truly high-performance, specialized care for patients, with enhanced precision diagnosis and personalized disease management. But to supercharge these systems we need massive amounts of personal health data, coupled with a delicate balance of privacy, transparency, and trust.
MIT’s Amar Gupta and his wife Poonam were on a trip to Los Angeles in 2016 when she fell and broke both wrists. She was whisked by ambulance to a reputable hospital. But staff informed the couple that they couldn’t treat her there, nor could they find another local hospital that would do so. In the end, the couple was forced to take the hospital’s stunning advice: return to Boston for treatment.
This week it was announced that MIT professors and CSAIL principal investigators Shafi Goldwasser, Silvio Micali, Ronald Rivest, and former MIT professor Adi Shamir won this year’s BBVA Foundation Frontiers of Knowledge Awards in the Information and Communication Technologies category for their work in cryptography.
Neural networks, which learn to perform computational tasks by analyzing huge sets of training data, have been responsible for the most impressive recent advances in artificial intelligence, including speech-recognition and automatic-translation systems.
Doctors are often deluged by signals from charts, test results, and other metrics to keep track of. It can be difficult to integrate and monitor all of these data for multiple patients while making real-time treatment decisions, especially when data is documented inconsistently across hospitals. In a new pair of papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) explore ways for computers to help doctors make better medical decisions.