Our vision is data-driven machine learning systems that advance the quality of healthcare, the understanding of cyber arms races, and the delivery of online education.
Our research centers on digital manufacturing, 3D printing, and computer graphics, as well as computational photography and displays, virtual humans, and robotics.
This community is interested in understanding and affecting the interaction between computing systems and society through engineering, computer science and public policy research, education, and public engagement.
We aim to develop the science of autonomy toward a future with robots and AI systems integrated into everyday life, supporting people with cognitive and physical tasks.
We are working to elevate robots from mechanical creations that are controlled by low-level scripts and require considerable human guidance to truly cognitive robots.
This CoR takes a unified approach to cover the full range of research areas required for success in artificial intelligence, including hardware, foundations, software systems, and applications.
Our goal is to develop collaborative agents (software or robots) that can efficiently communicate with their human teammates. Key threads involve designing algorithms for inferring human behavior and for decision-making under uncertainty.
Our goal is to develop computational learning models for agents that intelligently use various types of human feedback to adapt and transfer knowledge to new situations.
We are developing a general framework that enforces privacy transparently, enabling different kinds of machine learning to be developed that are automatically privacy-preserving.
Our goal is to allow planners to exploit the forces and contacts between objects so as to carry out complex manipulation tasks in the presence of uncertainty.
People, including doctors, nurses, and military personnel, often learn their roles in complex organizations through a training and apprenticeship process.
Once a human-machine team has formed a high-quality, flexible plan for working together, the robot must execute its part of the plan and work with its teammate.
Our goal is to design algorithms that enable robots to operate in human environments by simultaneously reasoning about high-level task actions as well as low-level motions.
The goal of this project is to develop and test a wearable ultrasonic echolocation aid for people who are blind and visually impaired. We combine concepts from engineering, acoustic physics, and neuroscience to make echolocation accessible as a research tool and mobility aid.
In a pair of papers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), two teams enable better sensing and perception for soft robotic grippers.
Developed at MIT’s Computer Science and Artificial Intelligence Laboratory, a team of robots can self-assemble to form different structures with applications in inspection, disaster response, and manufacturing.
Genome-wide association studies, which look for links between particular genetic variants and incidence of disease, are the basis of much modern biomedical research.
This week it was announced that MIT professors and CSAIL principal investigators Shafi Goldwasser, Silvio Micali, Ronald Rivest, and former MIT professor Adi Shamir won this year’s BBVA Foundation Frontiers of Knowledge Awards in the Information and Communication Technologies category for their work in cryptography.
Eight years ago, Ted Adelson’s research group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new sensor technology, called GelSight, that uses physical contact with an object to provide a remarkably detailed 3-D map of its surface. Now, by mounting GelSight sensors on the grippers of robotic arms, two MIT teams have given robots greater sensitivity and dexterity. The researchers presented their work in two papers at the International Conference on Robotics and Automation last week.