Our research spans digital manufacturing, 3D printing, and computer graphics, as well as computational photography and displays, virtual humans, and robotics.
We aim to develop the science of autonomy toward a future with robots and AI systems integrated into everyday life, supporting people with cognitive and physical tasks.
We are working to elevate robots from mechanical creations, controlled by low-level scripts and a considerable amount of human guidance, into truly cognitive machines.
We focus on understanding the problem-solving strategies used by scientists and engineers, with the goals of automating parts of the process and formalizing educational methods.
Our goal is to develop collaborative agents (software or robots) that can efficiently communicate with their human teammates. Key threads involve designing algorithms for inferring human behavior and for decision-making under uncertainty.
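To make that inference problem concrete, here is a minimal sketch, in Python, of Bayesian goal inference: the agent keeps a belief over which goal its human teammate is pursuing and updates it from observed actions. The goals, actions, and utility table are invented for illustration and are not drawn from any of these systems.

import numpy as np

# Hypothetical example: infer which of several goals a human teammate is
# pursuing from observed actions. Goals, actions, and utilities are made up.
GOALS = ["fetch_tool", "hand_over_part", "inspect_assembly"]
ACTIONS = ["reach_left", "reach_right", "pause"]

# How useful each action is for each goal (rows: actions, columns: goals).
UTILITY = np.array([
    [1.0, 0.2, 0.1],   # reach_left
    [0.1, 1.0, 0.3],   # reach_right
    [0.2, 0.2, 1.0],   # pause
])

def action_likelihood(action, goal_index, beta=2.0):
    """P(action | goal): softmax over actions, assuming a noisily rational human."""
    scores = UTILITY[:, goal_index]
    probs = np.exp(beta * scores) / np.exp(beta * scores).sum()
    return probs[ACTIONS.index(action)]

def update_belief(belief, action):
    """Bayes rule: posterior over goals after observing one human action."""
    posterior = np.array([belief[g] * action_likelihood(action, g)
                          for g in range(len(GOALS))])
    return posterior / posterior.sum()

belief = np.ones(len(GOALS)) / len(GOALS)          # uniform prior over goals
for observed in ["reach_right", "reach_right", "pause"]:
    belief = update_belief(belief, observed)
print(dict(zip(GOALS, belief.round(3))))           # mass concentrates on hand_over_part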
Our goal is to develop computational learning models that intelligently use various types of human feedback to help an agent adapt and transfer knowledge to new situations.
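As one illustration of feedback-driven learning (a sketch only; the states, actions, and simulated feedback below are assumptions, not these models), the agent keeps a per-(state, action) score, nudges it toward scalar human approval or disapproval, and acts mostly greedily on those scores.

import random
from collections import defaultdict

scores = defaultdict(float)       # learned estimate of the human's feedback
ALPHA = 0.3                       # learning rate
EPSILON = 0.1                     # exploration rate
ACTIONS = ["left", "right", "wait"]

def choose_action(state):
    """Mostly pick the action with the highest learned score; sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: scores[(state, a)])

def incorporate_feedback(state, action, feedback):
    """Move the score for (state, action) toward the human's signal."""
    scores[(state, action)] += ALPHA * (feedback - scores[(state, action)])

# Toy interaction loop: the simulated human endorses "right" in state "s0".
for _ in range(50):
    state = "s0"
    action = choose_action(state)
    incorporate_feedback(state, action, 1.0 if action == "right" else -1.0)

print({a: round(scores[("s0", a)], 2) for a in ACTIONS})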
Our goal is to allow planners to exploit the forces and contacts between objects so as to carry out complex manipulation tasks in the presence of uncertainty.
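One simple way to see how uncertainty enters such planning is a Monte Carlo robustness check: before committing to a contact action, sample perturbed object poses and keep the action only if most samples still reach the goal. The quasi-static push model, noise level, and goal region below are illustrative assumptions, not taken from any specific planner.

import random

GOAL_CENTER, GOAL_RADIUS = (1.0, 0.0), 0.15   # metres
POSE_NOISE = 0.05                             # std-dev of the object's start position
PUSH = (1.0, 0.0)                             # commanded push displacement

def simulate_push(start, push):
    """Toy quasi-static model: the object simply translates with the push."""
    return (start[0] + push[0], start[1] + push[1])

def push_success_rate(nominal_start, push, samples=1000):
    """Monte Carlo estimate of reaching the goal despite pose uncertainty."""
    hits = 0
    for _ in range(samples):
        start = (nominal_start[0] + random.gauss(0, POSE_NOISE),
                 nominal_start[1] + random.gauss(0, POSE_NOISE))
        end = simulate_push(start, push)
        if (end[0] - GOAL_CENTER[0]) ** 2 + (end[1] - GOAL_CENTER[1]) ** 2 <= GOAL_RADIUS ** 2:
            hits += 1
    return hits / samples

print(f"estimated success probability: {push_success_rate((0.0, 0.0), PUSH):.2f}")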
People, including doctors, nurses, and military personnel, often learn their roles in complex organizations through a training and apprenticeship process.
Once a human-machine team has formed a high-quality, flexible plan for working together, the robot must carry out its part of that plan in coordination with its human teammate.
Our goal is to design algorithms that enable robots to operate in human environments by reasoning simultaneously about high-level task actions and low-level motions.
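A hypothetical sketch of that task-and-motion interface is shown below: a symbolic search proposes high-level orderings of pick-and-store actions, and each step is accepted only if a stand-in motion-feasibility check passes. The objects, feasibility rule, and world model are invented for the example.

from itertools import permutations

OBJECTS = ["cup", "plate", "bowl"]

def motion_feasible(obj, world):
    """Stand-in for a motion/collision check: an object can only be picked
    if nothing is resting on top of it."""
    return obj not in world["on_top_of"].values()

def apply_pick(obj, world):
    """Symbolic effect of picking an object up and storing it away."""
    on_top_of = {a: b for a, b in world["on_top_of"].items() if a != obj}
    return {"on_top_of": on_top_of, "stored": world["stored"] | {obj}}

def plan(world):
    """Search over symbolic orderings, rejecting any step the motion check fails."""
    for order in permutations(OBJECTS):
        w, steps, feasible = world, [], True
        for obj in order:
            if not motion_feasible(obj, w):
                feasible = False
                break
            steps.append(("pick_and_store", obj))
            w = apply_pick(obj, w)
        if feasible:
            return steps
    return None

# The cup sits on the plate, so the plate can only be picked after the cup.
initial = {"on_top_of": {"cup": "plate"}, "stored": set()}
print(plan(initial))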
In a pair of papers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), two teams enable better sensing and perception for soft robotic grippers.
Developed at MIT’s Computer Science and Artificial Intelligence Laboratory, a team of robots can self-assemble to form different structures, with applications in inspection, disaster response, and manufacturing.
Eight years ago, Ted Adelson’s research group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new sensor technology, called GelSight, that uses physical contact with an object to provide a remarkably detailed 3-D map of its surface. Now, by mounting GelSight sensors on the grippers of robotic arms, two MIT teams have given robots greater sensitivity and dexterity. The researchers presented their work in two papers at the International Conference on Robotics and Automation last week.
Most robots are programmed using one of two methods: learning from demonstration, in which they watch a task being done and then replicate it, or via motion-planning techniques such as optimization or sampling, which require a programmer to explicitly specify a task’s goals and constraints.
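To illustrate the second category, here is a minimal sketch of a sampling-based planner (a basic 2-D RRT) in which the programmer explicitly specifies the start, the goal, and the constraints, here a single circular obstacle. All numbers are illustrative assumptions, not from the papers described here.

import math
import random

START, GOAL = (0.0, 0.0), (9.0, 9.0)
OBSTACLE_CENTER, OBSTACLE_RADIUS = (5.0, 5.0), 2.0
STEP, GOAL_TOLERANCE = 0.5, 0.5

def collision_free(p):
    """Constraint check: the point must lie outside the circular obstacle."""
    return math.dist(p, OBSTACLE_CENTER) > OBSTACLE_RADIUS

def steer(src, dst):
    """Move from src toward dst by at most STEP."""
    d = math.dist(src, dst)
    if d <= STEP:
        return dst
    t = STEP / d
    return (src[0] + t * (dst[0] - src[0]), src[1] + t * (dst[1] - src[1]))

def rrt(max_iters=5000):
    nodes, parent = [START], {START: None}
    for _ in range(max_iters):
        # Bias 10% of samples toward the goal; otherwise sample the workspace.
        sample = GOAL if random.random() < 0.1 else (random.uniform(0, 10),
                                                     random.uniform(0, 10))
        nearest = min(nodes, key=lambda n: math.dist(n, sample))
        new = steer(nearest, sample)
        if not collision_free(new):
            continue
        nodes.append(new)
        parent[new] = nearest
        if math.dist(new, GOAL) < GOAL_TOLERANCE:
            path, node = [], new
            while node is not None:           # walk parents back to the start
                path.append(node)
                node = parent[node]
            return path[::-1]
    return None

path = rrt()
print(f"found a path with {len(path)} waypoints" if path else "no path found")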
One reason we don’t yet have robot personal assistants buzzing around doing our chores is that making them is hard. Assembling robots by hand is time-consuming, while automation (robots building other robots) is not yet fine-tuned enough to make robots that can do complex tasks. But if humans and robots can’t do the trick, what about 3-D printers? In a new paper, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) present the first-ever technique for 3-D printing robots that involves printing solid and liquid materials at the same time. The new method allows the team to automatically 3-D print dynamic robots in a single step, with no assembly required, using a commercially available 3-D printer.