A robot's physical form and its motion are inherently coupled: changing its physical design often means changing the way it moves, and vice versa. Can computers automatically and simultaneously design a robot's structure and motion?
For all the progress made in self-driving technologies, there still aren’t many places where they can actually drive. Companies like Google only test their fleets in major cities where they’ve spent countless hours meticulously labeling the exact 3-D positions of lanes, curbs, off-ramps, and stop signs.
In recent years, a host of Hollywood blockbusters — including “The Fast and the Furious 7,” “Jurassic World,” and “The Wolf of Wall Street” — have included aerial tracking shots provided by drone helicopters outfitted with cameras. Those shots required separate operators for the drones and the cameras, and careful planning to avoid collisions. But a team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and ETH Zurich hopes to make drone cinematography more accessible, simple, and reliable.
Self-driving cars are likely to be safer, on average, than human-driven cars. But they may fail in new and catastrophic ways that a human driver could prevent. This project is designing a new architecture for a highly dependable self-driving car.
Knitting is the new 3D printing. It has become popular again with the widespread availability of patterns and templates, together with the rise of the maker movement. Lower-cost industrial knitting machines are starting to emerge, but the corresponding design tools are still missing. Our goal is to fill this gap.
Our goal is to develop collaborative agents (software or robots) that can efficiently communicate with their human teammates. Key threads involve designing algorithms for inferring human behavior and for decision-making under uncertainty.
Our goal is to develop unsupervised or minimally supervised machine learning frameworks that allow autonomous underwater vehicles (AUVs) to explore unknown marine environments and communicate their findings in a semantically meaningful manner.
Our goal is to enable robots to understand and execute natural language commands from human agents. We develop algorithms that allow a robot to interpret, learn, and reason about semantic concepts embedded in language in the context of low-level metric representations perceived from sensors.
Our project focuses on developing a general human motion prediction framework that can be applied in a variety of domains, ranging from manufacturing to space robotics, in order to improve the safety and efficiency of human-robot interaction.
Our goal is to create a theoretical framework and effective machine learning algorithms for robust, reliable control of autonomous vehicles. Key threads include developing metrics of confidence and designing deep learning algorithms for parallel autonomy.
In this project, we aim to develop a framework that can ensure and certify the safety of an autonomous vehicle. By leveraging research from the area of formal verification, this framework aims to assess the safety, i.e., freedom from collisions, of a broad class of autonomous car controllers/planners for a given traffic model.
This week it was announced that MIT professor and CSAIL principal investigator Tomas Lozano-Perez has been awarded the 2021 IEEE Robotics and Automation Award for his “foundational contributions to robot motion planning and visionary leadership in the field.”
A team of robots developed at MIT’s Computer Science and Artificial Intelligence Laboratory can self-assemble to form different structures, with applications in inspection, disaster response, and manufacturing.