Nicholas Roy is a Professor of Aeronautics and Astronautics at MIT and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The goal of his research program is to build autonomous robots that operate effectively in the real world. His approach to building these robots is to develop algorithms and representations that tightly couple the perception and decision-making processes needed for autonomy; these representations and algorithms allow robots to make decisions that are robust to ambiguity, react quickly as the world changes, and actively learn about the world.
To achieve these capabilities, he has developed, first, novel planning algorithms that can efficiently predict how future sensor measurements might affect future performance, allowing a fast-moving, highly dynamic robot to choose plans that both avoid failures due to uncertainty and improve the robot’s knowledge of the world. Second, he has developed novel algorithms that learn over time to predict how the world will change and evolve, allowing a robot to generate useful plans very quickly based on past experience. Third, he has developed novel representations and algorithms that learn models of the world from rich and unconstrained natural language interaction with people. The learned models allow a robot operating in a populated environment to interpret spoken language from untrained partners, infer complex plans from imprecise and ambiguous instructions, and react with questions to avoid predictable failures.
The practical significance of his research is robots that operate robustly across a wider range of conditions than has been possible to date, opening new applications for the field of robotics. For example, the guidance and control of an unmanned air vehicle (UAV) has historically depended on both an external positioning signal, such as GPS, for position information, and a prior map for planning flight paths. He and his students have demonstrated unmanned vehicles in the air and on the ground that can not only operate autonomously without GPS in urban and indoor environments using onboard sensors and a prior map, but also plan aggressive motion through unknown environments, using learned models to make very fast predictions of safe trajectories. As another example, he has developed robots that learn to work with people using natural language, allowing these robots to be effective parts of human-robot teams without requiring special interfaces or training of human teammates.
Our goal is to develop unsupervised or minimally supervised machine learning frameworks that allow autonomous underwater vehicles (AUVs) to explore unknown marine environments and communicate their findings in a semantically meaningful manner.
Despite what you might see in movies, today’s robots are still very limited in what they can do. They can be great for many repetitive tasks, but their inability to understand the nuances of human language makes them mostly useless for more complicated requests.