MERS Tackles The Human-Robot Divide
September 26, 2011
By Abby Abazorius, MIT CSAIL
CSAIL Professor Brian Williams is on the cutting edge of developing control algorithms that enable successful human-robot coordination.
Photo: Jason Dorfman, CSAIL photographer
You might not have to wait until 2062 to travel to work in an aerocar like George Jetson, thanks to work currently underway at CSAIL. An autonomous personal air taxi capable of ferrying you to Paris by 5pm with a flyover of London may sound futuristic, but it is a current project of CSAIL Principal Investigator Brian Williams and his Model-based Embedded and Robotic Systems (MERS) group.
Williams’ group is on the cutting edge of developing control algorithms that enable successful human-robot coordination. Williams strongly believes that robots and autonomous systems can play a major role in the coming years, assisting the elderly, leading search and rescue operations and even exploring outer space. Achieving successful integration of robots into the human workforce requires systems that can execute simple, low-level reasoning quickly and efficiently, explained Williams. MERS’ methodology is a departure from the typical goal in artificial intelligence of creating systems able to tackle complicated tasks like chess.
Instilled with a lifelong interest in space exploration, Williams became intrigued by the idea of creating an explorer that could not only operate in unknown environments, but also think, diagnose and repair itself under varying circumstances. The idea took hold after he heard of the loss of Mars Observer, which mysteriously vanished in the early 1990s. Williams went on to co-invent Remote Agent, an autonomous system that could reason like an engineer, performing mission planning, diagnosis and repair from engineering models. Remote Agent controlled the NASA Deep Space One asteroid encounter mission in 1999.
Through his work, Williams is continuously grappling with the question of, “How can we develop explorers that can operate in unknown environments and are smart enough to diagnose and repair themselves, and how do we make explorers that can think, establish their own goals, and can also deal with all the things that go wrong along the way?”
The answer, for Williams, comes in the form of model-based autonomy, a new automated reasoning approach developed by MERS. Model-based autonomy allows humans to impart common-sense knowledge to autonomous systems through strategic guidance. Coupled with knowledge of their hardware and surrounding environment, systems using model-based autonomy can plan and execute actions to achieve human-specified goals by reasoning from models of themselves and their environment.
“Part of the idea is to program autonomous systems using what looks like a traditional programming language, but the programs are really specifying a common-sense description of how we want the system to behave, not the actions that it should perform. This is often a description of what we want the system to do, assuming that things don’t go wrong,” said Williams. “Then the autonomous system needs to figure out what things could go wrong, by reasoning from common-sense, and needs to figure out how to recover. This involves significant low-level reasoning about how things break and common ways to repair these breaks.”
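The idea Williams describes can be sketched in miniature: the “program” only states the desired behavior, and a simple executive reasons from a model of component modes to diagnose what broke and propose a recovery. The component names, modes and repairs below are invented for illustration and are not the MERS implementation.

```python
# Component model: each possible mode of a (hypothetical) thruster maps
# to the observation that mode would produce.
THRUSTER_MODEL = {
    "nominal": "thrust_ok",
    "valve_stuck": "no_thrust",
}

# Repairs the executive knows about, keyed by diagnosed failure mode.
REPAIRS = {
    "valve_stuck": "switch_to_backup_valve",
}

def diagnose(observation, model):
    """Return all modes consistent with the observation."""
    return [mode for mode, predicted in model.items() if predicted == observation]

def execute(goal_observation, actual_observation, model, repairs):
    """If the goal holds, do nothing; otherwise diagnose and pick a repair."""
    if actual_observation == goal_observation:
        return "goal satisfied"
    for mode in diagnose(actual_observation, model):
        if mode in repairs:
            return repairs[mode]
    return "call human for help"

# The declarative "program" only says what we want: thrust_ok.
print(execute("thrust_ok", "no_thrust", THRUSTER_MODEL, REPAIRS))
# prints "switch_to_backup_valve"
```

The point of the sketch is the division of labor: the goal specification never mentions valves or backups; the executive derives the recovery by reasoning over the model, which is the behavior the quote above describes.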
The autonomous personal air taxi, or PT, being simulated by MERS in collaboration with Boeing Research and Technology, is one example of Williams’ work with model-based autonomy. When traveling in the PT, a passenger would interact with the vehicle as if it were a cab driver, offering information on the destination, desired arrival time and route. PT would check the weather, plan a safe itinerary and select alternative landing sites in case of emergency. In the event of a weather disruption or other unforeseen event, PT would diagnose the situation, alert the passenger and present alternatives, such as a new landing site or skipping a desired landmark, while explaining its reasoning along the way.
MERS' simulation of a personal air taxi trip from Provincetown, MA to Bedford, MA.
Through his work with model-based autonomy, Williams also explores human-robot coordination under varying and uncertain circumstances. For instance, through human demonstration and interaction, the group has taught Athlete, a large rover being developed to support human exploration of the lunar surface, to perform tasks such as grasping.
Additionally, MERS and Boeing are researching the potential of increasing collaboration between humans and robots in the workplace. One joint effort focuses on how a human-robot team can work together fluidly using shared goals and plans, while taking each worker’s different capabilities into account. In one research example, students are acting as visual sensors to support the work of a robot team that consists of the whole-body robot PR2 and two single-armed manipulators. Williams is also working on developing robots with a strong basic instinct for risk, in an effort to instill greater trust in humans for using robots within the workforce.
Search and rescue is another area where Williams sees robots proving especially useful. “The idea is to have a set of robotic scouts that look at a mission plan, figure out areas where the plan is risky and then go off and take pictures of the area to make sure that it’s safe,” explained Williams. “The robots should be able to figure out where to go without anyone telling them. But if the robots assess that their actions are too risky, then the idea is that they will call humans and ask them for help.”
In dealing with risk, Williams and his group have developed risk-sensitive control algorithms, which allow autonomous systems to determine the safest possible plan of action for fulfilling their goals, and to increase risk to a user-specified level while maximizing the benefit of the risk that is taken. Risk-sensitive control applications that Williams and his group have explored include deep-sea exploration using autonomous vehicles, specifically underwater mapping of a deep-sea canyon and control of a hovering deep-sea vehicle. Williams is also applying this work to controlling a grid of sustainable homes, as he feels that computer science can have a major impact in moving mankind toward greater energy efficiency.
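The risk-bounded decision rule described above can be sketched very simply: among candidate plans, keep only those whose estimated failure probability fits within the user's risk budget, then pick the one with the greatest benefit. The plan names, benefits and probabilities below are invented for illustration; this is a toy decision rule, not the group's actual algorithm.

```python
def best_plan_within_risk(plans, risk_bound):
    """plans: list of (name, benefit, failure_probability) tuples.

    Returns the highest-benefit plan whose failure probability does not
    exceed the user-specified risk bound, or None if no plan qualifies
    (the case where the system should ask a human for help).
    """
    feasible = [p for p in plans if p[2] <= risk_bound]
    if not feasible:
        return None
    return max(feasible, key=lambda p: p[1])

# Hypothetical candidate plans for an autonomous vehicle.
candidate_plans = [
    ("direct_route", 10.0, 0.08),   # highest benefit, riskiest
    ("coastal_route", 7.0, 0.02),   # moderate benefit, safer
    ("hold_position", 1.0, 0.001),  # safest, least useful
]

# With a 5% risk budget, the riskier direct route is ruled out.
print(best_plan_within_risk(candidate_plans, 0.05))
# prints ('coastal_route', 7.0, 0.02)
```

Shrinking the risk bound below 0.001 leaves no feasible plan, which mirrors the behavior Williams describes for the rescue scouts: when every action is too risky, the system defers to a human rather than acting.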
Along with CSAIL Postdoctoral Fellow J. Zico Kolter and Assistant Professor Youssef Marzouk, Williams is co-organizing a seminar series for the fall of 2011 and is designing a graduate course for the spring of 2012, both of which are focused on computational methods for sustainability. He is also involved in designing a new, flexible undergraduate engineering degree that allows students to create their own field of concentration. Possibilities include the environment, energy, transportation and computational sustainability.
“Many problems of societal need are really computational problems. Their solution often involves modeling the environment and involves making decisions about how to improve the environment or make better use of resources,” said Williams. “This very much involves machine learning, optimization and control, together with higher level decision making, which is a lot of what CSAIL is about.”
For more on Williams’ work, check out http://groups.csail.mit.edu/mers/.
MERS focuses on ensuring successful interactions between humans and robots in the workplace.
Video: Tom Buehler