THESIS DEFENSE: Learning and Recognition of Hybrid Manipulation Tasks in Variable Environments using Probabilistic Flow Tubes
Speaker: Shannon Dong, MIT
Date: August 16, 2012
Time: 3:00 PM to 4:00 PM
Host: Brian Williams, MIT CSAIL
Contact: Steve Levine, email@example.com
We want robots to act as proxies for human operators in environments where the human operator is not present or cannot directly perform a task, such as in dangerous or remote situations. Teleoperation is a common interface for controlling robots that are designed to be human proxies. Unfortunately, teleoperation may fail to preserve the natural fluidity of human motions due to interface limitations such as communication delays, non-immersive sensing, and controller uncertainty.
We envision a robot that can recognize a teleoperator's intended motion and autonomously complete the execution of recognized routine tasks. To do this, the robot first learns offline a library of generalized tasks from a training set of user demonstrations. During online operation, the robot can perform real-time recognition of a user's teleoperated motions, and if requested, autonomously execute the remainder of an activity.
We realize this vision by addressing three main problems: (1) learning primitive activities by identifying significant features of the motion and generalizing its behavior from user demonstration trajectories; (2) recognizing activities in real time by determining the likelihood that a user is currently executing one of several learned activities; and (3) learning complex plans by generalizing a sequence of activities, through auto-segmentation and incremental learning of previously unknown activities.
To solve these problems, we first present an approach to learning activities from human demonstration that (1) provides flexibility during execution while robustly encoding a human's intended motions, using a novel representation called a probabilistic flow tube, and (2) automatically determines the relevant features of a motion so that they can be preserved during autonomous execution in new situations. We next introduce an approach to real-time motion recognition that (1) uses temporal information to successfully model motions that may be non-Markovian, (2) provides fast real-time recognition of motions in progress by using an incremental approach, and (3) leverages the probabilistic flow tube representation to ensure robustness during recognition against varying environment states. Finally, we develop an approach to learn combinations of activities that (1) automatically determines where activities should be segmented in a sequence and (2) learns previously unknown activities on the fly.
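To make the flow-tube idea concrete, the following is a minimal sketch, not the thesis implementation: it models a "tube" as a per-timestep mean and covariance learned from time-normalized demonstration trajectories, and scores how well a new trajectory fits each learned tube via a Gaussian log-likelihood. All function names (`resample`, `learn_flow_tube`, `score`) and the fixed-length resampling scheme are illustrative assumptions.

```python
import numpy as np

def resample(traj, T):
    """Linearly resample an (n, d) trajectory to T evenly spaced points
    (a stand-in for the time normalization used when aligning demos)."""
    traj = np.asarray(traj, float)
    idx = np.linspace(0, len(traj) - 1, T)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, len(traj) - 1)
    frac = (idx - lo)[:, None]
    return (1 - frac) * traj[lo] + frac * traj[hi]

def learn_flow_tube(demos, T=50):
    """Learn a per-timestep mean and covariance from demonstrations.

    The covariance at each step encodes how much variation the user's
    demonstrations allowed there -- wide where motion is flexible,
    narrow where it is constrained."""
    aligned = np.stack([resample(d, T) for d in demos])  # (M, T, d)
    mean = aligned.mean(axis=0)
    d = aligned.shape[2]
    cov = np.empty((T, d, d))
    for t in range(T):
        diff = aligned[:, t] - mean[t]
        # Small regularizer keeps the covariance invertible.
        cov[t] = diff.T @ diff / len(demos) + 1e-6 * np.eye(d)
    return mean, cov

def score(tube, traj):
    """Gaussian log-likelihood of a trajectory under a learned tube.
    (The thesis performs this incrementally for motions in progress;
    here we score a whole trajectory for simplicity.)"""
    mean, cov = tube
    T, d = mean.shape
    obs = resample(traj, T)
    ll = 0.0
    for t in range(T):
        diff = obs[t] - mean[t]
        _, logdet = np.linalg.slogdet(cov[t])
        ll -= 0.5 * (diff @ np.linalg.solve(cov[t], diff)
                     + logdet + d * np.log(2 * np.pi))
    return ll
```

Recognition then amounts to evaluating a teleoperated motion prefix against every tube in the learned library and reporting the most likely activity.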
Systematic testing in a two-dimensional environment shows significant improvement in activity recognition rates over prior art. We also demonstrate our learning capabilities on three different robotic platforms supporting user-teleoperated manipulation tasks in a variety of environments.