From High-Level to Low-Level Robot Learning of Complex Tasks: Leveraging Priors, Metrics and Dynamical Systems

Speaker

Nadia Figueroa
EPFL

Host

Julie Shah
MIT
Humans have a remarkable way of learning, adapting and mastering new manipulation tasks. With the current advances in Machine Learning (ML), the promise of having robots with such capabilities seems to be on the cusp of reality. Transferring human-level skills to robots, however, is complicated, as these skills involve a level of complexity that cannot be tackled by classical ML methods in an unsupervised way. Such complexities include: (i) automatically decomposing tasks into control-oriented encodings, (ii) extracting invariances and handling idiosyncrasies of data acquired from human demonstrations, and (iii) learning models that guarantee stability and convergence. The main goal of my research is to devise novel techniques for learning complex tasks from demonstrations, overcoming the aforementioned challenges with (i) a high level of autonomy during learning, while (ii) providing adaptability during execution. To provide such capabilities, we propose learning and control strategies that step over traditional disciplinary boundaries, seamlessly blending concepts from control theory, robotics and machine learning. Specifically, the techniques presented in this talk leverage Bayesian non-parametrics and kernel methods, together with dynamical system (DS) theory, to solve challenging open problems in the Learning from Demonstration (LfD) domain.
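To make the stability-and-convergence idea concrete: in DS-based motion encoding, a reaching motion is represented as a differential equation whose attractor is the task target. The following is a minimal sketch, not the speaker's actual learned models (which fit the dynamics to demonstrations), assuming a hypothetical 2-D linear policy `x_dot = A (x - x_target)`; when `A + Aᵀ` is negative definite, `V(x) = ||x - x_target||²` is a Lyapunov function, so every rollout provably converges to the target no matter where it starts.

```python
import numpy as np

# Hypothetical linear DS policy (illustration only, not the talk's method).
A = np.array([[-1.0,  0.5],
              [-0.5, -1.0]])      # A + A^T = -2I, negative definite
x_target = np.array([1.0, 1.0])   # attractor = task target

def ds_velocity(x):
    """Velocity command from the DS policy: x_dot = A (x - x_target)."""
    return A @ (x - x_target)

# Forward-Euler rollout from an arbitrary start state.
x = np.array([-2.0, 3.0])
dt = 0.01
for _ in range(2000):
    x = x + dt * ds_velocity(x)

print(np.allclose(x, x_target, atol=1e-3))  # prints True: converged
```

Because convergence follows from the structure of `A` rather than from the particular start state, the same policy generalizes to perturbations at execution time: pushing the robot off its trajectory simply restarts the rollout from a new initial condition.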

The first part of the talk will focus on learning complex sequential manipulation tasks from demonstrations. The particular challenge is learning these tasks without any prior knowledge of the number of actions involved, and without restricting how the human demonstrates the task. We showcase these algorithms on two cooking case studies in which robots are taught to roll pizza dough and peel vegetables in an almost autonomous fashion. The second part of the talk will focus on the development of novel DS formulations and learning schemes capable of representing and executing a complex task with a single model, without the need for switching or task discretization. The types of tasks that can be learned with these new approaches are unparalleled in previous work on DS-based LfD; they are validated on production-line and household activities, as well as on adaptive navigation strategies for mobile agents and on locomotion and co-manipulation tasks for biped robots. Finally, we will showcase novel joint-space learning strategies used to resolve kinematic singularities and achieve self-collision avoidance in multi-arm robotic systems.