What if we could control robots more intuitively, using just hand gestures and brainwaves?

Robots are becoming more common in settings ranging from factories and labs to classrooms and homes, yet a language barrier still remains when we try to communicate with them. Instead of writing code or learning specialized keywords and interfaces, we'd like to interact with robots the way we do with other people. This is especially important in safety-critical scenarios, where we want to detect and correct mistakes before they actually happen.

Taking a step toward this goal, we use the brain and muscle signals that a person naturally generates to create a fast and intuitive interface for supervising a robot. In our experiments, the robot chooses from multiple targets for a mock drilling task. We process brain signals to detect whether the person thinks the robot is making a mistake, and we process muscle signals to detect when they gesture to the left or right. Together, this allows the person to stop the robot immediately just by mentally evaluating its choices, and then to indicate the correct choice by scrolling through the options with gestures.
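
To make that interaction concrete, here is a minimal sketch of such a supervisory loop in Python. Everything here is a hypothetical stand-in: the function names (detect_errp, classify_gesture), the target list, and the stubbed classifier outputs are illustrative only and do not reproduce the system's actual EEG/EMG pipelines.

```python
# Minimal sketch of the supervisory loop described above.
# The classifiers are stubbed; real ones would run on live biosignals.
import random

TARGETS = ["left target", "center target", "right target"]  # mock drilling targets

def detect_errp(eeg_window):
    """Stand-in for the EEG classifier: returns True if an error-related
    potential is detected after the robot announces its chosen target.
    (Stubbed with a coin flip for illustration.)"""
    return random.random() < 0.5

def classify_gesture(emg_window):
    """Stand-in for the EMG classifier: returns 'left', 'right', or None
    for each rolling window of muscle activity."""
    return random.choice(["left", "right", None])

def supervise(initial_choice):
    """The robot proposes a target; the human halts it via an ErrP and
    scrolls to the correct target with left/right gestures."""
    choice = initial_choice
    if detect_errp(eeg_window=None):       # the person thinks this choice is a mistake
        print("ErrP detected: stopping robot before it acts")
        for _ in range(5):                 # poll a few gesture windows
            gesture = classify_gesture(emg_window=None)
            if gesture == "left":
                choice = max(choice - 1, 0)
            elif gesture == "right":
                choice = min(choice + 1, len(TARGETS) - 1)
    print(f"Robot proceeds to: {TARGETS[choice]}")

supervise(initial_choice=1)
```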


Publication authors:

Joseph DelPreto, Andres F. Salazar-Gomez, Stephanie Gil, Ramin M. Hasani, Frank H. Guenther, and Daniela Rus

Publication abstract:

Control of robots in safety-critical tasks and situations where costly errors may occur is paramount for realizing the vision of pervasive human-robot collaborations. For these cases, the ability to use human cognition in the loop can be key for recuperating safe robot operation. This paper combines two streams of human biosignals, electrical muscle and brain activity via EMG and EEG, respectively, to achieve fast and accurate human intervention in a supervisory control task. In particular, this paper presents an end-to-end system for continuous rolling-window classification of gestures that allows the human to actively correct the robot on demand, discrete classification of Error-Related Potential signals (unconsciously produced by the human supervisor's brain when observing a robot error), and a framework that integrates these two classification streams for fast and effective human intervention. The system also allows 'plug-and-play' operation, demonstrating accurate performance even with new users whose biosignals have not been used for training the classifiers. The resulting hybrid control system for safety-critical situations is evaluated with 7 untrained human subjects in a supervisory control scenario where an autonomous robot performs a multi-target selection task.
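
As a rough illustration of the continuous rolling-window gesture classification the abstract mentions, the sketch below slides a fixed window over two EMG channels and labels each window by its RMS amplitude. The sampling rate, window and hop sizes, threshold, and two-channel setup are illustrative assumptions; this simple thresholding stands in for the paper's trained classifiers.

```python
# Simplified rolling-window EMG classification (illustrative only).
import numpy as np

FS = 1000     # assumed sample rate in Hz
WINDOW = 200  # window length in samples (200 ms)
STEP = 50     # hop between windows (50 ms)

def rms(x):
    """Root-mean-square amplitude of a signal window."""
    return np.sqrt(np.mean(np.square(x)))

def classify_windows(emg_left, emg_right, threshold=0.5):
    """Slide a window over two EMG channels (e.g., left/right forearm)
    and emit a gesture label per window based on which channel's RMS
    amplitude exceeds the threshold."""
    labels = []
    for start in range(0, len(emg_left) - WINDOW + 1, STEP):
        l = rms(emg_left[start:start + WINDOW])
        r = rms(emg_right[start:start + WINDOW])
        if l > threshold and l > r:
            labels.append("left")
        elif r > threshold:
            labels.append("right")
        else:
            labels.append(None)
    return labels

# Example: synthetic signals with a muscle burst on the left channel.
quiet = np.zeros(FS)
left = quiet.copy()
left[300:500] = np.random.randn(200)  # simulated burst of muscle activity
right = quiet.copy()
print(classify_windows(left, right))
```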
