What if we could control robots more intuitively, using just hand gestures and brainwaves?
Collaborators: Andres F. Salazar-Gomez, Stephanie Gil,
Ramin M. Hasani, Frank H. Guenther, Daniela Rus
Robots are becoming more common in settings ranging from factories and labs to classrooms and homes, yet there's still a language barrier when we try to communicate with them. Instead of writing code or learning specific keywords and new interfaces, we'd like to interact with robots the way we do with other people. This is especially important in safety-critical scenarios, where we want to detect and correct mistakes before they happen.
We use brain and muscle signals that a person naturally generates to create a fast and intuitive interface for supervising a robot. In our experiments, the robot chooses from multiple targets for a mock drilling task. We process brain signals to detect whether the person thinks the robot is making a mistake, and we process muscle signals to detect when they gesture to the left or right; together, this lets the person stop the robot immediately just by mentally evaluating its choices, and then indicate the correct choice by scrolling through the options with gestures.
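As a rough illustration, the supervision flow described above can be sketched as a small loop. The `errp_detected` and `read_gesture` functions below are hypothetical stand-ins for the real EEG and EMG classifiers, and the two-target task is simplified accordingly:

```python
# Hypothetical sketch of the supervisory loop; the two classifier
# functions are stand-ins for the real EEG/EMG pipelines.
TARGETS = ["left", "right"]

def errp_detected(robot_choice, intended):
    """Stand-in for the time-locked EEG ErrP classifier: fires when the
    person mentally registers that the robot chose the wrong target."""
    return robot_choice != intended

def read_gesture(intended, current):
    """Stand-in for the continuous EMG gesture classifier: returns 'left'
    or 'right' while the person scrolls toward the intended target."""
    return intended if current != intended else None

def supervise(robot_choice, intended):
    # 1) The robot commits to a target; EEG is evaluated at that moment.
    if not errp_detected(robot_choice, intended):
        return robot_choice           # no error potential -> proceed
    # 2) ErrP detected: halt and let gestures scroll through options.
    choice = robot_choice
    while True:
        gesture = read_gesture(intended, choice)
        if gesture is None:
            return choice             # person stops gesturing -> confirm
        idx = TARGETS.index(choice)
        choice = (TARGETS[(idx - 1) % len(TARGETS)] if gesture == "left"
                  else TARGETS[(idx + 1) % len(TARGETS)])

print(supervise("left", intended="right"))  # -> right
```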
Photos by Joseph DelPreto, MIT CSAIL
Two neural network pipelines classify the brain and muscle signals. One classifies brain signals (EEG) to detect naturally occurring error-related potentials at the moment the robot makes a choice, and the other continuously classifies forearm muscle activity (EMG) to detect left or right gestures at any time. Both networks are trained only on data from previous users, so a person who hasn't used the system before can immediately start controlling the robot without any additional training.
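To give a sense of the continuous EMG side, here is a toy rolling-window sketch. The window length, threshold, and RMS-based rule are illustrative assumptions; the actual system uses a neural network classifier trained on previous users:

```python
from collections import deque
import math

WINDOW = 8  # samples per classification window (assumption)

def rms(samples):
    """Root-mean-square amplitude of one channel's window."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def classify_window(left_ch, right_ch, threshold=0.5):
    """Toy stand-in for the gesture network: compare activity on two
    forearm channels and report a gesture when one dominates."""
    l, r = rms(left_ch), rms(right_ch)
    if max(l, r) < threshold:
        return None                    # resting: no gesture
    return "left" if l > r else "right"

def stream_gestures(emg_stream):
    """Classify overlapping windows continuously, mirroring how the
    system lets the person gesture at any time."""
    left_buf = deque(maxlen=WINDOW)
    right_buf = deque(maxlen=WINDOW)
    for left_sample, right_sample in emg_stream:
        left_buf.append(left_sample)
        right_buf.append(right_sample)
        if len(left_buf) == WINDOW:
            yield classify_window(left_buf, right_buf)

# Simulated stream: rest, then a burst on the left-flexor channel.
stream = [(0.1, 0.1)] * 8 + [(1.0, 0.1)] * 8
print(list(stream_gestures(stream)))
```

The rolling window means a decision is emitted for every new sample once the buffer fills, so gestures are picked up with low latency rather than at fixed intervals.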
Poster presented at RSS 2018
Using brain signals to detect when you think the robot made a mistake
This portion of the project lets people flag robot mistakes with nothing more than their brain signals. An EEG cap measures brain activity, and a classifier detects error-related potentials (ErrPs). These ErrPs occur naturally in the brain whenever we notice something going wrong, so they can be used to stop a robot whenever the person thinks it's making the wrong choice.
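The time-locked nature of ErrP detection can be illustrated as epoch extraction around the moment of the robot's choice. The sampling rate, window length, and scoring rule below are assumptions for illustration; the real classifier is a neural network described in the papers:

```python
# Sketch of time-locked ErrP epoch extraction (parameters are assumed).
FS = 200           # sampling rate in Hz (assumption)
EPOCH_MS = 500     # analyze 500 ms after the robot's choice (assumption)

def extract_epoch(eeg, onset_idx):
    """Cut the post-onset window and subtract the pre-onset baseline,
    so the classifier sees only the event-locked deflection."""
    n = FS * EPOCH_MS // 1000
    baseline = sum(eeg[max(0, onset_idx - n):onset_idx]) / n
    return [s - baseline for s in eeg[onset_idx:onset_idx + n]]

def errp_score(epoch):
    """Toy stand-in for the EEG network: mean absolute deflection."""
    return sum(abs(s) for s in epoch) / len(epoch)

# Simulated single-channel EEG: flat, then a deflection after onset.
onset = 100
eeg = [0.0] * onset + [5.0] * 100
epoch = extract_epoch(eeg, onset)
print(errp_score(epoch) > 1.0)  # deflection detected -> stop the robot
```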
This project in particular allows a human operator to supervise a robot as it chooses between two targets for a sorting or selection task – if the person simply thinks it's choosing wrongly, the robot immediately switches to the other option. An interesting result is that a stronger ErrP signal is generated when the classifier makes a mistake – since the robot seemingly ignores the person's feedback, the person is likely more surprised and engaged. Using this 'secondary interactive error-related potential' generated during the closed-loop robot task could therefore improve classification performance in the future, and it suggests new ways for robots to acquire human feedback.
Media Version | Conference Version (ICRA 2017)
Publications
- J. DelPreto, A. F. Salazar-Gomez, S. Gil, R. Hasani, F. H. Guenther, and D. Rus, “Plug-and-Play Supervisory Control Using Muscle and Brain Signals for Real-Time Gesture and Error Detection,” Autonomous Robots, 2020. doi:10.1007/s10514-020-09916-x
Abstract: Effective human supervision of robots can be key for ensuring correct robot operation in a variety of potentially safety-critical scenarios. This paper takes a step towards fast and reliable human intervention in supervisory control tasks by combining two streams of human biosignals: muscle and brain activity acquired via EMG and EEG, respectively. It presents continuous classification of left and right hand-gestures using muscle signals, time-locked classification of error-related potentials using brain signals (unconsciously produced when observing an error), and a framework that combines these pipelines to detect and correct robot mistakes during multiple-choice tasks. The resulting hybrid system is evaluated in a “plug-and-play” fashion with 7 untrained subjects supervising an autonomous robot performing a target selection task. Offline analysis further explores the EMG classification performance, and investigates methods to select subsets of training data that may facilitate generalizable plug-and-play classifiers.
- J. DelPreto, A. F. Salazar-Gomez, S. Gil, R. M. Hasani, F. H. Guenther, and D. Rus, “Plug-and-Play Supervisory Control Using Muscle and Brain Signals for Real-Time Gesture and Error Detection,” in Robotics: Science and Systems (RSS), 2018. doi:10.15607/RSS.2018.XIV.063
Abstract: Control of robots in safety-critical tasks and situations where costly errors may occur is paramount for realizing the vision of pervasive human-robot collaborations. For these cases, the ability to use human cognition in the loop can be key for recuperating safe robot operation. This paper combines two streams of human biosignals, electrical muscle and brain activity via EMG and EEG, respectively, to achieve fast and accurate human intervention in a supervisory control task. In particular, this paper presents an end-to-end system for continuous rolling-window classification of gestures that allows the human to actively correct the robot on demand, discrete classification of Error-Related Potential signals (unconsciously produced by the human supervisor’s brain when observing a robot error), and a framework that integrates these two classification streams for fast and effective human intervention. The system also allows ‘plug-and-play’ operation, demonstrating accurate performance even with new users whose biosignals have not been used for training the classifiers. The resulting hybrid control system for safety-critical situations is evaluated with 7 untrained human subjects in a supervisory control scenario where an autonomous robot performs a multi-target selection task.
- A. F. Salazar-Gomez, J. DelPreto, S. Gil, F. H. Guenther, and D. Rus, “Correcting Robot Mistakes in Real Time Using EEG Signals,” in 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017. doi:10.1109/ICRA.2017.7989777
Abstract: Communication with a robot using brain activity from a human collaborator could provide a direct and fast feedback loop that is easy and natural for the human, thereby enabling a wide variety of intuitive interaction tasks. This paper explores the application of EEG-measured error-related potentials (ErrPs) to closed-loop robotic control. ErrP signals are particularly useful for robotics tasks because they are naturally occurring within the brain in response to an unexpected error. We decode ErrP signals from a human operator in real time to control a Rethink Robotics Baxter robot during a binary object selection task. We also show that utilizing a secondary interactive error-related potential signal generated during this closed-loop robot task can greatly improve classification performance, suggesting new ways in which robots can acquire human feedback. The design and implementation of the complete system is described, and results are presented for real-time closed-loop and open-loop experiments as well as offline analysis of both primary and secondary ErrP signals. These experiments are performed using general population subjects that have not been trained or screened. This work thereby demonstrates the potential for EEG-based feedback methods to facilitate seamless robotic control, and moves closer towards the goal of real-time intuitive interaction.
In the News
Brainwaves and hand gestures (EEG + EMG)
Special thanks to the MIT CSAIL communications team,
especially Adam Conner-Simons and Rachel Gordon.
Fast Company
International Business Times
Digital Trends
VentureBeat
Tech Times
Gadgets360
Express UK
IEEE Spectrum
Daily Mail
Times of India
Brain signals (EEG)
Special thanks to the MIT CSAIL communications team,
especially Adam Conner-Simons, Tom Buehler, and Jason Dorfman.