System enables people to correct robot mistakes on multiple-choice tasks
Getting robots to do things isn’t easy: usually scientists have to either explicitly program them or get them to understand how humans communicate via language.
But what if we could control robots more intuitively, using just hand gestures and brainwaves?
A new system spearheaded by researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aims to do exactly that, allowing users to instantly correct robot mistakes with nothing more than brain signals and the flick of a finger.
Building on the team’s past work focused on simple binary-choice activities, the new work expands the scope to multiple-choice tasks, opening up new possibilities for how human workers could manage teams of robots.
By monitoring brain activity, the system can detect in real time if a person notices an error as a robot does a task. Using an interface that measures muscle activity, the person can then make hand gestures to scroll through and select the correct option for the robot to execute.
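To make that control flow concrete, here is a minimal Python sketch of the kind of supervisory loop the article describes: watch for an error signal, and if one fires, let the person scroll through candidates with gestures. The function names, gesture labels, and stubbed signal sources are illustrative assumptions, not the team's actual code.

```python
TARGETS = ["left_panel", "center_panel", "right_panel"]

def supervise(propose, detect_errp, read_gesture, execute):
    """One supervised trial: the robot proposes a target, the EEG stream
    is checked for an error-related potential (ErrP), and if one fires the
    human scrolls to the correct target with EMG-decoded hand gestures."""
    idx = propose()                       # robot's initial guess (index into TARGETS)
    if detect_errp():                     # ErrP -> the person noticed a mistake
        while True:
            g = read_gesture()            # "left", "right", or "select"
            if g == "left":
                idx = (idx - 1) % len(TARGETS)
            elif g == "right":
                idx = (idx + 1) % len(TARGETS)
            else:                         # "select" confirms the current choice
                break
    execute(TARGETS[idx])

# Stubbed demo: the robot guesses wrong, an ErrP is "detected",
# and two scripted gestures correct the choice.
gestures = iter(["right", "select"])
supervise(
    propose=lambda: 0,
    detect_errp=lambda: True,
    read_gesture=lambda: next(gestures),
    execute=lambda t: print("drilling at", t),
)
```

In a real system the stubs would be replaced by live EEG and EMG classifiers, but the loop structure is the same: the error signal gates whether the human intervenes at all.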
The team demonstrated the system on a task in which a robot moves a power drill to one of three possible targets on the body of a mock plane. Importantly, they showed that the system works on people it’s never seen before, meaning that organizations could deploy it in real-world settings without needing to train it on users.
“This work combining EEG and EMG feedback enables natural human-robot interactions for a broader set of applications than we’ve been able to do before using only EEG feedback,” says CSAIL director Daniela Rus, who supervised the work. “By including muscle feedback, we can use gestures to command the robot spatially, with much more nuance and specificity.”
PhD candidate Joseph DelPreto was lead author on a paper about the project alongside Rus, former CSAIL postdoctoral associate Andres F. Salazar-Gomez, former CSAIL research scientist Stephanie Gil, research scholar Ramin M. Hasani, and Boston University professor Frank H. Guenther. The paper will be presented at the Robotics: Science and Systems (RSS) conference taking place in Pittsburgh next week.
Intuitive human-robot interaction
In most previous work, systems could generally only recognize brain signals when people trained themselves to “think” in very specific but arbitrary ways and when the system was trained on such signals. For instance, a human operator might have to look at different light displays that correspond to different robot tasks during a training session.
Not surprisingly, such approaches are difficult for people to handle reliably, especially if they work in fields like construction or navigation that already require intense concentration.
Meanwhile, Rus’ team harnessed the power of brain signals called “error-related potentials” (ErrPs), which researchers have found to naturally occur when people notice mistakes. If there’s an ErrP, the system stops so the user can correct it; if not, it carries on.
“What’s great about this approach is that there’s no need to train users to think in a prescribed way,” says DelPreto. “The machine adapts to you, and not the other way around.”
For the project, the team used “Baxter,” a humanoid robot from Rethink Robotics. With human supervision, the robot went from choosing the correct target 70 percent of the time to more than 97 percent of the time.
To create the system the team harnessed the power of electroencephalography (EEG) for brain activity and electromyography (EMG) for muscle activity, putting a series of electrodes on the users’ scalp and forearm.
Both metrics have some individual shortcomings: EEG signals are not always reliably detectable, while EMG signals can sometimes be difficult to map to motions that are any more specific than “move left or right.” Merging the two, however, allows for more robust bio-sensing and makes it possible for the system to work on new users without training.
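One simple way to picture that fusion is as two classifier confidences feeding a single decision rule: a weak or ambiguous reading from one modality can be backed up or vetoed by the other. The thresholds and labels in this sketch are illustrative assumptions, not the published pipeline.

```python
def fuse(errp_score, gesture_score, gesture_label,
         errp_thresh=0.5, gesture_thresh=0.6):
    """Return an action for the robot given both biosignal classifiers.

    errp_score:    P(error) from the EEG ErrP classifier (0..1)
    gesture_score: confidence of the EMG gesture classifier (0..1)
    gesture_label: decoded gesture, e.g. "left" or "right"
    """
    if errp_score < errp_thresh:
        return "continue"                  # no error noticed: carry on
    if gesture_score >= gesture_thresh:
        return "correct:" + gesture_label  # confident gesture: apply it
    return "pause"                         # error likely but gesture unclear:
                                           # stop and wait for a clearer cue

print(fuse(0.2, 0.9, "left"))   # -> continue
print(fuse(0.8, 0.9, "left"))   # -> correct:left
print(fuse(0.8, 0.3, "left"))   # -> pause
```

The appeal of combining the two signals is visible even in this toy rule: the noisy EEG channel only has to answer a coarse yes/no question, while the EMG channel supplies the spatial detail.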
“By looking at both muscle and brain signals, we can start to pick up on a person’s natural gestures along with their snap decisions about whether something is going wrong,” says DelPreto. “This helps make communicating with a robot more like communicating with another person.”
The team says that they could imagine the system one day being useful for the elderly, or workers with language disorders or limited mobility.
“We’d like to move away from a world where people have to adapt to the constraints of machines,” says Rus. “Approaches like this show that it’s very much possible to develop robotic systems that are a more natural and intuitive extension of us.”