Instantly correcting robot mistakes with nothing more than brain signals and the flick of a finger

By monitoring brain activity, the system can detect in real time if a person notices an error as a robot does a task. Credit: MIT CSAIL

System enables people to correct robot mistakes on multiple-choice problems

Getting robots to do things isn’t easy: usually, scientists either have to explicitly program them or teach them to understand how humans communicate via language.

But what if we could control robots more intuitively, using just hand gestures and brainwaves?

A new system spearheaded by researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aims to do exactly that, allowing users to instantly correct robot mistakes with nothing more than brain signals and the flick of a finger.

Building on the team’s past work, which focused on simple binary-choice activities, the new work expands the scope to multiple-choice tasks, opening up new possibilities for how human workers could manage teams of robots.

By monitoring brain activity, the system can detect in real time if a person notices an error as a robot does a task. Using an interface that measures muscle activity, the person can then make hand gestures to scroll through and select the correct option for the robot to execute.
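A minimal sketch of that supervision loop, assuming hypothetical classifier functions (errp_detected, read_gesture) in place of the real EEG and EMG decoders, might look like this in Python. It is illustrative only, not the team’s implementation:

```python
import random
from typing import List

def errp_detected(eeg_window: List[float]) -> bool:
    """Hypothetical EEG classifier: True if the window contains an
    error-related potential (simulated here with a random draw)."""
    return random.random() < 0.3

def read_gesture(emg_window: List[float]) -> str:
    """Hypothetical EMG classifier: maps a muscle-activity window to one of
    a few hand gestures (simulated)."""
    return random.choice(["scroll_left", "scroll_right", "select"])

def supervise(options: List[str], robot_choice: int) -> int:
    """One correction cycle: keep the robot's choice unless an error-related
    potential is detected, in which case the operator scrolls through the
    options by gesture and selects one."""
    eeg_window = [0.0] * 256                  # stand-in for a real EEG buffer
    if not errp_detected(eeg_window):
        return robot_choice                   # no error noticed: carry on
    cursor = robot_choice
    while True:                               # operator scrolls and selects
        gesture = read_gesture([0.0] * 64)    # stand-in EMG buffer
        if gesture == "scroll_left":
            cursor = (cursor - 1) % len(options)
        elif gesture == "scroll_right":
            cursor = (cursor + 1) % len(options)
        else:                                 # "select" confirms the option
            return cursor

if __name__ == "__main__":
    targets = ["left target", "middle target", "right target"]
    print("executing:", targets[supervise(targets, robot_choice=0)])
```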

https://www.youtube.com/watch?v=_Or8Lt3YtEA&feature=youtu.be

The team demonstrated the system on a task in which a robot moves a power drill to one of three possible targets on the body of a mock plane. Importantly, they showed that the system works on people it’s never seen before, meaning that organizations could deploy it in real-world settings without needing to train it on users.

“This work combining EEG and EMG feedback enables natural human-robot interactions for a broader set of applications than we’ve been able to do before using only EEG feedback,” says CSAIL director Daniela Rus, who supervised the work. “By including muscle feedback, we can use gestures to command the robot spatially, with much more nuance and specificity.”

PhD candidate Joseph DelPreto was lead author on a paper about the project alongside Rus, former CSAIL postdoctoral associate Andres F. Salazar-Gomez, former CSAIL research scientist Stephanie Gil, research scholar Ramin M. Hasani, and Boston University professor Frank H. Guenther. The paper will be presented at the Robotics: Science and Systems (RSS) conference taking place in Pittsburgh next week.

Intuitive human-robot interaction

In most previous work, systems could generally only recognize brain signals when people trained themselves to “think” in very specific but arbitrary ways and when the system was trained on such signals. For instance, a human operator might have to look at different light displays that correspond to different robot tasks during a training session.

Not surprisingly, such approaches are difficult for people to handle reliably, especially if they work in fields like construction or navigation that already require intense concentration.

Meanwhile, Rus’ team harnessed the power of brain signals called “error-related potentials” (ErrPs), which researchers have found to naturally occur when people notice mistakes. If there’s an ErrP, the system stops so the user can correct it; if not, it carries on.

“What’s great about this approach is that there’s no need to train users to think in a prescribed way,” says DelPreto. “The machine adapts to you, and not the other way around.”

For the project, the team used “Baxter,” a humanoid robot from Rethink Robotics. With human supervision, the robot went from choosing the correct target 70 percent of the time to more than 97 percent of the time.

To create the system the team harnessed the power of electroencephalography (EEG) for brain activity and electromyography (EMG) for muscle activity, putting a series of electrodes on the users’ scalp and forearm.

Both metrics have some individual shortcomings: EEG signals are not always reliably detectable, while EMG signals can sometimes be difficult to map to motions that are any more specific than “move left or right.” Merging the two, however, allows for more robust bio-sensing and makes it possible for the system to work on new users without training.
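As a toy illustration of why merging the two streams helps (this is not the fusion method from the paper), consider a decision rule that ignores an uncertain EEG reading on its own but acts on it when a corrective gesture is also detected. The thresholds below are invented for the example:

```python
EEG_CONFIDENT = 0.8    # invented: ErrP confidence high enough to act on alone
EEG_SUGGESTIVE = 0.4   # invented: ErrP confidence that needs EMG support

def should_halt(p_errp: float, gesture_seen: bool) -> bool:
    """Decide whether to pause the robot for a correction, using both streams."""
    if p_errp >= EEG_CONFIDENT:
        return True                    # brain signal alone is convincing
    if p_errp >= EEG_SUGGESTIVE and gesture_seen:
        return True                    # weaker brain signal backed by a gesture
    return False

print(should_halt(0.5, gesture_seen=False))  # False: EEG alone is inconclusive
print(should_halt(0.5, gesture_seen=True))   # True: muscle activity tips the balance
```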

“By looking at both muscle and brain signals, we can start to pick up on a person’s natural gestures along with their snap decisions about whether something is going wrong,” says DelPreto. “This helps make communicating with a robot more like communicating with another person.”

The team says it could imagine the system one day being useful for the elderly, or for workers with language disorders or limited mobility.

“We’d like to move away from a world where people have to adapt to the constraints of machines,” says Rus. “Approaches like this show that it’s very much possible to develop robotic systems that are a more natural and intuitive extension of us.”

Learn more: Controlling robots with brainwaves and hand gestures

Monkeys’ brain waves offer paraplegics hope

Image: a dummy BrainGate interface (via Wikipedia)

A major step on the road to developing robotic exoskeletons

Monkeys have been trained to control a virtual arm on a computer screen using only their brain waves.

Scientists say the animals were also able to use the arm to sense the texture of different virtual objects.

Writing in the journal Nature, the researchers say their work could speed up the development of wearable exoskeletons.

This technology could help quadriplegic patients not only regain movement but a sense of touch as well.

In the experiments, a pair of rhesus monkeys was trained to control a virtual arm on a screen solely by the electrical activity generated in their brains.

Thanks to feedback from the experimental setup, the monkeys were also able to feel texture differences of objects on the screen.

The researchers involved say that, just as with a normally functioning limb, the monkeys were able to do both at the same time: sending out signals to control the arm while simultaneously receiving electrical feedback to perceive the texture of the objects being touched.

Wireless future

Prof Miguel Nicolelis from the Duke University Centre for Neuroengineering in North Carolina was the senior author of the study. He believes it is a significant step in this field.

“It provides us with the demonstration that we can establish a bi-directional link between the brain and an artificial device without any interference from the subject’s body,” he said.

The researchers trained the monkeys, Mango and Tangerine, to play a video game using a joystick to move the virtual arm and capture three identical targets. Each target was associated with a different vibration of the joystick.

“In terms of rehabilitation of patients that suffer from severe neurological disorders, this is a major step forward,” says Prof Miguel Nicolelis of the Duke University Centre for Neuroengineering.

Multiple electrodes were implanted in the brains of the monkeys and connected to the computer. The joystick was removed, and motor signals from the monkeys’ brains then controlled the arm.

At the same time, signals from the virtual fingers as they touched the targets were transmitted directly back into the brain.

The monkeys had to search for a target with a specific texture to gain a reward of fruit juice. It only took four attempts for one of the monkeys to figure out how to make the system work.

According to Prof Nicolelis, the system has now been developed so the monkeys can control the arm wirelessly.

“We have an interface for 600 channels of brain signal transmission, so we can transmit 600 channels of brain activity wirelessly as if you had 600 cell phones broadcasting this activity.

“For patients this will be very important because there will be no cables whatsoever connecting the patient to any equipment.”

The scientists say that this work represents a major step on the road to developing robotic exoskeletons – wearable technology that would allow patients afflicted by paralysis to regain some movement.

Read more . . .


Brain-to-brain communication over the Internet

Image: EEG, EOG, and EMG (via Wikipedia)

Brain-Computer Interfacing (BCI) is a hot area of research. In the past year alone we’ve looked at a system to allow people to control a robotic arm and another that enables users to control an ASIMO robot with nothing but the power of thought. Such systems rely on the use of an electroencephalograph (EEG) to capture brain waves and translate them into commands to control a machine. Now researchers at the University of Southampton have used a similar technique to show it is possible to transmit thoughts from one person to another.

An experiment conducted by Dr Christopher James from the University’s Institute of Sound and Vibration Research saw one person attached to an EEG amplifier. That person generated a series of binary digits, imagining moving their left arm for zero and their right arm for one. The stream of digits was then transmitted over the Internet to a second person, also attached to an EEG amplifier, whose PC picked up the stream and flashed an LED lamp at two different frequencies: one for zero, the other for one.
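A rough sketch of that signalling chain, with a hypothetical stand-in for the EEG classifier and made-up flash frequencies (the article does not give the actual values), could look like this:

```python
import socket
from typing import List

F_ZERO_HZ = 10   # assumed flash frequency encoding a 0 (not given in the article)
F_ONE_HZ = 15    # assumed flash frequency encoding a 1

def decode_imagined_movement(imagery: str) -> int:
    """Stand-in for the sender's EEG classifier: left-arm imagery -> 0,
    right-arm imagery -> 1."""
    return 0 if imagery == "left" else 1

def send_bits(bits: List[int], host: str, port: int = 5000) -> None:
    """Sender side: push the decoded bit stream to the receiver's PC."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(bytes(bits))

def present_bit(bit: int) -> None:
    """Receiver side: flash the LED at the frequency that encodes this bit."""
    freq = F_ONE_HZ if bit else F_ZERO_HZ
    print(f"flash LED at {freq} Hz")

if __name__ == "__main__":
    imagined = ["left", "right", "right", "left"]           # the sender's imagery
    bits = [decode_imagined_movement(m) for m in imagined]  # -> [0, 1, 1, 0]
    for b in bits:                                          # receiver's display loop
        present_bit(b)
```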

Read more . . .


Real-time Control Of Wheelchair With Brain Waves

Image: brain-control interface (by John Swords, via Flickr)

Japan’s BSI-TOYOTA Collaboration Center has successfully developed a system that controls a wheelchair using brain waves in as little as 125 milliseconds.

BTCC was established in 2007 by RIKEN, an independent Japanese research institution, as a collaborative project with Toyota Motor Corporation, Toyota Central R&D Labs, Inc., and Genesis Research Institute, Inc. Also collaborating in the research were Andrzej Cichocki, Unit Leader, and Kyuwan Choi, Research Scientist, of BTCC’s Noninvasive BMI Unit.

Recently, technological developments in the area of brain-machine interfaces (BMI) have received much attention. Such systems allow elderly or handicapped people to interact with the world through signals from their brains, without having to give voice commands.

Read more . . .


Mind reading – scientists translate brain signals into words

Image: microECoGs

Using the same technology that allowed them to accurately detect the brain signals controlling arm movements that we looked at last year, researchers at the University of Utah have gone one step further, translating brain signals into words. While the previous breakthrough was an important step towards giving amputees or people with severe paralysis a high level of control over a prosthetic limb or computer interface, this new development marks an early step toward letting severely paralyzed people speak with their thoughts.

Nonpenetrating microECoGs

For their study the research team placed grids of tiny microelectrodes over speech centers of the brain of a volunteer with severe epileptic seizures. These nonpenetrating microelectrodes, called microECoGs, are implanted beneath the skull but sit on top of the brain without poking into it. The volunteer already had a craniotomy – temporary partial skull removal – so doctors could place larger, conventional electrodes to locate the source of his seizures and surgically stop them.

Because the microelectrodes do not penetrate brain matter, they are considered safe to place on speech areas of the brain – something that cannot be done with penetrating electrodes that have been used in experimental devices to help paralyzed people control a computer cursor or an artificial arm. Additionally, EEG electrodes used on the skull to record brain waves are too big and record too many brain signals to be used easily for decoding speech signals from paralyzed people.

Each of the two grids, with 16 microECoGs spaced 1 millimeter (about one-25th of an inch) apart, was placed over one of two speech areas of the brain: first, the facial motor cortex, which controls movements of the mouth, lips, tongue and face – basically the muscles involved in speaking; and second, Wernicke’s area, a little-understood part of the human brain tied to language comprehension.

Translating nerve signals into words

Once in place the experimental microelectrodes were used to detect weak electrical signals from the brain generated by a few thousand neurons or nerve cells. During one-hour sessions conducted over four days the scientists recorded brain signals as the patient repeatedly read each of 10 words that might be useful to a paralyzed person: yes, no, hot, cold, hungry, thirsty, hello, goodbye, more and less. Each of the 10 words was repeated from 31 to 96 times, depending on how tired the patient was.

Later, they tried figuring out which brain signals represented each of the 10 words. When they compared any two brain signals – such as those generated when the man said the words “yes” and “no” – they were able to distinguish brain signals for each word 76 percent to 90 percent of the time.
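To make the pairwise comparison concrete, here is an illustrative sketch using synthetic feature vectors in place of the real microECoG recordings. The nearest-centroid classifier, feature dimensions, and noise levels are all assumptions for the example, not details from the study:

```python
import random
from itertools import combinations
from typing import List

def synth_trials(word_id: int, n_trials: int = 40, n_features: int = 32) -> List[List[float]]:
    """Fake repetitions of one word: a fixed mean pattern plus Gaussian noise."""
    rng = random.Random(word_id)
    mean = [rng.uniform(-1, 1) for _ in range(n_features)]
    return [[m + rng.gauss(0, 1.5) for m in mean] for _ in range(n_trials)]

def centroid(trials: List[List[float]]) -> List[float]:
    return [sum(col) / len(col) for col in zip(*trials)]

def sq_dist(a: List[float], b: List[float]) -> float:
    return sum((x - y) ** 2 for x, y in zip(a, b))

def pairwise_accuracy(trials_a, trials_b, train_frac: float = 0.5) -> float:
    """Train a nearest-centroid classifier on half of each word's trials,
    then test on the remaining half."""
    cut_a, cut_b = int(len(trials_a) * train_frac), int(len(trials_b) * train_frac)
    ca, cb = centroid(trials_a[:cut_a]), centroid(trials_b[:cut_b])
    tests = [(t, 0) for t in trials_a[cut_a:]] + [(t, 1) for t in trials_b[cut_b:]]
    correct = sum((sq_dist(t, ca) > sq_dist(t, cb)) == bool(label) for t, label in tests)
    return correct / len(tests)

if __name__ == "__main__":
    words = ["yes", "no", "hot", "cold"]                 # a subset of the 10 words
    data = {w: synth_trials(i) for i, w in enumerate(words)}
    for w1, w2 in combinations(words, 2):
        print(f"{w1} vs {w2}: {pairwise_accuracy(data[w1], data[w2]):.0%}")
```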

Read more . . .
