In a collaborative study, researchers at Tokyo Institute of Technology have developed a new technique to decode human motor intention from electroencephalography (EEG).
The technique is motivated by the well-documented ability of the brain to predict the sensory outcomes of self-generated and imagined actions using so-called forward models. The method enabled, for the first time, nearly 90% single-trial decoding accuracy across tested subjects, within 96 ms of stimulation, with zero user training and with no additional cognitive load on the users.
The ultimate dream of brain-computer interface (BCI) research is to develop an efficient connection between machines and the human brain, such that machines can be operated at will; for example, enabling an amputee to control a robot arm attached to them, just by thinking, as if it were their own arm. A major challenge for such a task is deciphering a user's movement intention from brain activity while minimizing the effort demanded of the user. While a plethora of methods have been suggested over the last two decades (1-2), they all demand a large effort on the part of the human user: they either require extensive user training, work well for only a subset of users, or rely on a conspicuous stimulus, inducing additional attentional and cognitive loads on the users. In this study, researchers from Tokyo Institute of Technology (Tokyo Tech), the Centre national de la recherche scientifique (CNRS, France), AIST, and Osaka University propose a new movement-intention decoding philosophy and technique that overcomes all these issues while also providing markedly better decoding performance.
The fundamental difference between the previous methods and the new proposal lies in what is decoded. All previous methods decode which movement a user intends or imagines, either directly (as in so-called active BCI systems) or indirectly, by decoding what the user is attending to (as in reactive BCI systems). Here, the researchers propose to pair EEG with a subliminal sensory stimulator and to decode not which movement a user intends or imagines, but whether the intended movement matches the sensory feedback delivered to the user by the stimulator. Their proposal is motivated by the multitude of studies on so-called forward models in the brain: the neural circuitry implicated in predicting the sensory outcomes of self-generated movements (3). The sensory prediction errors, between the forward-model predictions and the actual sensory signals, are known to be fundamental to our sensorimotor abilities: haptic perception (4), motor control (5), motor learning (6), and even interpersonal interactions (7-8) and the cognition of self (9). The researchers therefore hypothesized that prediction errors would have a large signature in EEG, and that perturbing the prediction errors (using an external sensory stimulator) would be a promising way to decode movement intentions.
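The key logical step, decoding a match/mismatch rather than the movement itself, can be illustrated with a minimal sketch (the function name and string encoding below are illustrative, not from the paper): because the system itself chooses the stimulation direction, detecting a prediction error is enough to recover the intended direction.

```python
def infer_intended_direction(stim_direction: str, prediction_error: bool) -> str:
    """Recover the intended turn direction from a match/mismatch decode.

    stim_direction: 'left' or 'right' -- the direction of the (subliminal)
        sensory stimulation, which is known to the system.
    prediction_error: True if the EEG decoder detected a sensory prediction
        error, i.e., the stimulation contradicted the user's intention.
    """
    if not prediction_error:  # feedback matched the intention
        return stim_direction
    return "left" if stim_direction == "right" else "right"

# A mismatch during rightward stimulation implies the user intended left.
print(infer_intended_direction("right", prediction_error=True))   # left
print(infer_intended_direction("right", prediction_error=False))  # right
```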
This proposal was tested in a binary simulated wheelchair task, in which users thought of turning their wheelchair either left or right. The researchers stimulated the user's vestibular system (this being the dominant sensory feedback during turning) toward either the left or the right, subliminally, using a galvanic vestibular stimulator (GVS). They then decoded the presence of prediction errors (i.e., whether or not the stimulation direction matched the direction the user imagined) and consequently, since the direction of stimulation is known, the direction the user imagined. This procedure yielded excellent single-trial decoding accuracy (87.2% median) in all tested subjects, within 96 ms of stimulation. These results were obtained with zero user training and with no additional cognitive load on the users, as the stimulation was subliminal.
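The article does not describe the decoding pipeline itself, but a minimal single-trial sketch under stated assumptions can make the procedure concrete. The snippet below assumes a shrinkage-regularized linear discriminant analysis (LDA) classifier, a common choice for single-trial EEG, applied to epochs covering the first 96 ms after GVS onset; the channel count, sampling rate, and synthetic data are purely illustrative placeholders.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Illustrative dimensions: 140 trials (as in Figure 2), 32 EEG channels, and
# samples spanning 0-96 ms after GVS onset at an assumed 500 Hz (~48 samples).
rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 140, 32, 48

# Placeholder for real epoched EEG (trials x channels x samples); noise here.
epochs = rng.standard_normal((n_trials, n_channels, n_samples))
# Labels: 1 = stimulation mismatched the intention (prediction error), 0 = match.
labels = rng.integers(0, 2, size=n_trials)

# Flatten each epoch into a feature vector and classify match vs. mismatch.
features = epochs.reshape(n_trials, -1)
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, features, labels, cv=5)
print(f"Cross-validated match/mismatch accuracy: {scores.mean():.2f}")
```

With real epochs, the decoded match/mismatch label combined with the known stimulation direction yields the intended direction, as in the earlier sketch.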
Figure 2. Decoding performance summary: The across-subject median decoding performance when decoding the direction in which a subject wants to turn (i.e., the cue direction), as attempted by previous methods, is shown in red and pink, while decoding using the newly proposed method is shown in black. The data at each time point represent the decoding performance using data from the period between a reference point ('cue' for the red data, 'GVS start' for the pink and black data) and that time point. The box-plot boundaries represent the 25th and 75th percentiles, while the whiskers represent the data range across subjects. The inset histogram shows the subject-ensemble decoding performance in the 140 (20 trials × 7 subjects) test trials, with each subject's data shown in a different color.
This proposal promises to radically change how movement intention is decoded, for several reasons. Primarily, the method promises better decoding accuracy with no user training and without inducing additional cognitive load on the users. Furthermore, the fact that the decoding can be done within 100 ms of stimulation highlights its suitability for real-time decoding. Finally, this method is distinct from methods utilizing event-related potentials (ERP), event-related desynchronization (ERD), and error-related negativity (ERN), meaning it could be used in parallel with current methods to improve their accuracy.