Filtering information for search engines, acting as an opponent in a board game, or recognizing images: artificial intelligence has far outpaced human abilities in certain tasks. Several groups from the Freiburg excellence cluster BrainLinks-BrainTools, led by the neuroscientist PD Dr. Tonio Ball, are showing how ideas from computer science could revolutionize brain research.
In the scientific journal "Human Brain Mapping" they demonstrate how a self-learning algorithm decodes human brain signals measured by electroencephalogram (EEG): not only executed movements, but also hand and foot movements that were merely imagined, as well as the imagined rotation of objects. Even though the algorithm was given no signal characteristics in advance, it works as quickly and as precisely as traditional systems, which are hand-crafted to solve specific tasks on the basis of predetermined brain signal characteristics and are therefore not suited to every situation. The demand for such versatile interfaces between man and machine is huge: at the University Hospital Freiburg, for instance, they could be used for the early detection of epileptic seizures, to improve communication options for severely paralyzed patients, or for automated neurological diagnosis.
"Our software is based on brain-inspired models that have proven to be most helpful to decode various natural signals such as phonetic sounds," says computer scientist Robin Tibor Schirrmeister. The researcher is using it to extend the methods the team has applied to decoding EEG data: so-called artificial neural networks are the heart of the current project at BrainLinks-BrainTools. "The great thing about the program is that we needn't predetermine any characteristics. The information is processed layer by layer, that is, in multiple steps with the help of non-linear functions. The system learns to recognize and differentiate between the signal patterns of various movements as it goes along," explains Schirrmeister. The model is inspired by the connections between nerve cells in the human body, in which electrical signals from synapses are passed along cellular projections to the cell body and onward. "Such theories have been in circulation for decades, but it wasn't until the emergence of today's computer processing power that the model has become feasible," comments Schirrmeister.
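The layer-by-layer processing Schirrmeister describes can be illustrated with a toy network. This is a minimal sketch, not the team's actual architecture: the layer sizes, weights, and the "EEG sample" values below are all made up for illustration, and the random weights stand in for values that a training procedure would learn.

```python
import random

random.seed(0)

def relu(x):
    # Non-linear function applied at each step: keeps positive values, zeros the rest.
    return [max(0.0, v) for v in x]

def dense(x, weights, biases):
    # One layer: each output unit is a weighted sum of all inputs plus a bias.
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, biases)]

def make_layer(n_in, n_out):
    # Random weights as placeholders for learned parameters (illustrative only).
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    return weights, [0.0] * n_out

def forward(signal, layers):
    # Process the input layer by layer, applying the non-linearity after each layer.
    x = signal
    for weights, biases in layers:
        x = relu(dense(x, weights, biases))
    return x

# A toy "EEG sample": 8 raw amplitude values (hypothetical).
sample = [0.2, -0.5, 0.1, 0.9, -0.3, 0.4, -0.1, 0.6]
layers = [make_layer(8, 6), make_layer(6, 4), make_layer(4, 2)]
scores = forward(sample, layers)
print(len(scores))  # two output scores, e.g. one per movement class
```

No features are specified by hand here: the network maps raw input values to class scores, and which patterns matter would emerge entirely from training the weights.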
As a rule, the model's precision improves with a larger number of processing layers; up to 31 layers were used in the study, an approach known as "deep learning". Until now, it had been difficult to interpret the network's inner workings after the learning process was complete: all algorithmic processes take place in the background and are invisible. That is why the researchers extended the software to produce visualization maps from which the decoding decisions can be understood. New datasets can be inserted into the system at any time. "Unlike the old method, we are now able to work directly with the raw signals that the EEG records from the brain. Our system is as precise as, if not better than, the old one," says principal investigator Tonio Ball, summarizing the study's contribution. The technology's potential has yet to be exhausted, and together with his team the researcher would like to develop it further: "Our vision for the future includes self-learning algorithms that can reliably and quickly recognize a user's various intentions from their brain signals. In addition, such algorithms could assist neurological diagnoses."
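One generic way to produce the kind of interpretability map described above is occlusion: zero out one piece of the input at a time and record how much the network's output changes. The article does not say which visualization method the team used, so the sketch below is only an illustration of the idea, with a made-up toy network and input.

```python
import random

random.seed(1)

def relu(x):
    return [max(0.0, v) for v in x]

def forward(signal, layers):
    # Layer-by-layer pass through small fully connected layers with a non-linearity.
    x = signal
    for weights, biases in layers:
        x = relu([sum(w * v for w, v in zip(row, x)) + b
                  for row, b in zip(weights, biases)])
    return x

def make_layer(n_in, n_out):
    # Random placeholder weights (a trained network would have learned these).
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

def occlusion_map(signal, layers):
    # Relevance of each input point: how much the output score changes
    # when that point is zeroed out ("occluded").
    baseline = forward(signal, layers)[0]
    relevance = []
    for i in range(len(signal)):
        occluded = list(signal)
        occluded[i] = 0.0
        relevance.append(abs(forward(occluded, layers)[0] - baseline))
    return relevance

# Toy raw signal of 6 values (hypothetical) and a 6 -> 4 -> 1 network.
sample = [0.3, -0.2, 0.8, -0.5, 0.1, 0.6]
layers = [make_layer(6, 4), make_layer(4, 1)]
rel = occlusion_map(sample, layers)
print(len(rel))  # one relevance value per raw input point
```

Plotting such relevance values over the electrodes and time points of a real EEG recording would yield a map showing which parts of the raw signal drove a given decoding decision.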