Using a machine-learning system known as a deep neural network, MIT researchers have created the first model that can replicate human performance on auditory tasks such as identifying a musical genre.
This model, which consists of many layers of information-processing units that can be trained on huge volumes of data to perform specific tasks, was used by the researchers to shed light on how the human brain may be performing the same tasks.
“What these models give us, for the first time, is machine systems that can perform sensory tasks that matter to humans and that do so at human levels,” says Josh McDermott, the Frederick A. and Carole J. Middleton Assistant Professor of Neuroscience in the Department of Brain and Cognitive Sciences at MIT and the senior author of the study. “Historically, this type of sensory processing has been difficult to understand, in part because we haven’t really had a very clear theoretical foundation and a good way to develop models of what might be going on.”
The study, which appears in the April 19 issue of Neuron, also offers evidence that the human auditory cortex is organized hierarchically, much like the visual cortex. In this type of arrangement, sensory information passes through successive stages of processing, with basic information processed early and more advanced features, such as word meaning, extracted at later stages.
MIT graduate student Alexander Kell and Stanford University Assistant Professor Daniel Yamins are the paper’s lead authors. Other authors are former MIT visiting student Erica Shook and former MIT postdoc Sam Norman-Haignere.
Modeling the brain
When deep neural networks were first developed in the 1980s, neuroscientists hoped that such systems could be used to model the human brain. However, computers from that era were not powerful enough to build models large enough to perform real-world tasks such as object recognition or speech recognition.
Over the past five years, advances in computing power and neural network technology have made it possible to use neural networks to perform difficult real-world tasks, and they have become the standard approach in many engineering applications. In parallel, some neuroscientists have revisited the possibility that these systems might be used to model the human brain.
“That’s been an exciting opportunity for neuroscience, in that we can actually create systems that can do some of the things people can do, and we can then interrogate the models and compare them to the brain,” Kell says.
The MIT researchers trained their neural network to perform two auditory tasks, one involving speech and the other involving music. For the speech task, the researchers gave the model thousands of two-second recordings of a person talking. The task was to identify the word in the middle of the clip. For the music task, the model was asked to identify the genre of a two-second clip of music. Each clip also included background noise to make the task more realistic (and more difficult).
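A rough sketch of how such noisy training examples might be assembled follows; the sample rate, the SNR value, and the placeholder signals are illustrative assumptions, not details from the study.

```python
import numpy as np

SAMPLE_RATE = 16000                       # assumed; the study's rate may differ
CLIP_SAMPLES = SAMPLE_RATE * 2            # two-second clips, as described above

def mix_at_snr(clean, noise, snr_db):
    """Mix a clean clip with background noise at a given SNR in dB."""
    clean, noise = clean[:CLIP_SAMPLES], noise[:CLIP_SAMPLES]
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale the noise so the mixture hits the requested signal-to-noise ratio.
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise

rng = np.random.default_rng(0)
speech_clip = rng.standard_normal(CLIP_SAMPLES)   # placeholder for a real recording
background = rng.standard_normal(CLIP_SAMPLES)    # placeholder background noise
noisy_input = mix_at_snr(speech_clip, background, snr_db=3.0)  # paired with a word label
```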
After many thousands of examples, the model learned to perform the task just as accurately as a human listener.
“The idea is over time the model gets better and better at the task,” Kell says. “The hope is that it’s learning something general, so if you present a new sound that the model has never heard before, it will do well, and in practice that is often the case.”
The model also tended to make its mistakes on the same clips that gave human listeners the most trouble.
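That overlap in error patterns can be quantified by correlating per-clip error rates between human listeners and the model. The sketch below illustrates the idea on fabricated numbers; the study's actual behavioral comparison may have been done differently.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

# Fabricated per-clip error rates for 100 clips, for humans and the model.
human_error = rng.uniform(0, 1, size=100)
model_error = np.clip(human_error + 0.2 * rng.standard_normal(100), 0, 1)

# A high rank correlation means the model stumbles on the clips humans do.
rho, p = spearmanr(human_error, model_error)
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")
```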
The processing units that make up a neural network can be combined in a variety of ways, forming different architectures that affect the performance of the model.
The MIT team discovered that the best model for these two tasks was one that divided the processing into two sets of stages. The first set of stages was shared between the tasks; after that point, the network split into two branches for further analysis, one for the speech task and one for the musical genre task.
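A minimal sketch of that branched layout, assuming a PyTorch implementation with made-up layer counts and class sizes (the actual network was deeper, and its architecture was selected by searching over many candidates): a shared convolutional trunk feeds two task-specific heads.

```python
import torch
import torch.nn as nn

class BranchedAudioNet(nn.Module):
    """Shared early stages, then separate branches for the two tasks.

    Layer sizes and class counts are illustrative placeholders,
    not the values used in the study.
    """
    def __init__(self, n_words: int = 500, n_genres: int = 40):
        super().__init__()
        # Shared trunk: early processing common to both tasks.
        self.shared = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
        )
        def branch(n_out: int) -> nn.Sequential:
            # Task-specific later stages.
            return nn.Sequential(
                nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(128, n_out),
            )
        self.speech_head = branch(n_words)   # word-recognition branch
        self.genre_head = branch(n_genres)   # musical-genre branch

    def forward(self, x: torch.Tensor):
        # x: a time-frequency representation of the clip (front end assumed).
        shared = self.shared(x)
        return self.speech_head(shared), self.genre_head(shared)

# A batch of two inputs: 1 channel, 128 frequency bins, 200 time frames (all assumed).
word_logits, genre_logits = BranchedAudioNet()(torch.randn(2, 1, 128, 200))
```

Sharing the early stages forces the network to learn general-purpose acoustic features before specializing, which is part of what made this architecture a natural probe for hierarchy in the brain.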
Evidence for hierarchy
The researchers then used their model to explore a longstanding question about the structure of the auditory cortex: whether it is organized hierarchically.
In a hierarchical system, a series of brain regions performs different types of computation on sensory information as it flows through the system. It has been well documented that the visual cortex has this type of organization. Earlier regions, known as the primary visual cortex, respond to simple features such as color or orientation. Later stages enable more complex tasks such as object recognition.
However, it has been difficult to test whether this type of organization also exists in the auditory cortex, in part because there haven’t been good models that can replicate human auditory behavior.
“We thought that if we could construct a model that could do some of the same things that people do, we might then be able to compare different stages of the model to different parts of the brain and get some evidence for whether those parts of the brain might be hierarchically organized,” McDermott says.
The researchers found that in their model, basic features of sound such as frequency are easier to extract in the early stages. As information is processed and moves farther along the network, it becomes harder to extract frequency but easier to extract higher-level information such as words.
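One way to make "easier to extract" concrete, as a sketch of the general linear-probing idea rather than the paper's exact analysis, is to fit a simple classifier to each stage's activations and compare held-out decoding accuracy across stages; all data below are random stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-ins for activations from three stages of a trained network,
# one row per sound (200 sounds; feature counts are arbitrary).
stage_activations = {
    "early": rng.standard_normal((200, 64)),
    "middle": rng.standard_normal((200, 128)),
    "late": rng.standard_normal((200, 256)),
}
labels = rng.integers(0, 10, size=200)  # e.g., a frequency bin, or a word class

# Higher cross-validated accuracy at a stage means the labeled property
# is more linearly decodable (easier to extract) at that stage.
for name, acts in stage_activations.items():
    score = cross_val_score(LogisticRegression(max_iter=1000), acts, labels, cv=5).mean()
    print(f"{name}: decoding accuracy = {score:.2f}")
```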
To see whether the model's stages might mirror how the human auditory cortex processes sound, the researchers used functional magnetic resonance imaging (fMRI) to measure activity in different regions of the auditory cortex as the brain processed real-world sounds. They then compared those brain responses to the model's responses to the same sounds.
They found that the middle stages of the model corresponded best to activity in the primary auditory cortex, and later stages corresponded best to activity outside of the primary cortex. This provides evidence that the auditory cortex might be arranged in a hierarchical fashion, similar to the visual cortex, the researchers say.
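In spirit, that comparison scores each model stage by how well its activations predict a brain region's responses to held-out sounds. The sketch below uses ridge regression on random stand-in data; the sound and voxel counts, and the regression method, are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)

n_sounds, n_voxels = 160, 50           # assumed: one response per sound per voxel
layer_acts = {                         # stand-ins for two model stages
    "middle": rng.standard_normal((n_sounds, 128)),
    "late": rng.standard_normal((n_sounds, 256)),
}
voxel_responses = rng.standard_normal((n_sounds, n_voxels))

# Predict each voxel's responses from a stage's activations and measure
# correlation on held-out sounds; the stage that predicts a region best
# is the one said to "correspond" to that region.
for name, acts in layer_acts.items():
    rs = []
    for v in range(n_voxels):
        pred = cross_val_predict(RidgeCV(), acts, voxel_responses[:, v], cv=5)
        rs.append(np.corrcoef(pred, voxel_responses[:, v])[0, 1])
    print(f"{name}: median held-out correlation = {np.median(rs):.2f}")
```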
“What we see very clearly is a distinction between primary auditory cortex and everything else,” McDermott says.
Alex Huth, an assistant professor of neuroscience and computer science at the University of Texas at Austin, says the paper is exciting in part because it offers convincing evidence that the early part of the auditory cortex performs generic sound processing while the higher auditory cortex performs more specialized tasks.
“This is one of the ongoing mysteries in auditory neuroscience: What distinguishes the early auditory cortex from the higher auditory cortex? This is the first paper I’ve seen that has a computational hypothesis for that,” says Huth, who was not involved in the research.