via Baylor College of Medicine
UMass Amherst, Baylor researchers collaborate on imitating brain’s ‘replay’ ability
Artificial intelligence (AI) experts at the University of Massachusetts Amherst and the Baylor College of Medicine report that they have successfully addressed what they call a “major, long-standing obstacle to increasing AI capabilities” by drawing inspiration from a human brain memory mechanism known as “replay.”
First author and postdoctoral researcher Gido van de Ven and principal investigator Andreas Tolias at Baylor, with Hava Siegelmann at UMass Amherst, write in Nature Communications that they have developed a new method to protect deep neural networks, "surprisingly efficiently," from "catastrophic forgetting": upon learning new lessons, the networks forget what they had learned before.
Siegelmann and colleagues point out that deep neural networks are the main drivers behind recent AI advances, but progress is held back by this forgetting.
They write, “One solution would be to store previously encountered examples and revisit them when learning something new. Although such ‘replay’ or ‘rehearsal’ solves catastrophic forgetting,” they add, “constantly retraining on all previously learned tasks is highly inefficient and the amount of data that would have to be stored becomes unmanageable quickly.”
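As a rough illustration (not the team's code), the storage-based rehearsal the authors describe amounts to keeping a bounded buffer of previously seen examples and mixing them into each new training batch. The sketch below uses reservoir sampling to keep the buffer bounded; that sampling choice, and all names in the code, are assumptions for illustration only:

```python
import random

class ReplayBuffer:
    """Bounded store of past (input, label) examples, kept via reservoir sampling."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0  # total examples observed so far

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Replace a random slot with probability capacity / seen, so every
            # example observed so far is equally likely to remain in the buffer.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.buffer[idx] = example

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

# Rehearsal: stored old-task examples are interleaved with new-task batches.
buf = ReplayBuffer(capacity=100)
for x in range(1000):            # stand-in for a stream of task-1 examples
    buf.add((x, x % 2))
rehearsal_batch = buf.sample(32)  # would be mixed into each new-task minibatch
```

The drawback the authors point to is visible here: the buffer grows (or must discard data) as tasks accumulate, and every stored example must be revisited during later training.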
Unlike AI neural networks, humans are able to continuously accumulate information throughout their lives, building on earlier lessons. An important mechanism in the brain believed to protect memories against forgetting is the replay of neuronal activity patterns representing those memories, they explain.
Siegelmann says the team’s major insight is in “recognizing that replay in the brain does not store data.” Rather, “the brain generates representations of memories at a high, more abstract level with no need to generate detailed memories.” Inspired by this, she and colleagues created an artificial brain-like replay, in which no data is stored. Instead, like the brain, the network generates high-level representations of what it has seen before.
The "abstract generative brain replay" proved extremely efficient, and the team showed that replaying just a few generated representations is sufficient to remember older memories while learning new ones. Generative replay not only prevents catastrophic forgetting and provides a new, more streamlined path for system learning, but also allows the system to generalize learning from one situation to another, they state.
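To make the contrast with storage-based rehearsal concrete, here is a deliberately toy sketch of the generative idea: rather than retaining raw data, the system keeps only a compact learned model of past tasks and samples pseudo-examples from it during later training. The one-dimensional Gaussian "generator" below is a stand-in invented for this illustration; the paper's actual method replays internal network representations, not raw inputs:

```python
import random
import statistics

class ToyGenerator:
    """Toy stand-in for a learned generator: per-class mean/stdev of a feature."""

    def fit(self, examples):
        # examples: list of (feature, label) pairs from an earlier task
        self.stats = {}
        for label in {y for _, y in examples}:
            feats = [x for x, y in examples if y == label]
            self.stats[label] = (statistics.mean(feats),
                                 statistics.pstdev(feats) or 1.0)

    def replay(self, n):
        # Sample pseudo-examples from the learned model; no raw data is stored.
        labels = list(self.stats)
        out = []
        for _ in range(n):
            y = random.choice(labels)
            mu, sigma = self.stats[y]
            out.append((random.gauss(mu, sigma), y))
        return out

# Task 1: two classes with distinct feature distributions (hypothetical data).
task1 = [(random.gauss(0, 1), "cat") for _ in range(200)] + \
        [(random.gauss(5, 1), "dog") for _ in range(200)]
gen = ToyGenerator()
gen.fit(task1)
del task1                 # the raw task-1 data need not be kept
pseudo = gen.replay(32)   # replayed alongside new task-2 data during training
```

The storage cost is now fixed by the size of the generator's parameters rather than growing with the amount of past data, which is the efficiency gain the article highlights.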
For example, “if our network with generative replay first learns to separate cats from dogs, and then to separate bears from foxes, it will also tell cats from foxes without specifically being trained to do so. And notably, the more the system learns, the better it becomes at learning new tasks,” says van de Ven.
He and colleagues write, “We propose a new, brain-inspired variant of replay in which internal or hidden representations are replayed that are generated by the network’s own, context-modulated feedback connections. Our method achieves state-of-the-art performance on challenging continual learning benchmarks without storing data, and it provides a novel model for abstract level replay in the brain.”
Van de Ven says, “Our method makes several interesting predictions about the way replay might contribute to memory consolidation in the brain. We are already running an experiment to test some of these predictions.”