North Carolina State University researchers have developed a technique that reduces training time for deep learning networks by more than 60 percent without sacrificing accuracy, accelerating the development of new artificial intelligence (AI) applications.
“Deep learning networks are at the heart of AI applications used in everything from self-driving cars to computer vision technologies,” says Xipeng Shen, a professor of computer science at NC State and co-author of a paper on the work.
“One of the biggest challenges facing the development of new AI tools is the amount of time and computing power it takes to train deep learning networks to identify and respond to the data patterns that are relevant to their applications. We’ve come up with a way to expedite that process, which we call Adaptive Deep Reuse. We have demonstrated that it can reduce training times by up to 69 percent without accuracy loss.”
Training a deep learning network involves breaking a data sample into chunks of consecutive data points. Think of a network designed to determine whether there is a pedestrian in a given image. The process starts by dividing a digital image into blocks of pixels that are adjacent to each other. Each chunk of data is run through a set of computational filters. The results are then run through a second set of filters. This continues iteratively until all of the data have been run through all of the filters, allowing the network to reach a conclusion about the data sample.
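In code, this chunk-and-filter pipeline might look like the following minimal sketch, which stands in for real convolutional layers with toy weight matrices (all names, sizes, and values here are illustrative, not drawn from the paper):

```python
import numpy as np

def split_into_chunks(image, chunk_size):
    """Split a 2D image into non-overlapping blocks of adjacent pixels."""
    h, w = image.shape
    chunks = []
    for r in range(0, h - chunk_size + 1, chunk_size):
        for c in range(0, w - chunk_size + 1, chunk_size):
            chunks.append(image[r:r + chunk_size, c:c + chunk_size].ravel())
    return np.array(chunks)  # shape: (num_chunks, chunk_size**2)

rng = np.random.default_rng(0)
image = rng.random((32, 32))          # toy stand-in for one data sample
chunks = split_into_chunks(image, 4)  # blocks of adjacent pixels

# Each "set of computational filters" is modeled as a weight matrix;
# the results of one filter pass feed the next, layer after layer.
layer1 = rng.standard_normal((16, 8))  # 16 inputs (4x4 chunk) -> 8 outputs
layer2 = rng.standard_normal((8, 4))

out1 = np.maximum(chunks @ layer1, 0)  # filter pass 1 (with ReLU activation)
out2 = np.maximum(out1 @ layer2, 0)    # filter pass 2
print(out2.shape)                      # one small result vector per chunk
```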
One complete pass of this process over every data sample in a data set is called an epoch. To fine-tune a deep learning network, the network will likely run through the same data set for hundreds of epochs. And many data sets consist of anywhere from tens of thousands to millions of data samples. Lots of iterations of lots of filters being applied to lots of data means that training a deep learning network takes a lot of computing power.
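A skeleton of that outer loop, with hypothetical but realistic numbers (CIFAR-10, for example, has 50,000 training samples), makes the scale of the computation concrete:

```python
# Hypothetical training loop: one epoch = one full pass over the data set.
num_epochs = 300          # fine-tuning often repeats the data set hundreds of times
data_set = range(50_000)  # e.g., CIFAR-10 has 50,000 training samples

for epoch in range(num_epochs):
    for sample in data_set:
        pass  # run every chunk of this sample through every filter layer

# Total filter applications scale as epochs x samples x chunks x layers,
# which is why training demands so much computing power.
```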
The breakthrough moment for Shen’s research team came when it realized that many of the data chunks in a data set are similar to each other. For example, a patch of blue sky in one image may be similar to a patch of blue sky elsewhere in the same image or to a patch of sky in another image in the same data set.
By recognizing these similar data chunks, a deep learning network could apply filters to one chunk of data and apply the results to all of the similar chunks of data in the same set, saving a lot of computing power.
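Assuming the similar chunks have already been grouped (how to find such groups efficiently is the subject of the next quote), the reuse step might be sketched like this:

```python
import numpy as np

def filter_with_reuse(chunks, groups, weights):
    """Apply a filter layer once per group of similar chunks, sharing the result."""
    outputs = np.empty((len(chunks), weights.shape[1]))
    for group in groups:                   # e.g., [[0, 3, 5], [1], [2, 4]]
        representative = chunks[group[0]]  # compute for one member of the group
        result = np.maximum(representative @ weights, 0)
        outputs[group] = result            # reuse the result for every similar chunk
    return outputs

rng = np.random.default_rng(0)
chunks = rng.random((6, 16))
weights = rng.standard_normal((16, 8))
groups = [[0, 3, 5], [1], [2, 4]]  # pretend a similarity search found these groups
outputs = filter_with_reuse(chunks, groups, weights)  # 3 matmuls instead of 6
```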
“We were not only able to demonstrate that these similarities exist, but that we can find these similarities for intermediate results at every step of the process,” says Lin Ning, a Ph.D. student at NC State and lead author of the paper. “And we were able to maximize this efficiency by applying a method called locality sensitive hashing.”
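The paper's exact hashing variant isn't spelled out here, but a common locality-sensitive hashing scheme for this kind of vector data is random projection, in which chunk vectors that land on the same side of a set of random hyperplanes share a hash bucket. A minimal sketch:

```python
import numpy as np
from collections import defaultdict

def lsh_groups(chunks, num_bits, rng):
    """Group chunk vectors by a random-projection LSH signature.

    Chunks whose vectors point in similar directions tend to fall on the
    same side of each random hyperplane, so they hash to the same bucket.
    """
    planes = rng.standard_normal((chunks.shape[1], num_bits))
    signs = (chunks @ planes) > 0  # (num_chunks, num_bits) boolean signature
    buckets = defaultdict(list)
    for i, bits in enumerate(signs):
        buckets[bits.tobytes()].append(i)
    return list(buckets.values())

rng = np.random.default_rng(0)
chunks = rng.random((1000, 16))  # 1,000 chunk vectors of length 16
groups = lsh_groups(chunks, num_bits=8, rng=rng)
# Fewer hash bits -> coarser buckets -> more reuse;
# more bits -> a stricter similarity test.
```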
But this raises two additional questions. How large should each chunk of data be? And what threshold do data chunks need to meet in order to be deemed “similar”?
The researchers found that the most efficient approach was to begin by looking at relatively large chunks of data using a relatively low threshold for determining similarity. In subsequent epochs, the data chunks get smaller and the similarity threshold more stringent, improving the deep learning network’s accuracy. The researchers designed an adaptive algorithm that automatically implements these incremental changes during the training process.
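The actual algorithm adapts these settings automatically as training proceeds; a fixed, hand-written schedule like the one below is only an illustration of the coarse-to-fine strategy, with made-up numbers:

```python
def reuse_schedule(epoch, num_epochs):
    """Illustrative schedule: start coarse and permissive, end fine and strict.

    Early epochs use large chunks and few hash bits (a loose similarity
    test, maximizing reuse); later epochs shrink the chunks and add bits
    (a stricter test, protecting final accuracy).
    """
    progress = epoch / max(num_epochs - 1, 1)     # 0.0 -> 1.0 over training
    chunk_size = max(2, int(8 * (1 - progress)))  # e.g., 8x8 down to 2x2 blocks
    num_bits = 4 + int(12 * progress)             # e.g., 4 up to 16 hash bits
    return chunk_size, num_bits

for epoch in (0, 100, 299):
    print(epoch, reuse_schedule(epoch, 300))
```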
To evaluate their new technique, the researchers tested it using three deep learning networks and data sets that are widely used as testbeds by deep learning researchers: CifarNet using CIFAR-10; AlexNet using ImageNet; and VGG-19 using ImageNet.
Adaptive Deep Reuse cut training time for AlexNet by 69 percent; for VGG-19 by 68 percent; and for CifarNet by 63 percent – all without accuracy loss.
“This demonstrates that the technique drastically reduces training times,” says Hui Guan, a Ph.D. student at NC State and co-author of the paper. “It also indicates that the larger the network, the more Adaptive Deep Reuse is able to reduce training times – since AlexNet and VGG-19 are both substantially larger than CifarNet.”
“We think Adaptive Deep Reuse is a valuable tool, and look forward to working with industry and research partners to demonstrate how it can be used to advance AI,” Shen says.