Jan 17, 2018

ORNL’s Steven Young (left) and Travis Johnston used Titan to prove the design and training of deep learning networks could be greatly accelerated with a capable computing system.

A team of researchers from the Department of Energy’s Oak Ridge National Laboratory has married artificial intelligence and high-performance computing to achieve a peak speed of 20 petaflops in the generation and training of deep learning networks on the laboratory’s Titan supercomputer.

Deep learning is a burgeoning field of artificial intelligence that uses networks modeled after the human brain to “learn” how to distinguish features and patterns in vast datasets. Such networks hold great promise in the realization of numerous technologies, from self-driving cars to intelligent robots.

Because deep learning can make sense of massive amounts of data, researchers across the scientific spectrum are eager to refine the technique and apply it to some of today’s most challenging science problems. One such effort is ORNL’s Advances in Machine Learning to Improve Scientific Discovery at Exascale and Beyond (ASCEND) project, which aims to use deep learning to make sense of the massive datasets produced by the world’s most sophisticated scientific experiments, such as those located at ORNL.

Analysis of such datasets generally requires that existing neural networks be modified, or novel networks designed, and then “trained” so that they know precisely what to look for and can produce valid results.
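
In framework terms, that “training” is an optimization loop: show the network labeled examples, measure how wrong it is, and nudge its weights to reduce the error. A minimal sketch, assuming PyTorch; the data shapes and hyperparameters below are invented for illustration, not taken from the ORNL work:

```python
# Minimal sketch of training a network, assuming PyTorch and toy data.
import torch
import torch.nn as nn

# Toy dataset: 256 samples of 32 features, 10 classes (invented shapes).
x = torch.randn(256, 32)
y = torch.randint(0, 10, (256,))

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    optimizer.zero_grad()        # clear gradients from the previous step
    loss = loss_fn(model(x), y)  # how wrong is the network right now?
    loss.backward()              # compute gradients of loss w.r.t. weights
    optimizer.step()             # nudge the weights to reduce the loss
```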

This is a time-consuming and difficult task, but one that an ORNL team led by Robert Patton and including Steven Young and Travis Johnston recently demonstrated can be dramatically expedited with a capable computing system such as ORNL’s Titan, the nation’s fastest supercomputer for science.

To efficiently design neural networks capable of tackling scientific datasets and expediting breakthroughs, Patton’s team developed two codes for evolving (MENNDL) and fine-tuning (RAvENNA) deep neural network architectures.
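
The article does not reproduce MENNDL’s actual algorithm, but the general recipe behind evolutionary architecture search can be sketched: keep a population of candidate architectures, train and score each one briefly, then breed the best performers. In the sketch below, every name and parameter is an illustrative assumption, and `train_and_score` is a hypothetical callback standing in for “train briefly, return validation accuracy”:

```python
# Schematic evolutionary search over network hyperparameters.
# A generic sketch of the idea, not MENNDL or RAvENNA itself.
import random

def random_architecture():
    # A candidate is just a dictionary of hyperparameters here.
    return {"layers": random.randint(2, 8),
            "width": random.choice([32, 64, 128, 256]),
            "lr": 10 ** random.uniform(-4, -1)}

def mutate(arch):
    child = dict(arch)
    key = random.choice(list(child))
    if key == "layers":
        child["layers"] = max(2, child["layers"] + random.choice([-1, 1]))
    elif key == "width":
        child["width"] = random.choice([32, 64, 128, 256])
    else:
        child["lr"] = 10 ** random.uniform(-4, -1)
    return child

def evolve(train_and_score, generations=10, population=20, keep=5):
    pop = [random_architecture() for _ in range(population)]
    for _ in range(generations):
        ranked = sorted(pop, key=train_and_score, reverse=True)
        elite = ranked[:keep]                         # best candidates survive
        pop = elite + [mutate(random.choice(elite))   # rest are mutated elites
                       for _ in range(population - keep)]
    return max(pop, key=train_and_score)
```

On a machine like Titan, each candidate’s short training run can be farmed out to its own GPU node, which is what allows thousands of candidates to be evaluated in parallel.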

Both codes can generate and train as many as 18,600 neural networks simultaneously. Peak performance can be estimated by randomly sampling, and then carefully profiling, several hundred of these independently trained networks.

Both codes achieved a peak performance of 20 petaflops, or 20 thousand trillion calculations per second, on Titan (just under half of Titan’s total single-precision peak). In practical terms, that translates to training 40,000 to 50,000 networks per hour.
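
That sampling-based estimate reduces to simple arithmetic: average the profiled per-network rate and scale by the number of networks in flight. A back-of-envelope sketch, with invented sample values chosen only to show the calculation:

```python
# Back-of-envelope aggregate-throughput estimate, mirroring the sampling
# approach described above. The profiled rates here are invented.
sampled_rates_tflops = [1.05, 1.12, 0.98, 1.08]  # per-network sustained rates
networks_in_flight = 18_600                      # networks trained at once

mean_rate = sum(sampled_rates_tflops) / len(sampled_rates_tflops)
aggregate_pflops = mean_rate * networks_in_flight / 1000  # teraflops -> petaflops

print(f"~{aggregate_pflops:.1f} petaflops aggregate")  # prints ~19.7 petaflops
```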

“The real measure of success in the deep learning community is time-to-solution,” said Johnston. “And with a machine like Titan we are able to train an unparalleled number of highly accurate networks.”

Titan is a Cray hybrid system, meaning that it uses both traditional CPUs and graphics processing units (GPUs) to tackle complex calculations for big science problems efficiently; the GPUs also happen to be the processor of choice for training deep learning networks.
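
In code, modern frameworks make that CPU/GPU split nearly transparent. A minimal sketch, assuming PyTorch, that places a model and its data on a GPU when one is available and falls back to the CPU otherwise:

```python
# Sketch of CPU/GPU placement in PyTorch; falls back to CPU if no GPU exists.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(32, 10).to(device)    # weights live on the GPU if present
batch = torch.randn(64, 32).to(device)  # data must be on the same device
logits = model(batch)                   # forward pass runs on that device
```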

The team’s work demonstrates that with the right high-performance computing system researchers can efficiently train large numbers of networks, which can then be used to help them tackle today’s increasingly data-heavy experiments and simulations.

This efficient design of deep neural networks will enable researchers to deploy highly accurate, custom-designed models, saving both time and money by freeing the scientist from the task of designing a network from the ground up.

And because the Oak Ridge Leadership Computing Facility’s (OLCF’s) next leadership computing system, Summit, features a deep-learning-friendly architecture with enhanced GPUs and complementary Tensor Cores, the team is confident both codes will only get faster.

“Out of the box, without tuning to Summit’s unique architecture, we are expecting an increase in performance up to 50 times,” said Johnston.

With that sort of network training capability, Summit could be indispensable to researchers across the scientific spectrum looking to deep learning to help them tackle some of science’s most immense challenges.

Patton’s team is not waiting for the improved hardware to start tackling current scientific data challenges; they have already deployed their codes to assist domain scientists at the Department of Energy’s Fermilab in Batavia, Illinois.

Researchers at Fermilab used MENNDL to better understand how neutrinos interact with ordinary matter by producing a classification network to support their Main Injector Experiment for ν-A (MINERvA), a neutrino scattering experiment. The task, known as vertex reconstruction, required a network to analyze images and precisely identify the location where neutrinos interact with one of many targets—a task akin to finding the aerial source of a starburst of fireworks.
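
Framed this way, vertex reconstruction is an image-classification task: a detector image goes in, and a score for each candidate interaction region comes out. A heavily simplified sketch of such a classifier, assuming PyTorch; the input size (1×64×64) and the 11 output classes are illustrative guesses, not MINERvA’s actual detector geometry:

```python
# Toy convolutional classifier for "which segment did the neutrino hit?".
# Image size and class count are illustrative assumptions.
import torch
import torch.nn as nn

class VertexClassifier(nn.Module):
    def __init__(self, num_segments: int = 11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_segments)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One fake detector image in, per-segment scores out.
scores = VertexClassifier()(torch.randn(1, 1, 64, 64))
```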

In only 24 hours, MENNDL produced optimized networks that outperformed any previously handcrafted network, an achievement that could easily have taken scientists months to accomplish. To identify these high-performing networks, MENNDL evaluated approximately 500,000 neural networks, training them on a dataset of 800,000 images of neutrino events while steadily using 18,000 of Titan’s nodes.

“You need something like MENNDL to explore this effectively infinite space of possible networks, but you want to do it efficiently,” Young said. “What Titan does is bring the time to solution down to something practical.”

And with Summit set to come online this year, the future of deep learning in big science looks bright indeed.

Learn more: ORNL researchers use Titan to accelerate design, training of deep learning networks

