CAMERA researchers develop highly efficient neural networks for analyzing experimental scientific images from limited training data
Mathematicians at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a new approach to machine learning aimed at experimental imaging data. Rather than relying on the tens or hundreds of thousands of images used by typical machine learning methods, this new approach “learns” much more quickly and requires far fewer images.
Daniël Pelt and James Sethian of the Center for Advanced Mathematics for Energy Research Applications (CAMERA) turned the usual machine learning perspective on its head by developing what they call a “Mixed-Scale Dense Convolutional Neural Network (MS-D)” that requires far fewer parameters than traditional methods, converges quickly, and can “learn” from a remarkably small training set. Their approach is already being used to extract biological structure from cell images, and it is poised to provide a major new computational tool for analyzing data across a wide range of research areas.
As experimental facilities generate higher resolution images at higher speeds, scientists can struggle to manage and analyze the resulting data, a task often done painstakingly by hand. In 2014, Sethian established CAMERA at the Department of Energy’s (DOE) Lawrence Berkeley National Laboratory as an integrated, cross-disciplinary center to develop and deliver fundamental new mathematics required to capitalize on experimental investigations at DOE Office of Science user facilities. CAMERA is part of the lab’s Computational Research Division.
“In many scientific applications, tremendous manual labor is required to annotate and tag images — it can take weeks to produce a handful of carefully delineated images,” said Sethian, who is also a mathematics professor at the University of California, Berkeley. “Our goal was to develop a technique that learns from a very small data set.”
Details of the algorithm were published Dec. 26, 2017, in a paper in the Proceedings of the National Academy of Sciences.
“The breakthrough resulted from realizing that the usual downscaling and upscaling that capture features at various image scales could be replaced by mathematical convolutions handling multiple scales within a single layer,” said Pelt, who is also a member of the Computational Imaging Group at the Centrum Wiskunde & Informatica, the national research institute for mathematics and computer science in the Netherlands.
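To see what Pelt is describing, consider how a dilated convolution widens a filter’s reach without any rescaling of the image. The short sketch below is a minimal illustration only, assuming PyTorch (the article names no implementation language); the tensor and layer sizes are invented for demonstration:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 64, 64)  # a random single-channel "image" for illustration

# A standard 3x3 convolution samples a 3x3 neighborhood around each pixel.
conv_fine = nn.Conv2d(1, 1, kernel_size=3, padding=1, dilation=1)

# With dilation=4, the same nine weights sample a 9x9 neighborhood, capturing
# coarser-scale features without downscaling and without shrinking the output.
conv_coarse = nn.Conv2d(1, 1, kernel_size=3, padding=4, dilation=4)

print(conv_fine(x).shape)    # torch.Size([1, 1, 64, 64])
print(conv_coarse(x).shape)  # torch.Size([1, 1, 64, 64])
```

Because both outputs keep the full image size, features at fine and coarse scales can live side by side in a single layer instead of being split across a downscaling/upscaling pipeline.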
To make the algorithm accessible to a wide set of researchers, a Berkeley team led by Olivia Jain and Simon Mo built a web portal, the “Segmenting Labeled Image Data Engine” (SlideCAM), as part of the CAMERA suite of tools for DOE experimental facilities.
One promising application is in understanding the internal structure of biological cells: in one such project, Pelt and Sethian’s MS-D method needed data from only seven cells to determine the cell structure.
“In our laboratory, we are working to understand how cell structure and morphology influences or controls cell behavior. We spend countless hours hand-segmenting cells in order to extract structure, and identify, for example, differences between healthy vs. diseased cells,” said Carolyn Larabell, Director of the National Center for X-ray Tomography and Professor at the University of California San Francisco School of Medicine. “This new approach has the potential to radically transform our ability to understand disease, and is a key tool in our new Chan-Zuckerberg-sponsored project to establish a Human Cell Atlas, a global collaboration to map and characterize all cells in a healthy human body.”
The National Center for X-ray Tomography is located at the Advanced Light Source, a DOE Office of Science national user facility at Berkeley Lab.
Getting More Science from Less Data
Images are everywhere. Smart phones and sensors have produced a treasure trove of pictures, many tagged with pertinent information identifying content. Using this vast database of cross-referenced images, convolutional neural networks and other machine learning methods have revolutionized our ability to quickly identify natural images that look like ones previously seen and catalogued.
These methods “learn” by tuning a stunningly large set of hidden internal parameters, a process guided by millions of tagged images and requiring large amounts of supercomputer time. But what if you don’t have so many tagged images? In many fields, such a database is an unachievable luxury. Biologists record cell images and painstakingly outline the borders and structure by hand: it’s not unusual for one person to spend weeks producing a single fully three-dimensional image. Materials scientists use tomographic reconstruction to peer inside rocks and materials, and then roll up their sleeves to label different regions, identifying cracks, fractures, and voids by hand. Contrasts between different yet important structures are often very small, and “noise” in the data can mask features and confuse even the best algorithms (and humans).
These precious hand-curated images are nowhere near enough for traditional machine learning methods. To meet this challenge, mathematicians at CAMERA attacked the problem of machine learning from very limited amounts of data. Aiming to do “more with less,” they set out to build an efficient set of mathematical “operators” that could greatly reduce the number of parameters. These operators might naturally incorporate key constraints to help in identification, such as requirements on scientifically plausible shapes and patterns.
Mixed-Scale Dense Convolutional Neural Networks
Many applications of machine learning to imaging problems use deep convolutional neural networks (DCNNs), in which the input image and intermediate images are convolved in a large number of successive layers, allowing the network to learn highly nonlinear features. To achieve accurate results for difficult image processing problems, DCNNs typically rely on combinations of additional operations and connections including, for example, downscaling and upscaling operations to capture features at various image scales. To train deeper and more powerful networks, additional layer types and connections are often required. Finally, DCNNs typically use a large number of intermediate images and trainable parameters, often more than 100 million, to achieve results for difficult problems.
Instead, the new “Mixed-Scale Dense” network architecture avoids many of these complications, using dilated convolutions as a substitute for scaling operations to capture features at various spatial ranges, employing multiple scales within a single layer, and densely connecting all intermediate images. The new algorithm achieves accurate results with few intermediate images and parameters, eliminating the need both to tune hyperparameters and to add layers or connections to enable training.
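A rough rendering of those two ideas in code might look like the following. This is a minimal PyTorch sketch, not the authors’ released implementation; names such as `MSDNet`, `depth`, and `max_dilation` are illustrative choices:

```python
import torch
import torch.nn as nn

class MSDNet(nn.Module):
    """Minimal mixed-scale dense sketch: each layer is a single dilated 3x3
    convolution whose input is the concatenation of the original image and
    every previous layer's output (dense connectivity)."""

    def __init__(self, in_channels=1, out_channels=1, depth=20, max_dilation=10):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for i in range(depth):
            d = i % max_dilation + 1  # cycle dilations 1..max_dilation across layers
            self.layers.append(
                nn.Conv2d(channels, 1, kernel_size=3, padding=d, dilation=d))
            channels += 1  # every new feature map is kept and reused downstream
        self.final = nn.Conv2d(channels, out_channels, kernel_size=1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(torch.relu(layer(torch.cat(features, dim=1))))
        return self.final(torch.cat(features, dim=1))

model = MSDNet()
print(sum(p.numel() for p in model.parameters()))  # total parameter count (~2,000 here)
```

Compared with the 100-million-parameter networks mentioned above, a network of this shape has only a few thousand trainable weights, which is what makes learning from a handful of labeled images plausible.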
Getting High Resolution Science from Low Resolution Data
A different challenge is to produce high resolution images from low resolution input. As anyone who has tried to enlarge a small photo knows, it only gets worse as it gets bigger, so this sounds close to impossible. But a small set of training images processed with a Mixed-Scale Dense network can provide real headway. As an example, imagine trying to denoise tomographic reconstructions of a fiber-reinforced mini-composite material. In an experiment described in the paper, images were first reconstructed from 1,024 acquired X-ray projections, yielding images with relatively little noise. Noisy images of the same object were then obtained by reconstructing from only 128 projections. The noisy images served as training inputs, with the corresponding low-noise images used as target outputs. The trained network was then able to take noisy input data and effectively remove the noise, producing reconstructions approaching the quality of the low-noise images.
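As a concrete, hypothetical illustration of that training setup, the sketch below reuses the `MSDNet` class from the earlier snippet; the tensors are random placeholders standing in for the 128-projection (noisy) and 1,024-projection (low-noise) reconstructions, and all sizes and hyperparameters are invented:

```python
import torch
import torch.nn as nn

# Placeholder data: in the experiment these would be tomographic slices,
# reconstructed from 128 projections (noisy) and 1,024 projections (low-noise).
noisy_imgs = torch.randn(8, 1, 128, 128)   # training inputs
clean_imgs = torch.randn(8, 1, 128, 128)   # training targets

model = MSDNet(in_channels=1, out_channels=1, depth=20)  # from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(noisy_imgs), clean_imgs)  # learn the noisy -> clean map
    loss.backward()
    optimizer.step()

# After training, denoise a previously unseen noisy reconstruction:
with torch.no_grad():
    denoised = model(torch.randn(1, 1, 128, 128))
```

The key point is that the supervision comes from pairing two reconstructions of the same object, so no hand labeling is needed at all for this task.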
Pelt and Sethian are taking their approach to a host of new areas, including fast real-time analysis of images coming out of synchrotron light sources and biological reconstruction problems such as cell imaging and brain mapping.
“These new approaches are really exciting, since they will enable the application of machine learning to a much greater variety of imaging problems than currently possible,” Pelt said. “By reducing the amount of required training images and increasing the size of images that can be processed, the new architecture can be used to answer important questions in many research fields.”