Berkeley Lab and UC Berkeley researchers say “iterative Random Forests” will deliver powerful scientific insights
While it may be the era of supercomputers and “big data,” without smart methods to mine all that data, it’s only so much digital detritus. Now researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time.
In a paper published recently in the Proceedings of the National Academy of Sciences (PNAS), the researchers describe a technique called “iterative Random Forests,” which they say could have a transformative effect on any area of science or engineering with complex systems, including biology, precision medicine, materials science, environmental science, and manufacturing, to name a few.
“Take a human cell, for example. There are 10^170 possible molecular interactions in a single cell. That creates considerable computing challenges in searching for relationships,” said Ben Brown of Berkeley Lab’s Environmental Genomics and Systems Biology Division. “Our method enables the identification of interactions of high order at the same computational cost as main effects – even when those interactions are local with weak marginal effects.”
Brown and Bin Yu of UC Berkeley are lead senior authors of “Iterative Random Forests to Discover Predictive and Stable High-Order Interactions.” The co-first authors are Sumanta Basu (formerly a joint postdoc of Brown and Yu and now an assistant professor at Cornell University) and Karl Kumbier (a Ph.D. student of Yu in the UC Berkeley Statistics Department). The paper is the culmination of three years of work that the authors believe will transform the way science is done. “With our method we can gain radically richer information than we’ve ever been able to gain from a learning machine,” Brown said.
The needs of machine learning in science are different from those of industry, where machine learning has been used for things like playing chess, making self-driving cars, and predicting the stock market.
“The machine learning developed by industry is great if you want to do high-frequency trading on the stock market,” Brown said. “You don’t care why you’re able to predict the stock will go up or down. You just want to know that you can make the predictions.”
But in science, questions surrounding why a process behaves in certain ways are critical. Understanding “why” lets scientists model, and ultimately engineer, processes to attain desired outcomes. Machine learning for science therefore needs to peer inside the black box and explain why and how the computer reached the conclusions it reached.
In highly complex systems – whether it’s a single cell, the human body, or even an entire ecosystem – there are a large number of variables interacting in nonlinear ways. That makes it difficult if not impossible to build a model that can determine cause and effect. “Unfortunately, in biology, you come across interactions of order 30, 40, 60 all the time,” Brown said. “It’s completely intractable with traditional approaches to statistical learning.”
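The intractability Brown describes is easy to quantify: the number of candidate interaction sets blows up combinatorially with interaction order. A quick back-of-envelope sketch (the figure of roughly 20,000 human protein-coding genes is an illustrative assumption, not from the article):

```python
import math

# Candidate interactions among p features grow combinatorially with the
# interaction order k. p = 20,000 is roughly the human protein-coding
# gene count, used here purely for illustration.
p = 20_000
counts = {k: math.comb(p, k) for k in (2, 3, 30)}
for k, n in counts.items():
    print(f"order {k}: {n:.3e} candidate interactions")
```

Exhaustively scoring every order-2 pair is already ~2 × 10^8 tests; order 30 is astronomically beyond any exhaustive search, which is why decoupling interaction order from search cost matters.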
The method developed by the team led by Brown and Yu, iterative Random Forests (iRF), builds on an algorithm called random forests, a popular and effective predictive modeling tool, translating the internal states of the black box learner into a human-interpretable form. Their approach allows researchers to search for complex interactions by decoupling the order, or size, of interactions from the computational cost of identification.
“There is no difference in the computational cost of detecting an interaction of order 30 versus an interaction of order two,” Brown said. “And that’s a sea change.”
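The iterative idea can be sketched in a few lines. This is a minimal approximation, not the authors' implementation: real iRF reweights the forest's feature-sampling distribution by the previous iteration's importances and then extracts stable interactions via random intersection trees, whereas the toy below substitutes a crude hard feature selection using scikit-learn's RandomForestClassifier:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy data: the label depends only on an order-2 interaction (x0 AND x1);
# the remaining 8 features are pure noise.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 10)).astype(float)
y = (X[:, 0] * X[:, 1]).astype(int)

active = np.arange(X.shape[1])            # start with all features
for _ in range(3):                        # iterate: fit, reweight, refit
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(X[:, active], y)
    w = rf.feature_importances_
    active = active[w > w.mean() / 2]     # keep features with non-trivial weight

print(sorted(active.tolist()))            # the interacting features survive
```

After a few iterations the forest's attention concentrates on the features that actually participate in the interaction, which is what makes the subsequent interaction search cheap regardless of order.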
In the PNAS paper, the scientists demonstrated their method on two genomics problems, the role of gene enhancers in the fruit fly embryo and alternative splicing in a human-derived cell line. In both cases, using iRF confirmed previous findings while also uncovering previously unidentified higher-order interactions for follow-up study.
Brown said they’re now using their method for designing phased array laser systems and optimizing sustainable agriculture systems.
“We believe this is a different paradigm for doing science,” said Yu, a professor in the departments of Statistics and Electrical Engineering & Computer Science at UC Berkeley. “We do prediction, but we introduce stability on top of prediction in iRF to more reliably learn the underlying structure in the predictors.”
“This enables us to learn how to engineer systems for goal-oriented optimization and more accurately targeted simulations and follow-up experiments,” Brown added.