
via Tech Xplore
Smarter, faster algorithm cuts number of steps to solve problems
What if a large class of algorithms used today — from the algorithms that help us avoid traffic to the algorithms that identify new drug molecules — worked exponentially faster?
Computer scientists at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a completely new kind of algorithm, one that exponentially speeds up computation by dramatically reducing the number of parallel steps required to reach a solution.
The researchers will present their approach at two upcoming conferences: the ACM Symposium on Theory of Computing (STOC), June 25-29, and the International Conference on Machine Learning (ICML), July 10-15.
Many so-called optimization problems, which seek the best solution among all possible candidates (for example, mapping the fastest route from point A to point B), rely on sequential algorithms that have remained essentially unchanged since they were first described in the 1970s. These algorithms solve a problem step by step, and the number of steps grows in proportion to the size of the data. The result is a computational bottleneck: entire lines of questioning and areas of research have become too computationally expensive to explore.
“These optimization problems have a diminishing returns property,” said Yaron Singer, Assistant Professor of Computer Science at SEAS and senior author of the research. “As an algorithm progresses, its relative gain from each step becomes smaller and smaller.”
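To make these two points concrete, here is a minimal sketch (an illustration, not code from the paper) of a coverage-style objective: the classic greedy method adds one element per sequential step, and because the candidate sets overlap, each new element contributes less than the one before it, which is the diminishing-returns behavior Singer describes.

```python
# Illustrative sketch, not code from the paper: a coverage objective
# exhibits diminishing returns, and classic greedy needs one sequential
# step per element it adds.
def coverage(selected_sets):
    """Number of distinct items covered by the chosen sets."""
    covered = set()
    for s in selected_sets:
        covered |= s
    return len(covered)

# Candidate sets overlap, so marginal gains shrink as the solution grows.
candidates = [{1, 2, 3}, {3, 4}, {2, 3, 4}, {4, 5}]
solution = []
while candidates:
    gains = [coverage(solution + [c]) - coverage(solution) for c in candidates]
    best = max(range(len(candidates)), key=lambda i: gains[i])
    print(f"adding {candidates[best]} gains {gains[best]} new items")
    solution.append(candidates.pop(best))
```

On this toy input the printed gains shrink from 3 to 2 and then to 0: exactly the pattern that makes later greedy steps expensive relative to what they contribute.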
Singer and his colleague asked: what if, instead of taking hundreds or thousands of small steps to reach a solution, an algorithm could take just a few leaps?
“This algorithm and general approach allows us to dramatically speed up computation for an enormously large class of problems across many different fields, including computer vision, information retrieval, network analysis, computational biology, auction design, and many others,” said Singer. “We can now perform computations in just a few seconds that would have previously taken weeks or months.”
“This new algorithmic work, and the corresponding analysis, opens the doors to new large-scale parallelization strategies that have much larger speedups than what has ever been possible before,” said Jeff Bilmes, Professor in the Department of Electrical Engineering at the University of Washington, who was not involved in the research. “These abilities will, for example, enable real-world summarization processes to be developed at unprecedented scale.”
Traditionally, algorithms for optimization problems narrow down the search space for the best solution one step at a time. In contrast, this new algorithm samples a variety of directions in parallel. Based on that sample, the algorithm discards low-value directions from its search space and chooses the most valuable directions to progress towards a solution.
Take this toy example:
You’re in the mood to watch a movie similar to The Avengers. A traditional recommendation algorithm would add one movie at a time, at each step picking a movie whose attributes are similar to those of The Avengers. In contrast, the new algorithm samples a group of movies at random and discards those that are too dissimilar to The Avengers. What’s left is a batch of movies that are diverse (after all, you don’t want ten Batman movies) yet similar to The Avengers. The algorithm keeps adding batches in this way until it has enough movies to recommend.
This process of adaptive sampling is key to the algorithm’s ability to make the right decision at each step.
“Traditional algorithms for this class of problem greedily add data to the solution while considering the entire dataset at every step,” said Eric Balkanski, graduate student at SEAS and co-author of the research. “The strength of our algorithm is that in addition to adding data, it also selectively prunes data that will be ignored in future steps.”
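Pulling the descriptions above together, one round of that sample-and-prune loop might look roughly like the sketch below. This is a hedged illustration of the general idea, not the authors' actual algorithm or its theoretical guarantees; `value`, the threshold, and the batch size are all placeholders.

```python
import random

def adaptive_sampling_round(value, solution, candidates, threshold, batch_size):
    """One illustrative sample-and-prune round (not the paper's exact algorithm).

    `value(items)` scores a collection of items. The marginal gains of the
    sampled batch are independent of one another, so in a real system they
    would be evaluated in parallel, which is where the speedup comes from.
    """
    batch = random.sample(candidates, min(batch_size, len(candidates)))
    base = value(solution)
    gains = {item: value(solution + [item]) - base for item in batch}

    keep = [item for item in batch if gains[item] >= threshold]
    solution = solution + keep                 # add a whole batch at once
    # Prune everything sampled: high-value items are now in the solution,
    # low-value items are dropped from all future rounds.
    candidates = [c for c in candidates if c not in batch]
    return solution, candidates

# Toy usage: movie ids scored by a made-up "diversity" function.
movies = list(range(100))
diversity = lambda items: len({i % 7 for i in items})
picked, pool = adaptive_sampling_round(diversity, [], movies, threshold=1, batch_size=10)
```

A full run would repeat this round, shrinking the candidate pool each time, until the solution reaches the desired size; because each round handles a whole batch, far fewer sequential rounds are needed than in the one-movie-at-a-time greedy approach.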
In experiments, Singer and Balkanski demonstrated that their algorithm could sift through a dataset of 1 million ratings from 6,000 users on 4,000 movies and recommend a personalized, diverse collection of movies for an individual user 20 times faster than the state of the art.
The researchers also tested the algorithm on a taxi dispatch problem: given a fixed number of taxis, pick the locations that cover the maximum number of potential customers. Using a dataset of two million trips from the New York City Taxi and Limousine Commission, the adaptive-sampling algorithm found solutions six times faster.
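The taxi example is an instance of the classic maximum-coverage problem: choose a fixed number of locations so that, together, they cover as many potential customers as possible. A minimal way to write that objective down is sketched below; the coordinate format and coverage radius are assumptions for illustration, not details from the paper.

```python
# Illustrative maximum-coverage objective for the taxi-dispatch example.
# Pickup coordinates, candidate locations and the radius are made up here.
def customers_covered(chosen_locations, pickups, radius=0.5):
    """Count pickup points within `radius` of at least one chosen location."""
    def near(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 <= radius
    return sum(any(near(p, loc) for loc in chosen_locations) for p in pickups)
```

This objective also has diminishing returns: placing a taxi near customers who are already covered adds little, which is precisely the structure the adaptive-sampling approach exploits.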
“This gap would increase even more significantly on larger scale applications, such as clustering biological data, sponsored search auctions, or social media analytics,” said Balkanski.
Of course, the algorithm’s potential extends far beyond movie recommendations and taxi dispatch optimizations. It could be applied to:
- designing clinical trials for drugs to treat Alzheimer’s, multiple sclerosis, obesity, diabetes, hepatitis C, HIV and more
- evolutionary biology, to find representative subsets of genes from large datasets spanning many species
- designing sensor arrays for medical imaging
- detecting drug-drug interactions from online health forums
In each of these settings, adaptive sampling lets the algorithm make good decisions at every step while sidestepping the diminishing-returns bottleneck.
“This research is a real breakthrough for large-scale discrete optimization,” said Andreas Krause, professor of Computer Science at ETH Zurich, who was not involved in the research. “One of the biggest challenges in machine learning is finding good, representative subsets of data from large collections of images or videos to train machine learning models. This research could identify those subsets quickly and have substantial practical impact on these large-scale data summarization problems.”
The Singer-Balkanski model and variants of the algorithm developed in the paper could also be used to more quickly assess the accuracy of a machine learning model, said Vahab Mirrokni, a principal scientist at Google Research, who was not involved in the research.
“In some cases, we have a black-box access to the model accuracy function which is time-consuming to compute,” said Mirrokni. “At the same time, computing model accuracy for many feature settings can be done in parallel. This adaptive optimization framework is a great model for these important settings and the insights from the algorithmic techniques developed in this framework can have deep impact in this important area of machine learning research.”
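One way to read Mirrokni's point in code: if evaluating model accuracy for a given feature setting is an expensive black box, a whole batch of settings can still be scored concurrently in each adaptive round. The sketch below uses Python's standard library; `evaluate_accuracy` and the feature settings themselves are placeholders, not part of the paper or of any Google system.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_accuracy(feature_setting):
    """Placeholder for a slow, black-box accuracy evaluation."""
    raise NotImplementedError("plug in the real evaluation here")

def score_settings_in_parallel(feature_settings, workers=8):
    # Each accuracy query is independent of the others, so a batch can be
    # evaluated at once; an adaptive algorithm then keeps only the most
    # promising settings for its next round.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(evaluate_accuracy, feature_settings))
    return dict(zip(feature_settings, scores))
```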
Learn more: ‘Breakthrough’ algorithm exponentially faster than any previous one