UC Berkeley researchers have developed algorithms that enable robots to learn motor tasks through trial and error using a process that more closely approximates the way humans learn, marking a major milestone in the field of artificial intelligence.
They demonstrated their technique, a type of reinforcement learning, by having a robot complete various tasks — putting a clothes hanger on a rack, assembling a toy plane, screwing a cap on a water bottle, and more — without pre-programmed details about its surroundings.
“What we’re reporting on here is a new approach to empowering a robot to learn,” said Professor Pieter Abbeel of UC Berkeley’s Department of Electrical Engineering and Computer Sciences. “The key is that when a robot is faced with something new, we won’t have to reprogram it. The exact same software, which encodes how the robot can learn, was used to allow the robot to learn all the different tasks we gave it.”
The latest developments will be presented on Thursday, May 28, in Seattle at the International Conference on Robotics and Automation (ICRA). Abbeel is leading the project with fellow UC Berkeley faculty member Trevor Darrell, director of the Berkeley Vision and Learning Center. Other members of the research team are postdoctoral researcher Sergey Levine and Ph.D. student Chelsea Finn.
The work is part of a new People and Robots Initiative at UC’s Center for Information Technology Research in the Interest of Society (CITRIS). The new multi-campus, multidisciplinary research initiative seeks to keep the dizzying advances in artificial intelligence, robotics and automation aligned to human needs.
“Most robotic applications are in controlled environments where objects are in predictable positions,” said Darrell. “The challenge of putting robots into real-life settings, like homes or offices, is that those environments are constantly changing. The robot must be able to perceive and adapt to its surroundings.”
Conventional, but impractical, approaches to helping a robot make its way through a 3D world include pre-programming it to handle the vast range of possible scenarios or creating simulated environments within which the robot operates.
Instead, the UC Berkeley researchers turned to a new branch of artificial intelligence known as deep learning, which is loosely inspired by the neural circuitry of the human brain when it perceives and interacts with the world.
“For all our versatility, humans are not born with a repertoire of behaviors that can be deployed like a Swiss army knife, and we do not need to be programmed,” said Levine. “Instead, we learn new skills over the course of our life from experience and from other humans. This learning process is so deeply rooted in our nervous system that we cannot even communicate to another person precisely how the resulting skill should be executed. We can at best hope to offer pointers and guidance as they learn it on their own.”
In the world of artificial intelligence, deep learning programs create “neural nets” in which layers of artificial neurons process overlapping raw sensory data, whether it be sound waves or image pixels. This helps the robot recognize patterns and categories among the data it is receiving. People who use Siri on their iPhones, Google’s speech-to-text program or Google Street View might already have benefited from the significant advances deep learning has provided in speech and vision recognition.
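The layered processing described above can be sketched in a few lines. This is a minimal illustrative example, not the researchers' actual network: the layer sizes, random weights, and three output categories are all assumptions chosen for brevity. Each layer of artificial neurons computes weighted sums of the previous layer's outputs and applies a nonlinearity, gradually turning raw pixel values into category scores.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity applied by each artificial neuron.
    return np.maximum(0.0, x)

# A tiny 8x8 grayscale "image" flattened into 64 raw pixel values.
pixels = rng.random(64)

# Illustrative random weights for three layers (a trained network
# would learn these from data).
w1, b1 = rng.standard_normal((32, 64)) * 0.1, np.zeros(32)
w2, b2 = rng.standard_normal((16, 32)) * 0.1, np.zeros(16)
w3, b3 = rng.standard_normal((3, 16)) * 0.1, np.zeros(3)

h1 = relu(w1 @ pixels + b1)   # first layer: low-level patterns
h2 = relu(w2 @ h1 + b2)       # second layer: combinations of patterns
logits = w3 @ h2 + b3         # output layer: scores for 3 categories

# Softmax turns raw scores into a probability over categories.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs)  # three non-negative probabilities summing to 1
```

In a real system the weights are learned from labeled examples; deep learning's advances in speech and vision recognition come from stacking many such layers and training them on large datasets.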
Applying deep reinforcement learning to motor tasks has been far more challenging, however, since the task goes beyond the passive recognition of images and sounds.
“Moving about in an unstructured 3D environment is a whole different ballgame,” said Finn. “There are no labeled directions, no examples of how to solve the problem in advance, and no examples of the correct solution like one would have in speech and vision recognition programs.”
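The trial-and-error learning the researchers describe can be illustrated with the simplest form of reinforcement learning: an agent that receives no labeled examples, only a reward signal when it succeeds. The sketch below is plain tabular Q-learning on a hypothetical one-dimensional "reach the goal" task; the Berkeley team's deep reinforcement learning method is far more sophisticated, but the core loop of act, observe reward, and update value estimates is the same idea.

```python
import random

random.seed(0)
N_STATES, GOAL = 6, 5          # positions 0..5, goal at position 5
ACTIONS = [-1, +1]             # move left or move right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    s = 0
    while s != GOAL:
        # Explore occasionally; otherwise act greedily on current estimates.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0   # reward arrives only at the goal
        best_next = max(q[(s2, b)] for b in ACTIONS)
        # Update the value estimate toward reward plus discounted future value.
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# The learned greedy policy: it should prefer moving right (+1) in every state.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)]
print(policy)
```

The agent is never told which action is correct; after enough episodes of trial and error, the reward propagates back through the value table and the greedy policy heads straight for the goal.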