UC Berkeley researchers have developed a robotic learning technology that enables robots to imagine the future of their actions so they can figure out how to manipulate objects they have never encountered before. In the future, this technology could help self-driving cars anticipate future events on the road and produce more intelligent robotic assistants in homes, but the initial prototype focuses on learning simple manual skills entirely from autonomous play.
Using this technology, called visual foresight, the robots can predict what their cameras will see if they perform a particular sequence of movements. These robotic imaginations are still relatively simple for now, with predictions extending only a few seconds into the future, but that is enough for the robot to figure out how to move objects around on a table without disturbing obstacles. Crucially, the robot can learn to perform these tasks without any help from humans or prior knowledge about physics, its environment or what the objects are. That’s because the visual imagination is learned entirely from scratch through unattended, unsupervised exploration, in which the robot plays with objects on a table. After this play phase, the robot builds a predictive model of the world, and can use this model to manipulate new objects that it has not seen before.
“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it,” said Sergey Levine, assistant professor in Berkeley’s Department of Electrical Engineering and Computer Sciences, whose lab developed the technology. “This can enable intelligent planning of highly flexible skills in complex real-world situations.”
The research team will perform a demonstration of the visual foresight technology at the Neural Information Processing Systems conference in Long Beach, California, on December 5.
At the core of this system is a deep learning technology based on convolutional recurrent video prediction, or dynamic neural advection (DNA). DNA-based models predict how pixels in an image will move from one frame to the next based on the robot’s actions. Recent improvements to this class of models, as well as greatly improved planning capabilities, have enabled robotic control based on video prediction to perform increasingly complex tasks, such as sliding toys around obstacles and repositioning multiple objects.
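The advection idea can be illustrated in miniature. In the sketch below (a simplified illustration, not the authors' implementation), the model's output is a set of per-pixel weights over a small neighborhood, and the next frame is produced by moving, or "advecting," pixel values from the previous frame rather than generating them from scratch; the `advect_frame` function and its toy inputs are assumptions made for this example.

```python
import numpy as np

def advect_frame(prev_frame, kernels):
    """Predict the next frame by moving pixels from the previous one.

    prev_frame: (H, W) grayscale image.
    kernels: (K, K, H, W) per-pixel weights over a K x K neighborhood
             (K odd), normalized so each output pixel's weights sum to 1.
    Each output pixel is a weighted average of nearby input pixels, so the
    model reuses existing pixel values instead of inventing new ones.
    """
    K = kernels.shape[0]
    r = K // 2
    H, W = prev_frame.shape
    padded = np.pad(prev_frame, r, mode="edge")
    out = np.zeros((H, W), dtype=float)
    for dy in range(K):
        for dx in range(K):
            # Offset (dy, dx) selects the neighbor each pixel may copy from.
            shifted = padded[dy:dy + H, dx:dx + W]
            out += kernels[dy, dx] * shifted
    return out

# A one-hot kernel produces a rigid shift: here every pixel copies its
# right-hand neighbor, so the whole image moves one pixel to the left.
H, W, K = 4, 4, 3
frame = np.arange(H * W, dtype=float).reshape(H, W)
kernels = np.zeros((K, K, H, W))
kernels[1, 2] = 1.0
next_frame = advect_frame(frame, kernels)
```

In the real model, a recurrent convolutional network predicts these kernels from the current image and the robot's action, which is what lets the same machinery generalize to objects it has never seen.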
“In the past, robots have learned skills with a human supervisor helping and providing feedback. What makes this work exciting is that the robots can learn a range of visual object manipulation skills entirely on their own,” said Chelsea Finn, a doctoral student in Levine’s lab and inventor of the original DNA model.
With the new technology, a robot pushes objects on a table, then uses the learned prediction model to choose motions that will move an object to a desired location. Working only from raw camera observations, robots use the learned model to teach themselves how to avoid obstacles and push objects around obstructions.
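One common way to choose motions with a learned predictive model is sampling-based planning: propose many candidate action sequences, score each by how close the model predicts it will bring the object to the goal, and refine the proposals toward the best-scoring ones. The sketch below shows this idea with a cross-entropy-method planner; the `predict` function is a deliberately trivial stand-in (the "state" is just an object position that each action nudges), whereas the actual system scores action sequences against predicted camera images. All names and parameters here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(state, actions):
    """Toy stand-in for the learned video-prediction model: each 2-D action
    directly nudges the object's position."""
    s = np.array(state, dtype=float)
    for a in actions:
        s = s + a
    return s

def plan(state, goal, horizon=5, n_samples=200, n_iters=4, n_elite=20):
    """Cross-entropy-method planner: sample action sequences, rank them by
    predicted distance to the goal, and refit the sampling distribution to
    the best ("elite") sequences."""
    mean = np.zeros((horizon, 2))
    std = np.ones((horizon, 2))
    for _ in range(n_iters):
        samples = rng.normal(mean, std, size=(n_samples, horizon, 2))
        costs = np.array([np.linalg.norm(predict(state, seq) - goal)
                          for seq in samples])
        elite = samples[np.argsort(costs)[:n_elite]]
        mean = elite.mean(axis=0)
        std = elite.std(axis=0) + 1e-3  # keep a little exploration noise
    return mean  # best action sequence found

goal = np.array([1.0, 2.0])
actions = plan(state=[0.0, 0.0], goal=goal)
final = predict([0.0, 0.0], actions)
```

In practice such planners are run in a closed loop: the robot executes only the first action, observes the result with its camera, and replans, which keeps small prediction errors from compounding over time.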
“Humans learn object manipulation skills without any teacher through millions of interactions with a variety of objects during their lifetime. We have shown that it is possible to build a robotic system that also leverages large amounts of autonomously collected data to learn widely applicable manipulation skills, specifically object pushing skills,” said Frederik Ebert, a graduate student in Levine’s lab who worked on the project.
Since control through video prediction relies only on observations that can be collected autonomously by the robot, such as through camera images, the resulting method is general and broadly applicable. In contrast to conventional computer vision methods, which require humans to manually label thousands or even millions of images, building video prediction models only requires unannotated video, which can be collected by the robot entirely autonomously. Indeed, video prediction models have also been applied to datasets that represent everything from human activities to driving, with compelling results.
“Children can learn about their world by playing with toys, moving them around, grasping, and so forth. Our aim with this research is to enable a robot to do the same: to learn about how the world works through autonomous interaction,” Levine said. “The capabilities of this robot are still limited, but its skills are learned entirely automatically, and allow it to predict complex physical interactions with objects that it has never seen before by building on previously observed patterns of interaction.”
The Berkeley scientists are continuing to research control through video prediction, focusing on further improving video prediction and prediction-based control, and on developing more sophisticated methods by which robots can collect more focused video data. These advances would extend the approach to complex tasks such as picking and placing objects, manipulating soft, deformable objects such as cloth or rope, and assembly.
Learn more: New robots can see into their future