Researchers from NVIDIA, led by Stan Birchfield and Jonathan Tremblay, developed a first-of-its-kind deep learning-based system that can teach a robot to complete a task just by observing the actions of a human. The method is designed to enhance communication between humans and robots and, at the same time, to further research that will enable people to work seamlessly alongside robots.
“For robots to perform useful tasks in real-world settings, it must be easy to communicate the task to the robot; this includes both the desired result and any hints as to the best means to achieve that result,” the researchers stated in their research paper. “With demonstrations, a user can communicate a task to the robot and provide clues as to how to best perform the task.”
Using NVIDIA TITAN X GPUs, the researchers trained a sequence of neural networks to perform perception, program generation, and program execution. As a result, the robot was able to learn a task from a single real-world demonstration.
Once the robot sees a task, it generates a human-readable description of the steps necessary to re-perform the task. The description allows the user to quickly identify and correct any issues with the robot’s interpretation of the human demonstration before execution on the real robot.
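The pipeline above can be sketched in a few lines. This is a minimal illustration of the perceive, generate-program, and describe stages, where each function is a hypothetical stand-in for one of the trained networks, not NVIDIA's actual models:

```python
# Minimal sketch of the perception -> program generation -> description
# pipeline. The functions are hypothetical stand-ins for trained networks.

def perceive(image):
    """Stand-in perception: map an image to detected objects."""
    # A real network would detect the 3D bounding cuboid of each object.
    return [{"name": "red block", "on": "table"},
            {"name": "blue block", "on": "red block"}]

def generate_program(objects):
    """Stand-in program generation: derive placement steps."""
    return [("place", obj["name"], obj["on"]) for obj in objects]

def describe(program):
    """Render the program as human-readable steps for user review."""
    return [f"Step {i + 1}: place the {name} on the {target}"
            for i, (_, name, target) in enumerate(program)]

steps = describe(generate_program(perceive(image=None)))
for line in steps:
    print(line)
# Step 1: place the red block on the table
# Step 2: place the blue block on the red block
```

The key point is the intermediate, human-readable program: the user can inspect the generated steps and catch a misread demonstration before the robot moves.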
The key to achieving this capability is leveraging the power of synthetic data to train the neural networks. Current approaches to training neural networks require large amounts of labeled training data, which is a serious bottleneck in these systems. With synthetic data generation, an almost infinite amount of labeled training data can be produced with very little effort.
This is also the first time an image-centric domain randomization approach has been used on a robot. Domain randomization is a technique to produce synthetic data with large amounts of diversity, which then fools the perception network into seeing the real-world data as simply another variation of its training data. The researchers chose to process the data in an image-centric manner to ensure that the networks are not dependent on the camera or environment.
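In practice, domain randomization means sampling fresh scene parameters for every synthetic training image. The sketch below shows the general idea; the specific parameters and ranges are illustrative assumptions, not values from the paper:

```python
import random

# Sketch of domain randomization: each synthetic sample draws scene
# parameters at random, so real-world images end up looking like just
# another variation of the training distribution. Ranges are illustrative.

def randomize_scene(rng):
    return {
        "light_intensity": rng.uniform(0.2, 2.0),     # vary lighting
        "camera_jitter_px": rng.uniform(-10.0, 10.0), # vary viewpoint
        "background_id": rng.randrange(10_000),       # random background
        "block_hue_shift": rng.uniform(-0.1, 0.1),    # vary object color
    }

rng = random.Random(0)   # seeded for reproducibility
samples = [randomize_scene(rng) for _ in range(3)]
```

A renderer would consume each sample to produce one labeled training image, which is how an effectively unlimited labeled dataset is generated without manual annotation.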
“The perception network as described applies to any rigid real-world object that can be reasonably approximated by its 3D bounding cuboid,” the researchers said. “Despite never observing a real image during training, the perception network reliably detects the bounding cuboids of objects in real images, even under severe occlusions.”
For their demonstration, the team trained object detectors on several colored blocks and a toy car. The system was taught the physical relationships between blocks, such as whether they are stacked on top of one another or placed next to each other.
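Inferring such relationships from detected poses can be sketched with simple geometry: one block is "stacked on" another if it sits in the same column directly above it. The thresholds and representation here are illustrative assumptions, not the paper's method:

```python
# Sketch of inferring block relationships from detected 3D positions.
# "stacked_on" if block a rests directly on top of block b, else "next_to".
# Positions are (x, y, z) of the block's base; tolerances are illustrative.

def relate(a, b, xy_tol=0.02, z_gap_tol=0.02):
    ax, ay, az = a["pos"]
    bx, by, bz = b["pos"]
    same_column = abs(ax - bx) < xy_tol and abs(ay - by) < xy_tol
    if same_column and abs(az - (bz + b["height"])) < z_gap_tol:
        return "stacked_on"
    return "next_to"

red   = {"pos": (0.0, 0.0, 0.05), "height": 0.05}  # resting on blue
blue  = {"pos": (0.0, 0.0, 0.00), "height": 0.05}
green = {"pos": (0.20, 0.0, 0.00), "height": 0.05}  # beside blue

print(relate(red, blue))    # stacked_on
print(relate(green, blue))  # next_to
```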
In the video above, the human operator shows a pair of stacks of cubes to the robot. The system then infers an appropriate program and places the cubes in the correct order. Because it takes the current state of the world into account during execution, the system is able to recover from mistakes in real time.
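That recovery behavior comes from closing the loop: the controller re-checks the world state after each action and retries a step that did not take effect. Below is a toy sketch of this idea, with a simulated world in which every step fails on its first attempt; it is an illustration of closed-loop execution, not the paper's controller:

```python
# Sketch of closed-loop execution: after acting, re-check the world
# state and retry any step that did not succeed.

def execute_with_recovery(program, world, max_attempts=3):
    for step in program:
        for _ in range(max_attempts):
            world.apply(step)
            if world.satisfies(step):  # verify state before moving on
                break
    return world

class FlakyWorld:
    """Toy world where the first attempt at each step fails."""
    def __init__(self):
        self.done = set()
        self.tries = {}
    def apply(self, step):
        n = self.tries.get(step, 0) + 1
        self.tries[step] = n
        if n >= 2:  # succeeds on the second attempt
            self.done.add(step)
    def satisfies(self, step):
        return step in self.done

program = ["stack red on blue", "stack green on red"]
world = execute_with_recovery(program, FlakyWorld())
print(all(world.satisfies(s) for s in program))  # True
```

An open-loop executor would replay the program blindly and leave the first failed step uncorrected; re-reading the state each iteration is what lets the robot absorb disturbances mid-task.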
The researchers will present their research paper and work at the International Conference on Robotics and Automation (ICRA), in Brisbane, Australia this week.
The team says they will continue to explore the use of synthetic training data for robotic manipulation to extend the capabilities of their method to additional scenarios.
via NVIDIA: Read the research paper