Researchers from NVIDIA, led by Stan Birchfield and Jonathan Tremblay, have developed a first-of-its-kind deep learning-based system that can teach a robot to complete a task simply by observing the actions of a human. The method is designed to improve communication between humans and robots and, at the same time, to further research that will enable people to work alongside robots seamlessly.
“For robots to perform useful tasks in real-world settings, it must be easy to communicate the task to the robot; this includes both the desired result and any hints as to the best means to achieve that result,” the researchers stated in their research paper. “With demonstrations, a user can communicate a task to the robot and provide clues as to how to best perform the task.”
Using NVIDIA TITAN X GPUs, the researchers trained a sequence of neural networks to perform duties associated with perception, program generation, and program execution. As a result, the robot was able to learn a task from a single demonstration in the real world.
Once the robot sees a task, it generates a human-readable description of the steps necessary to re-perform the task. The description allows the user to quickly identify and correct any issues with the robot’s interpretation of the human demonstration before execution on the real robot.
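To make the idea concrete, here is a minimal sketch of what such a human-readable program description might look like. The step format, the `describe_program` function, and the relation tuples are all hypothetical illustrations, not the paper's actual program syntax:

```python
def describe_program(goal_relations):
    """Render an inferred pick-and-place program as human-readable steps,
    so a user can vet the robot's interpretation before execution.
    goal_relations: list of (object, target) pairs meaning
    "place <object> on <target>". Illustrative format only."""
    steps = []
    for obj, target in goal_relations:
        steps.append(f"Place the {obj} block on the {target} block.")
    return "\n".join(steps)

# A user reading this output can spot a misinterpreted demonstration
# (e.g. a swapped pair) and correct it before the robot moves.
print(describe_program([("red", "blue"), ("green", "yellow")]))
```

The value of this intermediate representation is that errors are caught in text, before any motion is executed on the real robot.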
The key to achieving this capability is leveraging the power of synthetic data to train the neural networks. Current approaches to training neural networks require large amounts of labeled training data, which is a serious bottleneck in these systems. With synthetic data generation, an almost infinite amount of labeled training data can be produced with very little effort.
This is also the first time an image-centric domain randomization approach has been used on a robot. Domain randomization is a technique to produce synthetic data with large amounts of diversity, which then fools the perception network into seeing the real-world data as simply another variation of its training data. The researchers chose to process the data in an image-centric manner to ensure that the networks are not dependent on the camera or environment.
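A minimal sketch of what domain randomization looks like in practice: each synthetic training scene is rendered with randomly sampled nuisance parameters, so the perception network never sees the same camera, lighting, or background twice. The parameter names and ranges below are illustrative assumptions, not the values used in the paper:

```python
import random

def randomize_scene():
    """Sample one synthetic scene with randomized nuisance parameters.
    Illustrative sketch: the actual renderer, parameters, and ranges
    used by the researchers are not reproduced here."""
    return {
        # random camera pose, so detections are not camera-dependent
        "camera_azimuth_deg": random.uniform(0.0, 360.0),
        "camera_elevation_deg": random.uniform(10.0, 80.0),
        # random lighting, so the network cannot rely on shading cues
        "light_intensity": random.uniform(0.2, 2.0),
        "light_position": [random.uniform(-1.0, 1.0) for _ in range(3)],
        # random distractors and background, to force robustness
        "num_distractors": random.randint(0, 10),
        "background_texture_id": random.randrange(5000),
    }

# Because scenes are generated programmatically, labeled data is
# effectively unlimited: labels come for free from the renderer.
scenes = [randomize_scene() for _ in range(1000)]
```

With enough of this diversity, real-world images fall inside the distribution the network was trained on, which is why the approach transfers without any real training images.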
“The perception network as described applies to any rigid real-world object that can be reasonably approximated by its 3D bounding cuboid,” the researchers said. “Despite never observing a real image during training, the perception network reliably detects the bounding cuboids of objects in real images, even under severe occlusions.”
For their demonstration, the team trained object detectors on several colored blocks and a toy car. The system was taught the physical relationships between blocks: whether they are stacked on top of one another or placed next to each other.
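Such relationships can be read off the detected 3D bounding cuboids. The sketch below classifies two blocks as stacked or adjacent from their cuboid centers; the function name, thresholds, and relation labels are hypothetical, standing in for whatever relational predicates the researchers actually used:

```python
def block_relation(a, b, size=1.0, tol=0.25):
    """Classify the spatial relation between two unit blocks given
    their detected cuboid centers a, b as (x, y, z) tuples.
    Hypothetical thresholds for illustration only."""
    dx = abs(a[0] - b[0])
    dy = abs(a[1] - b[1])
    dz = a[2] - b[2]
    # Stacked: horizontally aligned, vertical gap of one block height
    if dx < tol and dy < tol and abs(abs(dz) - size) < tol:
        return "a_on_b" if dz > 0 else "b_on_a"
    # Adjacent: same height, centers within two block widths
    if abs(dz) < tol and max(dx, dy) < 2 * size:
        return "next_to"
    return "unrelated"

# A block centered one unit above another is classified as stacked.
print(block_relation((0.0, 0.0, 1.0), (0.0, 0.0, 0.0)))  # a_on_b
```

Predicates like these turn raw cuboid detections into the symbolic state that the program-generation network reasons over.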
In the video above, the human operator shows the robot a pair of stacks of cubes. The system then infers an appropriate program and places the cubes in the correct order. Because it takes the current state of the world into account during execution, the system can recover from mistakes in real time.
The researchers will present their research paper and work at the International Conference on Robotics and Automation (ICRA), in Brisbane, Australia this week.
The team says they will continue to explore the use of synthetic training data for robotics manipulation to extend the capabilities of their method to additional scenarios.
via NVIDIA: Read the research paper