Babies learn about the world by exploring how their bodies move in space, grabbing toys, pushing things off tables, and watching and imitating what adults do.
But when roboticists want to teach a robot how to do a task, they typically either write code or physically move a robot’s arm or body to show it how to perform an action.
Now a collaboration between University of Washington developmental psychologists and computer scientists has demonstrated that robots can “learn” much like kids — by amassing data through exploration, watching a human do something and determining how to perform that task on its own.
“You can look at this as a first step in building robots that can learn from humans in the same way that infants learn from humans,” said senior author Rajesh Rao, a UW professor of computer science and engineering.
“If you want people who don’t know anything about computer programming to be able to teach a robot, the way to do it is through demonstration — showing the robot how to clean your dishes, fold your clothes, or do household chores. But to achieve that goal, you need the robot to be able to understand those actions and perform them on its own.”
The research, which combines child development research from the UW’s Institute for Learning & Brain Sciences (I-LABS) with machine learning approaches, was published in November in the journal PLOS ONE.
In the paper, the UW team developed a new probabilistic model aimed at solving a fundamental challenge in robotics: building robots that can learn new skills by watching people and imitating them.
The roboticists collaborated with UW psychology professor and I-LABS co-director Andrew Meltzoff, whose seminal research has shown that children as young as 18 months can infer the goal of an adult’s actions and develop alternate ways of reaching that goal themselves.
In one example, infants saw an adult try to pull apart a barbell-shaped toy, but the adult failed to achieve that goal because the toy was stuck together and his hands slipped off the ends. The infants watched carefully and then decided to use alternate methods — they wrapped their tiny fingers all the way around the ends and yanked especially hard — duplicating what the adult intended to do.
Children acquire intention-reading skills, in part, through self-exploration that helps them learn the laws of physics and how their own actions influence objects, eventually allowing them to amass enough knowledge to learn from others and to interpret their intentions. Meltzoff thinks that one of the reasons babies learn so quickly is that they are so playful.
“Babies engage in what looks like mindless play, but this enables future learning. It’s a baby’s secret sauce for innovation,” Meltzoff said. “If they’re trying to figure out how to work a new toy, they’re actually using knowledge they gained by playing with other toys. During play they’re learning a mental model of how their actions cause changes in the world. And once you have that model you can begin to solve novel problems and start to predict someone else’s intentions.”
Rao’s team used that infant research to develop machine learning algorithms that allow a robot to explore how its own actions result in different outcomes. The robot then uses that learned probabilistic model to infer what a human wants it to do and complete the task — and even to “ask” for help if it’s not certain it can do so on its own.
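The paper develops this idea with a full probabilistic formulation; purely as an illustration of the underlying loop — explore, build a probabilistic model of action outcomes, infer the demonstrated goal, and ask for help when uncertain — a toy sketch in Python might look like the following. The SelfExplorationModel class, the toy world, and the confidence threshold are all invented here for illustration and are not the authors’ actual code.

```python
from collections import defaultdict
import random

class SelfExplorationModel:
    """Toy probabilistic forward model: counts how often each
    action leads to each outcome during self-exploration."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def explore(self, actions, world, trials=1000):
        # Try random actions and record the outcomes they produce.
        for _ in range(trials):
            action = random.choice(actions)
            outcome = world(action)  # environment simulator
            self.counts[action][outcome] += 1

    def prob(self, outcome, action):
        # P(outcome | action), estimated from exploration counts.
        total = sum(self.counts[action].values())
        return self.counts[action][outcome] / total if total else 0.0

    def infer_action(self, observed_outcome, actions, threshold=0.5):
        # Pick the action most likely to reproduce the demonstrated
        # outcome; signal "ask for help" when confidence is too low.
        best = max(actions, key=lambda a: self.prob(observed_outcome, a))
        confidence = self.prob(observed_outcome, best)
        if confidence < threshold:
            return None, confidence  # robot should ask the human for help
        return best, confidence

# Example: a toy "world" where each action usually has its typical effect.
actions = ["push", "pull", "lift"]
def world(action):
    effects = {"push": "moved_away", "pull": "moved_closer", "lift": "raised"}
    return effects[action] if random.random() < 0.8 else "no_change"

model = SelfExplorationModel()
model.explore(actions, world)
action, confidence = model.infer_action("moved_closer", actions)
print(action, confidence)  # most likely "pull", with its estimated probability
```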
The team tested its robotic model in two different scenarios: a computer simulation experiment in which a robot learns to follow a human’s gaze, and another experiment in which an actual robot learns to imitate human actions involving moving toy food objects to different areas on a tabletop.
In the gaze experiment, the robot learns a model of its own head movements and assumes that the human’s head is governed by the same rules. The robot tracks the beginning and ending points of a human’s head movements as the human looks across the room and uses that information to figure out where the person is looking. The robot then uses its learned model of head movements to fixate on the same location as the human.
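As a minimal sketch of that idea — assuming, for simplicity, a roughly linear mapping from head pose to fixation point, which is an assumption of this example rather than anything stated in the paper — the robot could fit a regression on its own pose-to-fixation data and then apply that same model to the human’s observed head pose:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Self-exploration data (invented for illustration): the robot's own
# head poses (yaw, pitch) and the point (x, y) each pose fixates.
rng = np.random.default_rng(0)
own_head_poses = rng.uniform(-1.0, 1.0, size=(200, 2))
fixations = own_head_poses @ np.array([[2.0, 0.1], [0.1, 1.5]])  # fake "kinematics"

# Learn the robot's own pose -> fixation mapping.
self_model = LinearRegression().fit(own_head_poses, fixations)

# Assume the human's head obeys the same mapping: from the end point of
# the human's observed head movement, predict where they are looking.
human_head_pose = np.array([[0.4, -0.2]])
gaze_target = self_model.predict(human_head_pose)[0]
print("fixate on:", gaze_target)  # the robot would now look there too
```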
The team also recreated one of Meltzoff’s tests that showed infants who had experience with visual barriers and blindfolds weren’t interested in looking where a blindfolded adult was looking, because they understood the person couldn’t actually see. Once the team enabled the robot to “learn” what the consequences of being blindfolded were, it no longer followed the human’s head movement to look at the same spot.
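Continuing the hypothetical sketch above, the blindfold result amounts to conditioning gaze inference on whether the observed person can actually see; the blindfolded flag and helper function below are invented for illustration:

```python
def infer_gaze_target(self_model, head_pose, blindfolded):
    """Return the inferred gaze target, or None if there is nothing
    to infer because the person's eyes are covered."""
    # Having "worn" the blindfold itself, the robot has learned that
    # occluded head movements carry no information about gaze.
    if blindfolded:
        return None  # don't follow a sightless gaze
    return self_model.predict([head_pose])[0]

print(infer_gaze_target(self_model, [0.4, -0.2], blindfolded=True))  # None
```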