Babies learn about the world by exploring how their bodies move in space, by grabbing toys and pushing things off tables, and by watching and imitating what adults do.
But when roboticists want to teach a robot how to do a task, they typically either write code or physically move a robot’s arm or body to show it how to perform an action.
Now a collaboration between University of Washington developmental psychologists and computer scientists has demonstrated that robots can “learn” much like kids — by amassing data through exploration, watching a human do something and determining how to perform that task on its own.
“You can look at this as a first step in building robots that can learn from humans in the same way that infants learn from humans,” said senior author Rajesh Rao, a UW professor of computer science and engineering.
“If you want people who don’t know anything about computer programming to be able to teach a robot, the way to do it is through demonstration — showing the robot how to clean your dishes, fold your clothes, or do household chores. But to achieve that goal, you need the robot to be able to understand those actions and perform them on its own.”
The research, which combines child development research from the UW’s Institute for Learning & Brain Sciences (I-LABS) with machine learning approaches, was published in November in the journal PLOS ONE.
In the paper, the UW team developed a new probabilistic model aimed at solving a fundamental challenge in robotics: building robots that can learn new skills by watching people and imitating them.
The roboticists collaborated with UW psychology professor and I-LABS co-director Andrew Meltzoff, whose seminal research has shown that children as young as 18 months can infer the goal of an adult’s actions and develop alternate ways of reaching that goal themselves.
In one example, infants saw an adult try to pull apart a barbell-shaped toy, but the adult failed to achieve that goal because the toy was stuck together and his hands slipped off the ends. The infants watched carefully and then decided to use alternate methods — they wrapped their tiny fingers all the way around the ends and yanked especially hard — duplicating what the adult intended to do.
Children acquire intention-reading skills, in part, through self-exploration that helps them learn the laws of physics and how their own actions influence objects, eventually allowing them to amass enough knowledge to learn from others and to interpret their intentions. Meltzoff thinks that one of the reasons babies learn so quickly is that they are so playful.
“Babies engage in what looks like mindless play, but this enables future learning. It’s a baby’s secret sauce for innovation,” Meltzoff said. “If they’re trying to figure out how to work a new toy, they’re actually using knowledge they gained by playing with other toys. During play they’re learning a mental model of how their actions cause changes in the world. And once you have that model you can begin to solve novel problems and start to predict someone else’s intentions.”
Rao’s team used that infant research to develop machine learning algorithms that allow a robot to explore how its own actions result in different outcomes. The robot then uses that learned probabilistic model to infer what a human wants it to do and complete the task, and even to “ask” for help if it’s not certain it can complete the task on its own.
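To make that loop concrete, here is a minimal, hypothetical sketch in Python of the idea as described above: the robot learns P(outcome | action) through random exploration, picks the action most likely to reproduce a demonstrated outcome, and signals for help when its confidence is low. The action names, toy simulator, and confidence threshold are invented for illustration; this is not the UW team's actual model.

```python
# Illustrative sketch only: a goal-based imitation loop in the spirit of the
# approach described above. All names and numbers here are invented.
import random
from collections import defaultdict

ACTIONS = ["push_left", "push_right", "grasp"]
OUTCOMES = ["object_left", "object_right", "object_lifted"]

def simulate(action):
    """Toy stand-in for the physical world: mostly reliable, sometimes slips."""
    table = {"push_left": "object_left",
             "push_right": "object_right",
             "grasp": "object_lifted"}
    return table[action] if random.random() < 0.8 else random.choice(OUTCOMES)

# 1. Self-exploration: learn P(outcome | action) by trying actions and
#    counting what happens.
counts = defaultdict(lambda: defaultdict(int))
for _ in range(500):
    a = random.choice(ACTIONS)
    counts[a][simulate(a)] += 1

def p_outcome_given_action(outcome, action):
    total = sum(counts[action].values())
    return counts[action][outcome] / total if total else 0.0

# 2. Imitation: given the outcome of a human demonstration, choose the action
#    most likely to reproduce it under the learned model...
def imitate(observed_outcome, confidence_threshold=0.5):
    best = max(ACTIONS, key=lambda a: p_outcome_given_action(observed_outcome, a))
    confidence = p_outcome_given_action(observed_outcome, best)
    # 3. ...and "ask" for help when the model is not confident enough.
    if confidence < confidence_threshold:
        return None, confidence  # signal: request human assistance
    return best, confidence

action, conf = imitate("object_lifted")
print(action, round(conf, 2))
```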
The team tested its robotic model in two different scenarios: a computer simulation experiment in which a robot learns to follow a human’s gaze, and another experiment in which an actual robot learns to imitate human actions involving moving toy food objects to different areas on a tabletop.
In the gaze experiment, the robot learns a model of its own head movements and assumes that the human’s head is governed by the same rules. The robot tracks the beginning and ending points of a human’s head movements as the human looks across the room and uses that information to figure out where the person is looking. The robot then uses its learned model of head movements to fixate on the same location as the human.
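A hedged sketch of that self-model idea follows, with invented geometry: the robot fits a model mapping its own head angle to a fixation point on a wall, then applies the same model to a human's observed head orientation and turns its own head to the inferred spot. None of the numbers or function names come from the paper.

```python
# Sketch under invented assumptions: the robot reuses a model of its OWN head
# movements to decode where a human is looking.
import numpy as np

# Self-model: the robot turns its head to known angles and records where its
# camera points along a wall 2 m away (x position, in meters).
pan_angles = np.radians(np.arange(-60, 61, 10))   # commanded head angles
wall_dist = 2.0
gaze_x = wall_dist * np.tan(pan_angles)           # observed fixation points

# Fit a simple invertible model: fixation x is linear in tan(angle).
coeff = np.polyfit(np.tan(pan_angles), gaze_x, 1)  # recovers ~[2.0, 0.0]

def gaze_target(head_angle_rad):
    """Predict a fixation point from a head orientation via the self-model."""
    return np.polyval(coeff, np.tan(head_angle_rad))

# Assume the human's head obeys the same rules: from the end point of the
# human's head movement, infer where they are looking...
human_final_angle = np.radians(25)
target_x = gaze_target(human_final_angle)

# ...then command the robot's own head to fixate the same spot.
robot_angle = np.arctan2(target_x, wall_dist)
print(f"inferred target x = {target_x:.2f} m, robot pan = {np.degrees(robot_angle):.1f} deg")
```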
The team also recreated one of Meltzoff’s tests that showed infants who had experience with visual barriers and blindfolds weren’t interested in looking where a blindfolded adult was looking, because they understood the person couldn’t actually see. Once the team enabled the robot to “learn” what the consequences of being blindfolded were, it no longer followed the human’s head movement to look at the same spot.
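Continuing the sketch above, one simple way to capture the blindfold result is to gate gaze-following on a learned probability that the looker can actually see. The values below are placeholders; in the study, the robot learned the consequences of blindfolds from its own experience rather than being given them.

```python
# Hypothetical extension of the gaze sketch: follow a head turn only if the
# looker can plausibly see. Probabilities are invented placeholders.
p_sees = {"no_blindfold": 0.95, "blindfolded": 0.05}  # would be learned

def should_follow_gaze(condition, threshold=0.5):
    """Follow the head movement only if the looker can plausibly see."""
    return p_sees[condition] > threshold

print(should_follow_gaze("no_blindfold"))  # True  -> fixate the same spot
print(should_follow_gaze("blindfolded"))   # False -> ignore the head turn
```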