With a smartphone and a browser, people worldwide will be able to interact with a robot to speed the process of teaching robots how to do basic tasks.
In the basement of the Gates Computer Science Building at Stanford University, a screen attached to a red robotic arm lights up. A pair of cartoon eyes blinks. “Meet Bender,” says Ajay Mandlekar, PhD student in electrical engineering.
Bender is one of the robot arms that a team of Stanford researchers is using to test two frameworks that, together, could make it faster and easier to teach robots basic skills. The RoboTurk framework allows people to direct the robot arms in real time with a smartphone and a browser, showing the robot how to carry out tasks like picking up objects. SURREAL speeds the learning process by running many simulated experiences in parallel, essentially letting a robot learn from thousands of trials at once.
“With RoboTurk and SURREAL, we can push the boundary of what robots can do by combining lots of data collected by humans and coupling that with large-scale reinforcement learning,” said Mandlekar, a member of the team that developed the frameworks.
The group will be presenting RoboTurk and SURREAL Oct. 29 at the Conference on Robot Learning (CoRL) in Zurich, Switzerland.
Humans teaching robots
Yuke Zhu, a PhD student in computer science and a member of the team, showed how the system works by opening the app on his iPhone and waving it through the air. He guided the robot arm – like a mechanical crane in an arcade game – to hover over his prize: a wooden block painted to look like a steak. This is a simple pick-and-place task that involves identifying objects, picking them up and putting them into the bin with the correct label.
To humans, the task seems ridiculously easy. But for the robots of today, it’s quite difficult. Robots typically learn by interacting with and exploring their environment – which usually results in lots of random arm waving – or from large datasets. Neither of these is as efficient as getting some human help. In the same way that parents teach their children to brush their teeth by guiding their hands, people can demonstrate to robots how to do specific tasks.
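Learning from demonstrations like this is often called behavioral cloning: record the states a human visits and the actions they take, then train a policy to reproduce those actions. A minimal sketch of the idea in Python follows; the one-dimensional task, the state encoding, and the lookup-table policy are illustrative assumptions for this example, not the RoboTurk system itself.

```python
# Behavioral cloning in miniature: a "human" demonstrates moving a gripper
# toward a block and grasping it; a policy is then cloned from the recorded
# (state, action) pairs. All task details here are toy assumptions.

def demonstrate(block_pos):
    """One demonstration: step the gripper right until it reaches the block."""
    gripper, trajectory = 0, []
    while gripper != block_pos:
        trajectory.append(((gripper, block_pos), 1))  # action 1 = move right
        gripper += 1
    trajectory.append(((gripper, block_pos), 0))      # action 0 = grasp
    return trajectory

def clone_policy(demos):
    """Build a lookup-table policy from demonstrated (state, action) pairs."""
    table = {}
    for trajectory in demos:
        for state, action in trajectory:
            table[state] = action
    return lambda state: table.get(state, 0)

# Collect demonstrations for a few block positions, then imitate.
demos = [demonstrate(p) for p in (3, 5, 7)]
policy = clone_policy(demos)
print(policy((0, 5)))  # → 1 (move right toward the block)
```

A real system replaces the lookup table with a neural network so the policy can generalize to states no demonstrator ever visited, which is exactly where the imperfect, noisy demonstrations mentioned above still help.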
However, those lessons aren’t always perfect. When Zhu pressed hard on his phone screen and the robot released its grip, the wooden steak hit the edge of the bin and clattered onto the table. “Humans are by no means optimal at this,” Mandlekar said, “but this experience is still integral for the robots.”
Faster learning in parallel
These trials – even the failures – provide invaluable information. The demonstrations collected through RoboTurk from people worldwide will give the robots background knowledge to kickstart their learning, and SURREAL can run thousands of simulated experiences at once to speed the learning process.
“With SURREAL, we want to accelerate this process of interacting with the environment,” said Linxi Fan, a PhD student in computer science and a member of the team. These frameworks drastically increase the amount of data for the robots to learn from.
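The pattern behind this speedup is a set of actor processes that each roll out episodes in their own copy of a simulated environment, all feeding one shared experience buffer that a learner consumes. A minimal sketch is below, using local threads and a toy environment as stand-ins; SURREAL itself distributes separate simulator instances across a cluster, and the environment and reward here are invented for illustration.

```python
# SURREAL-style parallel experience collection, in miniature: many actors
# roll out episodes concurrently and their transitions are pooled into one
# buffer. The toy 1-D environment and reward are illustrative assumptions.

import random
from concurrent.futures import ThreadPoolExecutor

def rollout(seed, steps=10):
    """One actor's episode: a random policy wandering a 1-D line."""
    rng = random.Random(seed)
    pos, experience = 0, []
    for _ in range(steps):
        action = rng.choice((-1, 1))
        reward = 1 if pos + action == 0 else 0  # reward for returning home
        experience.append((pos, action, reward))
        pos += action
    return experience

# Run 16 actors in parallel and flatten their episodes into a shared buffer;
# a learner would sample batches from this buffer to update the policy.
with ThreadPoolExecutor(max_workers=4) as pool:
    buffer = [step for episode in pool.map(rollout, range(16))
              for step in episode]

print(len(buffer))  # → 160 transitions (16 actors x 10 steps)
```

The design point is that environment interaction, not gradient computation, is usually the bottleneck in reinforcement learning, so adding actors multiplies the data rate almost linearly.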
“The twin frameworks combined can provide a mechanism for AI-assisted human performance of tasks where we can bring humans away from dangerous environments while still retaining a similar level of task execution proficiency,” said postdoctoral fellow Animesh Garg, a member of the team that developed the frameworks.
The team envisions that robots will be an integral part of everyday life in the future: helping with household chores, performing repetitive assembly tasks in manufacturing or completing dangerous tasks that may pose a threat to humans.
“You shouldn’t have to tell the robot to twist its arm 20 degrees and inch forward 10 centimeters,” said Zhu. “You want to be able to tell the robot to go to the kitchen and get an apple.”