Breakthrough CSAIL system suggests robots could one day be able to see well enough to be useful in people’s homes and offices.
Humans have long been masters of dexterity, a skill that can largely be credited to the help of our eyes. Robots, meanwhile, are still catching up.
Certainly there’s been some progress: For decades, robots in controlled environments like assembly lines have been able to pick up the same object over and over again. More recently, breakthroughs in computer vision have enabled robots to make basic distinctions between objects. Even then, though, the systems don’t truly understand objects’ shapes, so there’s little the robots can do after a quick pick-up.
In a new paper, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) say that they’ve made a key development in this area of work: a system that lets robots inspect random objects and visually understand them well enough to accomplish specific tasks without ever having seen them before.
The system, called Dense Object Nets (DON), looks at objects as collections of points that serve as a sort of visual roadmap. This approach lets robots better understand and manipulate items and, most importantly, allows them to pick up a specific object from among a clutter of similar objects — a valuable skill for the kinds of machines that companies like Amazon and Walmart use in their warehouses.
For example, someone might use DON to get a robot to grab onto a specific spot on an object, say, the tongue of a shoe. From that, it can look at a shoe it has never seen before, and successfully grab its tongue.
“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” says PhD student Lucas Manuelli, who wrote a new paper about the system with lead author and fellow PhD student Pete Florence, alongside MIT Professor Russ Tedrake. “For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side.”
The team views potential applications not just in manufacturing settings, but also in homes. Imagine giving the system an image of a tidy house, and letting it clean while you’re at work, or using an image of dishes so that the system puts your plates away while you’re on vacation.
What’s also noteworthy is that none of the data was actually labeled by humans. Instead, the system is what the team calls “self-supervised,” not requiring any human annotations.
Two common approaches to robot grasping involve either task-specific learning, or creating a general grasping algorithm. These techniques both have obstacles: Task-specific methods are difficult to generalize to other tasks, and general grasping doesn’t get specific enough to deal with the nuances of particular tasks, like putting objects in specific spots.
The DON system, however, essentially creates a series of coordinates on a given object, which serve as a kind of visual roadmap, to give the robot a better understanding of what it needs to grasp, and where.
The team trained the system to look at objects as a series of points that make up a larger coordinate system. It can then map different points together to visualize an object’s 3-D shape, similar to how panoramic photos are stitched together from multiple photos. After training, if a person specifies a point on an object, the robot can take a photo of that object, identify the matching point, and then pick up the object at that specified spot.
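The matching step described above can be illustrated with a minimal sketch. This is not the authors’ code — it assumes a network has already produced a per-pixel descriptor image (an H × W grid where each pixel carries a D-dimensional vector), and simply finds the pixel in a new image whose descriptor is closest to the descriptor at the user-specified point:

```python
import numpy as np

def match_point(ref_descriptors, ref_pixel, target_descriptors):
    """Find the pixel in the target image whose descriptor is closest
    (in Euclidean distance) to the descriptor at ref_pixel in the
    reference image.

    ref_descriptors, target_descriptors: (H, W, D) arrays of per-pixel
        descriptors, as would be output by a dense-descriptor network.
    ref_pixel: (row, col) of the user-specified point on the reference object.
    Returns the (row, col) of the best-matching pixel in the target image.
    """
    d = ref_descriptors[ref_pixel]  # (D,) descriptor at the chosen point
    # Distance from every target pixel's descriptor to d -> (H, W) map
    dists = np.linalg.norm(target_descriptors - d, axis=-1)
    # Pixel with the smallest descriptor distance is the match
    return np.unravel_index(np.argmin(dists), dists.shape)
```

Because the descriptors are trained to be consistent across viewpoints and object instances, this single nearest-neighbor lookup is enough to transfer a point like “the tongue of the shoe” from one image to another.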
This is different from systems like UC Berkeley’s DexNet, which can grasp many different items but can’t satisfy a specific request. Imagine an 18-month-old child, who doesn’t understand which toy you want it to play with but can still grab lots of items, versus a four-year-old who can respond to “go grab your truck by the red end of it.”
In one set of tests done on a soft caterpillar toy, a Kuka robotic arm powered by DON could grasp the toy’s right ear from a range of different configurations. This showed that, among other things, the system has the ability to distinguish left from right on symmetrical objects.
When testing on a bin of different baseball hats, DON could pick out a specific target hat despite all of the hats having very similar designs — and having never seen pictures of the hats in training data before.
“In factories robots often need complex part feeders to work reliably,” says Florence. “But a system like this that can understand objects’ orientations could just take a picture and be able to grasp and adjust the object accordingly.”
In the future, the team hopes to improve the system to a point where it can perform specific tasks with a deeper understanding of the corresponding objects — for example, learning how to grasp an object and move it with the ultimate goal of, say, cleaning a desk.