ComText, from the Computer Science and Artificial Intelligence Laboratory, allows robots to understand contextual commands.
Despite what you might see in movies, today’s robots are still very limited in what they can do. They can be great for many repetitive tasks, but their inability to understand the nuances of human language makes them mostly useless for more complicated requests.
For example, if you put a specific tool in a toolbox and ask a robot to “pick it up,” it would be completely lost. Picking it up means being able to see and identify objects, understand commands, recognize that the “it” in question is the tool you put down, recall the moment when you put the tool down, and distinguish it from other tools of similar shape and size.
Recently researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have gotten closer to making this type of request easier: In a new paper, they present an Alexa-like system that allows robots to understand a wide range of commands that require contextual knowledge about objects and their environments. They’ve dubbed the system “ComText,” for “commands in context.”
The toolbox scenario above is among the types of tasks ComText can handle. If you tell the system that “the tool I put down is my tool,” it adds that fact to its knowledge base. You can then update the robot with more information about other objects and have it execute a range of tasks, like picking up different sets of objects based on different commands.
“Where humans understand the world as a collection of objects and people and abstract concepts, machines view it as pixels, point-clouds, and 3-D maps generated from sensors,” says CSAIL postdoc Rohan Paul, one of the lead authors of the paper. “This semantic gap means that, for robots to understand what we want them to do, they need a much richer representation of what we do and say.”
The team tested ComText on Baxter, a two-armed humanoid robot developed for Rethink Robotics by former CSAIL director Rodney Brooks.
The project was co-led by research scientist Andrei Barbu, alongside research scientist Sue Felshin, senior research scientist Boris Katz, and Professor Nicholas Roy. They presented the paper at last week’s International Joint Conference on Artificial Intelligence (IJCAI) in Australia.
How it works
Things like dates, birthdays, and facts are forms of “declarative memory.” There are two kinds of declarative memory: semantic memory, which is based on general facts like “the sky is blue,” and episodic memory, which is based on personal experiences, like remembering what happened at a party.
Most approaches to robot learning have focused only on semantic memory, which leaves a big knowledge gap about events or facts that may be relevant context for future actions. ComText, meanwhile, can observe a range of visuals and natural language to glean “episodic memory” about an object’s size, shape, position, and type, and even whether it belongs to somebody. From this knowledge base, it can then reason, infer meaning, and respond to commands.
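To make the distinction concrete, here is a minimal Python sketch of how a system might keep semantic memory (timeless general facts) separate from episodic memory (time-stamped observations) and use the latter to resolve a reference like “the tool I put down.” The class names, fact representation, and resolution logic are illustrative assumptions, not ComText’s actual implementation.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Observation:
    """One time-stamped episodic fact, e.g. 'Rohan put down the wrench'."""
    timestamp: float
    actor: str
    action: str
    obj: str
    properties: dict = field(default_factory=dict)

class KnowledgeBase:
    """Toy split between semantic memory (general facts) and
    episodic memory (an ordered log of observed events)."""

    def __init__(self):
        self.semantic = {"sky": "blue"}  # timeless general facts
        self.episodic = []               # time-ordered Observations

    def observe(self, actor, action, obj, **properties):
        """Record an observed event as episodic memory."""
        self.episodic.append(
            Observation(time.time(), actor, action, obj, properties))

    def resolve(self, actor, action):
        """Ground a reference like 'the tool I put down' by searching
        backward in time for this actor's most recent matching action."""
        for event in reversed(self.episodic):
            if event.actor == actor and event.action == action:
                return event.obj
        return None

# "The tool I put down is my tool."
kb = KnowledgeBase()
kb.observe("Rohan", "put_down", "wrench", owner="Rohan")
print(kb.resolve("Rohan", "put_down"))  # -> wrench
```

The point the sketch tries to capture is that the episodic store is ordered in time, so a command referring to a past event can be grounded by searching backward from the present, something a purely semantic knowledge base cannot do.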
“The main contribution is this idea that robots should have different kinds of memory, just like people,” says Barbu. “We have the first mathematical formulation to address this issue, and we’re exploring how these two types of memory play and work off of each other.”
With ComText, Baxter was successful in executing the right command about 90 percent of the time. In the future, the team hopes to enable robots to understand more complicated information, such as multi-step commands, the intent of actions, and how to use objects’ properties to interact with them more naturally.
For example, if you tell a robot that one box on a table has crackers, and one box has sugar, and then ask the robot to “pick up the snack,” the hope is that the robot could deduce that sugar is a raw material and therefore unlikely to be somebody’s “snack.”
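As a toy illustration of that kind of property-based inference, a grounding step might map object properties to everyday categories like “snack.” The rules, property names, and objects below are invented for this example and are not part of the published system.

```python
# Hypothetical property-based grounding: decide which objects on the
# table count as a "snack" from their known properties.
CATEGORY_RULES = {
    "snack": lambda props: props.get("edible", False)
                           and not props.get("raw_material", False),
}

objects = {
    "cracker_box": {"edible": True, "raw_material": False},
    "sugar_box":   {"edible": True, "raw_material": True},
}

def ground(category):
    """Return the names of objects whose properties satisfy the rule."""
    rule = CATEGORY_RULES[category]
    return [name for name, props in objects.items() if rule(props)]

print(ground("snack"))  # -> ['cracker_box']
```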
By allowing much less constrained interactions, this line of research could enable better communication for a range of robotic systems, from self-driving cars to household helpers.
“This work is a nice step towards building robots that can interact much more naturally with people,” says Luke Zettlemoyer, an associate professor of computer science at the University of Washington who was not involved in the research. “In particular, it will help robots better understand the names that are used to identify objects in the world, and interpret instructions that use those names to better do what users ask.”