Sep 04, 2017
 

ComText allows robots to understand contextual commands such as “Pick up the box I put down.”
Photo: Tom Buehler/MIT CSAIL

ComText, from the Computer Science and Artificial Intelligence Laboratory, allows robots to understand contextual commands.

Despite what you might see in movies, today’s robots are still very limited in what they can do. They can be great for many repetitive tasks, but their inability to understand the nuances of human language makes them mostly useless for more complicated requests.

For example, if you put a specific tool in a toolbox and ask a robot to “pick it up,” it would be completely lost. Picking it up means being able to see and identify objects, understand commands, recognize that the “it” in question is the tool you put down, go back in time to remember the moment when you put down the tool, and distinguish that tool from others of similar shape and size.

Recently, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have gotten closer to making this type of request easier: In a new paper, they present an Alexa-like system that allows robots to understand a wide range of commands that require contextual knowledge about objects and their environments. They’ve dubbed the system “ComText,” for “commands in context.”

The toolbox situation above was among the types of tasks that ComText can handle. If you tell the system that “the tool I put down is my tool,” it adds that fact to its knowledge base. You can then update the robot with more information about other objects and have it execute a range of tasks like picking up different sets of objects based on different commands.
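
To make that flow concrete, here is a minimal Python sketch of how such a fact might be recorded and later used to resolve a command. Everything in it is invented for illustration: the class, its methods, and the object identifier wrench_01 are hypothetical stand-ins, not ComText’s actual interface.

    class ToyKnowledgeBase:
        """Stores user-asserted facts alongside observed events (toy model)."""

        def __init__(self):
            self.facts = {}    # label -> grounded object, e.g. {"my tool": "wrench_01"}
            self.events = []   # time-ordered observations, e.g. ("put_down", "wrench_01")

        def observe(self, action, obj):
            """Record a perceived event on the robot's timeline."""
            self.events.append((action, obj))

        def assert_fact(self, label, obj):
            """Attach a user-supplied label such as "my tool" to a grounded object."""
            self.facts[label] = obj

        def resolve(self, phrase):
            """Ground a referring expression: check facts first, then recent events."""
            if phrase in self.facts:
                return self.facts[phrase]
            if phrase == "the tool I put down":  # hard-coded stand-in for real language parsing
                for action, obj in reversed(self.events):
                    if action == "put_down":
                        return obj
            raise KeyError(f"cannot ground: {phrase}")

    kb = ToyKnowledgeBase()
    kb.observe("put_down", "wrench_01")                            # the robot sees the tool put down
    kb.assert_fact("my tool", kb.resolve("the tool I put down"))   # "the tool I put down is my tool"
    print(kb.resolve("my tool"))                                   # -> wrench_01

The point of the sketch is the ordering: the label “my tool” only becomes meaningful because the system can first resolve the referring expression against what it has already seen.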

“Where humans understand the world as a collection of objects and people and abstract concepts, machines view it as pixels, point-clouds, and 3-D maps generated from sensors,” says CSAIL postdoc Rohan Paul, one of the lead authors of the paper. “This semantic gap means that, for robots to understand what we want them to do, they need a much richer representation of what we do and say.”

The team tested ComText on Baxter, a two-armed humanoid robot from Rethink Robotics, the company founded by former CSAIL director Rodney Brooks.

The project was co-led by research scientist Andrei Barbu, alongside research scientist Sue Felshin, senior research scientist Boris Katz, and Professor Nicholas Roy. They presented the paper at last week’s International Joint Conference on Artificial Intelligence (IJCAI) in Australia.

How it works

Things like dates, birthdays, and facts are forms of “declarative memory.” There are two kinds of declarative memory: semantic memory, which is based on general facts, like “the sky is blue,” and episodic memory, which is based on personal facts, like remembering what happened at a party.

Most approaches to robot learning have focused only on semantic memory, which obviously leaves a big knowledge gap about events or facts that may be relevant context for future actions. ComText, meanwhile, can observe a range of visuals and natural language to glean “episodic memory” about an object’s size, shape, position, and type, and even whether it belongs to somebody. From this knowledge base, it can then reason, infer meaning, and respond to commands.
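
A minimal sketch of that two-memory split, again in Python with invented data structures (the Event and Memory classes and their fields are assumptions for illustration, not the paper’s formulation):

    from dataclasses import dataclass, field

    @dataclass
    class Event:
        time: int    # discrete timestep of the observation
        action: str  # e.g. "put_down", "pick_up"
        obj: str     # object identifier from perception

    @dataclass
    class Memory:
        semantic: dict = field(default_factory=dict)  # timeless facts about objects
        episodic: list = field(default_factory=list)  # time-stamped Events

        def most_recent(self, action):
            """Episodic query: the latest object involved in the given action."""
            for ev in sorted(self.episodic, key=lambda e: e.time, reverse=True):
                if ev.action == action:
                    return ev.obj
            return None

    mem = Memory()
    mem.semantic["wrench_01"] = {"type": "tool", "owner": None}
    mem.episodic.append(Event(time=3, action="put_down", obj="wrench_01"))

    # "The tool I put down is my tool": an episodic lookup feeding a semantic update.
    obj = mem.most_recent("put_down")
    mem.semantic[obj]["owner"] = "user"
    print(obj, mem.semantic[obj])  # wrench_01 {'type': 'tool', 'owner': 'user'}

The useful property here is that episodic queries are about when something happened, while semantic facts persist independently of time, and a single command can chain the two.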

“The main contribution is this idea that robots should have different kinds of memory, just like people,” says Barbu. “We have the first mathematical formulation to address this issue, and we’re exploring how these two types of memory play and work off of each other.”

With ComText, Baxter was successful in executing the right command about 90 percent of the time. In the future, the team hopes to enable robots to understand more complicated information, such as multi-step commands, the intent of actions, and the use of object properties to interact with objects more naturally.

For example, if you tell a robot that one box on a table has crackers, and one box has sugar, and then ask the robot to “pick up the snack,” the hope is that the robot could deduce that sugar is a raw material and therefore unlikely to be somebody’s “snack.”
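
One way to picture that kind of inference is as a lookup through a small taxonomy of object properties. In this sketch the taxonomy and all names are invented for illustration; the paper describes this capability only as future work.

    SEMANTIC_FACTS = {
        "box_a": {"contains": "crackers"},
        "box_b": {"contains": "sugar"},
    }

    # Toy taxonomy: crackers are edible as-is, sugar is a raw ingredient.
    CATEGORIES = {"crackers": "snack", "sugar": "raw_material"}

    def find_object(goal_category):
        """Return the object whose contents fall under the requested category."""
        for obj, props in SEMANTIC_FACTS.items():
            if CATEGORIES.get(props["contains"]) == goal_category:
                return obj
        return None

    print(find_object("snack"))  # -> box_a (the crackers, not the sugar)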

By creating much less constrained interactions, this line of research could enable better communication with a range of robotic systems, from self-driving cars to household helpers.

“This work is a nice step towards building robots that can interact much more naturally with people,” says Luke Zettlemoyer, an associate professor of computer science at the University of Washington who was not involved in the research. “In particular, it will help robots better understand the names that are used to identify objects in the world, and interpret instructions that use those names to better do what users ask.”

Learn more: ComText, from the Computer Science and Artificial Intelligence Laboratory, allows robots to understand contextual commands.

 

