Why did the frog cross the road? Well, a new artificially intelligent (AI) agent that plays the classic arcade game Frogger can not only tell you why it crossed the road, but can also justify its every move in everyday language.
Developed by Georgia Tech in collaboration with Cornell University and the University of Kentucky, the work enables an AI agent to provide a rationale for a mistake or errant behavior and to explain it in a way that non-experts can easily understand.
This, the researchers say, may help robots and other types of AI agents seem more relatable and trustworthy to humans. They also say their findings are an important step toward a more transparent, human-centered AI design that understands people’s preferences and prioritizes people’s needs.
“If the power of AI is to be democratized, it needs to be accessible to anyone regardless of their technical abilities,” said Upol Ehsan, Ph.D. student in the School of Interactive Computing at Georgia Tech and lead researcher.
“As AI pervades all aspects of our lives, there is a distinct need for human-centered AI design that makes black-boxed AI systems explainable to everyday users. Our work takes a formative step toward understanding the role of language-based explanations and how humans perceive them.”
The study was supported by the Office of Naval Research (ONR).
The researchers designed a participant study to determine whether their AI agent could offer rationales that mimicked human responses. Participants watched the AI agent play the video game Frogger and then ranked three on-screen rationales in order of how well each described the AI’s game move.
Of the three anonymized justifications for each move – a human-generated response, the AI-agent response, and a randomly generated response – participants preferred the human-generated rationales, but the AI-generated responses were a close second.
Frogger offered the researchers the chance to train an AI in a “sequential decision-making environment,” a significant research challenge because decisions the agent has already made influence its future decisions. According to the researchers, this makes the chain of reasoning difficult to explain even to experts, and more so to non-experts.
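To make that idea concrete, here is a minimal Python sketch of a sequential decision loop in a toy Frogger-like grid. Everything in it (the grid, the random stand-in policy, all names) is invented for illustration and is not the researchers’ system; the point is only that each move changes the state, so every later decision is made in a state produced by earlier ones.

```python
# Illustrative sketch only: a toy Frogger-like sequential decision loop.
# None of these names come from the study; the actual agent was trained,
# not random, and the real game has vehicles, a river, and logs.
import random

ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(state, action):
    """Apply one move to the frog's (row, col) grid position."""
    dr, dc = ACTIONS[action]
    return (state[0] + dr, state[1] + dc)

def choose_action(state):
    """Stand-in policy; the study's agent was trained, not random."""
    return random.choice(list(ACTIONS))

state, trajectory = (10, 4), []   # start at the bottom of a toy grid
for _ in range(200):              # cap the episode length
    action = choose_action(state)
    state = step(state, action)   # the state is the product of all past moves
    trajectory.append((action, state))
    if state[0] <= 0:             # reached the goal row at the top
        break
```

Because each entry in `trajectory` depends on everything before it, explaining any single move can require unwinding the whole chain of earlier decisions, which is the difficulty the researchers describe.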
Participants who weren’t familiar with the game were told its goal: get the frog safely home without being hit by a moving vehicle or drowning in the river. The simple game mechanics of moving up, down, left, or right allowed the participants to see what the AI was doing and to reasonably evaluate whether the on-screen rationales clearly justified each move.
The participants judged the rationales based on:
- Confidence – the rationale makes the person confident in the AI’s ability to perform its task
- Human-likeness – the rationale looks like it was made by a human
- Adequate justification – the rationale adequately justifies the action taken
- Understandability – the rationale helps the person understand the AI’s behavior
AI-generated rationales ranked higher by participants were those that showed recognition of environmental conditions and adaptability, as well as those that communicated awareness of upcoming dangers and planned for them. Rationales that merely stated the obvious or misrepresented the environment were found to have a negative impact.
“This project is more about understanding human perceptions and preferences of these AI systems than it is about building new technologies,” said Ehsan. “At the heart of explainability is sensemaking. We are trying to understand that human factor.”
A second, related study validated the researchers’ decision to design their AI agent to offer one of two distinct types of rationales, illustrated in the sketch after this list:
- Concise, “focused” rationales or
- Holistic, “complete picture” rationales
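As a rough illustration of that distinction, the hypothetical templates below contrast the two styles. These are invented for this article; the study’s rationales were produced by the AI agent itself, not by hand-written templates.

```python
# Hypothetical templates showing the difference in scope between the
# two rationale styles described in the study; not the study's code.

def focused_rationale(action, hazard):
    """Concise: explains only the immediate move."""
    return f"I moved {action} to avoid the {hazard}."

def complete_picture_rationale(action, hazard, plan):
    """Holistic: situates the move within a longer-term plan."""
    return (f"I moved {action} to avoid the {hazard}, "
            f"which keeps me lined up to {plan}.")

print(focused_rationale("left", "oncoming truck"))
print(complete_picture_rationale("left", "oncoming truck",
                                 "reach the log before it drifts away"))
```

The “focused” form answers only “why this move,” while the “complete picture” form also reveals what the agent intends to do next, which is the extra information participants in the second study rewarded.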
In this second study, participants were offered only AI-generated rationales after watching the AI play Frogger. They were asked to select the rationale they preferred in a scenario where the AI made a mistake or behaved unexpectedly. They did not know the rationales were grouped into the two categories.
By a 3-to-1 margin, participants favored answers classified in the “complete picture” category. Responses showed that people appreciated the AI thinking about future steps rather than just the current moment, since an agent focused only on the moment might be more prone to making another mistake. People also wanted to know more so that they could directly help the AI fix the errant behavior.
“The situated understanding of the perceptions and preferences of people working with AI machines gives us a powerful set of actionable insights that can help us design better human-centered, rationale-generating, autonomous agents,” said Mark Riedl, professor of Interactive Computing and lead faculty member on the project.
A possible future direction for the research is to apply the findings to autonomous agents of various types, such as companion agents, and to how they might respond based on the task at hand. The researchers will also look at how agents might respond in different scenarios, such as during an emergency response or when aiding teachers in the classroom.