Researchers at the U.S. Army Research Laboratory and the University of Texas at Austin have developed new techniques for robots or computer programs to learn how to perform tasks by interacting with a human instructor.
The findings of the study will be presented and published at the Association for the Advancement of Artificial Intelligence (AAAI) Conference in New Orleans, Louisiana, Feb. 2-7, 2018.
ARL and UT researchers considered a specific case where a human provides real-time feedback in the form of critique. Building on TAMER (Training an Agent Manually via Evaluative Reinforcement), a framework first introduced by collaborator Dr. Peter Stone, a professor at the University of Texas at Austin, and his former doctoral student Brad Knox, the ARL/UT team developed a new algorithm called Deep TAMER.
Deep TAMER extends TAMER with deep learning, a class of machine learning algorithms loosely inspired by the brain, giving a robot the ability to learn how to perform tasks by viewing video streams in a short amount of time with a human trainer.
According to Army researcher Dr. Garrett Warnell, the team considered situations where a human teaches an agent how to behave by observing it and providing critique, for example, “good job” or “bad job,” similar to the way a person might train a dog to do a trick. Warnell said the researchers extended earlier work in this field to enable this type of training for robots or computer programs that currently see the world through images, an important first step in designing learning agents that can operate in the real world.
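The core idea behind this style of training can be sketched in a few lines: the agent regresses a model of the human's feedback signal and then acts greedily with respect to it. The toy below is illustrative only, not the authors' Deep TAMER implementation: it uses a linear model over hand-made state features rather than a deep network over video frames, a simulated trainer in place of a human, and learning rates and dimensions that are arbitrary assumptions.

```python
import numpy as np

# Illustrative TAMER-style sketch (assumed structure, not the authors' code):
# learn a model H(s, a) of the human's critique, then act greedily on it.
rng = np.random.default_rng(0)
n_features, n_actions = 4, 3
weights = np.zeros((n_actions, n_features))  # one linear reward model per action

def predicted_feedback(state, action):
    """Predicted human reward H(s, a) for taking `action` in `state`."""
    return weights[action] @ state

def choose_action(state):
    """Act greedily with respect to the learned human-reward model."""
    return int(np.argmax([predicted_feedback(state, a) for a in range(n_actions)]))

def update(state, action, human_signal, lr=0.1):
    """Regress H toward the trainer's critique (+1 'good job', -1 'bad job')."""
    error = human_signal - predicted_feedback(state, action)
    weights[action] += lr * error * state

# Toy training loop with a simulated trainer who approves of action 2
# exactly when the first state feature is positive.
for _ in range(1000):
    state = rng.normal(size=n_features)
    # Mostly greedy, with some random exploration so all actions get feedback.
    action = choose_action(state) if rng.random() > 0.3 else int(rng.integers(n_actions))
    signal = 1.0 if (action == 2) == (state[0] > 0) else -1.0
    update(state, action, signal)
```

After the loop, the greedy policy picks action 2 when the first feature is positive and avoids it otherwise, without ever seeing an environment reward; all shaping comes from the (simulated) trainer's critique, which is the property TAMER exploits.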
Many current techniques in artificial intelligence require robots to interact with their environment for extended periods of time to learn how to perform a task optimally. During this process, the agent might take actions that are not only wrong, such as a robot running into a wall, but catastrophic, such as a robot running off the side of a cliff. Warnell said help from humans will speed things up for the agents and help them avoid potential pitfalls.
As a first step, the researchers demonstrated Deep TAMER’s success by using just 15 minutes of human-provided feedback to train an agent to play the Atari game Bowling, a task that has proven difficult for even state-of-the-art methods in artificial intelligence. Deep-TAMER-trained agents exhibited superhuman performance, besting both their amateur trainers and, on average, an expert human Atari player.
Within the next one to two years, researchers are interested in exploring the applicability of their newest technique in a wider variety of environments: for example, video games other than Atari Bowling and additional simulation environments to better represent the types of agents and environments found when fielding robots in the real world.
Their work will be published in the AAAI 2018 conference proceedings.
“The Army of the future will consist of Soldiers and autonomous teammates working side-by-side,” Warnell said. “While both humans and autonomous agents can be trained in advance, the team will inevitably be asked to perform tasks, for example, search and rescue or surveillance, in new environments they have not seen before. In these situations, humans are remarkably good at generalizing their training, but current artificially-intelligent agents are not.”
Deep TAMER is the first step in a line of research that the researchers envision will enable more successful human-autonomy teams in the Army. Ultimately, they want autonomous agents that can quickly and safely learn from their human teammates through a wide variety of interaction styles, such as demonstration, natural language instruction and critique.