A child is presented with a picture of various shapes and is asked to find the big red circle. To arrive at the answer, she reasons in steps: first, find all the big things; next, find the big things that are red; finally, pick out the big red thing that is a circle.
We learn through reason how to interpret the world. So, too, do neural networks. Now a team of researchers from MIT Lincoln Laboratory’s Intelligence and Decision Technologies Group has developed a neural network that performs human-like reasoning steps to answer questions about the contents of images. Named the Transparency by Design Network (TbD-net), the model visually renders its thought process as it solves problems, allowing human analysts to interpret its decision-making process. The model performs better than today’s best visual-reasoning neural networks.
Understanding how a neural network comes to its decisions has been a long-standing challenge for artificial intelligence (AI) researchers. As the neural part of their name suggests, neural networks are brain-inspired AI systems intended to replicate the way that humans learn. They consist of input and output layers, and layers in between that transform the input into the correct output. Some deep neural networks have grown so complex that it's practically impossible to follow this transformation process. That's why they are referred to as "black box" systems: their inner workings are opaque even to the engineers who build them.
With TbD-net, the developers aim to make these inner workings transparent. Transparency is important because it allows humans to interpret an AI’s results.
It is important to know, for example, what exactly a neural network used in self-driving cars thinks the difference is between a pedestrian and a stop sign, and at what point along its chain of reasoning it sees that difference. These insights allow researchers to teach the neural network to correct incorrect assumptions. But the TbD-net developers say the best neural networks today lack an effective mechanism for enabling humans to understand their reasoning process.
“Progress on improving performance in visual reasoning has come at the cost of interpretability,” says Ryan Soklaski, who built TbD-net with fellow researchers Arjun Majumdar, David Mascharka, and Philip Tran.
The Lincoln Laboratory group was able to close the gap between performance and interpretability with TbD-net. One key to their system is a collection of "modules," small neural networks that are specialized to perform specific subtasks. When TbD-net is asked a visual reasoning question about an image, it breaks the question down into subtasks and assigns each subtask to the appropriate module. Like workers on an assembly line, each module builds on what the module before it has figured out to eventually produce the final, correct answer. As a whole, TbD-net uses one AI technique that interprets human-language questions and breaks those sentences into subtasks, followed by multiple computer vision AI techniques that interpret the imagery.
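The assembly-line idea can be sketched in a few lines of code. This is a toy illustration only: the real TbD-net modules are small learned neural networks operating on image features, whereas the `attend` and `compose` functions below are hypothetical hand-written stand-ins over a symbolic scene.

```python
# Toy scene: each object is a dictionary of attributes.
scene = [
    {"size": "large", "material": "metal",  "shape": "cube",   "color": "red"},
    {"size": "small", "material": "rubber", "shape": "sphere", "color": "blue"},
    {"size": "large", "material": "rubber", "shape": "cube",   "color": "green"},
]

def attend(attribute, value):
    """Return a 'module' that keeps only objects matching attribute == value."""
    return lambda objects: [o for o in objects if o[attribute] == value]

def compose(*modules):
    """Chain modules like an assembly line: each works on the previous output."""
    def run(objects):
        for module in modules:
            objects = module(objects)
        return objects
    return run

# "Find the large metal cube" decomposed into three subtasks.
program = compose(attend("size", "large"),
                  attend("material", "metal"),
                  attend("shape", "cube"))
print(program(scene))  # only the large metal cube survives the chain
```

Because each stage's output is a well-defined intermediate result, a human can inspect it in isolation, which is the property the researchers exploit.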
Majumdar says: “Breaking a complex chain of reasoning into a series of smaller subproblems, each of which can be solved independently and composed, is a powerful and intuitive means for reasoning.”
Each module’s output is depicted visually in what the group calls an “attention mask.” The attention mask shows heat-map blobs over objects in the image that the module is identifying as its answer. These visualizations let the human analyst see how a module is interpreting the image.
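As a rough picture of what an attention mask is, imagine a grid of weights over the image, with high values where the module is "looking." The example below is a hand-built illustration; in TbD-net the mask values are produced by the network itself.

```python
import numpy as np

# A 4x4 toy "image" grid; an attention mask assigns each region a weight,
# with the highest values over the objects the module has identified.
attention_mask = np.zeros((4, 4))
attention_mask[1:3, 1:3] = 0.9  # the module attends to a blob in the center

# Rendering this grid as a heat map over the input image produces the
# blob visualization the analysts inspect.
print(attention_mask)
```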
Take, for example, the following question posed to TbD-net: "In this image, what color is the large metal cube?" To answer the question, the first module locates large objects only, producing an attention mask with those large objects highlighted. The next module takes this output and finds which of the objects identified as large by the previous module are also metal. That module's output is sent to the next module, which identifies which of those large, metal objects is also a cube. Finally, this output is sent to a module that can determine the color of objects. TbD-net's final output is "red," the correct answer to the question.
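The four-step chain described above can be mimicked with toy stand-ins. These hand-written filters over a symbolic scene are hypothetical; TbD-net's actual modules are learned networks that pass attention masks, not object lists.

```python
# Toy scene with one large metal cube, which happens to be red.
scene = [
    {"size": "large", "material": "metal",  "shape": "cube",   "color": "red"},
    {"size": "large", "material": "rubber", "shape": "sphere", "color": "blue"},
    {"size": "small", "material": "metal",  "shape": "cube",   "color": "gray"},
]

# Hypothetical stand-ins for the four modules in the example.
def find_large(objs): return [o for o in objs if o["size"] == "large"]
def find_metal(objs): return [o for o in objs if o["material"] == "metal"]
def find_cube(objs):  return [o for o in objs if o["shape"] == "cube"]
def query_color(objs): return objs[0]["color"] if objs else None

# Each module consumes the previous module's output.
answer = query_color(find_cube(find_metal(find_large(scene))))
print(answer)  # -> red
```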
When tested, TbD-net achieved results that surpass the best-performing visual reasoning models. The researchers evaluated the model using a visual question-answering dataset consisting of 70,000 training images and 700,000 questions, along with test and validation sets of 15,000 images and 150,000 questions. The initial model achieved 98.7 percent test accuracy on the dataset, which, according to the researchers, far outperforms other neural module network–based approaches.
Importantly, the researchers were then able to improve these results because of their model's key advantage: transparency. By looking at the attention masks produced by the modules, they could see where things went wrong and refine the model. The end result was state-of-the-art performance of 99.1 percent accuracy.
“Our model provides straightforward, interpretable outputs at every stage of the visual reasoning process,” Mascharka says.
Interpretability is especially valuable if deep learning algorithms are to be deployed alongside humans to help tackle complex real-world tasks. To build trust in these systems, users will need the ability to inspect the reasoning process so that they can understand why and how a model could make wrong predictions.