Artificial Intelligence (AI) and machine learning algorithms such as deep learning have become integral parts of our daily lives: they power digital speech assistants and translation services, improve medical diagnostics, and are indispensable for future technologies such as autonomous driving. Fed by an ever-increasing amount of data and running on powerful novel computer architectures, learning algorithms appear to reach human capabilities, and sometimes even to exceed them.
The issue: so far, users often do not know how exactly AI systems reach their conclusions. It therefore often remains unclear whether an AI's decision-making behavior is truly 'intelligent' or whether its procedures are merely successful on average.
Researchers from TU Berlin, Fraunhofer Heinrich Hertz Institute HHI and Singapore University of Technology and Design (SUTD) have tackled this question and provided a glimpse into the diverse "intelligence" spectrum observed in current AI systems, analyzing them with a novel technique that allows automated analysis and quantification.
The most important prerequisite for this novel technique is a method developed earlier by TU Berlin and Fraunhofer HHI, the so-called Layer-wise Relevance Propagation (LRP) algorithm, which makes it possible to visualize which input variables an AI system bases its decisions on. Extending LRP, the novel Spectral Relevance Analysis (SpRAy) can identify and quantify a wide spectrum of learned decision-making behavior. In this way it has now become possible to detect undesirable decision making even in very large data sets.
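The article does not give implementation details, but the core idea of LRP, redistributing a network output's relevance score backwards layer by layer, in proportion to each input's contribution, can be sketched for a single linear layer. The following is a minimal illustration of the commonly described epsilon-stabilized LRP rule, not the researchers' actual code; all names are illustrative:

```python
import numpy as np

def lrp_epsilon(weights, activations, relevance_out, eps=1e-6):
    """One epsilon-rule LRP step for a linear layer.

    weights:       (n_in, n_out) layer weight matrix
    activations:   (n_in,) input activations to this layer
    relevance_out: (n_out,) relevance assigned to the layer's outputs
    Returns the (n_in,) relevance redistributed to the inputs.
    """
    # z[j, k]: contribution of input j to output k's pre-activation
    z = weights * activations[:, None]
    # stabilize the denominator so small pre-activations don't explode
    denom = z.sum(axis=0)
    denom = denom + eps * np.where(denom >= 0, 1.0, -1.0)
    # each input receives relevance proportional to its contribution
    return (z / denom) @ relevance_out

# Toy example: three inputs, two outputs
weights = np.array([[1.0, 0.0],
                    [0.0, 2.0],
                    [1.0, 1.0]])
activations = np.array([1.0, 1.0, 1.0])
relevance_out = np.array([2.0, 3.0])
relevance_in = lrp_epsilon(weights, activations, relevance_out)
```

A key property this rule (approximately) preserves is conservation: the total relevance flowing into the layer equals the total flowing out, so the heatmap over the input pixels accounts for the full prediction score. SpRAy then clusters many such heatmaps to surface recurring decision strategies across a whole dataset.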
This so-called ‘explainable AI’ has been one of the most important steps towards a practical application of AI, according to Dr. Klaus-Robert Müller, Professor for Machine Learning at TU Berlin. “Specifically in medical diagnosis or in safety-critical systems, no AI systems that employ flaky or even cheating problem solving strategies should be used.”
Using their newly developed algorithms, researchers are finally able to put any existing AI system to the test and derive quantitative information about it. They observe a whole spectrum of behavior, ranging from naive problem solving, through cheating strategies, up to highly elaborate "intelligent" strategic solutions.
Dr. Wojciech Samek, group leader at Fraunhofer HHI said: “We were very surprised by the wide range of learned problem-solving strategies. Even modern AI systems have not always found a solution that appears meaningful from a human perspective, but sometimes used so-called ‘Clever Hans Strategies’.”
Clever Hans was a horse that could supposedly count and was considered a scientific sensation in the early 1900s. As was discovered later, Hans had not mastered math; instead, in about 90 percent of cases he was able to derive the correct answer from the questioner's reaction.
The team led by Klaus-Robert Müller and Wojciech Samek discovered similar "Clever Hans" strategies in various AI systems. For example, an AI system that won several international image classification competitions a few years ago pursued a strategy that can be considered naive from a human point of view: it classified images mainly on the basis of context. Images were assigned to the category "ship" when there was a lot of water in the picture; others were classified as "train" if rails were present; still others were assigned the correct category by their copyright watermark. The real task, namely detecting the concepts of ships or trains, was therefore not solved by this AI system, even though it indeed classified the majority of images correctly.
The researchers also found these types of faulty problem-solving strategies in some state-of-the-art AI algorithms, the so-called deep neural networks, which had so far been considered immune to such lapses. These networks based their classification decisions in part on artifacts that were created during the preparation of the images and have nothing to do with the actual image content.
“Such AI systems are not useful in practice. Their use in medical diagnostics or in safety-critical areas would even entail enormous dangers,” said Klaus-Robert Müller. “It is quite conceivable that about half of the AI systems currently in use implicitly or explicitly rely on such ‘Clever Hans’ strategies. It’s time to systematically check that, so that secure AI systems can be developed.”
With their new technology, the researchers also identified AI systems that have unexpectedly learned “smart” strategies. Examples include systems that have learned to play the Atari games Breakout and Pinball. “Here the AI clearly understood the concept of the game and found an intelligent way to collect a lot of points in a targeted and low-risk manner. The system sometimes even intervenes in ways that a real player would not,” said Wojciech Samek.
“Beyond understanding AI strategies, our work establishes the usability of explainable AI for iterative dataset design, namely for removing artifacts in a dataset which would cause an AI to learn flawed strategies, as well as helping to decide which unlabeled examples need to be annotated and added so that failures of an AI system can be reduced,” said SUTD Assistant Professor Alexander Binder.
“Our automated technology is open source and available to all scientists. We see our work as an important first step in making AI systems more robust, explainable and secure in the future, and more will have to follow. This is an essential prerequisite for general use of AI,” said Klaus-Robert Müller.
Learn more: How intelligent is artificial intelligence?