Could deep learning analyze neurological problems?

Movement inaccuracies during reaching for a food pellet can reveal brain damage in rats and humans. Artificial neural networks can be trained to score these motor deficits with human-expert accuracy, making it possible to monitor the progress of neurological disorders.
Credit: Artur Luczak

Getting to the doctor’s office for a check-up can be challenging for someone whose movement is impaired by a neurological disorder such as stroke. But what if the patient could simply record a video clip of their movements with a smartphone and forward the results to their doctor?

Work by Dr Hardeep Ryait and colleagues at CCBN-University of Lethbridge in Alberta, Canada, published November 21 in the open-access journal PLOS Biology, shows how this might one day be possible.

Using rats that had had a stroke affecting the movement of their forelimbs, the scientists first asked experts to score the rats’ degree of impairment based on how they reached for food. They then fed this information into a state-of-the-art deep neural network so that it could learn to score the rats’ reaching movements with human-expert accuracy. When the network was subsequently given video footage from a new set of rats reaching for food, it was able to score their impairments with similar human-like accuracy. The same program proved able to score other tests given to rats and mice, including tests of their ability to walk across a narrow beam and to pull a string to obtain a food reward.
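To make the idea concrete, here is a minimal sketch (in PyTorch) of how a network can be trained to reproduce expert impairment scores from video frames. The architecture, preprocessing and score scale are placeholders chosen for illustration; the published study uses its own network and video pipeline.

```python
# Minimal sketch: train a convolutional network to regress expert
# impairment scores from single video frames. Hypothetical and
# simplified; not the architecture used in the published study.
import torch
import torch.nn as nn

class ReachScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # predicted impairment score

    def forward(self, frames):        # frames: (batch, 3, H, W)
        x = self.features(frames).flatten(1)
        return self.head(x).squeeze(1)

model = ReachScorer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(frames, expert_scores):
    """One gradient step toward the human experts' ratings."""
    optimizer.zero_grad()
    loss = loss_fn(model(frames), expert_scores)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Once trained this way, the same scoring function can be applied to footage from animals the network has never seen, which is what the study tested.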

Artificial neural networks are currently used to drive cars, interpret video surveillance, and monitor and regulate traffic. This revolution has encouraged behavioural neuroscientists to use such networks to score the complex behaviour of experimental subjects. Neurological disorders could likewise be assessed automatically, allowing behaviour to be quantified as part of a check-up or to assess the effects of a drug treatment. This could help avoid the delays that present a major roadblock to patient treatment.

Altogether, this research indicates that deep neural networks can provide reliable scores for neurological assessment and can assist in designing behavioural metrics to diagnose and monitor neurological disorders. Interestingly, the results revealed that the network draws on a wider range of information than experts include in a behavioural scoring system. A further contribution of this research is that the network was able to identify the features of behaviour that are most indicative of motor impairments, which has the potential to improve monitoring of the effects of rehabilitation. The method would aid standardization of the diagnosis and monitoring of neurological disorders, and in the future could be used by patients at home to monitor daily symptoms.

Learn more: Deep learning to analyze neurological problems

Could AI predict potential serious side effects of drug-drug interactions?

via medshadow.org

The more medications a patient takes, the greater the likelihood that interactions between those drugs could trigger negative side effects, including long-term organ damage and even death. Now, researchers at Penn State have developed a machine learning system that may be able to warn doctors and patients about possible negative side effects that might occur when drugs are mixed.

In a study, researchers designed an algorithm that analyzes data on drug-drug interactions listed in reports — compiled by the Food and Drug Administration and other organizations — for use in a possible alert system that would let patients know when a drug combination could prompt dangerous side effects.

“Let’s say I’m taking a popular over-the-counter pain reliever and then I’m put on blood pressure medicine, and these medications have an interaction with each other that, in turn, affects my liver,” said Soundar Kumara, the Allen E. Pearce and Allen M. Pearce Professor of Industrial Engineering, Penn State. “Essentially, what we have done, in this study, is to collect all of the data on all the diseases related to the liver and see what drugs interact with each other to affect the liver.”

Drug-drug interaction problems are significant because patients are frequently prescribed multiple drugs and they take over-the-counter medicine on their own, added Kumara, who also is an affiliate of the Institute for CyberScience, which provides supercomputing resources for Penn State researchers.

“This study is of very high importance,” said Kumara. “Most patients are not on one single drug. They’re on multiple drugs. A study like this is of immense use to these people.”

To create the alert system, the researchers relied on an autoencoder, a type of artificial neural network loosely modeled on how the human brain processes information. Traditionally, computers need labeled data, meaning data that people have annotated for the system, to produce results. For drug-drug interactions, that would require programmers to label data from thousands of drugs and millions of possible interaction combinations. The autoencoder model, however, is suited to semi-supervised learning, which means it can use both data labeled by people and unlabeled data.
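As a rough illustration of the semi-supervised idea, the sketch below pretrains an autoencoder on unlabeled drug-pair feature vectors and then reuses its encoder to classify the much smaller labeled subset. The feature construction, dimensions and training details are assumptions for illustration, not the Penn State model.

```python
# Semi-supervised sketch: pretrain an autoencoder on unlabeled drug-pair
# features, then reuse its encoder to classify labeled reports as
# serious / not serious. All sizes are hypothetical.
import torch
import torch.nn as nn

N_FEATURES = 256   # assumed size of a drug-pair feature vector

encoder = nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU())
decoder = nn.Sequential(nn.Linear(64, N_FEATURES))
classifier = nn.Sequential(encoder, nn.Linear(64, 1))  # shares the encoder

pretrain_opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
finetune_opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)

def pretrain_step(unlabeled_x):
    """Unsupervised step: reconstruct unlabeled drug-pair features."""
    pretrain_opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(unlabeled_x)), unlabeled_x)
    loss.backward()
    pretrain_opt.step()
    return loss.item()

def finetune_step(labeled_x, labels):
    """Supervised step: predict whether a reported interaction is serious."""
    finetune_opt.zero_grad()
    logits = classifier(labeled_x).squeeze(1)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    loss.backward()
    finetune_opt.step()
    return loss.item()
```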

The high number of possible adverse drug-drug interactions, which range from minor to severe, may inadvertently cause doctors and patients to ignore alerts, a problem the researchers call “alert fatigue.” To avoid alert fatigue, the researchers flagged only interactions considered high priority, such as those that are life-threatening or that result in disability, hospitalization or required intervention.

Kumara said that analyzing how drugs interact is the first step. Further development and refinement of the technology could lead to more precise — and even more personalized — drug interaction alerts.

“The reactions are not independent of these chemicals interacting with each other — that’s the second level,” said Kumara. “The third level of this is the chemical-to-chemical interactions with the genomic data of the individual patient.”

The researchers, who released their findings in a recent issue of Biomedical and Health Informatics, used self-reported data from the FDA Adverse Event Reporting System and information on potentially severe drug-drug interactions from the Office of the National Coordinator for Health Information Technology. They also used information from online databases at DrugBank and Drugs.com. Duplicate reports and reports about non-serious interactions were removed.

The list included 2,891 drugs and approximately 110,495 drug combinations. The researchers found a total of 1,740,770 reports of serious health outcomes from drug-drug interactions.

Learn more: AI could offer warnings about serious side effects of drug-drug interactions

The dawn of optical versions of artificial intelligence?

Researchers demonstrated the first two-layer, all-optical artificial neural network with nonlinear activation functions. These types of functions are required to perform complex tasks such as pattern recognition. Credit: Olivia Wang, Peng Cheng Laboratory

Even the most powerful computers are still no match for the human brain when it comes to pattern recognition, risk management, and other similarly complex tasks. Recent advances in optical neural networks, however, are closing that gap by simulating the way neurons respond in the human brain.

In a key step toward making large-scale optical neural networks practical, researchers have demonstrated a first-of-its-kind multilayer all-optical artificial neural network. In general, this type of artificial intelligence can tackle complex problems that are intractable with traditional computational approaches, but current designs require extensive computational resources that are both time-consuming and energy intensive. For this reason, there is great interest in developing practical optical artificial neural networks, which are faster and consume less power than those based on traditional computers.

In Optica, The Optical Society’s journal for high-impact research, researchers from The Hong Kong University of Science and Technology detail their two-layer all-optical neural network and successfully apply it to a complex classification task.

“Our all-optical scheme could enable a neural network that performs optical parallel computation at the speed of light while consuming little energy,” said Junwei Liu, a member of the research team. “Large-scale, all-optical neural networks could be used for applications ranging from image recognition to scientific research.”

Building an all-optical network

In conventional hybrid optical neural networks, optical components are typically used for the linear operations, while the nonlinear activation functions, which simulate the way neurons in the human brain respond, are usually implemented electronically. This is because nonlinear optics typically requires high-power lasers that are difficult to integrate into an optical neural network.

To overcome this challenge, the researchers used cold atoms with electromagnetically induced transparency to perform nonlinear functions. “This light-induced effect can be achieved with very weak laser power,” said Shengwang Du, a member of the research team. “Because this effect is based on nonlinear quantum interference, it might be possible to extend our system into a quantum neural network that could solve problems intractable by classical methods.”

To confirm the capability and feasibility of the new approach, the researchers constructed a two-layer, fully connected, all-optical neural network with 16 inputs and two outputs. They used the network to classify the ordered and disordered phases of the Ising model, a statistical model of magnetism. The results showed that the all-optical neural network was as accurate as a well-trained computer-based neural network.
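An electronic analogue of that experiment is easy to write down: a two-layer, fully connected network with 16 inputs and two outputs trained to separate ordered from disordered Ising configurations. The hidden width and the toy data generator below are assumptions; only the input/output sizes and the task come from the paper.

```python
# Electronic analogue of the reported optical network: 16 inputs, two
# outputs, two layers of weights with a nonlinear activation in between.
import numpy as np
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(16, 16), nn.Tanh(),   # layer 1: linear weights + nonlinear activation
    nn.Linear(16, 2),               # layer 2: two output classes (ordered / disordered)
)

def toy_ising_sample(ordered: bool):
    """Crude 4x4 spin configuration: aligned spins if ordered, random if not."""
    if ordered:
        spins = np.ones(16) * np.random.choice([-1, 1])
    else:
        spins = np.random.choice([-1, 1], size=16)
    return torch.tensor(spins, dtype=torch.float32)

x = torch.stack([toy_ising_sample(i % 2 == 0) for i in range(64)])
y = torch.tensor([0 if i % 2 == 0 else 1 for i in range(64)])

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(net(x), y)
    loss.backward()
    opt.step()
```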

Optical neural networks at larger scales

The researchers plan to expand the all-optical approach to large-scale all-optical deep neural networks with complex architectures designed for specific practical applications such as image recognition. This will help demonstrate that the scheme works at larger scales.

“Although our work is a proof-of-principle demonstration, it shows that it may become possible in the future to develop optical versions of artificial intelligence,” said Du. “The next generation of artificial intelligence hardware will be intrinsically much faster and exhibit lower power consumption compared to today’s computer-based artificial intelligence,” added Liu.

Learn more: Researchers Demonstrate All-Optical Neural Network for Deep Learning

Controlling quantum computers with artificial intelligence

Learning quantum error correction: the image visualizes the activity of artificial neurons in the Erlangen researchers’ neural network while it is solving its task. via Max Planck Institute for the Science of Light

Neural networks enable learning of error correction strategies for computers based on quantum physics

Quantum computers could solve complex tasks that are beyond the capabilities of conventional computers. However, the quantum states are extremely sensitive to constant interference from their environment. The plan is to combat this using active protection based on quantum error correction. Florian Marquardt, Director at the Max Planck Institute for the Science of Light, and his team have now presented a quantum error correction system that is capable of learning thanks to artificial intelligence.

In 2016, the computer program AlphaGo won four out of five games of Go against the world’s best human player. Given that a game of Go has more combinations of moves than there are estimated to be atoms in the universe, this required more than just sheer processing power. Rather, AlphaGo used artificial neural networks, which can recognize visual patterns and are even capable of learning. Unlike a human, the program was able to practise hundreds of thousands of games in a short time, eventually surpassing the best human player. Now, the Erlangen-based researchers are using neural networks of this kind to develop error-correction learning for a quantum computer.

Artificial neural networks are computer programs that mimic the behaviour of interconnected nerve cells (neurons) – in the case of the research in Erlangen, around two thousand artificial neurons are connected with one another. “We take the latest ideas from computer science and apply them to physical systems,” explains Florian Marquardt. “By doing so, we profit from rapid progress in the area of artificial intelligence.”

Artificial neural networks could outstrip other error-correction strategies

The first area of application is quantum computers, as shown by the recent paper, which includes a significant contribution by Thomas Fösel, a doctoral student at the Max Planck Institute in Erlangen. In the paper, the team demonstrates that artificial neural networks with an AlphaGo-inspired architecture are capable of learning, by themselves, how to perform a task that will be essential for the operation of future quantum computers: quantum error correction. There is even the prospect that, with sufficient training, this approach will outstrip other error-correction strategies.

To understand what this involves, you need to look at the way quantum computers work. The basis of quantum information is the quantum bit, or qubit. Unlike conventional digital bits, a qubit can adopt not only the two states zero and one, but also superpositions of both. In a quantum computer’s processor, multiple qubits can moreover be combined into a joint, entangled state. This entanglement explains the tremendous processing power of quantum computers when it comes to solving certain complex tasks at which conventional computers are doomed to fail. The downside is that quantum information is highly sensitive to noise from its environment. This and other peculiarities of the quantum world mean that quantum information needs regular repairs – that is, quantum error correction. However, the operations this requires are not only complex but must also leave the quantum information itself intact.
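In standard textbook notation (not specific to the Erlangen setup), a single qubit in superposition and an entangled two-qubit state look like this:

```latex
% One qubit: a superposition of the basis states 0 and 1
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1 .
\]
% Two qubits: a joint, entangled state that cannot be written as a
% product of single-qubit states, e.g. the Bell state
\[
  |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr).
\]
```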

Quantum error correction is like a game of Go with strange rules

“You can imagine the elements of a quantum computer as being just like a Go board,” says Marquardt, getting to the core idea behind his project. The qubits are distributed across the board like pieces. However, there are certain key differences from a conventional game of Go: all the pieces are already distributed around the board, and each of them is white on one side and black on the other. One colour corresponds to the state zero, the other to one, and a move in a game of quantum Go involves turning pieces over. According to the rules of the quantum world, the pieces can also adopt grey mixed colours, which represent the superposition and entanglement of quantum states.

When it comes to playing the game, a player – we’ll call her Alice – makes moves intended to preserve a pattern representing a certain quantum state. These are the quantum error correction operations. Meanwhile, her opponent does everything they can to destroy the pattern. This represents the constant noise from the plethora of interference that real qubits experience from their environment. In addition, a game of quantum Go is made especially difficult by a peculiar quantum rule: Alice is not allowed to look at the board during the game. Any glimpse that reveals the state of the qubit pieces to her destroys the sensitive quantum state the game is currently in. The question is: how can she make the right moves despite this?

Auxiliary qubits reveal defects in the quantum computer

In quantum computers, this problem is solved by positioning additional qubits between the qubits that store the actual quantum information. Occasional measurements can be taken to monitor the state of these auxiliary qubits, allowing the quantum computer’s controller to identify where faults lie and to perform correction operations on the information-carrying qubits in those areas. In our game of quantum Go, the auxiliary qubits would be represented by additional pieces distributed between the actual game pieces. Alice is allowed to look occasionally, but only at these auxiliary pieces.
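A standard textbook example, not necessarily the scheme used in the Erlangen work, shows how such auxiliary parity checks can locate an error without reading out the protected information:

```latex
% Three-qubit bit-flip code: one logical qubit is spread over three
% physical qubits,
\[
  \alpha\,|0\rangle + \beta\,|1\rangle \;\longmapsto\;
  \alpha\,|000\rangle + \beta\,|111\rangle ,
\]
% and auxiliary qubits measure the parities $Z_1 Z_2$ and $Z_2 Z_3$.
% A flip of the second qubit, say, changes both parities, so the fault
% can be located and undone without ever measuring (and destroying)
% the encoded superposition itself.
```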

In the Erlangen researchers’ work, Alice’s role is performed by artificial neural networks. The idea is that, through training, the networks will become so good at this role that they can even outstrip correction strategies devised by intelligent human minds. However, when the team studied an example involving five simulated qubits, a number that is still manageable for conventional computers, they were able to show that one artificial neural network alone is not enough. As the network can only gather small amounts of information about the state of the quantum bits, or rather the game of quantum Go, it never gets beyond the stage of random trial and error. Ultimately, these attempts destroy the quantum state instead of restoring it.

One neural network uses its prior knowledge to train another

The solution comes in the form of an additional neural network that acts as a teacher to the first network. With its prior knowledge of the quantum computer that is to be controlled, this teacher network is able to train the other network – its student – and thus to guide its attempts towards successful quantum correction. First, however, the teacher network itself needs to learn enough about the quantum computer or the component of it that is to be controlled.

In principle, artificial neural networks are trained using a reward system, just like their natural models. The actual reward is provided for successfully restoring the original quantum state by quantum error correction. “However, if only the achievement of this long-term aim gave a reward, it would come at too late a stage in the numerous correction attempts,” Marquardt explains. The Erlangen-based researchers have therefore developed a reward system that, even at the training stage, incentivizes the teacher neural network to adopt a promising strategy. In the game of quantum Go, this reward system would provide Alice with an indication of the general state of the game at a given time without giving away the details.
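In reinforcement-learning terms, this amounts to a policy-gradient loop in which a shaped reward is handed out at every correction step rather than only at the end. The sketch below is generic: the simulated environment, the observation and action sizes, and the reward itself are placeholders, not the Erlangen group’s actual implementation, whose reward is derived from how much quantum information remains recoverable.

```python
# Generic policy-gradient (REINFORCE) sketch with a shaped per-step reward.
# `env` is a hypothetical simulator exposing reset() and step(action).
import torch
import torch.nn as nn

N_OBS, N_ACTIONS = 8, 4   # assumed sizes of syndrome observations / gate choices

policy = nn.Sequential(nn.Linear(N_OBS, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def train_episode(env):
    obs, log_probs, rewards = env.reset(), [], []
    done = False
    while not done:
        dist = torch.distributions.Categorical(logits=policy(obs))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, shaped_reward, done = env.step(action.item())  # reward at every step
        rewards.append(shaped_reward)
    ret = sum(rewards)                           # total shaped return of the episode
    loss = -ret * torch.stack(log_probs).sum()   # simple REINFORCE update, no baseline
    opt.zero_grad()
    loss.backward()
    opt.step()
    return ret
```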

The student network can surpass its teacher through its own actions

“Our first aim was for the teacher network to learn to perform successful quantum error correction operations without further human assistance,” says Marquardt. Unlike the student network, the teacher network can do this based not only on measurement results but also on the overall quantum state of the computer. The student network trained by the teacher will then be equally good at first, but can become even better through its own actions.

In addition to error correction in quantum computers, Florian Marquardt envisages other applications for artificial intelligence. In his opinion, physics offers many systems that could benefit from the use of pattern recognition by artificial neural networks.

Learn more: Artificial intelligence controls quantum computers

Using artificial neural networks to predict new stable materials

Schematic of an artificial neural network predicting a stable garnet crystal prototype. Image credit: Weike Ye

Artificial neural networks—algorithms inspired by connections in the brain—have “learned” to perform a variety of tasks, from pedestrian detection in self-driving cars, to analyzing medical images, to translating languages. Now, researchers at the University of California San Diego are training artificial neural networks to predict new stable materials.

“Predicting the stability of materials is a central problem in materials science, physics and chemistry,” said senior author Shyue Ping Ong, a nanoengineering professor at the UC San Diego Jacobs School of Engineering. “On one hand, you have traditional chemical intuition such as Linus Pauling’s five rules that describe stability for crystals in terms of the radii and packing of ions. On the other, you have expensive quantum mechanical computations to calculate the energy gained from forming a crystal that have to be done on supercomputers. What we have done is to use artificial neural networks to bridge these two worlds.”

By training artificial neural networks to predict a crystal’s formation energy using just two inputs—electronegativity and ionic radius of the constituent atoms—Ong and his team at the Materials Virtual Lab have developed models that can identify stable materials in two classes of crystals known as garnets and perovskites. These models are up to 10 times more accurate than previous machine learning models and are fast enough to efficiently screen thousands of materials in a matter of hours on a laptop. The team details the work in a paper published Sept. 18 in Nature Communications.
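Conceptually, the models are small regressors from hand-picked descriptors to an energy. The sketch below shows the shape of such a model; the layer sizes, number of sites and units are assumptions, not the published networks (those are available at crystals.ai, mentioned below).

```python
# Minimal sketch of descriptor-based regression: map the electronegativity
# and ionic radius of each constituent species to a predicted formation
# energy. Sizes and units are hypothetical.
import torch
import torch.nn as nn

N_SITES = 3  # assumed number of distinct sites in the crystal prototype

model = nn.Sequential(
    nn.Linear(2 * N_SITES, 32), nn.ReLU(),   # (electronegativity, radius) per site
    nn.Linear(32, 1),                        # predicted formation energy
)

def predict_formation_energy(electronegativities, ionic_radii):
    """Both arguments: lists of length N_SITES for a candidate composition."""
    x = torch.tensor(electronegativities + ionic_radii, dtype=torch.float32)
    return model(x).item()
```

Trained once, a model of this shape can be evaluated on thousands of candidate compositions almost instantly, which is what makes laptop-scale screening possible.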

“Garnets and perovskites are used in LED lights, rechargeable lithium-ion batteries, and solar cells. These neural networks have the potential to greatly accelerate the discovery of new materials for these and other important applications,” noted first author Weike Ye, a chemistry Ph.D. student in Ong’s Materials Virtual Lab.

The team has made their models publicly accessible via a web application at http://crystals.ai. This allows other people to use these neural networks to compute the formation energy of any garnet or perovskite composition on the fly.

The researchers are planning to extend the application of neural networks to other crystal prototypes as well as other material properties.

Learn more: Scientists use artificial neural networks to predict new stable materials