A machine learning algorithm can detect signs of anxiety and depression in the speech patterns of young children, potentially providing a fast and easy way of diagnosing conditions that are difficult to spot and often overlooked in young people, according to new research published in the IEEE Journal of Biomedical and Health Informatics.
Around one in five children suffer from anxiety and depression, collectively known as “internalizing disorders.” But because children under the age of eight can’t reliably articulate their emotional suffering, adults need to infer their mental state and recognize potential mental health problems. Waiting lists for appointments with psychologists, insurance issues, and parents’ failure to recognize the symptoms all contribute to children missing out on vital treatment.
“We need quick, objective tests to catch kids when they are suffering,” says Ellen McGinnis, a clinical psychologist at the University of Vermont Medical Center’s Vermont Center for Children, Youth and Families and lead author of the study. “The majority of kids under eight are undiagnosed.”
Early diagnosis is critical because children respond well to treatment while their brains are still developing, but those left untreated are at greater risk of substance abuse and suicide later in life. Standard diagnosis involves a 60-90 minute semi-structured interview with a trained clinician and the child’s primary caregiver. McGinnis, along with University of Vermont biomedical engineer and study senior author Ryan McGinnis, has been looking for ways to use artificial intelligence and machine learning to make diagnosis faster and more reliable.
The researchers used an adapted version of a mood induction task called the Trier Social Stress Test, which is intended to cause feelings of stress and anxiety in the subject. A group of 71 children between the ages of three and eight were asked to improvise a three-minute story and told that they would be judged on how interesting it was. The researcher acting as the judge remained stern throughout the speech and gave only neutral or negative feedback. After 90 seconds, and again with 30 seconds left, a buzzer would sound and the judge would tell the child how much time was left.
“The task is designed to be stressful, and to put them in the mindset that someone was judging them,” says Ellen McGinnis.
The children were also diagnosed using a structured clinical interview and parent questionnaire, both well-established ways of identifying internalizing disorders in children.
The researchers used a machine learning algorithm to analyze statistical features of the audio recordings of each kid’s story and relate them to the child’s diagnosis. They found the algorithm was highly successful at diagnosing children, and that the middle phase of the recordings, between the two buzzers, was the most predictive of a diagnosis.
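The article doesn’t specify which statistical features or which classifier the team used, but the general shape of the pipeline it describes — compute summary statistics over an audio recording, then feed them to a classifier — can be sketched with toy stand-in features (the feature names and thresholds below are illustrative, not the study’s):

```python
import statistics

def speech_features(samples):
    """Toy statistical features of an audio signal (stand-ins for the
    study's unpublished feature set)."""
    zero_crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a >= 0) != (b >= 0)
    )
    return {
        "mean": statistics.fmean(samples),           # DC offset
        "std": statistics.pstdev(samples),           # amplitude variability
        "zcr": zero_crossings / (len(samples) - 1),  # crude pitch/noisiness proxy
        "energy": sum(s * s for s in samples) / len(samples),  # loudness proxy
    }

# A toy "recording": alternating positive and negative samples.
features = speech_features([0.0, 1.0, 0.0, -1.0] * 100)
```

In the study itself, feature vectors like this would be paired with each child’s clinical diagnosis and used to train a supervised classifier; the sketch only shows the feature-extraction half of that loop.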
“The algorithm was able to identify children with a diagnosis of an internalizing disorder with 80 percent accuracy, and in most cases that compared really well to the accuracy of the parent checklist,” says Ryan McGinnis. It can also give the results much more quickly – the algorithm requires just a few seconds of processing time once the task is complete to provide a diagnosis.
The algorithm identified eight different audio features of the children’s speech, but three in particular stood out as highly indicative of internalizing disorders: low-pitched voices, with repeatable speech inflections and content, and a higher-pitched response to the surprising buzzer. Ellen McGinnis says these features fit well with what you might expect from someone suffering from depression. “A low-pitched voice and repeatable speech elements mirrors what we think about when we think about depression: speaking in a monotone voice, repeating what you’re saying,” says Ellen McGinnis.
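Pitch-related features like the ones the researchers highlight are commonly estimated by autocorrelation: a voiced sound repeats with some period, so the signal correlates strongly with itself at a lag equal to that period. As an illustration only — not the study’s actual method — a minimal pure-Python pitch estimator:

```python
import math

def estimate_pitch(samples, sample_rate, fmin=80.0, fmax=400.0):
    """Estimate fundamental frequency by picking the autocorrelation peak
    over lags that correspond to a plausible speaking-pitch range."""
    lo = int(sample_rate / fmax)  # shortest lag to consider
    hi = int(sample_rate / fmin)  # longest lag to consider
    best_lag, best_corr = lo, float("-inf")
    for lag in range(lo, hi + 1):
        corr = sum(samples[i] * samples[i + lag] for i in range(len(samples) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

sr = 8000
tone = [math.sin(2 * math.pi * 220 * t / sr) for t in range(sr)]  # 1 s, 220 Hz
f0 = estimate_pitch(tone, sr)  # close to 220 Hz
```

Tracking an estimate like this over time is one way a system could flag the “low-pitched voice” and the “higher-pitched response to the buzzer” that the researchers describe.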
The higher-pitched response to the buzzer is also similar to the response the researchers found in their previous work, where children with internalizing disorders were found to exhibit a larger turning-away response from a fearful stimulus in a fear induction task.
The voice analysis is similarly accurate to the motion analysis in that earlier work, but Ryan McGinnis thinks it would be much easier to use in a clinical setting. The fear task requires a darkened room, a toy snake, motion sensors attached to the child, and a guide, while the voice task needs only a judge, a way to record speech, and a buzzer to interrupt. “This would be more feasible to deploy,” he says.
Ellen McGinnis says the next step will be to develop the speech analysis algorithm into a universal screening tool for clinical use, perhaps via a smartphone app that could record and analyze results immediately. The voice analysis could also be combined with the motion analysis into a battery of technology-assisted diagnostic tools to help identify children at risk of anxiety and depression before even their parents suspect that anything is wrong.