Future computers that recognize the ways we express attitudes, not just the words we use, will enable much more sophisticated, user-friendly systems capable of understanding and participating in conversations.
In the future, computers may be capable of talking to us during meetings just like a remote teleconference participant. But to help move this science-fiction-sounding goal a step closer to reality, it’s first necessary to teach computers to recognize not only the words we use but also the myriad meanings, subtleties and attitudes they can convey.
During the 168th Meeting of the Acoustical Society of America (ASA), to be held October 27-31, 2014, at the Indianapolis Marriott Downtown Hotel, Valerie Freeman, a Ph.D. candidate in the Department of Linguistics at the University of Washington (UW), and colleagues will describe their National Science Foundation-sponsored work for the Automatic Tagging and Recognition of Stance (ATAROS) project. The project’s goal is to train computers to recognize the various stances, opinions and attitudes that can be revealed by human speech.
“What is it about the way we talk that makes our attitude clear while speaking the words, but not necessarily when we type the same thing? How do people manage to send different messages while using the same words? These are the types of questions the ATAROS project seeks to answer,” explained Freeman.
Identifying cues to “stance taking” in audio recordings of people talking is a good place to start searching for answers, according to Freeman and the principal investigators on the project, including Professors Gina-Anne Levow and Richard Wright in the Department of Linguistics, and Professor Mari Ostendorf in the Department of Electrical Engineering.
“In our recordings of pairs of people working together to complete different tasks, we’ve found they tend to talk faster, louder and with more exaggerated pitches when expressing strong opinions as opposed to weak opinions,” Freeman said.
Not too surprising? Maybe not in terms of heated arguments, but the researchers found the same patterns within ordinary conversations, too. “People talk faster and say more at once when they’re working on more engaging tasks such as balancing an imaginary budget as opposed to arranging items within an imaginary store,” Freeman noted.
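The acoustic cues the researchers describe, loudness and pitch, can be measured with standard signal-processing techniques. The sketch below is a minimal, hypothetical illustration (not the ATAROS project's actual pipeline): it computes RMS intensity as a loudness proxy and a crude autocorrelation-based pitch estimate on two synthetic "utterances", one quiet and low-pitched and one louder and higher-pitched, the kind of contrast the researchers report between weak and strong opinions.

```python
import numpy as np

SR = 16000  # sample rate in Hz

def rms_intensity(signal):
    """Root-mean-square amplitude: a simple proxy for loudness."""
    return float(np.sqrt(np.mean(signal ** 2)))

def estimate_f0(signal, sr=SR, fmin=75, fmax=400):
    """Crude autocorrelation pitch estimate, searching lags in [fmin, fmax]."""
    sig = signal - signal.mean()
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sr / lag

# Two synthetic utterances: "weak stance" (quiet, ~140 Hz)
# vs "strong stance" (louder, ~220 Hz).
t = np.arange(SR) / SR
weak = 0.3 * np.sin(2 * np.pi * 140 * t)
strong = 0.8 * np.sin(2 * np.pi * 220 * t)

for name, sig in [("weak", weak), ("strong", strong)]:
    print(name, round(rms_intensity(sig), 3), round(estimate_f0(sig), 1))
```

A real stance-tagging system would extract features like these frame by frame from recorded speech and feed them to a classifier; this sketch only shows that the raw cues are straightforward to quantify.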
The researchers also noticed that people appear to be less fluent in the engaging tasks, displaying more false starts, cut-off words, "ums" and repetitions.
Further, it appears that “men might do this more than women—regardless of whether they’re talking to another man or a woman.” Freeman places a heavy emphasis on the word “might,” because to date they’ve only explored this particular lack of fluency with 24 people.
So far, for the entire project, the researchers have worked with and recorded a total of 68 people of varying ages and backgrounds, all from the Pacific Northwest.
“We plan to continue to analyze these conversations for subtler cues and more complex patterns—variations in pronunciations when comparing positive and negative opinions, men vs. women, and older vs. younger people,” said Freeman. “In the future, we hope to record people from other locations to see whether different regions have different ways of expressing the same opinions.”
The lessons learned from this work should help enable sophisticated speech recognition systems of the future. “Think of all of the amazing things the computer on Star Trek can do,” Freeman said. “To reach that level of sophistication, we need computers to understand all the subtle parts of a message—not just the words involved. Projects like ATAROS are working to help computers learn how to figure out what people really mean when they speak, so that in the future computers will be capable of responding in a much more ‘human-like’ manner.”