University of Washington researchers have developed new algorithms that solve a thorny challenge in the field of computer vision: turning audio clips into a realistic, lip-synced video of the person speaking those words.
As detailed in a paper to be presented Aug. 2 at SIGGRAPH 2017, the team successfully generated highly realistic video of former president Barack Obama talking about terrorism, fatherhood, job creation and other topics using audio clips of those speeches and existing weekly video addresses that were originally on a different topic.
“These types of results have never been shown before,” said Ira Kemelmacher-Shlizerman, an assistant professor at the UW’s Paul G. Allen School of Computer Science & Engineering. “Realistic audio-to-video conversion has practical applications like improving video conferencing for meetings, as well as futuristic ones such as being able to hold a conversation with a historical figure in virtual reality by creating visuals just from audio. This is the kind of breakthrough that will help enable those next steps.”
In a visual form of lip-syncing, the system converts audio files of an individual’s speech into realistic mouth shapes, which are then grafted onto and blended with the head of that person from another existing video.
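Functionally, that description is a three-stage pipeline: extract per-frame features from the audio, map them to sparse mouth shapes, and render and blend a mouth region into an existing target video. The sketch below mirrors that structure in Python with dummy stand-ins; the feature extractor, shape predictor and compositor here are hypothetical placeholders so the example runs end to end, not the authors' actual components.

```python
import numpy as np

rng = np.random.default_rng(0)

def audio_to_features(waveform, sr, fps=30):
    """Stand-in acoustic front end: one feature vector per video frame.
    (The real system uses richer spectral features; simple per-frame
    energy statistics are used here just so the sketch runs.)"""
    hop = sr // fps
    n = len(waveform) // hop
    frames = waveform[: n * hop].reshape(n, hop)
    return np.stack([frames.mean(1), np.abs(frames).mean(1), frames.std(1)], axis=1)

class DummyShapePredictor:
    """Stand-in for the learned mapping: each audio feature vector goes
    to a low-dimensional mouth shape (e.g. lip-landmark coefficients)."""
    def __init__(self, in_dim, shape_dim=18):
        self.W = rng.normal(size=(in_dim, shape_dim))

    def __call__(self, feats):
        return feats @ self.W

def composite_mouth(shape, frame):
    """Stand-in for mouth synthesis and blending: the real system renders
    a photorealistic mouth texture for `shape` and grafts it onto the
    reference `frame`; here the frame is returned unchanged."""
    return frame

sr = 16000
waveform = rng.normal(size=2 * sr)                    # 2 s of fake audio
target = [np.zeros((64, 64, 3)) for _ in range(60)]   # fake reference video

feats = audio_to_features(waveform, sr)
shapes = DummyShapePredictor(feats.shape[1])(feats)
video = [composite_mouth(s, f) for s, f in zip(shapes, target)]
print(feats.shape, shapes.shape, len(video))
```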
The team chose Obama because the machine learning technique needs available video of the person to learn from, and there were hours of presidential videos in the public domain. “In the future, video chat tools like Skype or Messenger will enable anyone to collect videos that could be used to train computer models,” Kemelmacher-Shlizerman said.
Because streaming audio over the internet takes up far less bandwidth than video, the new system has the potential to end video chats that are constantly timing out from poor connections.
“When you watch Skype or Google Hangouts, often the connection is stuttery and low-resolution and really unpleasant, but often the audio is pretty good,” said co-author and Allen School professor Steve Seitz. “So if you could use the audio to produce much higher-quality video, that would be terrific.”
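To put rough numbers on that bandwidth argument (these are typical ballpark bitrates for compressed speech and a modest video call, not figures from the paper):

```python
# Back-of-the-envelope comparison; both bitrates are assumed, typical values.
audio_kbps = 64     # decent compressed speech (e.g. an Opus voice stream)
video_kbps = 1500   # passable 720p video-call stream
print(f"video needs roughly {video_kbps / audio_kbps:.0f}x the bandwidth of audio")
```

At ratios like this, sending only audio and synthesizing the talking-head video on the receiving end could leave far more headroom on a weak connection.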
By reversing the process — feeding video into the network instead of just audio — the team could also potentially develop algorithms to detect whether a video is real or manufactured.
The new machine learning tool makes significant progress in overcoming what’s known as the “uncanny valley” problem, which has dogged efforts to create realistic video from audio. When synthesized human likenesses appear to be almost real — but still manage to somehow miss the mark — people find them creepy or off-putting.
“People are particularly sensitive to any areas of your mouth that don’t look realistic,” said lead author Supasorn Suwajanakorn, a recent doctoral graduate in the Allen School. “If you don’t render teeth right or the chin moves at the wrong time, people can spot it right away and it’s going to look fake. So you have to render the mouth region perfectly to get beyond the uncanny valley.”
Previous audio-to-video conversion processes involved filming multiple people in a studio saying the same sentences over and over to try to capture how a particular sound correlates to different mouth shapes, which is expensive, tedious and time-consuming. By contrast, Suwajanakorn developed algorithms that can learn from videos that exist “in the wild” on the internet or elsewhere.
“There are millions of hours of video that already exist from interviews, video chats, movies, television programs and other sources. And these deep learning algorithms are very data hungry, so it’s a good match to do it this way,” Suwajanakorn said.
Rather than synthesizing the final video directly from audio, the team tackled the problem in two steps. The first involved training a neural network to watch videos of an individual and translate different audio sounds into basic mouth shapes.
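As a hedged illustration of that first step, the sketch below trains a small recurrent network on dummy data, assuming PyTorch and an LSTM; the feature, hidden and mouth-shape dimensions are illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn

class AudioToMouth(nn.Module):
    """Map a sequence of per-frame audio features to a sequence of
    low-dimensional mouth shapes (e.g. coefficients describing the
    lip-landmark configuration in each video frame)."""
    def __init__(self, feat_dim=28, hidden=64, shape_dim=18):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, shape_dim)

    def forward(self, feats):            # feats: (batch, frames, feat_dim)
        h, _ = self.lstm(feats)
        return self.head(h)              # (batch, frames, shape_dim)

model = AudioToMouth()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch: in practice the inputs would be acoustic features from
# hours of the person's footage, and the targets the mouth shapes
# tracked in the corresponding video frames.
audio = torch.randn(8, 100, 28)
mouth = torch.randn(8, 100, 18)

loss = nn.functional.mse_loss(model(audio), mouth)
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```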
By combining previous research from the UW Graphics and Image Laboratory team with a new mouth synthesis technique, they were then able to realistically superimpose and blend those mouth shapes and textures on an existing reference video of that person. Another key insight was to allow a small time shift to enable the neural network to anticipate what the speaker is going to say next.
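That time-shift idea reduces to a simple re-alignment of the training pairs: match the network's output at audio frame t with the mouth shape from frame t - d, so a causal model has heard d frames of "future" sound before committing to each shape. The delay below is an assumed value for illustration, not the paper's.

```python
import numpy as np

def delayed_pairs(audio_feats, mouth_shapes, delay):
    """Pair audio frame (t + delay) with the mouth shape at frame t,
    giving the model `delay` frames of audio look-ahead per shape."""
    return audio_feats[delay:], mouth_shapes[:-delay]

T, F, S, d = 100, 28, 18, 5          # frames, feature dim, shape dim, delay
audio = np.arange(T)[:, None] * np.ones((1, F))
mouth = np.arange(T)[:, None] * np.ones((1, S))
x, y = delayed_pairs(audio, mouth, d)
print(x[0, 0], y[0, 0])              # 5.0 0.0: shape 0 paired with audio frame 5
```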
The new lip-syncing process enabled the researchers to create realistic videos of Obama speaking in the White House, using words he spoke on a television talk show or during an interview decades ago.
Currently, the neural network is designed to learn on one individual at a time, meaning that Obama’s voice — speaking words he actually uttered — is the only information used to “drive” the synthesized video. Future steps, however, include helping the algorithms generalize across situations to recognize a person’s voice and speech patterns with less data, for instance with only an hour of video to learn from instead of 14 hours.
“You can’t just take anyone’s voice and turn it into an Obama video,” Seitz said. “We very consciously decided against going down the path of putting other people’s words into someone’s mouth. We’re simply taking real words that someone spoke and turning them into realistic video of that individual.”