Jul 15, 2017
 

via University of Washington

University of Washington researchers have developed new algorithms that solve a thorny challenge in the field of computer vision: turning audio clips into a realistic, lip-synced video of the person speaking those words.

As detailed in a paper to be presented Aug. 2 at SIGGRAPH 2017, the team successfully generated highly realistic video of former President Barack Obama talking about terrorism, fatherhood, job creation and other topics, using audio clips of those speeches and existing weekly video addresses that were originally on a different topic.

“These type of results have never been shown before,” said Ira Kemelmacher-Shlizerman, an assistant professor at the UW’s Paul G. Allen School of Computer Science & Engineering. “Realistic audio-to-video conversion has practical applications like improving video conferencing for meetings, as well as futuristic ones such as being able to hold a conversation with a historical figure in virtual reality by creating visuals just from audio. This is the kind of breakthrough that will help enable those next steps.”

In a visual form of lip-syncing, the system converts audio files of an individual’s speech into realistic mouth shapes, which are then grafted onto and blended with the head of that person from another existing video.
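At a high level, the system therefore has two stages: predict mouth shapes from audio, then graft and blend them into a target video. The sketch below is a hypothetical data-flow skeleton of that idea in Python; the function names, array shapes and placeholder bodies are illustrative assumptions, not the UW implementation.

```python
# Hypothetical data-flow skeleton of the two-stage idea described above.
# Names, shapes and placeholder bodies are illustrative, not the UW code.
import numpy as np

def audio_to_mouth_shapes(audio_feats: np.ndarray) -> np.ndarray:
    """Stage 1 placeholder: map per-frame audio features (T, F) to per-frame
    mouth-shape codes (T, S), e.g. coefficients of lip landmarks."""
    n_frames = audio_feats.shape[0]
    return np.zeros((n_frames, 20))       # a trained network would go here

def graft_and_blend(mouth_shapes: np.ndarray, target_video: np.ndarray) -> np.ndarray:
    """Stage 2 placeholder: synthesize mouth texture for each predicted shape
    and blend it into the matching frame of the target video."""
    return target_video.copy()            # texture synthesis + blending would go here

audio_feats = np.random.randn(300, 28)                       # ~10 s of audio frames
target_video = np.zeros((300, 90, 160, 3), dtype=np.uint8)   # frames to re-animate
output_video = graft_and_blend(audio_to_mouth_shapes(audio_feats), target_video)
print(output_video.shape)                                    # (300, 90, 160, 3)
```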

The team chose Obama because the machine learning technique needs available video of the person to learn from, and there were hours of presidential videos in the public domain. “In the future, video chat tools like Skype or Messenger will enable anyone to collect videos that could be used to train computer models,” Kemelmacher-Shlizerman said.

Because streaming audio over the internet takes up far less bandwidth than video, the new system has the potential to end video chats that are constantly timing out from poor connections.

“When you watch Skype or Google Hangouts, often the connection is stuttery and low-resolution and really unpleasant, but often the audio is pretty good,” said co-author and Allen School professor Steve Seitz. “So if you could use the audio to produce much higher-quality video, that would be terrific.”

By reversing the process — feeding video into the network instead of just audio — the team could also potentially develop algorithms that could detect whether a video is real or manufactured.

The new machine learning tool makes significant progress in overcoming what’s known as the “uncanny valley” problem, which has dogged efforts to create realistic video from audio. When synthesized human likenesses appear to be almost real — but still manage to somehow miss the mark — people find them creepy or off-putting.

“People are particularly sensitive to any areas of your mouth that don’t look realistic,” said lead author Supasorn Suwajanakorn, a recent doctoral graduate in the Allen School. “If you don’t render teeth right or the chin moves at the wrong time, people can spot it right away and it’s going to look fake. So you have to render the mouth region perfectly to get beyond the uncanny valley.”

[Graphic: how the conversion from audio to video works]

A neural network first converts the sounds from an audio file into basic mouth shapes. Then the system grafts and blends those mouth shapes onto an existing target video and adjusts the timing to create a new, realistic lip-synced video. Image: University of Washington

Previously, audio-to-video conversion processes involved filming multiple people in a studio saying the same sentences over and over to try to capture how a particular sound correlates to different mouth shapes, which is expensive, tedious and time-consuming. By contrast, Suwajanakorn developed algorithms that can learn from videos that exist “in the wild” on the internet or elsewhere.

“There are millions of hours of video that already exist from interviews, video chats, movies, television programs and other sources. And these deep learning algorithms are very data hungry, so it’s a good match to do it this way,” Suwajanakorn said.

Rather than synthesizing the final video directly from audio, the team tackled the problem in two steps. The first involved training a neural network to watch videos of an individual and translate different audio sounds into basic mouth shapes.
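A minimal sketch of what such a network could look like, assuming MFCC-style audio features and a recurrent model that regresses a low-dimensional mouth-shape code for every frame. The framework (PyTorch) and all layer sizes are assumptions made for illustration; the article does not specify them.

```python
# Illustrative audio-to-mouth-shape regressor; all dimensions are assumptions.
import torch
import torch.nn as nn

class AudioToMouth(nn.Module):
    """Maps a sequence of per-frame audio features to a low-dimensional
    mouth-shape code per frame (e.g. coefficients of lip landmarks)."""

    def __init__(self, n_audio_feats=28, hidden_size=128, n_shape_coeffs=20):
        super().__init__()
        self.rnn = nn.LSTM(n_audio_feats, hidden_size, batch_first=True)
        self.to_shape = nn.Linear(hidden_size, n_shape_coeffs)

    def forward(self, audio_feats):
        # audio_feats: (batch, time, n_audio_feats)
        hidden, _ = self.rnn(audio_feats)
        return self.to_shape(hidden)      # (batch, time, n_shape_coeffs)

model = AudioToMouth()
dummy_audio = torch.randn(2, 100, 28)     # 2 clips, 100 audio frames each
print(model(dummy_audio).shape)           # torch.Size([2, 100, 20])
```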

By combining previous research from the UW Graphics and Image Laboratory team with a new mouth synthesis technique, they were then able to realistically superimpose and blend those mouth shapes and textures on an existing reference video of that person. Another key insight was to allow a small time shift to enable the neural network to anticipate what the speaker is going to say next.
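That time shift can be implemented simply by re-aligning the training targets so the model's prediction at frame t is scored against the mouth shape from a few frames earlier, after it has already "heard" the intervening audio. The sketch below assumes per-frame arrays and a hypothetical 5-frame delay; the article does not state the actual offset used.

```python
# Illustrative target re-alignment for look-ahead; the delay value is an assumption.
import numpy as np

def delayed_training_pairs(audio_feats: np.ndarray,
                           mouth_shapes: np.ndarray,
                           delay: int = 5):
    """Pair the full audio sequence with mouth-shape targets shifted back by
    `delay` frames, so the model's output at audio frame t is compared with
    the shape at frame t - delay and it can anticipate upcoming sounds."""
    assert len(audio_feats) == len(mouth_shapes) and delay > 0
    inputs = audio_feats                  # the model still reads every audio frame
    targets = mouth_shapes[:-delay]       # shape for frame t - delay, valid for t >= delay
    return inputs, targets

# In training, compare model(inputs)[delay:] against `targets`, skipping the
# first `delay` outputs, for which no earlier mouth shape exists.
```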

The new lip-syncing process enabled the researchers to create realistic videos of Obama speaking in the White House, using words he spoke on a television talk show or during an interview decades ago.

Currently, the neural network is designed to learn on one individual at a time, meaning that Obama’s voice — speaking words he actually uttered — is the only information used to “drive” the synthesized video. Future steps, however, include helping the algorithms generalize across situations to recognize a person’s voice and speech patterns with less data, for instance with only an hour of video to learn from instead of 14 hours.

“You can’t just take anyone’s voice and turn it into an Obama video,” Seitz said. “We very consciously decided against going down the path of putting other people’s words into someone’s mouth. We’re simply taking real words that someone spoke and turning them into realistic video of that individual.”

Learn more: Lip-syncing Obama: New tools turn audio clips into realistic video

 

