A new algorithm allows video editors to modify talking head videos as if they were editing text – copying, pasting, or adding and deleting words.
In television and film, actors often flub small bits of otherwise flawless performances. Other times they leave out a critical word. For editors, the only solutions have been to accept the flaws or fix them with expensive reshoots.
Imagine, however, if that editor could modify video using a text transcript. Much like word processing, the editor could easily add new words, delete unwanted ones or completely rearrange the pieces by dragging and dropping them as needed to assemble a finished video that looks almost flawless to the untrained eye.
A team of researchers from Stanford University, Max Planck Institute for Informatics, Princeton University and Adobe Research created such an algorithm for editing talking-head videos – videos showing speakers from the shoulders up.
The work could be a boon for video editors and producers but does raise concerns as people increasingly question the validity of images and videos online, the authors said. However, they propose some guidelines for using these tools that would alert viewers and performers that the video has been manipulated.
“Unfortunately, technologies like this will always attract bad actors,” said Ohad Fried, a postdoctoral scholar at Stanford. “But the struggle is worth it given the many creative video editing and content creation applications this enables.”
Reading lips
The application uses the new transcript to extract speech motions from various video pieces and, using machine learning, converts those into a final video that appears natural to the viewer – lip-synched and all.
“Visually, it’s seamless. There’s no need to rerecord anything,” said Fried, who is first author of a paper about the research published on the pre-publication website arXiv. It will also be in the journal ACM Transactions on Graphics. Fried works in the lab of Maneesh Agrawala, the Forest Baskett Professor in the School of Engineering and senior author of the paper. The project began when Fried was a graduate student working with computer scientist Adam Finkelstein at Princeton, more than two years ago.
Should an actor or performer flub a word or misspeak, the editor can simply edit the transcript and the application will assemble the right word from various words or portions of words spoken elsewhere in the video. It’s the equivalent of rewriting with video, much like a writer retypes a misspelled or unfit word. The algorithm does require at least 40 minutes of original video as input, however, so it won’t yet work with just any video sequence.
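At a high level, that word-assembly step can be pictured as a search for matching subsequences of sound units in the recorded corpus. The toy sketch below is purely illustrative: the phoneme inventory, the function name and the greedy matching strategy are all assumptions for the example, not the paper's method, which matches visemes (mouth shapes) and motion data rather than plain phoneme strings.

```python
# Toy sketch: assemble a new word from phoneme fragments spoken elsewhere.
# Greedy longest-prefix matching stands in for the paper's viseme search.

def find_subsequences(corpus_phonemes, target):
    """Return (start, end) spans in the corpus that together cover the
    target phoneme sequence, greedily taking the longest match each time."""
    spans = []
    i = 0
    while i < len(target):
        best_len, best_start = 0, -1
        for s in range(len(corpus_phonemes)):
            length = 0
            while (i + length < len(target)
                   and s + length < len(corpus_phonemes)
                   and corpus_phonemes[s + length] == target[i + length]):
                length += 1
            if length > best_len:
                best_len, best_start = length, s
        if best_len == 0:
            return None  # the word cannot be assembled from this corpus
        spans.append((best_start, best_start + best_len))
        i += best_len
    return spans

# Build the (hypothetical) phonemes of "napkin" from fragments of other
# recorded words containing "nap..." and "...kin".
corpus = ["n", "ae", "p", "t", "ay", "m", "k", "ih", "n"]
target = ["n", "ae", "p", "k", "ih", "n"]
print(find_subsequences(corpus, target))  # → [(0, 3), (6, 9)]
```

The real system also scores candidate fragments by how well their mouth motion can be blended, which is why it needs a sizable corpus of the speaker to draw from.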
As the transcript is edited, the algorithm selects segments from elsewhere in the recorded video with motion that can be stitched to produce the new material. In their raw form these video segments would have jarring jump cuts and other visual flaws.
To make the video appear more natural, the algorithm applies intelligent smoothing to the motion parameters and renders a 3D animated version of the desired result. However, that rendered face is still far from realistic. As a final step, a machine learning technique called Neural Rendering converts the low-fidelity digital model into a photorealistic video in perfect lip-synch.
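To illustrate the smoothing idea only (this is not the paper's actual technique, whose details are in the preprint), a simple moving average over a hypothetical per-frame face parameter shows how a jarring jump at a stitch point becomes a gradual transition:

```python
# Illustrative sketch: smooth a per-frame face-model parameter track so a
# stitched segment boundary no longer produces a visible jump. A plain
# moving average stands in for the paper's "intelligent smoothing";
# the "jaw_open" parameter name is hypothetical.

def smooth(params, window=3):
    """Moving-average smoothing of a per-frame parameter track."""
    half = window // 2
    out = []
    for i in range(len(params)):
        lo, hi = max(0, i - half), min(len(params), i + half + 1)
        out.append(sum(params[lo:hi]) / (hi - lo))
    return out

# A jaw-opening track with a hard jump at the stitch point (frame 3);
# after smoothing, the transition is spread over neighboring frames.
jaw_open = [0.1, 0.1, 0.1, 0.9, 0.9, 0.9]
print(smooth(jaw_open))
```

In the real pipeline the smoothed parameters drive a 3D face model, and the neural renderer then turns that low-fidelity render into photorealistic frames.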
To test the capabilities of their system, the researchers performed a series of complex edits including adding, removing and changing words, as well as translations to different languages, and even created full sentences as if from whole cloth.
In a crowd-sourced study with 138 participants, the team’s edits were rated as “real” almost 60 percent of the time. The visual quality is such that it is very close to the original, but Fried said there’s plenty of room for improvement.
“The implications for movie post-production are big,” said Ayush Tewari, a student at the Max Planck Institute for Informatics and the paper’s second author. It presents for the first time the possibility of fixing filmed dialogue without reshoots.
Ethical concerns
Nonetheless, in an era of synthesized fake videos such capabilities raise important ethical concerns, Fried added. There are valuable and justifiable reasons to want to edit video in this way, namely the expense and effort required to rerecord or repair flaws in video content, or to customize existing audiovisual content for different audiences. Instructional videos might be fine-tuned to different languages or cultural backgrounds, for instance, or children’s stories could be adapted to different ages.
“This technology is really about better storytelling,” Fried said.
Fried acknowledges concerns that such a technology might be used for illicit purposes, but says the risk is worth taking. Photo-editing software went through a similar reckoning, but in the end, people chose to live in a world where that software is available.
As a remedy, Fried says there are several options. One is to develop some sort of opt-in watermarking that would identify any content that had been edited and provide a full ledger of the edits. Moreover, researchers could develop better forensics such as digital or non-digital fingerprinting techniques to determine whether a video had been manipulated for ulterior purposes. In fact, this research and others like it also build the essential insights that are needed to develop better manipulation detection.
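One hypothetical shape such an edit ledger could take is a hash chain, in which each edit record commits to the record before it, so that later tampering with the history is detectable. The sketch below is purely illustrative and is not part of the researchers' work; every name in it is an assumption.

```python
# Illustrative sketch: a hash-chained "ledger of edits" for a video.
# Each record's hash covers the previous record's hash, so altering any
# past entry invalidates every hash that follows it.
import hashlib
import json

def append_edit(ledger, edit):
    """Append an edit record whose hash commits to the previous record."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = {"edit": edit, "prev": prev_hash}
    record = dict(payload)
    record["hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(record)
    return ledger

def verify(ledger):
    """Recompute every hash; return False if any record was altered."""
    prev = "0" * 64
    for record in ledger:
        expected = hashlib.sha256(
            json.dumps({"edit": record["edit"], "prev": prev},
                       sort_keys=True).encode()
        ).hexdigest()
        if record["hash"] != expected or record["prev"] != prev:
            return False
        prev = record["hash"]
    return True

ledger = []
append_edit(ledger, "replace word at 00:42 with 'napkin'")
append_edit(ledger, "delete sentence at 01:10")
print(verify(ledger))           # prints True for the untampered ledger
ledger[0]["edit"] = "no edits"  # tamper with the history...
print(verify(ledger))           # ...and verification prints False
```

A deployed scheme would also need to bind the ledger to the video itself (for example via a watermark), which is exactly the open design question Fried describes.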
None of the solutions can fix everything, so viewers must remain skeptical and cautious, Fried said. Besides, he added, there are already many other ways to manipulate video that are much easier to execute. He said that perhaps the most pressing matter is to raise public awareness and education on video manipulation, so people are better equipped to question and assess the veracity of synthetic content.
Learn more: Stanford engineers make editing video as easy as editing text