Artificial intelligence that detects altered photos is getting smarter
Seeing was believing until technology reared its mighty head and gave us powerful and inexpensive photo-editing tools. Now, realistic videos that map the facial expressions of one person onto those of another, known as deepfakes, present a formidable political weapon.
But whether it’s the benign smoothing of a wrinkle in a portrait, or a video manipulated to make it look like a politician saying something offensive, all photo editing leaves traces for the right tools to discover.
Research led by Amit Roy-Chowdhury’s Video Computing Group at the University of California, Riverside has developed a deep neural network architecture that can identify manipulated images at the pixel level with high precision. Roy-Chowdhury is a professor of electrical and computer engineering and the Bourns Family Faculty Fellow in the Marlan and Rosemary Bourns College of Engineering.
A deep neural network is what artificial intelligence researchers call computer systems that have been trained to do specific tasks, in this case, recognize altered images. These networks are organized in connected layers; “architecture” refers to the number of layers and structure of the connections between them.
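As a generic illustration only (not the researchers' actual model), a "network of connected layers" can be sketched as a few fully connected layers feeding into one another; the "architecture" is the choice of how many layers there are and how their units connect:

```python
import math
import random

random.seed(0)

def dense_layer(inputs, weights, biases):
    """One fully connected layer: weighted sums followed by a sigmoid."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(x * w for x, w in zip(inputs, w_row)) + b
        outputs.append(1.0 / (1.0 + math.exp(-z)))  # squash to (0, 1)
    return outputs

# "Architecture" = how many layers and how they connect.
# Here: 4 input features -> 3 hidden units -> 1 output score.
hidden_w = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
hidden_b = [0.0] * 3
out_w = [[random.uniform(-1, 1) for _ in range(3)]]
out_b = [0.0]

features = [0.2, 0.8, 0.5, 0.1]               # e.g. statistics from an image patch
hidden = dense_layer(features, hidden_w, hidden_b)
score = dense_layer(hidden, out_w, out_b)[0]  # single score between 0 and 1
```

Training such a network means adjusting the weights until the output score reliably separates the two classes, in this case altered versus unaltered images.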
Objects in images have boundaries, and whenever an object is inserted into or removed from an image, its boundary will have different qualities than the boundaries of objects that occur naturally in the image. Someone with good Photoshop skills will do their best to make the inserted object look as natural as possible by smoothing these boundaries.
While this might fool the naked eye, when examined pixel by pixel, the boundaries of the inserted object are different. For example, inserted boundaries are often smoother than those of natural objects. By detecting the boundaries of inserted and removed objects, a computer should be able to identify altered images.
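The intuition can be shown with a toy, one-dimensional example (hypothetical numbers, not the paper's method): a natural object edge is an abrupt intensity jump, while a spliced-and-smoothed boundary spreads the same transition over several pixels, so its per-pixel gradient is smaller:

```python
def max_edge_gradient(row):
    """Largest intensity jump between neighboring pixels in a row."""
    return max(abs(b - a) for a, b in zip(row, row[1:]))

# A natural object boundary: abrupt intensity change.
natural = [10, 10, 10, 200, 200, 200]
# A spliced boundary after smoothing: the same transition spread out.
smoothed = [10, 10, 58, 105, 152, 200]

natural_sharpness = max_edge_gradient(natural)    # 190
smoothed_sharpness = max_edge_gradient(smoothed)  # 48

# A detector can flag regions whose boundaries are unusually smooth.
suspicious = smoothed_sharpness < natural_sharpness
```

A real detector learns far richer boundary statistics than this single gradient, but the principle is the same: the pixels where an object was pasted in carry a measurable signature.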
The researchers labeled nonmanipulated images and the relevant pixels in boundary regions of manipulated images in a large dataset of photos. The aim was to teach the neural network general knowledge about the manipulated and natural regions of photos. They tested the neural network with a set of images it had never seen before, and it detected the altered ones most of the time. It even spotted the manipulated region.
“We trained the system to distinguish between manipulated and nonmanipulated images, and now if you give it a new image it is able to provide a probability that that image is manipulated or not, and to localize the region of the image where the manipulation occurred,” Roy-Chowdhury said.
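The two outputs Roy-Chowdhury describes, an image-level probability and a localized region, can be sketched with a hypothetical post-processing step (the per-pixel scores here are made up, standing in for a trained network's output):

```python
def analyze(pixel_scores):
    """Given per-pixel manipulation scores in [0, 1], return an
    image-level probability and a mask localizing the suspect region."""
    flat = [s for row in pixel_scores for s in row]
    image_prob = max(flat)  # the image is as suspicious as its worst region
    mask = [[s > 0.5 for s in row] for row in pixel_scores]
    return image_prob, mask

scores = [
    [0.1, 0.1, 0.2],
    [0.1, 0.9, 0.8],   # high scores: likely spliced pixels
    [0.2, 0.7, 0.1],
]
prob, mask = analyze(scores)  # prob = 0.9; mask marks the lower-right region
```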
The researchers are working on still images for now, but they point out that this can also help them detect deepfake videos.
“If you can understand the characteristics in a still image, in a video it’s basically just putting still images together one after another,” Roy-Chowdhury said. “The more fundamental challenge is probably figuring out whether a frame in a video is manipulated or not.”
Even a single manipulated frame would raise a red flag. But Roy-Chowdhury thinks we still have a long way to go before automated tools can detect deepfake videos in the wild.
“It’s a challenging problem,” Roy-Chowdhury said. “This is kind of a cat and mouse game. This whole area of cybersecurity is in some ways trying to find better defense mechanisms, but then the attacker also finds better mechanisms.”
He said completely automated deepfake detection might not be achievable in the near future.
“If you want to look at everything that’s on the internet, a human can’t do it on the one hand, and an automated system probably can’t do it reliably. So it has to be a mix of the two,” Roy-Chowdhury said.
Deep neural network architectures can produce lists of suspicious videos and images for people to review. Automated tools can reduce the amount of data that people — like Facebook content moderators — have to sift through to determine if an image has been manipulated.
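Such a triage step is simple to express: score every incoming image and pass only those above a threshold to human reviewers. The filenames, scores, and threshold below are invented for illustration:

```python
def triage(images, score_fn, threshold=0.8):
    """Return only the images whose manipulation score meets the
    threshold, so human moderators review a much smaller queue."""
    return [name for name in images if score_fn(name) >= threshold]

# Hypothetical scores standing in for a detector's output.
scores = {"cat.jpg": 0.05, "rally.png": 0.92, "meme.gif": 0.40, "speech.jpg": 0.85}
review_queue = triage(list(scores), scores.get)  # ["rally.png", "speech.jpg"]
```

The threshold trades off moderator workload against the risk of missing a manipulation, which is why a human stays in the loop.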
For this use, the tools are right around the corner.
“That probably is something that these technologies will contribute to in a very short time frame, probably in a few years,” Roy-Chowdhury said.
The paper, “Hybrid LSTM and Encoder–Decoder Architecture for Detection of Image Forgeries,” is published in the July 2019 issue of IEEE Transactions on Image Processing and was funded by DARPA. Other authors include Jawadul H. Bappy, Cody Simons, Lakshmanan Nataraj, and B. S. Manjunath.
In related work, his group developed a method for detecting other types of image manipulation in addition to object insertion and removal. This method extends the identification of blurry boundaries into general knowledge about the kinds of transitions between manipulated and nonmanipulated regions to predict tampering more accurately than current tools.
Learn more: This deep neural network fights deepfakes