Exposing deepfake imagery

01 September 2023
Neil Savage

In May, an image of an explosion at the Pentagon circulated online, briefly knocking roughly $500 billion off the value of stocks in the S&P 500 index. But the explosion never happened. The photo was quickly debunked as an AI-generated fake, and the market recovered.

Other such instances keep popping up. In March 2022, a video appeared on a Ukrainian news website depicting President Volodymyr Zelenskyy apparently telling his soldiers to lay down their arms and surrender to Russia. Again, the fake was quickly exposed, but it demonstrated the potential for serious consequences from manipulated media. In June, the Federal Bureau of Investigation warned that extortionists were creating phony videos of people engaging in embarrassing sexual acts and demanding money to prevent their circulation.

Deepfakes, images and video created or altered by AI, are growing more sophisticated with each passing month and have raised widespread concern among scientists, journalists, and government officials. The Zelenskyy video was probably the first use of deepfakes in the context of war, said William Corvey, a program manager at the Defense Advanced Research Projects Agency (DARPA), in a video released by the agency. “If it had been more compelling, it may have changed the trajectory of the Russia-Ukraine conflict,” he said. Others worry that plausible forgeries could enable scams aimed at individuals or corrupt the satellite imagery that national defense agencies rely on. Given the possibly dire consequences of deepfakes, computer scientists are hard at work on ways to detect them so that they can be blocked or neutralized.

Deepfakes get their name from the deep-learning techniques used to create them. Deep learning works by loose analogy to the human brain. Input data is passed through a series of artificial neurons, each of which performs a small mathematical function on the data and passes the result to the next layer of neurons. Eventually, this neural network learns the statistical distribution of the data: a particular pattern of pixels with certain brightness or color, and the way those pixels relate to their neighbors, would, for instance, describe a cat rather than a camel.
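
To make the idea concrete, here is a minimal sketch of such a network in Python, assuming PyTorch; the layer sizes, the random stand-in images, and the cat-versus-camel labels are purely illustrative, not any particular deepfake system.

```python
# Minimal sketch of a neural network: each layer of "artificial neurons" applies a
# small mathematical function to its input and passes the result to the next layer.
# The sizes and the random stand-in data are illustrative only.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Flatten(),                  # turn a 3 x 64 x 64 image into a flat vector of pixel values
    nn.Linear(3 * 64 * 64, 128),   # a layer of neurons: weighted sums plus biases
    nn.ReLU(),                     # a simple nonlinearity applied to each neuron's output
    nn.Linear(128, 2),             # output scores for two classes, say cat versus camel
)

batch = torch.rand(8, 3, 64, 64)   # eight random "images" stand in for real training data
scores = classifier(batch)         # forward pass: data flows from layer to layer
print(scores.shape)                # torch.Size([8, 2])
```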

Top row, left to right: a real person’s face, followed by two faces generated with different AI models. Middle row: a 3D model fitted to each face to study facial geometry. Bottom row: the 3D model with the face textures mapped back on to study lighting on the face. Photo credit: Hany Farid

In deepfakes, once the machine has learned the statistical distribution of real images, it can use that distribution to create new ones. One method is an autoencoder, a compression algorithm that picks out the most important features of an image to create a smaller representation that still contains enough information to recreate the original. Instead of reconstructing the original image, though, the autoencoder can be made to produce a partially or wholly new one.
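
A hedged sketch of the autoencoder idea, again assuming PyTorch, might look like the following; the tiny encoder and decoder here are stand-ins for the much larger networks used in practice.

```python
# Toy autoencoder: the encoder squeezes an image down to a small code of its most
# important features; the decoder reconstructs an image from that code. Altering or
# swapping the code is what lets such models emit something other than the original.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, image_dim=3 * 64 * 64, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(image_dim, code_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(code_dim, image_dim), nn.Sigmoid())

    def forward(self, x):
        code = self.encoder(x)            # compressed representation of the input
        recon = self.decoder(code)        # image rebuilt from the compressed code
        return recon.view(x.shape)

model = AutoEncoder()
images = torch.rand(4, 3, 64, 64)         # random stand-ins for real photos
reconstruction = model(images)
loss = nn.functional.mse_loss(reconstruction, images)  # training minimizes this gap
```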

Another popular method of creating deepfakes uses generative adversarial networks (GANs). In a GAN, one neural network, the generator, takes random noise as input and uses the statistical distribution of real images to turn that noise into new images. A second neural network, the discriminator, then tries to determine whether the generator’s images are real or fake. It passes its findings back to the generator, which learns from the feedback and produces a new set of images to fool the discriminator. The cycle continues until the discriminator can no longer tell real from fake.
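
The adversarial back-and-forth can be sketched in a few lines of PyTorch; the single-layer generator and discriminator below are toy stand-ins for the deep networks used in real GANs.

```python
# Toy GAN training step: the generator turns noise into images, the discriminator
# scores images as real or fake, and each network's loss pushes the other to improve.
import torch
import torch.nn as nn

noise_dim, image_dim = 32, 3 * 64 * 64
generator = nn.Sequential(nn.Linear(noise_dim, image_dim), nn.Sigmoid())
discriminator = nn.Sequential(nn.Linear(image_dim, 1))   # one raw score: real vs. fake

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_images = torch.rand(16, image_dim)    # stand-in for a batch of real photos
fake_images = generator(torch.randn(16, noise_dim))

# Discriminator step: learn to call real images real and generated images fake.
d_loss = bce(discriminator(real_images), torch.ones(16, 1)) + \
         bce(discriminator(fake_images.detach()), torch.zeros(16, 1))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# Generator step: adjust weights so the discriminator mistakes the fakes for real.
g_loss = bce(discriminator(fake_images), torch.ones(16, 1))
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
```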

Recently popular online services, such as DALL-E from OpenAI and Stable Diffusion, rely on diffusion models. Diffusion models start with real images, add statistical noise, and then learn to reverse the process and take the noise back out. After enough repetition, they can create an image out of nothing but noise and can generate, for example, faces that never existed.
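
A heavily simplified sketch of that noising-and-denoising loop, assuming PyTorch, is below; real diffusion models use large U-Net denoisers and hundreds of noise levels rather than the single linear layer shown here.

```python
# Toy diffusion step: blend real images with noise, then train a denoiser to predict
# the noise that was added. Generation later runs the process in reverse, starting
# from pure noise and gradually removing it.
import torch
import torch.nn as nn

image_dim = 3 * 64 * 64
denoiser = nn.Linear(image_dim + 1, image_dim)      # predicts the noise, given the noise level

images = torch.rand(8, image_dim)                   # stand-in for real training images
t = torch.rand(8, 1)                                # noise level per image (0 = clean, 1 = pure noise)
noise = torch.randn_like(images)
noisy = (1 - t) * images + t * noise                # forward process: corrupt the images

predicted_noise = denoiser(torch.cat([noisy, t], dim=1))
loss = nn.functional.mse_loss(predicted_noise, noise)   # training objective
```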

Trained models are now widely available and allow anyone, even people with no computer skills, to create images both realistic and fanciful, and that ease of use gives people with malicious intent free rein to sow chaos. “Very sophisticated technology that used to be in the hands of a few is now in the hands of the many,” says Hany Farid, a computer scientist at the University of California, Berkeley. “You’ve created this very powerful weapon to distort reality, and you’ve put it in the hands of anybody on the internet.” Farid has created a website to track the use of deepfakes in the upcoming presidential election.

People can now swap one person’s face with another’s while keeping other aspects of the video, such as audio, movement, and background, unaltered. A different approach changes only the pixels around the mouth, matching its movements to new audio. A more sophisticated method called puppeteering allows manipulators to replace the movements and expressions of a target’s face with those from another video.

Sometimes the manipulation can be easy to spot. The mouth movements don’t quite sync up with the audio, or there’s some weird stillness or color deviation in the face. But as the generation of such videos gets more sophisticated, it often takes a computer to find the flaws.

Farid turned to a technique called soft biometrics to teach a machine to distinguish between real and fake videos of Zelenskyy. He trained a computer using more than eight hours of real video of Zelenskyy. The computer model, a simple image classifier, learned features characteristic of the Ukrainian president—how he furrowed his brow or wrinkled his nose at certain points, how he moved his hands when he talked, how his voice changed when he emphasized a particular word. That works really well, Farid says, for high-profile people. A lot of high-quality video of them exists, and they’re the people most likely to be targeted. But it’s not a technique that would work for the vast numbers of lesser-known people.
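
The soft-biometrics idea can be illustrated with a hypothetical sketch in Python, using scikit-learn; the per-clip “mannerism” features and the synthetic data here are invented for illustration and are not Farid’s actual model.

```python
# Hypothetical illustration of soft biometrics: each video clip is summarized as a
# vector of behavioral measurements (brow motion, hand gestures, vocal emphasis, etc.),
# and a simple classifier learns what is typical for one speaker. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_features = 20   # hypothetical number of per-clip behavioral features

authentic_clips = rng.normal(loc=0.0, scale=1.0, size=(200, n_features))  # the real speaker
impostor_clips = rng.normal(loc=0.8, scale=1.2, size=(200, n_features))   # deepfaked mannerisms

X = np.vstack([authentic_clips, impostor_clips])
y = np.array([0] * 200 + [1] * 200)   # 0 = authentic, 1 = suspected fake

clf = LogisticRegression(max_iter=1000).fit(X, y)
new_clip = rng.normal(loc=0.8, scale=1.2, size=(1, n_features))
print("probability the clip is fake:", clf.predict_proba(new_clip)[0, 1])
```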

Farid’s work was part of DARPA’s Media Forensics program, the precursor to the program that Corvey runs, Semantic Forensics (SemaFor). The latter aims to develop new methods for analyzing the semantic content of videos, looking for parts of the image that don’t make sense. Some of those might be detectable to humans if they don’t flash by too fast; a person in the background of a manipulated image may have the wrong number of fingers, for instance. In one recently debunked deepfake video showing Donald Trump kissing Anthony Fauci’s nose, the words on the White House seal behind them were gibberish.

But some inconsistencies are only noticeable to a computer. Amit Roy-Chowdhury, a professor in the University of California, Riverside’s Video Computing Group, trained a neural network on images that were both untouched and altered, and the computer learned to pick out the altered ones. The intensity of pixels can change across an image, but it tends to do so smoothly, so that the characteristics of any one pixel differ only slightly from its neighbors. When someone alters an image, such as by adding or removing an object, that causes a disruption in the natural pattern of pixels. The computer learned what a natural pattern of pixels looked like and was then able to flag unnatural patterns.
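
The intuition can be shown with a deliberately crude sketch in Python with NumPy; the real system trains a neural network rather than applying the fixed threshold used in this hypothetical abrupt_transition_map function.

```python
# Crude illustration of the pixel-consistency cue: in untouched photos, neighboring
# pixels usually differ only slightly, so regions with unusually abrupt jumps are
# worth flagging. The threshold and test images here are invented for illustration.
import numpy as np

def abrupt_transition_map(image, threshold=0.25):
    """Boolean map of pixels whose neighbors change suspiciously fast."""
    dx = np.abs(np.diff(image, axis=1, append=image[:, -1:]))  # horizontal differences
    dy = np.abs(np.diff(image, axis=0, append=image[-1:, :]))  # vertical differences
    return (dx + dy) > threshold

smooth = np.tile(np.linspace(0, 1, 64), (64, 1))   # a naturally smooth gradient image
tampered = smooth.copy()
tampered[20:30, 20:30] = 0.0                        # a crude "pasted-in" patch

print("flagged pixels, original:", int(abrupt_transition_map(smooth).sum()))
print("flagged pixels, tampered:", int(abrupt_transition_map(tampered).sum()))
```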

Roy-Chowdhury combined this method with systems trained to identify facial expressions by learning, for example, what sequence of pixels corresponds to a person smiling. “When you have a manipulation, that sequence is altered,” he says. “Although the attacker is trying to make an alteration so that you cannot detect it, it’s very hard for them to do it precisely, and that allows us to identify that these regions are probably manipulated.”

Intel has adopted FakeCatcher, a detector designed in collaboration with researchers at the State University of New York at Binghamton. The system examines videos of people’s faces for subtle signs of color change caused by blood flow as the heart pumps. The company says it can catch 96 percent of deepfake videos in real time.
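
A toy version of the blood-flow cue, written in Python with NumPy on synthetic signals, is sketched below; Intel’s actual system is far more elaborate, and the dominant_heart_rate_power score here is an invented stand-in.

```python
# Toy illustration of the blood-flow cue: a real face's skin color pulses faintly
# with the heartbeat, so the average green-channel value of the face region should
# contain a periodic signal in the plausible heart-rate band. Signals are synthetic.
import numpy as np

fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps

# A 1.2 Hz (72 beats per minute) pulse plus camera noise stands in for a real face;
# pure noise stands in for a synthesized face with no blood flow.
real_face = 0.5 + 0.01 * np.sin(2 * np.pi * 1.2 * t) + 0.003 * np.random.randn(t.size)
fake_face = 0.5 + 0.003 * np.random.randn(t.size)

def dominant_heart_rate_power(signal, fps):
    """Fraction of spectral power concentrated at one frequency in the 0.7-3 Hz band."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1 / fps)
    band = (freqs > 0.7) & (freqs < 3.0)
    return spectrum[band].max() / spectrum.sum()

print("real-face pulse score:", dominant_heart_rate_power(real_face, fps))
print("fake-face pulse score:", dominant_heart_rate_power(fake_face, fps))
```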

Of course, once scientists figure out the characteristics that give away the fakery, attackers start trying to fix those tells. “It’s an arms race between the attacker and defender,” says Zahid Akhtar, a professor in the Department of Network and Computer Security at the State University of New York Polytechnic Institute. “Within two, three months the attackers are trying to create deepfakes that would bypass those detectors.”

The Pope might wear Balenciaga, but this photo is a deepfake. Photo credit: Guerrero Art

The threat of deepfakes goes beyond circulating phony videos, argues Rushit Dave, a computer scientist at Minnesota State University, Mankato. Attackers could create believable images of ordinary people that could fool facial recognition systems and give them access to people’s cell phones, and thus their private data. “People are just aware of deepfakes in the sense of some high-level thing going on in the news,” he says. “I am talking about cybersecurity.”

Dave says security experts need to come up with generalizable detection systems that can be placed on every phone and flag fake images. That’s challenging, he says, because the systems will have to be able to cope with images and video coming from many different types of cameras with individual characteristics. Many of the detection systems demonstrated in academia work well on the types of images they’re trained on but start to break down when confronted with different types of fakes, he says. He’s also concerned about deepfakes in areas outside of internet videos. Malicious actors might make all sorts of mischief by feeding forged video information to drones or self-driving cars, for instance.

Another area where detecting manipulated visuals could prove important is satellite imagery, which can be crucial to national defense. Someone might, for example, fool military planning software into thinking a bridge was in the wrong location. Back in 2014, the Russian Ministry of Defense released photoshopped images that it said showed Ukraine was to blame for shooting down a Malaysia Airlines flight, though forensic analysis showed the images had been altered.

Scientists at Los Alamos National Laboratory used a GAN to add vegetation to a satellite image of an area, as well as to create the appearance of a drought that didn’t exist. A lot of that type of imagery is multispectral, capturing scenes not just in visible light but at other wavelengths, including infrared, and the GAN was able to paint in realistic additions at those wavelengths as well. “These inpainting tools that were trained on natural images were shockingly good at doing inpainting for not only red, green, and blue satellite imagery, but for satellite imagery that’s not at all like the training data they saw,” says Juston Moore, a scientist in the Advanced Research in Cyber Systems group at Los Alamos. In this case, a detector was easily able to identify the manipulated imagery, but the researchers also found an attack that could fool the detector.

Of course, people have been manipulating images for as long as there have been photographs, but now it can be done at scale and distributed widely, Farid says. “What’s happening here is not fundamentally new, but it is what we’ve been doing for a long time on steroids,” he says. And it’s not only a question of image manipulation. Deepfake audio is a component of any convincing deepfake video and can be used on its own as well. The ease of sharing forgeries on social media gives fabrications a reach and an impact they’ve never had before. And the willingness of people in our divided society to believe information that fits their prejudices also plays an important role, Farid says.

To sway those who might reject a debunking, it’s important not only to flag deepfakes but to give people the particular reasons why a video is probably fake, Akhtar says. The trouble is that explainability is still a challenge in deep learning, where much of what AI systems do remains inside a black box. “We don’t know most of the time why the detector is saying this video is fake, and why in some fake samples, it is saying it is real,” he says.

One approach that could complement deepfake detectors would be to embed a cryptographic signature in real videos and to provide metadata that shows a history of any alteration using tools like Photoshop. In 2019, Adobe launched the Content Authenticity Initiative to create and promote such a standard, and others, including IBM and Intel, have signed on. Farid hopes governments will put requirements for using such a system in place as well. “Bad people are going to do bad things with the technology,” he says. “So, I think we need to start thinking more seriously about technological and regulatory innovations.”
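
The core of the signing idea can be sketched in a few lines of Python with the cryptography package; this is only an illustration of a hash-and-signature check, not the Content Authenticity Initiative’s actual C2PA metadata format.

```python
# Illustration of content signing: a camera or editing tool signs a hash of the file,
# and anyone holding the public key can later verify that the file is unchanged.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

signing_key = ed25519.Ed25519PrivateKey.generate()   # held by the camera or editing tool
verify_key = signing_key.public_key()                # published so anyone can check

video_bytes = b"...raw video data..."                # stand-in for the actual file contents
signature = signing_key.sign(hashlib.sha256(video_bytes).digest())  # shipped as metadata

try:
    verify_key.verify(signature, hashlib.sha256(video_bytes).digest())
    print("content matches what was originally signed")
except InvalidSignature:
    print("content has been altered since it was signed")
```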

Neil Savage writes about science and technology in Lowell, Massachusetts.
