July 25, 2024

In the first six months of 2024 alone, over 300 articles focused on creating detection tools for deepfakes have been published in the IEEE Xplore digital library. 

“While AI is enabling deepfake algorithms to produce harder-to-detect fakes, AI-enabled detection technologies are keeping pace by employing diverse techniques and algorithms to identify fakes,” said IEEE Senior Member Aiyappan Pillai.

What Are Deepfakes? 

The term deepfake is a mix of two terms: deep learning and fake, as in forgery. Deepfakes are AI-generated realistic videos, audio clips or still images that depict real people doing or saying things they didn’t do or say. They’ve appeared in political realms in recent years. They’ve also impacted the world of entertainment. Deepfakes threaten to disrupt the entertainment economy through songs that imitate popular musical acts, usually without the performers’ consent. 

Emerging Detection Methods

Two broad categories of technology exist for spotting deepfakes, and significant research has been devoted to determining how well they work.  

Machine Learning: One method for identifying deepfakes involves feeding a machine learning model large volumes of deepfakes and real content so it can learn to spot the differences between them. These techniques may not involve machine vision at all; rather, they convert the image into data and learn from its patterns. One challenge with this method is that it may have trouble identifying a new deepfake if it differs significantly from the data it was trained on. 
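The idea of learning statistical differences between real and generated content can be sketched with a toy binary classifier. The data here is synthetic (random vectors standing in for image-derived features, with "fake" samples given a small systematic offset that mimics generator artifacts); production detectors use deep networks trained on large labeled corpora, so this is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for image-derived feature vectors:
# "real" features cluster near zero; "fake" features carry a
# small systematic offset (a stand-in for generator artifacts).
real = rng.normal(0.0, 1.0, size=(200, 16))
fake = rng.normal(0.6, 1.0, size=(200, 16))
X = np.vstack([real, fake])
y = np.concatenate([np.zeros(200), np.ones(200)])  # 0 = real, 1 = fake

# Plain logistic regression trained by gradient descent.
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(fake)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

preds = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = float(np.mean(preds == y))
```

The generalization caveat in the paragraph above shows up directly in a setup like this: if a new generator produces features with a different offset than the training data, the learned decision boundary no longer separates them well.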

Semantic Analysis: In contrast to machine learning methods, which rely on raw data, semantic analysis looks at the content and context of the image using the same machine vision techniques that help artificial intelligence systems recognize apples or books in a picture. These methods can analyze the pattern of blood flow in a speaker’s face, the shape of their head or whether their appearance is consistent over time. Semantic analysis also covers relationships between objects that don’t make sense. For example, imagine an architectural rendering of a bathroom. An AI-generated image may place a shower head in a location where it cannot be used functionally. 
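One semantic cue mentioned above, whether a speaker's appearance is consistent over time, can be sketched as a temporal-consistency check on per-frame face embeddings. The embeddings below are synthetic random vectors (a real system would extract them with a face-recognition model); the scoring function itself is a plain cosine-similarity average, not any published detector's method.

```python
import numpy as np

def temporal_consistency_score(embeddings):
    """Mean cosine similarity between consecutive per-frame face
    embeddings. A genuine recording tends to drift smoothly, while
    some fakes show frame-to-frame identity jitter."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = np.sum(e[:-1] * e[1:], axis=1)  # dot products of unit vectors
    return float(np.mean(sims))

rng = np.random.default_rng(1)
identity = rng.normal(size=128)  # one person's "identity" vector

# "Real" clip: small, smooth noise around a single identity.
real_frames = identity + rng.normal(0.0, 0.05, size=(30, 128))
# "Fake" clip: larger, uncorrelated per-frame perturbations.
fake_frames = identity + rng.normal(0.0, 0.6, size=(30, 128))

real_score = temporal_consistency_score(real_frames)
fake_score = temporal_consistency_score(fake_frames)
```

A detector built this way would flag a clip whose score falls below a threshold calibrated on known-genuine footage.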

Watermarking

The need to identify deepfakes has led some generative AI companies to create markings for this purpose. In some cases, these markings are visible to users; in others, they are not. 

“One of the most effective techniques is digital watermarking of the images generated using the generative AI platforms,” said Rahul Vishwakarma, IEEE Senior Member.
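To make the watermarking idea concrete, here is a minimal least-significant-bit (LSB) scheme: a few provenance bits are hidden in the low-order bits of pixel values, changing each affected pixel by at most 1. This is a textbook illustration, not the method any generative AI vendor actually ships; production watermarks are typically designed to survive compression and editing, which plain LSB embedding does not.

```python
import numpy as np

def embed_watermark(image, bits):
    """Write watermark bits into the least significant bit of the
    first len(bits) pixels (illustrative LSB scheme only)."""
    flat = image.flatten().astype(np.uint8)  # flatten() copies
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | np.asarray(bits, dtype=np.uint8)
    return flat.reshape(image.shape)

def extract_watermark(image, n_bits):
    """Read the watermark back out of the low-order bits."""
    return (image.flatten()[:n_bits] & 1).tolist()

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # toy grayscale image
mark = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical provenance tag

stamped = embed_watermark(img, mark)
recovered = extract_watermark(stamped, len(mark))
```

Because only the lowest bit changes, the watermarked image is visually indistinguishable from the original, which is why such markings can be invisible to users while remaining machine-readable.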

Questions of Bias

About five or six commonly used data sets of videos and images of people are used to train deep learning models to detect deepfakes. One data set consists entirely of celebrities. One challenge researchers face is that the people featured in these data sets are more likely to be white and male. That has led to questions over whether deepfake detection tools may have a harder time when faced with data from people from diverse backgrounds.

Are Humans Any Better? 

While deepfakes are realistic, humans can spot them. One recent study published in IEEE Security & Privacy pitted humans against machines. Researchers found that humans were, on average, able to identify about 71% of deepfakes, while cutting-edge detection methods identified 93%. 

However, some deepfake images fooled the detection algorithms, while humans were able to spot the hoax. 

Some people are much better at spotting deepfakes than others, but researchers are only beginning to understand why. In another study, researchers examined how well police officers and “super-recognizers” could detect deepfakes. Super-recognizers, whose abilities are certified by a lab, are people who are exceptionally good at recognizing and identifying faces. The study showed that super-recognizers were no better at spotting deepfakes than ordinary people, suggesting that the ability to tell whether something is a deepfake differs from being good at recognizing faces.

Learn More: To see a presentation on cutting-edge deepfake detection methods, check out this video from the IEEE Signal Processing Society.
