September 27, 2021

More than at perhaps any other time in history, people are asking: “What is true?”

But this is no longer a question for armchair philosophers. With the rise of sophisticated digital technologies that can seamlessly alter visual and aural inputs in ways that change their meaning, a task as simple as declaring an image “real” or a video “accurate” has become surprisingly difficult. This blurring can be traced to AI-powered “deepfake technologies” that enable cybercriminals to bypass biometric security protections, commit synthetic identity fraud, engage in cyberbullying and blackmail, and exaggerate or falsify personal achievements.

The emergence of deepfakes, digitally rendered images or videos that convincingly depict people doing or saying things they never did, represents a serious cybersecurity threat at a time when people are more likely to be influenced by images and less likely to critically assess them. Those of us seeking to stop this trend find ourselves in a battle of one-upmanship, because our opponents share the same “toolkit”: cutting-edge AI-based technologies created for media production in the entertainment, advertising and social media industries.

Combating fake or misleading media has become a cat-and-mouse game in which smart adversaries counter detection strategies almost as soon as they are introduced. Until now, most researchers have focused on ways to determine whether image content is real or fake. This is important work and must continue, but with profit motives upping the ante – indeed, a potential attacker can now order “DaaS” (Deepfakes as a Service) through the dark web – it is clear that it is time to go on the offensive.

Fighting Back: Digital Watermarks and Blockchains

So, how do we fight back?

Ideally, with tools that make it extremely difficult — even impossible — to change an image and escape detection.

But what if we could design tools that prevent tampering and build detection capabilities into the process of image creation? What if we could “fake-proof” digital assets by, for example, creating hidden digital tripwires: dead giveaways that an image has been tampered with? Two years ago, working with NYU Tandon colleague Dr. Pawel Korus, we designed a camera pipeline that replaced the traditional photo development pipeline with a neural network trained to jointly optimize for high-fidelity photo rendering and reliable provenance analysis. The resulting artifacts resemble a digital watermark, the embedding of code in digital media to verify its authenticity, but if a party attempts to alter the image, the watermark breaks.
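To make this concrete, here is a minimal sketch, in PyTorch, of what jointly training a learned development pipeline for rendering fidelity and provenance analysis might look like. The toy networks, their names, and the simulated manipulation are illustrative assumptions, not our actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the learned "development" stage that turns
# packed RAW sensor data into an RGB photograph.
class DevelopNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, raw):
        return self.net(raw)

# Hypothetical forensic analyzer that decides whether a developed photo
# is pristine or has been post-processed.
class ForensicNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, img):
        return self.classifier(self.features(img).flatten(1))

def joint_training_step(raw, reference_rgb, develop, forensic, alpha=0.1):
    """One joint step: render the photo faithfully AND keep tampering detectable."""
    rendered = develop(raw)

    # Fidelity term: the developed photo should match the reference rendering.
    fidelity_loss = nn.functional.mse_loss(rendered, reference_rgb)

    # Simulate a post-processing manipulation (here, a simple blur).
    manipulated = nn.functional.avg_pool2d(rendered, 3, stride=1, padding=1)

    # Provenance term: the analyzer must separate pristine from manipulated photos.
    images = torch.cat([rendered, manipulated])
    labels = torch.cat([torch.zeros(len(rendered), dtype=torch.long),
                        torch.ones(len(manipulated), dtype=torch.long)])
    provenance_loss = nn.functional.cross_entropy(forensic(images), labels)

    # Both networks are optimized against this combined objective, so the
    # "watermark" is baked into how the camera develops the photo itself.
    return fidelity_loss + alpha * provenance_loss
```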

Today, we are taking this technology even further by inserting digital fingerprints at the point the image is first created, providing an additional means of testing media authenticity. The fingerprints are dynamic and can be generated on the fly from a secret key and the photo’s authentication context, such as location or timestamp information, which is typically housed in a database.
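As an illustration of how such a context-dependent fingerprint could be derived, here is a small sketch using an HMAC over the secret key and the stored authentication context. The choice of HMAC-SHA256, the key, and the context fields are assumptions made for the example, not a description of our system.

```python
import hashlib
import hmac
import json

def dynamic_fingerprint(secret_key: bytes, context: dict) -> bytes:
    """Derive a per-photo fingerprint from a secret key and the photo's
    authentication context (e.g., capture location and timestamp)."""
    # Serialize the context deterministically so verification is repeatable.
    message = json.dumps(context, sort_keys=True).encode("utf-8")
    return hmac.new(secret_key, message, hashlib.sha256).digest()

# Hypothetical example: the fingerprint is generated at capture time and later
# re-derived by a verifier from the context record housed in the database.
key = b"device-secret-provisioned-at-manufacture"
context = {"lat": 40.6942, "lon": -73.9866, "timestamp": "2021-09-27T10:15:00Z"}

embedded = dynamic_fingerprint(key, context)    # embedded at image creation
recomputed = dynamic_fingerprint(key, context)  # recomputed during verification
assert hmac.compare_digest(embedded, recomputed)
```

Because the fingerprint depends on both the key and the context, an image whose embedded fingerprint does not match the re-derived one either lacks the key or carries the wrong context, and can be flagged.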

In essence, digital fingerprints improve on the robustness, performance, and security and privacy properties of classic sensor fingerprints. Specifically, a system can be trained to withstand complex post-processing and to be more reliable for certain types of content.

Additional work needs to be done before this technology can be made commercially viable. For one thing, the improvement in security properties could come at the cost of some image distortion. However, this distortion tends to be well masked by image content. Importantly, initial tests have confirmed that neural imaging pipelines of the type that are likely to power next-generation cameras can be trained to optimize both for high-fidelity photo development and reliable provenance analysis.

It is our hope that such findings will encourage further research toward building more reliable imaging pipelines with explicit provenance-guaranteeing properties.

Another proactive prevention strategy that is generating buzz is the use of blockchains to provide strong, verifiable authenticity. One application affixes stamps to websites, each linked to an associated key; whenever one of these sites is appended to future stories or references, the accompanying key is tracked, enabling the blockchain to attest to the authenticity of the media in question.
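A toy sketch of the underlying mechanics may help: a hash-chained list stands in for the blockchain, and the record fields, function names, and publisher key are assumptions made for the illustration.

```python
import hashlib
import json
import time

def media_digest(path: str) -> str:
    """Content hash of a media file; this is the 'stamp' a publisher registers."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def append_attestation(ledger: list, digest: str, publisher_key: str) -> dict:
    """Append a hash-chained attestation record (a stand-in for a blockchain entry)."""
    prev = ledger[-1]["record_hash"] if ledger else "0" * 64
    record = {
        "media_digest": digest,
        "publisher_key": publisher_key,
        "timestamp": time.time(),
        "prev_hash": prev,
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()
    ledger.append(record)
    return record

def is_attested(ledger: list, digest: str) -> bool:
    """A downstream story can check whether the media it embeds was ever attested."""
    return any(r["media_digest"] == digest for r in ledger)
```

Each new story that references the media re-computes the digest and checks it against the ledger, so the chain of attestations grows alongside the chain of reuse.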

With the growing enthusiasm for another blockchain technology, non-fungible tokens, we can foresee provenance analysis building on NFTs’ current usage, which is primarily to establish digital ownership, to include authentication of “digital originals.” This application of blockchain technology could, in a sense, be the best possible combination of detection and prevention strategies.

The Need to Push Back on Untruths

The proliferation of false media poses a unique security risk.

Think of the influential role individual photographs or segments of news footage have played throughout history — how, for example, the images of Neil Armstrong and Buzz Aldrin walking on the Moon filled Americans on Earth with pride and served as an inspirational catalyst for a generation of would-be innovators; or how the iconic photo of Mahatma Gandhi at his spinning wheel served as a defining portrait of the global leader perhaps most synonymous with compassion.

If the notion of believing what you see and hear is under attack, how can we function as a society?

We must restore our ability to believe our eyes and ears.

 

ABOUT OUR AUTHOR 

Nasir Memon is an IEEE Fellow, Vice Dean for Academics and Student Affairs, and a Professor of Computer Science and Engineering at the New York University Tandon School of Engineering. Memon is a co-founder of NYU’s Center for Cyber Security in New York and at NYU Abu Dhabi; he is also the founder of the OSIRIS Lab, CSAW, the NYU Tandon Bridge program, and the Cyber Fellows program at NYU.

Memon has been on the editorial boards of several journals and was the Editor-in-Chief of the IEEE Transactions on Information Forensics and Security.
