Attack of the Deepfakes

Surely they’ve invaded your screen by now: altered videos where words are put into a speaker’s mouth, complete with lip movements and voices that appear authentic. Leveraging AI technology, deepfakes have started down a path to ubiquity along with their less elaborate relatives, videos altered or edited by more conventional means, slowed down or sped up just enough to make the speaker sound a bit tipsy or to make the actions depicted seem more violent than what actually occurred. In the past few weeks, we’ve seen Mark Zuckerberg, Nancy Pelosi and actor Kit Harington, who plays Jon Snow in Game of Thrones (I’ll let you search for that one yourself due to offensive language), all victimized by this rapidly advancing technology.

While photo editing has become child’s play in recent years, deepfakes have recently made a big splash as the video equivalent. Although trained eyes may frequently question the authenticity of photos, videos have always elicited a higher degree of trust, until now. The age-old cliché that ‘seeing is believing’ has been fully upended.

Should you find yourself in the camp that believes deepfakes are still easy to spot, be assured that the technology is still in its infancy. It’s simply a matter of time before altered videos become virtually impossible to detect with the human eye.

So how do we combat deepfakes? Two new technologies are emerging to do so:

1) Detection Solutions

While conventional wisdom suggests deepfake detection is the appropriate solution, a recent article in The Verge raises serious questions about this approach. It draws an apt analogy to computer viruses, which evolve rapidly to outwit virus scanners; deepfakes are on a similar trajectory of outwitting detection software. This battle of one-upmanship promises to be constant, rendering any static deepfake detection technology virtually useless over time.

Universities and other organizations have recently announced deepfake detection technologies claiming better than 90% accuracy, including one recent breakthrough reporting 92%. Setting aside for a moment that 92% is not an ideal number (would you be pleased if your automobile or mobile phone worked 92% of the time?), a more unnerving aspect is that even the creators of these detectors acknowledge that their accuracy degrades rapidly as deepfake technology advances. So what can be done?

2) Traceability Solutions

Emerging alternatives to detection are blockchain solutions that register original, authentic video assets on a distributed ledger, either from the point when they are created or at least from a well-known, trusted point of origination. An example of the former might be an original video that is registered on the ledger at the moment of creation. An example of the latter might be a video originating from a trusted newswire, whose reputation relies on the accuracy of its content.
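To make the registration step concrete, here is a minimal sketch in Python of how an original video might be fingerprinted and recorded at its point of origin. The LEDGER dictionary is only a stand-in for a real distributed ledger, and the file name and newswire name are hypothetical.

```python
import hashlib
import time

def fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute a SHA-256 fingerprint of a video file, streamed in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Stand-in for a distributed ledger: fingerprint -> registration record.
# A real system would append each record to a blockchain or other tamper-evident store.
LEDGER: dict[str, dict] = {}

def register(path: str, source: str) -> str:
    """Record a video's fingerprint along with its trusted point of origination."""
    fp = fingerprint(path)
    LEDGER[fp] = {"source": source, "registered_at": time.time()}
    return fp

# Hypothetical usage: a newswire registers a clip the moment it is published.
# register("press_briefing.mp4", source="Example Newswire")
```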

Having the ability to trace the authenticity or origin of a video is valuable, and moreover, much faster and easier than forensically analyzing every bit of a video to detect tampering. The time required for validation matters: consider a recent interview with the head of Instagram, Adam Mosseri, in which he suggests that a deepfake video left up for hours can rack up millions of views and cause significant harm. Wouldn’t it be better to validate authenticity in seconds?
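Validation is then just a lookup rather than a forensic examination: recompute the fingerprint of the copy in front of you and check whether it appears in the registry. Below is a minimal sketch of that check, assuming the same fingerprint-to-record mapping used in the registration sketch above; hashing even a large file typically takes seconds on commodity hardware.

```python
import hashlib

def verify(path: str, ledger: dict[str, dict]) -> dict | None:
    """Recompute the file's SHA-256 fingerprint and look it up in the registry.

    `ledger` is the fingerprint -> record mapping built in the registration
    sketch above; a real deployment would query a blockchain node instead.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    # None means this exact copy was never registered, or has been altered
    # since registration (even a one-bit change yields a different hash).
    return ledger.get(digest.hexdigest())
```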

Who is responsible for detection and validation?

A more subtle question that arises when social platforms are thrust into the spotlight is whether it is truly the platforms’ responsibility to remove or censor deepfakes. Rather than open a potentially endless debate about deepfakes and free speech, wouldn’t it be better if a typical consumer could discern a deepfake for themselves, via a universal solution? Surely, such a solution would largely avoid the ambitious and somewhat unrealistic prospect of regulating every platform.

Consider a more compelling approach: marrying traceability with self-authentication, so that it becomes straightforward for anyone to validate video content in real time. Unfortunately, some primitive approaches to self-authentication, such as watermarking, are not entirely secure and can obstruct the video content itself. Instead, think of how a web browser today validates a secure website by checking its certificate in real time and alerting you to any security issues, and you have the right idea. A simple, accessible, universal solution for important online video content represents a great starting point for addressing deepfakes.
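As an illustration of that certificate-style check applied to video, the sketch below verifies a publisher’s digital signature over a file’s hash using the third-party Python cryptography package. The function name, the detached-signature arrangement, and the key handling are assumptions for illustration, not a description of any existing product.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def verify_publisher_signature(video_path: str,
                               signature: bytes,
                               public_key: ed25519.Ed25519PublicKey) -> bool:
    """Check that the publisher's Ed25519 signature matches this exact file.

    The publisher signs the SHA-256 digest of the video at release time and
    distributes the detached signature alongside it, much as a web server
    presents a certificate for the browser to check.
    """
    digest = hashlib.sha256()
    with open(video_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    try:
        public_key.verify(signature, digest.digest())
        return True
    except InvalidSignature:
        return False
```

In such a scheme, the publisher’s public key could itself be anchored on the ledger described earlier or distributed through an existing certificate authority, so a player or browser extension could perform the check automatically in well under a second.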

While the debate and buzz around the best way to address deepfakes is bound to continue, what has become apparent is that we are facing one of the biggest threats to video authenticity imaginable. The threat calls for a thoughtful solution, one that avoids unending technology escalation and free-speech debates, and that can rapidly restore order to a world that has become deeply dependent on video and media.
