Fighting Fire with Fire: Can AI Prevent Calamitous Misuse of Deepfake?

October 16, 2020

Members:

Neelesh Vasistha — neelesh.vasistha@student.uva.nl

Sarah Burkhardt — sarah.burkhardt@student.uva.nl

Wen Li — wen.li@student.uva.nl

Sofia Rastelli — sofia.rastelli@student.uva.nl


It is late evening on November 2nd, the night before U.S. citizens will vote in a pivotal election. You’re sitting at home, abseiling down your Facebook feed, when your eyes drift over the words ‘BREAKING NEWS’. You scroll on, unfazed, but your attention is soon headlocked by a slew of links screaming ‘bombshell video’, ‘political earthquake at the 11th hour’ and ‘This. Changes. Everything.’ You click on one, and land on a leaked video of your chosen candidate in the middle of a puerile and highly racist rant. It’s trending on Twitter now. You watch his mouth as it unloads a flurry of unforgivable ethnic slurs. The story hits CNN. This is, you say to yourself, political suicide in real time. But how could this politician even think such things, let alone say them?

He didn’t. The video is fake: a landmark attack using advanced deepfake technology. The creators know the video will be denied and eventually disproved, but release it strategically to devastate their target’s public image hours before polls open. This nightmare scenario has been theorised by many — but just how realistic is it? In this paper, we examine the technology behind deepfake, and whether the AI armament that enables such a weapon could also be the key to stopping it. 

One of these images is real. The other is fake. Can you tell which one? [Source: still from “Synthesizing Obama: Learning Lip Sync from Audio”. Video by Supasorn Suwajanakorn, Steven M. Seitz, Ira Kemelmacher-Shlizerman]

Imitation game 

Image manipulation in politics is nothing new (Chesney & Citron 148). The Soviet Union famously used it in state propaganda, adding or erasing people from official photographs according to the dictates of Stalin. More recently, the depths of the internet have spawned countless fake photos of varying quality. Some of these rise to the echelons of government — such as when Republican congressman Paul A. Gosar retweeted a photoshopped image of Obama seemingly shaking hands with the Iranian president. But whilst manipulated or outright fake imagery has a long history, what separates a deepfake attack is the severity of its consequence: a quality deepfake in this scenario can, quite literally, put anybody’s words in anybody’s mouth.

But what is deepfake? The term is a portmanteau of ‘deep learning’ and ‘fake’, referring chiefly to videos generated by machine learning. These can synthesise a person’s appearance and voice, creating a ‘live’ fake. The use of machine learning elevates this above so-called “cheapfakes”, which are manipulated either by less sophisticated automation, or manually in software programs such as Adobe Photoshop (Paris and Donovan 10-11, 24).

From its origins, deepfake belonged to the public and was meant for the public: born as user-generated content, it improves with the continuous tweaking of online machine learning enthusiasts (Hsiang 22). Given the novel value of a deepfake, as visual entertainment as well as a technological (or even artistic) feat, quality deepfakes are often shared widely. The technology’s democratic nature, believability and accessibility in turn encourage the creation and spread of viral content (Hsiang 24; Kietzmann et al. 3).

There are several configurations and gradations of ‘fakeness’. You can fake a real person’s face, or voice, or both. You can fake their body without altering the face, or subtly change their face to make it unrecognisable. One variation that has gained an outsized audience on YouTube is face swapping. Here, the target face (usually a public figure) is mapped onto another face in real time, so that the new fake face smoothly reenacts the subtle and complex facial expressions of the original person. The audio equivalent is the voice swap.

Seeing is believing

The damage of the deepfake in our scenario lies in human nature: people tend to believe what they see (Maras & Alexandrou 257). And in the information era, videos have the strongest persuasive power: audiences have weak filters for critically analysing information conveyed visually. Whilst Photoshop and other image editing software have acclimatised the public to picture manipulation (Hsiang 21), deepfake holds such potent deceptive value because of its novel ‘truthiness’. It is, in essence, a photo-realistic optical illusion with movement and sound: the natural setting for the human visual system (Kietzmann et al. 2; Vaccari and Chadwick 2). This ‘visual presentation’ makes deepfake an inherently believable medium — regardless of whether the images are real (Maras & Alexandrou 257).

Deepfake: a deskilled art?

The danger, many claim, is the relative ease of creating a believable deepfake (Chesney & Citron 155). This is somewhat true. Certainly, anyone with access to a computer and image data can produce deepfakes on open source software like DeepFaceLab and faceswap. Nevertheless, machine learning is a technology that must be fed a massive amount of high-quality data to churn out convincing results. You would need, firstly, to collect source footage of the person you want to fake, which professional deepfake ‘artists’ estimate at between 1,500 and 6,000 images. You also need a powerful computer, or access to cloud-computing platforms like Paperspace, to accelerate the training of your neural network. According to advice forums, this process of creating a quality deepfake can take weeks to months.
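To make that data-collection step concrete, here is a minimal sketch in Python. It assumes OpenCV is installed; the file names and frame budget are hypothetical, and tools like DeepFaceLab bundle their own extractor, but the underlying idea is the same: sample the source footage into thousands of stills for training.

```python
# Minimal sketch of the data-collection step (assumption: OpenCV installed;
# "source.mp4" and the frame budget are hypothetical).
import os
import cv2

VIDEO_PATH = "source.mp4"        # footage of the person to be faked
OUTPUT_DIR = "training_frames"   # where the extracted stills go
TARGET_FRAMES = 3000             # within the 1,500-6,000 range cited above

os.makedirs(OUTPUT_DIR, exist_ok=True)
capture = cv2.VideoCapture(VIDEO_PATH)
total = int(capture.get(cv2.CAP_PROP_FRAME_COUNT))
step = max(1, total // TARGET_FRAMES)   # sample evenly across the video

saved = 0
for index in range(total):
    ok, frame = capture.read()
    if not ok:
        break
    if index % step == 0:
        cv2.imwrite(os.path.join(OUTPUT_DIR, f"frame_{saved:05d}.jpg"), frame)
        saved += 1

capture.release()
print(f"Saved {saved} frames to {OUTPUT_DIR}")
```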

[Source: image from “3 THINGS YOU NEED TO KNOW ABOUT AI-POWERED “DEEP FAKES” IN ART & CULTURE”. Creator unknown]

But simpler software exists, like Avatarify or Reface. These require merely a single image of the clone’s face, which is then applied to a video, or can even mimic your facial expressions in real time. The increasing flexibility of such apps will make it easier to create your own deepfake media in the future, without it being traceable to one specific piece of software. But as the accessibility and democratisation of deepfake rise, so too does its chance of being detected.

The first line of defence 

Approaches to deepfake detection are manifold. One technique analyses colouration in human skin to determine the presence of blood under the tissues (Hernandez-Ortega et al. 1); another spots inconsistencies in the corneal specular highlights of AI-generated faces (Hu et al.). Also available are classic digital image forensic techniques that use deep learning to examine details not perceivable by humans. Multimodal approaches can compare emotions, voices and facial expressions to flag potential fakes (Mittal et al. 1). One example, Gfycat, provided a rudimentary approach using contextual comparison, such as the presence of certain background footage, to indicate the probability of a fake.
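As a rough illustration of the deep-learning forensics family, the sketch below fine-tunes a pretrained CNN to separate real face crops from fake ones. PyTorch is assumed, and the "faces/train" folder layout with fake/ and real/ subfolders is a hypothetical setup, not any published detector.

```python
# Minimal sketch of a frame-level deepfake classifier (PyTorch assumed;
# the dataset layout and settings are hypothetical).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("faces/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)   # ImageFolder labels alphabetically: fake=0, real=1

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                          # a short run, just to show the loop
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```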

However, these are all specific methods of deepfake detection. And, as in any good cat-and-mouse relationship, each specific detection method is quickly thwarted by techniques created to evade it (Huang et al. 1). Naturally, this advances the quality of deepfakes, which grow better and smarter. Indeed, because detection algorithms are modular and reproducible, they can even be embedded into the learning process of deepfake generators, making the output even more elusive (Carlini et al. 1). Far from extinguishing the fire, deepfake detection could be pouring gasoline over it.
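To see how cheap such evasion can be, here is a minimal sketch of a white-box attack on the classifier from the previous sketch. It is in the spirit of a targeted FGSM perturbation, not the exact method of Carlini et al.; `fake_frame` is a hypothetical tensor, and `model` and `criterion` are reused from above.

```python
# Minimal sketch of a white-box evasion attack on the detector above:
# perturb a fake frame so the classifier leans towards "real".
# `fake_frame` is a hypothetical 3x224x224 tensor with values in [0, 1].
import torch

def evade(model, criterion, fake_frame, epsilon=0.01):
    model.eval()
    frame = fake_frame.detach().clone().requires_grad_(True)
    target = torch.tensor([1])                  # index of the "real" class above
    loss = criterion(model(frame.unsqueeze(0)), target)
    loss.backward()
    # Step down the gradient of the "real" loss: the detector grows more
    # confident the frame is genuine, while the pixel change stays tiny.
    adversarial = frame - epsilon * frame.grad.sign()
    return adversarial.clamp(0, 1).detach()
```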

There is great demand for general detection models that are less susceptible to countermeasures and can be applied to social media. This prompted Facebook to open a public challenge on Kaggle, inviting participants to develop the most general and robust detection technique. It offered the largest deepfake footage database to date (Dolhansky et al. 1), created explicitly for the challenge and still used as a main reference point for research. The challenge did not yield the result Facebook wanted: the winning model was able to detect deepfakes ‘in the wild’ with an accuracy of only 65%.

One recent, significant initiative was the Microsoft Video Authenticator, part of Microsoft’s Defending Democracy Program to fight political disinformation in the upcoming US election. It combines digital image forensics with digital watermarking technology, offering an approach that detects manipulated content not only with AI but also through hashes in metadata, which users can check via a browser extension to verify a video’s origin and authenticity. But Microsoft also concedes difficulty in deepfake detection. Considering AI alone insufficient to fight fake information, it calls for stronger methods within a broader approach.
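The hashing half of that approach boils down to comparing cryptographic digests. The sketch below illustrates only that general idea, not Microsoft’s actual implementation: a publisher registers the SHA-256 hash of the original file, and anyone can recompute the hash of a copy and compare. File names are hypothetical.

```python
# Minimal sketch of hash-based provenance checking (illustrative only,
# not Microsoft's implementation). File names are hypothetical.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

registered = sha256_of("original_speech.mp4")   # digest published by the source
candidate = sha256_of("downloaded_copy.mp4")    # the clip you encountered online

if candidate == registered:
    print("Matches the registered original.")
else:
    print("Differs from the registered original: possibly re-encoded or tampered with.")
```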

Blocking danger at the source 

One viable and equally novel technology is blockchain — a shared, distributed database. Its inherent attributes, such as non-forgeability, traceability, transparency and collective maintenance, make it a strong candidate to fight deepfake at the source.

[Source: ai.51cto.com]

Across the block structure, information like images, videos and audio can be cryptographically signed, geotagged and timestamped to establish its origins (Nassar 2). Imagine an original video sitting at the top of a hierarchical tree, whose branches connect each and every manipulated or multiplied video to its original source (Nguyen 8). This kind of “verified capture” requires applications to perform a number of checks, ensuring that transmitted data conforms with the source material. Researchers can compare the target data with the original data, and thus determine whether the object has been tampered with (Nguyen 8).
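A minimal sketch of this “verified capture” idea follows (illustrative only, not any particular blockchain platform): each record stores a media digest plus metadata, and every record’s hash folds in the hash of its predecessor, so tampering anywhere breaks all later links. All field values are hypothetical.

```python
# Minimal sketch of "verified capture" as a hash chain (illustrative only).
import hashlib
import json
import time

def record_hash(record):
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

chain = []

def register(media_sha256, geotag, prev_hash):
    record = {
        "media_sha256": media_sha256,   # digest of the image/video/audio file
        "geotag": geotag,
        "timestamp": time.time(),
        "prev_hash": prev_hash,         # link back to the parent record
    }
    chain.append({"record": record, "hash": record_hash(record)})
    return chain[-1]["hash"]

def verify_chain(chain):
    # Recompute every record's hash and check each link to its predecessor.
    for i, entry in enumerate(chain):
        if record_hash(entry["record"]) != entry["hash"]:
            return False
        if i > 0 and entry["record"]["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

original = register("a3f1...", "52.37N,4.90E", prev_hash=None)       # source video
derivative = register("9c0d...", "52.37N,4.90E", prev_hash=original)  # edited copy
print("chain intact:", verify_chain(chain))
```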

Blockchain could bolster AI to solve many of the concerns around deepfake. One of the earliest proposals to integrate AI and blockchain was for data analysis. Much like centralised data sets, blockchain offers AI a massive and transparent base to collect and parse from, possibly fuelling better insights and solutions. 

But this comes with limitations. Currently, the technology’s scalability is stunted by the computational resources and memory required to combat deepfakes. And importantly, whilst allowing you to check whether a media item has been tampered with, blockchain is intrinsically unable to determine whether that media is genuine in the first place. As Corin Faife, senior coordinator at Witness, put it: “It’s not an ultimate guarantee of truth and shouldn’t be taken as an endorsement of the content itself.”

Rising to the threat 

Enough concern exists about the destabilising potential of deepfake to prompt watchful action from platforms. In 2018, Reddit banned its deepfake subreddit of almost 100,000 members, and Facebook updated its content rules in early 2020 to prohibit content that has been edited or synthesised “in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.” Twitter, too, announced a raft of new measures this year to combat ‘synthetic and manipulated media’, where, depending on the nature and motive of the content, it may either be removed or labelled.

A table explaining Twitter’s new policy to combat deepfake [Source: Twitter, “Building rules in public: Our approach to synthetic & manipulated media”]

Government bodies across the world have also taken policy action. On a federal level, the DEEPFAKES Accountability Act was introduced in the US Congress. This would criminalise deepfakes unless the creator explicitly discloses that the video has been tampered with, and would allow victims of deepfakes to take legal action against creators who use their image. With the Deepfake Report Act of 2019, the Senate instructed Homeland Security to regularly compile a report on deepfake, as a basis for continuously amending the law. And on a state level, many states have instituted stringent laws surrounding the creation and spread of deepfakes. In 2019, for example, Texas notably became the first state in the US to ban the creation and spread of deepfake content intended to influence an election.

Over the ocean in Europe, the birthplace of the formidable GDPR framework, the EU has its ears pinned back. In 2018, the Commission published Tackling Online Disinformation: A European Approach, a communication that coordinates the efforts of EU member states in limiting and responding to disinformation threats such as deepfake. In a paper published in late 2019, the German government recognised deepfake as a technology that could damage trust in public information, with the effect of possibly undermining political discourse. The paper names the several government and academic institutions tasked with forensically detecting deepfake.

Separating hype from reality 

As tempting as it may be to ring the alarm bells, one academic believes the credible threat of deepfake shrivels in comparison to less hypothetical issues. Claire Wardle, co-founder of First Draft and a fellow at Harvard University, positions deepfake as the latest in a long history of the ‘weaponization of context’. She stresses that content doesn’t need to misuse sophisticated AI in order to cause political damage.

One example is the viral video from 2019 of Nancy Pelosi seemingly drunk whilst giving a speech (Paris and Donovan 30). The video was later proved to have been slowed to 75% of its original speed, giving the impression of slurred speech. Indeed, one winning element of the Brexit campaign was its imagery — hitting the public with a fusillade of fake images and videos that seemed to show illegal immigrants trying to enter Britain, many of which were either staged or misappropriated. With deepfake, Wardle claims, the real danger lies in the fog surrounding the technology — where guilty people can exploit widespread skepticism to dismiss the truth as fake, a danger Chesney and Citron term the ‘liar’s dividend’.

In another example, it was an allegation of deepfake, rather than deepfake itself, that helped trigger an attempted coup in the African nation of Gabon. Here, the opposition declared the sitting President’s New Year’s speech a deepfake video, using this as justification for military intervention. The cyberdefense firm McAfee ran the video through two forensic tests, both returning a nearly 92% probability that the video was real. Whilst most discussions swirl around the harm deepfake could cause in the West, this example points to another danger: the potential of deepfake to destabilise politically fragile developing countries. This, however, lies outside the scope of our scenario.

A high price

The nightmare scenario we have painted, that of a political deepfake detonating hours before polling day, is a captivating one. It tickles our sense of science fiction, a dystopian intrusion into our democracy. But by the admission of its biggest proponents in tech, AI alone may not be capable of detecting a malicious deepfake the moment it hits social media. Blockchain may offer limited help here, but if the threat of a deepfake attack is credible, then the strongest defence for now is likely deterrence. Any group willing to launch such a calculated, heinous attack must be prepared to break a growing litany of laws, set by a broad and intensifying coalition of platforms and governments. As to whether this could still happen — luckily, we won’t have to wait too long to find out.


Bibliography

Carlini, Nicholas, and Hany Farid. “Evading Deepfake-Image Detectors with White-and Black-Box Attacks.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. 2020. 

Chesney, Robert, and Danielle Citron. “Deepfakes and the new disinformation war: The coming age of post-truth geopolitics.” Foreign Affairs 98 (2019): 147-155.

Chintha, Akash, et al. “Recurrent Convolutional Structures for Audio Spoof and Video Deepfake Detection.” IEEE Journal of Selected Topics in Signal Processing 14.5 (2020): 1024-1037. 

“Code of Practice on Disinformation.” European Commission, 26 Sept. 2018.

“Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions.” European Union Law, 26 Apr. 2018.

Dolhansky, Brian, et al. “The deepfake detection challenge dataset.” arXiv preprint arXiv:2006.07397 (2020).

Hernandez-Ortega, Javier, et al. “DeepFakesON-Phys: DeepFakes Detection based on Heart Rate Estimation.” arXiv preprint arXiv:2010.00400 (2020). 

Hsiang, Emily. “Deepfake: An Emerging New Media Object in the Age of Online Content.” (2020). 

Hu, Shu, Yuezun Li, and Siwei Lyu. “Exposing GAN-generated Faces Using Inconsistent Corneal Specular Highlights.” arXiv preprint arXiv:2009.11924 (2020). 

Huang, Yihao, et al. “FakePolisher: Making DeepFakes More Detection-Evasive by Shallow Reconstruction.” arXiv preprint arXiv:2006.07533 (2020).

Jagati, Shiraz. “Deep Truths of Deepfakes — Tech That Can Fool Anyone.” Cointelegraph, 22 Dec. 2019.

Kietzmann, Jan, et al. “Deepfakes: Trick or treat?.” Business Horizons 63.2 (2020): 135-146.

Kuznetsov, Nikolai. “Blockchain and AI Bond, Explained.” Cointelegraph, 6 July 2019.

Love, Dylan. “Blockchain Might Be a Silver Bullet for Fighting Deepfakes.” Cointelegraph, 16 Dec. 2019.

Maras, Marie-Helen, and Alex Alexandrou. “Determining authenticity of video evidence in the age of artificial intelligence and in the wake of Deepfake videos.” The International Journal of Evidence & Proof 23.3 (2019): 255-262. 

Mittal, Trisha, et al. “Emotions Don’t Lie: A Deepfake Detection Method using Audio-Visual Affective Cues.” arXiv preprint arXiv:2003.06711 (2020). 

Nassar, Mohamed, et al. “Blockchain for explainable and trustworthy artificial intelligence.” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 10.1 (2020): e1340.

Nguyen, Thanh Thi, et al. “Deep learning for deepfakes creation and detection.” arXiv preprint arXiv:1909.11573 (2019).

Paris, Britt, and Joan Donovan. “Deepfakes and Cheap Fakes.” United States of America: Data & Society (2019).

Sinha, Sritanshu. “As Deepfake Videos Spread, Blockchain Can Be Used to Stop Them.” Cointelegraph, 9 Oct. 2019.

Vaccari, Cristian, and Andrew Chadwick. “Deepfakes and disinformation: exploring the impact of synthetic political video on deception, uncertainty, and trust in news.” Social Media + Society 6.1 (2020): 2056305120903408.
