The Liar’s Dividend: dismissing reality as fake
From face-swapping applications to uncanny videos of politicians, everybody has, knowingly or unknowingly, seen a fake video. They are ubiquitous and becoming increasingly realistic, which makes them harder to identify as ‘fake’. This poses a new ethical problem: it becomes possible to dismiss reality as fake, a phenomenon known as the liar’s dividend.
The manipulation of media
“Analog and digital content manipulations are not new, and the art of doctoring content is as old as the media industry itself” (Kietzmann 145).
The first known fake photograph was made by Hippolyte Bayard in 1840, just one year after the official ‘start’ of photography. In it he staged his own drowning, accompanied by a note explaining why he had drowned himself (O’Hearn). The photograph may not have been doctored, but it did convey an altered truth to its viewers.
A newer form of manipulation is CGI, short for Computer-Generated Imagery. CGI has been used in cinema for years, but it is labour-intensive and not accessible to everyone.
Deepfake technology, however, is on a whole new level compared to these older ways of altering images. The techniques behind deepfakes are continually evolving and becoming more realistic every day, and, unlike other manipulation methods, they are easily accessible and can be used by anyone (Kietzmann 136-137).
What are ‘Deepfakes’?
The emerging popular term ‘deepfake’, a portmanteau of ‘deep learning’ and ‘fake’, is commonly used for fake images or videos made with a set of machine-learning techniques that synthesise new visual products. The most common use of this technology is replacing someone’s face in a video (Floridi 320), as in the face-swap application ‘Zao’, which lets users convincingly swap their faces with characters in a movie or TV show (France-Presse). There are even YouTube channels dedicated to making deepfake videos, most notably one named Ctrl Shift Face. In the video below, actor Sylvester Stallone is face-swapped with the lead role in Home Alone: an uncanny sight that exemplifies the possibilities of deepfake technology.
There is a distinction between four kinds of facial manipulation, ordered from higher to lower levels of intensity and realism:
- Entire face synthesis: creating an entire non-existent face with a high level of realism.
- Identity swap: replacing the face of one person with the face of another person.
- Attribute manipulation: modifying attributes such as age, skin colour, etc.
- Expression swap: modifying the facial expressions of a person (Tolosana et al. 3).
Changing or creating a new narrative
“Deepfakes are scary because they allow anyone’s image to be co-opted, and call into question our ability to trust what we see” (Benjamin).
Fake media such as deepfakes have the power to manipulate facts. A familiar precedent is the graphical visualisation: in statistical graphics it is possible to manipulate facts by using a visual representation that is not consistent with the numerical data (Tufte 55). Even if the real numbers are also given, there is no guarantee that people will look at them, and some will rely on the falsified graphic instead.
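Tufte quantifies this kind of graphical distortion with his ‘Lie Factor’: the ratio of the size of the effect shown in a graphic to the size of the effect in the data, where a value near 1 indicates a faithful graphic and anything much larger indicates exaggeration. A minimal sketch in Python, with purely illustrative numbers (not taken from Tufte):

```python
def lie_factor(effect_in_graphic: float, effect_in_data: float) -> float:
    """Tufte's Lie Factor: how much a graphic exaggerates (or understates)
    the effect present in the underlying data. A value of 1 means the
    graphic is faithful; much greater than 1 means exaggeration."""
    return effect_in_graphic / effect_in_data

# Illustrative example: the data rise from 100 to 110 (a 10% change),
# but a truncated y-axis makes the bar grow from 5 mm to 30 mm (a 500% change).
shown = (30 - 5) / 5        # relative change depicted in the graphic
actual = (110 - 100) / 100  # relative change in the data
print(lie_factor(shown, actual))  # → 50.0
```

The same logic carries over to deepfakes: the representation and the underlying reality diverge, and viewers tend to respond to the representation.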
An interesting deepfake video was posted by ‘Extinction Rebellion’, an activist group in Belgium, in which Sophie Wilmès, the Prime Minister of Belgium, told the nation that there was a direct relation between the human exploitation of nature and the COVID-19 pandemic (XR). Even though the group explicitly stated that this was a speech Wilmès could have given but did not give, many people still believed it was real. Just as with statistical graphs, people relied on the altered data in the video rather than the real data that was also provided.
An important factor when watching videos or viewing images is knowing the context. As Miltner and Highfield note in their article about GIFs, decontextualisation has the effect of creating a new or partial narrative (Miltner 5). The same decontextualisation happens with deepfakes, but in this case it is harder to determine what the real context was, due to the increasing realism of the technology. This brings us to an emerging ethical problem: ‘the Liar’s Dividend’.
The Liar’s Dividend
The mere existence of technology that can make realistic fakes undermines confidence and trust (Benjamin). People who are caught on video can claim that sophisticated deepfake technology was used to fabricate the footage (The Poynter Institute). Deepfakes create an environment in which it is unclear what is fake and what is real, and this is already a potent threat in parts of the world (Lamb).
An example of the liar’s dividend in action occurred in the United States. Prior to the 2016 presidential election, a 2005 video surfaced of Donald Trump having a vulgar exchange about women. While he apologised for it at the time, he has since reportedly questioned the authenticity of the video (Stewart).
Another prominent example is a gay sex tape of Malaysia’s Minister of Economic Affairs that surfaced in 2019. The minister and the prime minister dismissed it as a deepfake. Thanks to this plausible deniability, the minister escaped consequences in socially conservative Malaysia (Lamb).
How can we possibly know what to believe if there is a widely accessible technology that can create fake videos with an ever higher level of realism? According to Benjamin, it is not the deepfake videos themselves that are the real problem, but the underlying social structure, which he calls a ‘post-trust society’ (Benjamin). We live in a society where calling news fake is normal; now, more than ever, trusting the media is not the baseline.
“The timing is perfect. At a time of much-touted fake news, deepfakes will add a powerful tool to fool voters, buyers, and competitors, among others” (Kietzmann 145).
Deepfakes give politicians, activists and others a new weapon, not just to doctor videos, but also to dismiss their documented actions as fake. Now that fake videos and deepfake applications have become ubiquitous, we need a greater understanding of deepfakes and their effects on society.
Works Cited
Benjamin, Garfield. ‘Deepfake Videos Could Destroy Trust in Society – Here’s How to Restore It’. The Conversation. <http://theconversation.com/deepfake-videos-could-destroy-trust-in-society-heres-how-to-restore-it-110999>.
Ctrl Shift Face. Home Stallone [DeepFake]. YouTube, 24 Dec. 2019. 25 Sept. 2020. <https://www.youtube.com/watch?v=2svOtXaD3gg&ab_channel=CtrlShiftFace>.
Floridi, Luciano. ‘Artificial Intelligence, Deepfakes and a Future of Ectypes’. Philosophy & Technology, vol. 31, nr. 3, Sept. 2018, pp. 317–321, doi:10.1007/s13347-018-0325-3.
France-Presse, Agence. ‘Chinese Deepfake App Zao Sparks Privacy Row after Going Viral’. The Guardian, 2 Sept. 2019. 23 Sept. 2020. <https://www.theguardian.com/technology/2019/sep/02/chinese-face-swap-app-zao-triggers-privacy-fears-viral>.
Kietzmann, Jan, et al. ‘Deepfakes: Trick or Treat?’. Business Horizons, 2020, doi:10.1016/j.bushor.2019.11.006.
Lamb, Hilary. ‘Sex, Coups, and the Liar’s Dividend: What Are Deepfakes Doing to Us?’. 8 Apr. 2020. 26 Sept 2020. <https://eandt.theiet.org/content/articles/2020/04/sex-coups-and-the-liar-s-dividend-what-are-deepfakes-doing-to-us/>.
Miltner, K., and T. Highfield. ‘Never Gonna GIF You Up’. Social Media + Society, vol. 3, no. 3, 2017. <https://journals.sagepub.com/doi/10.1177/2056305117725223>.
O’Hearn, Meg. ‘Fake News: The Drowning of Hippolyte Bayard’. Artstor. 12 Sept. 2018. 24 Sept 2020. <https://www.artstor.org/2018/09/12/fake-news-the-drowning-of-hippolyte-bayard/>.
Stewart, Emily. ‘Trump Has Started Suggesting the Access Hollywood Tape Is Fake. It’s Not.’ Vox, 28 Nov. 2017. 23 Sept 2020. <https://www.vox.com/policy-and-politics/2017/11/28/16710130/trump-says-access-hollywood-tape-fake>.
‘#TellTheTruthBelgium’. Extinction Rebellion België. 2020. 23 Sept 2020. <https://www.extinctionrebellion.be/nl/vertel-de-waarheid>.
The Poynter Institute. “Liar’s Dividend.” YouTube. 16 May 2019. 23 Sept 2020. <https://www.youtube.com/watch?v=gYCYlAtYXl0&ab_channel=ThePoynterInstitute>.
Tolosana, Ruben, et al. ‘DeepFakes and Beyond: A Survey of Face Manipulation and Fake Detection’. arXiv:2001.00179 [cs], Jun. 2020. arXiv.org, <http://arxiv.org/abs/2001.00179>.
Tufte, E. R.. The Visual Display of Quantitative Information. 2nd ed. Cheshire, Conn: Graphics Press. 2001.