Identity theft and fake news within your phone’s reach: the rise of deepfake tools
What do you mean, deepfake?
Before going any further, the word deepfake needs unpacking. Deepfake is a contraction of two terms: deep learning and fake. A reminder of what deep learning is may also help to grasp the concept fully. Deep learning is a type of artificial intelligence derived from machine learning, in which the machine is capable of learning by itself, as opposed to classical programming, where it limits itself to executing predetermined rules (Deluzarche). The blend of deep learning and fake suggests that this kind of deep learning is somehow tied to fakery, which is what I will expand on now by explaining how this phenomenon started.
In 2014, Ian Goodfellow invented the machine learning technique known as the Generative Adversarial Network (GAN) (Vyas). It is a deep generative model composed of two networks, a generator and a discriminator, trained against each other (Goodfellow et al.). The GAN technique soon became an efficient way to create very convincing deepfakes. The term itself, however, comes from the username of a Reddit user (called ‘deepfakes’) who posted pornographic deepfake videos at the end of 2017 (Goggin). Pornographic deepfakes consist of swapping someone else’s face onto the body of the original porn actor or actress. These videos are now banned from Facebook, Twitter and Pornhub (Hatmaker). Since then, several easier ways to create deepfakes have emerged, making the technique highly accessible; this is also what we call cheap fakes. Cheap fakes can be produced with certain free and easy-to-use programs, or with tools such as Adobe After Effects, which is not particularly difficult to navigate. Most importantly, we have recently been gifted with apps that generate deepfakes, or rather cheap fakes, for us (e.g. FaceSwap) (Paris and Donovan 14).
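To make the generator-versus-discriminator idea concrete, here is a minimal toy sketch of the adversarial training loop, shrunk to one dimension and written in NumPy. Everything specific here (the single affine layers, the learning rate, the target distribution) is my own illustrative assumption, not taken from the cited paper beyond the general scheme: the discriminator learns to tell real samples from generated ones, while the generator learns to fool it.

```python
# Toy 1-D GAN sketch (illustrative assumptions throughout, not production code).
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a Gaussian the generator must learn to imitate.
def sample_real(n):
    return rng.normal(4.0, 1.25, size=(n, 1))

# Generator: maps noise z to data space via a single affine layer.
G_w, G_b = rng.normal(size=(1, 1)), np.zeros((1,))
# Discriminator: logistic regression on a scalar input, outputs P(x is real).
D_w, D_b = rng.normal(size=(1, 1)), np.zeros((1,))

def generate(z):
    return z @ G_w + G_b

def discriminate(x):
    return 1.0 / (1.0 + np.exp(-(x @ D_w + D_b)))

lr, batch = 0.05, 64
for step in range(2000):
    # --- Discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    z = rng.normal(size=(batch, 1))
    real, fake = sample_real(batch), generate(z)
    d_real, d_fake = discriminate(real), discriminate(fake)
    grad_real = d_real - 1            # d/d_logit of -log D(real)
    grad_fake = d_fake                # d/d_logit of -log(1 - D(fake))
    D_w -= lr * (real.T @ grad_real + fake.T @ grad_fake) / batch
    D_b -= lr * (grad_real + grad_fake).mean(0)

    # --- Generator update: push D(fake) -> 1 (fool the discriminator) ---
    z = rng.normal(size=(batch, 1))
    fake = generate(z)
    d_fake = discriminate(fake)
    g_grad = (d_fake - 1) * D_w.T     # backprop through D into G's output
    G_w -= lr * (z.T @ g_grad) / batch
    G_b -= lr * g_grad.mean(0)

samples = generate(rng.normal(size=(1000, 1)))
print(round(float(samples.mean()), 2))  # should drift toward the real mean of 4.0
```

Real deepfake systems use deep convolutional networks on images instead of these one-layer toys, but the adversarial push-and-pull is the same.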
But why should you worry about that?
Deepfakes are a more pressing topic now than ever because today almost anyone can create one. As a result, stories of deepfake-related dramas keep multiplying. Although one of the primary uses of deepfakes is simply joking around with friends and creating funny content, the technology is certainly being employed for more serious purposes, and unfortunately not only positive ones. Deepfakes were brought back to the surface by the media these past two weeks when a new app, ZAO, available only in China, launched on 30 August. It is accessible to any Chinese citizen with an iOS device, and the results look incredibly realistic. A well-known example, posted just below, shows a Chinese user who inserted his own face into various movie scenes featuring Leonardo DiCaprio (Khandelwal).
Deepfakes are not only visual; they can also be purely audio. Overdub, for instance, is a new app that lets users generate new words based on a recording of their own voice. Although still very recent and in need of improvement, these newly created apps could popularise deepfakes, for better or worse (Newman). In fact, deepfakes have already begun raising serious concerns: two weeks ago, the chief executive of a UK energy company transferred €200,000 to a supplier in Hungary on the fraudulent instructions of his supposed boss. The caller was, in fact, an impostor using deepfake software to replicate the tone, voice, accent and cadence of the real boss. Although recent, this is not the first time this type of crime has occurred (Statt).
Fake news 2.0?
We can safely state that deepfake videos, photos and audio clips are a new way to disinform people. They can be used either to trick someone into doing something (as in the example above) or to create highly realistic fake news. Disinformation is now possible on a whole new level and will be harder to disprove. Deepfakes could, for example, trigger a national crisis by faking a dangerous attack, or sway voters if, close to an election, a deepfake video of a candidate saying certain things were released (Porup). Either could have enormous consequences for a country’s future. Although several techniques have been developed to trace deepfakes, such as AI-based detection algorithms, digital provenance solutions and life logs, none of them works well enough yet (Dack).
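To give a flavour of what a detection algorithm can look at, here is a hypothetical sketch of one early heuristic: genuine faces blink every few seconds, while many early deepfakes barely blinked at all, because training photos almost always show open eyes. All the numbers below (thresholds, frame rate, blink rate) are illustrative assumptions of mine, not values from any cited source, and real detectors are far more sophisticated.

```python
# Hypothetical blink-rate heuristic for spotting early deepfakes.
# Input: a per-frame "eye openness" signal (1.0 = fully open, 0.0 = closed),
# as a face-landmark tracker might produce. Thresholds are illustrative.
def count_blinks(eye_openness, closed_thresh=0.2):
    """Count blinks: transitions from open to closed in the signal."""
    blinks, closed = 0, False
    for v in eye_openness:
        if v < closed_thresh and not closed:
            blinks += 1      # eye just closed: start of a blink
            closed = True
        elif v >= closed_thresh:
            closed = False   # eye reopened
    return blinks

def looks_like_deepfake(eye_openness, fps=30, min_blinks_per_min=5):
    """Flag clips whose blink rate is implausibly low for a real person."""
    minutes = len(eye_openness) / fps / 60
    return count_blinks(eye_openness) / max(minutes, 1e-9) < min_blinks_per_min

# Synthetic demo: a "real" clip blinks periodically, a "fake" one never does.
real_clip = ([1.0] * 85 + [0.1] * 5) * 20   # 1 minute at 30 fps, ~20 blinks
fake_clip = [1.0] * 1800                     # 1 minute, no blinks at all
print(looks_like_deepfake(real_clip))        # False
print(looks_like_deepfake(fake_clip))        # True
```

The catch, and the reason such tools "are not working well enough", is that deepfake generators adapt: once blink detection became known, newer models simply learned to blink.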
According to Stover, the rise of deepfakes could mark the debut of an information apocalypse, in which the distinction between fact and fiction becomes ever harder to make. In this new digital age, three factors matter when looking at modern fake news: the speed of its dispersion, its scale of production and how professional the end result looks (McGonagle 206). There is no doubt that deepfakes have considerably improved the last point. Because deepfakes resemble the truth almost identically, it will be up to the public to choose the version they would like to believe, and that could spell the end of objectivity. This could truly mark a turning point for news, politics and privacy (Dack). Users should be extra cautious when selecting their news sources, especially alternative news sites, which are more prone to spreading fake news (Martens et al. 33). And even when a deepfake is proven false, according to Martens et al. it will still be difficult to change readers’ minds because of directional reasoning (34). Directional reasoning occurs when readers deliberately stick with a certain piece of information because it is coherent with their beliefs; it is especially associated with topics like politics and individual and social identity (Martens et al. 34). On social media, the most important trust factor appears to be who shared the story (Sterrett et al. 795). None of these findings is reassuring for the future of disinformation once deepfake content starts circulating.
And last but not least
To finish this blog post, we can highlight how this new trend of deepfakes is gradually becoming a threat to our society. Perhaps in the near future we will develop advanced tools capable of verifying the validity of news. For now, it is important to read, watch and listen with a critical eye. Let’s conclude with some wise words by Barack Obama… Or not.
Dack, Sean. “Deep Fakes, Fake News, and What Comes Next”. University of Washington, 20 Mar. 2019, https://jsis.washington.edu/news/deep-fakes-fake-news-and-what-comes-next/. Accessed 17 September 2019.
Deluzarche, Céline. “Deep Learning”. Futura, (n.d.), https://www.futura-sciences.com/tech/definitions/intelligence-artificielle-deep-learning-17262/. Accessed 20 September 2019.
Goggin, Benjamin. “From porn to ‘Game of Thrones’: How deepfakes and realistic-looking fake videos hit it big”. Business Insider, 23 Jun. 2019, https://www.businessinsider.com/deepfakes-explained-the-rise-of-fake-realistic-videos-online-2019-6. Accessed 21 September 2019.
Goodfellow, Ian, et al. “Generative Adversarial Nets”. University of Montreal, 2014.
Hatmaker, Taylor. “Reddit bans ‘involuntary porn’ communities that trade AI-generated celebrity videos”. Tech Crunch, 8 Feb 2018, https://techcrunch.com/2018/02/07/deepfakes-fake-porn-reddit-twitter-ban/. Accessed 19 September 2019.
Khandelwal, Swati. “Chinese Face-Swapping App ZAO Sparks Privacy Concerns After Going Crazily Viral”. The Hackers News, 03 Sep. 2019, https://thehackernews.com/2019/09/face-swapping-deepfake-zao.html. Accessed 17 September 2019.
Martens, Bertin, et al. “The Digital Transformation of News Media and the Rise of Disinformation and Fake News”. SSRN Electronic Journal, Jan. 2018, https://doi.org/10.2139/ssrn.3164170.
McGonagle, Tarlach. “‘Fake News’: False Fears or Real Concerns?”. Netherlands Quarterly of Human Rights, vol. 35, no. 4, 1 Dec. 2017, pp. 203-209. https://doi.org/10.1177/0924051917738685.
Newman, Jared. “This app will let you deepfake your own voice for podcasting purposes”. Fast Company, 18 Sep. 2019, https://www.fastcompany.com/90405518/this-app-will-let-you-deepfake-your-own-voice-for-podcasting-purposes. Accessed 21 September 2019.
Paris, Britt, and Joan Donovan. “Deepfakes and Cheap Fakes”. Data & Society, 2019, pp. 2-48.
Porup, J.M., “How and why deepfake videos work – and what is at risk”, CSO, 10 Apr 2019, https://www.csoonline.com/article/3293002/deepfake-videos-how-and-why-they-work.html. Accessed 21 September 2019.
Statt, Nick. “Thieves are now using AI deepfakes to trick companies into sending them money”. The Verge, 05 Sep 2019, https://www.theverge.com/2019/9/5/20851248/deepfakes-ai-fake-audio-phone-calls-thieves-trick-companies-stealing-money. Accessed 20 September 2019.
Sterrett, David, et al. “Who Shared It?: Deciding What News to Trust on Social Media”. Digital Journalism, vol. 7, no. 6, 13 Jun. 2019, pp. 783-801. https://doi.org/10.1080/21670811.2019.1623702.
Stover, Dawn. “Garlin Gilchrist: Fighting fake news and the information apocalypse.” Bulletin of the Atomic Scientists, vol.74, no.4, 19 Jun 2018, pp. 283-288. https://doi.org/10.1080/00963402.2018.1486618
Vyas, Kashyap. “Generative Adversarial Networks: The Tech Behind Deepfake and FaceApp”. Interesting Engineering, 12 Aug. 2019, https://interestingengineering.com/generative-adversarial-networks-the-tech-behind-deepfake-and-faceapp. Accessed 18 September 2019.