Pay or you are a porn star: The problem with RansomFakes
When deepfakes of politicians first emerged in 2017, people assumed the new technology would be a threat to democracy. In a world in which we believe everything we see, the Katie Hill scandal has shown how intimate footage, whether real or fabricated, can sway an audience and destroy the reputation of a political representative.
According to J.M. Porup, a writer at CSO, deepfakes are not always “about gaslighting a population, but about bullying or harassment”. Pornography is a particularly effective vehicle for such harassment: the Deeptrace report states that 96% of the deepfake videos on the Internet are pornographic in nature (Wang 2019). The harassment Porup mentions extends beyond famous people seemingly caught in taboo behavior. Deepfakes are all fun and games, until the target becomes someone on a far more personal level, namely: you.
Caldwell et al. state in AI-Enabled Future Crime (2020) that deepfakes, or audio/video impersonation, rank as the “overall most-concerning type of crime out of all those considered” in their broad survey (6). A fairly new form of this crime combines the concept of ransomware with deepfakes of its victims. Paul Bricman, writing on Medium, calls this a RansomFake, which he describes as the lovechild of the two concepts (Bricman 2019). This kind of ransomware does not aim at your files, but at your reputation. It is like an advanced version of the ‘Shut Up and Dance’ episode of the Netflix series Black Mirror, in which a teenage boy is pressured into performing certain activities out of fear that a sexual video of him will be released. In the case of a RansomFake, the sexual video never actually happened: it is fabricated entirely with AI.
The plan is simple: make a harmful deepfake video of a victim of choice, send it to the victim anonymously via social media and, last but not least, threaten to spread the video across the Web unless a large sum of money is paid. As if that were not bad enough: in this day and age, anyone can do it.
Welcome to your own Black Mirror episode.
How does a RansomFake work?
RansomFakes are built on the same techniques as ordinary deepfakes, which rely on artificial intelligence and, more specifically, on deep learning (Fletcher 458). Face detection algorithms learn to analyze the key features of a face by training on images labeled with facial landmarks, so that those features can then be manipulated (Bricman 2019). This technique powers the funny face filters of apps like Snapchat, but it is also the basis of face-swapping (Ibid). The material of the victim needed for the face-swap can be retrieved from the countless social media profiles openly available on the Internet. Generative adversarial networks are used in the deepfaking process, making the fake material nearly indistinguishable from real videos (Fletcher 459; Caldwell et al. 5).
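To make the landmark idea concrete, here is a minimal, illustrative sketch of that first step in Python, using the open-source MediaPipe library as one example of many; the file names are placeholders:

```python
import cv2
import mediapipe as mp

# Load a photo ("face.jpg" is a placeholder) and detect its facial landmarks.
image = cv2.imread("face.jpg")
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

with mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as mesh:
    results = mesh.process(rgb)

# Draw each detected landmark: these key points (eyes, nose, mouth, jawline)
# are exactly the features that face filters and face-swapping pipelines manipulate.
if results.multi_face_landmarks:
    h, w, _ = image.shape
    for lm in results.multi_face_landmarks[0].landmark:
        cv2.circle(image, (int(lm.x * w), int(lm.y * h)), 1, (0, 255, 0), -1)

cv2.imwrite("landmarks.jpg", image)
```

A few lines like these are enough to map a face; the face-swapping and GAN stages build on that map.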
The next step is to send the RansomFake to the victim on social media. According to Bricman, the footage shows the victim performing “an incriminatory or intimate action”, and the sender “threatens to distribute it unless a ransom is paid” (Bricman 2019).
More people, more money
A dangerous aspect of the RansomFake is that the process can easily be automated and scaled. It is not that hard to write a script that finds and retrieves material from open sources like social media (Bricman 2019). APIs are made to facilitate programmatic access to profiles and their content. Connected to an API, a programmer can obtain images and videos by simply making a web request (Ibid). A video can then automatically be generated and sent to the victim on social media. In return for cryptocurrency the deepfake video is deleted; if the victim does not pay the ransom, the video is posted.
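To illustrate how little code such programmatic access requires, here is a hedged sketch in Python; the endpoint, token and JSON fields are hypothetical stand-ins, since every real platform’s API differs in detail:

```python
import requests

# Hypothetical endpoint and response format: real social media APIs vary,
# but the pattern (one authenticated web request per profile) is the same.
API_URL = "https://api.social-example.com/v1/users/{username}/media"

def fetch_public_media(username, token):
    """Return the URLs of a profile's publicly visible images and videos."""
    response = requests.get(
        API_URL.format(username=username),
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()
    return [item["url"] for item in response.json()["media"]]
```

One web request per profile, a loop over a list of usernames, and the harvesting stage is done.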
Once this code exists, large numbers of social media users can be targeted at the same time. The more people, the more money. And all of this without the fuss of traditional ransomware: everything happens over public information channels, and no computer needs to be hacked.
The Internet: the place to DIY
To most people, the AI techniques used to deepfake footage seem a world away. For a long time, only (amateur) programmers specialized in deep learning could create these kinds of videos, because of the expertise required.
This changed in 2017, when a Reddit user created fake porn videos featuring celebrities and politicians (Huang et al. 3-4). Fletcher notes that after this event, the Reddit user shared the code used to create them, which led to the “micro-economy of face-swapped videos” called deepfakes (461). Computer scientist Sven Charleer has acknowledged the exploitative and sexist potential of deepfaking, but added the nuance that “Any tool can be used for evil“ (Ibid 462).
In 2018, deepfaking became accessible to everyone when free AI desktop apps like FakeApp entered the market, making the process of creating deepfake videos easy enough for users without any coding knowledge (Ibid 463). The user just has to upload two video clips to the app, which deepfakes the content within hours (Ibid).
If anyone can do it, then everyone is a potential victim. You no longer have to be a celebrity to unwillingly star in a porn movie. Personal revenge has never been more effective. But more importantly: the possibility that anyone can extort money from dozens of people on the Internet is a real threat to the privacy of every social media user.
We live in a digital world in which we often cannot tell what is real and what is fake. Deepfakes reinforce that uncertainty. Using this new technology in a new form of cybercrime does not just extract money from a lot of people; it destroys their lives on a personal level.
For the protection of social media users themselves, privacy settings are key. Keep your photos private, and let Black Mirror remain just a show instead of potentially a part of your life.
Bibliography
Bricman, Paul. “DeepFake Ransomware.” Medium, 2 Feb. 2019, medium.com/@paubric/deepfake-ransomware-oaas-part-1-b6d98c305cd9.
Caldwell, M. et al. “AI-Enabled Future Crime.” Crime Science 9.1 (2020): 1–13.
Fletcher, John. “Deepfakes, Artificial Intelligence, and Some Kind of Dystopia: The New Faces of Online Post-Fact Performance.” Theatre Journal 70.4 (2018): 455–471.
Huang, K. et al. “Casting the Dark Web in a New Light.” MIT Sloan Management Review 60.4 (2019): 1–9.
Porup, J.M. “How and Why Deepfake Videos Work — and What Is at Risk.” CSO, 10 Apr. 2019, https://www.csoonline.com/article/3293002/deepfake-videos-how-and-why-they-work.html.
Wang, Chenxi. “Deepfakes, Revenge Porn, And The Impact On Women.” Forbes, 1 Nov. 2019, https://www.forbes.com/sites/chenxiwang/2019/11/01/deepfakes-revenge-porn-and-the-impact-on-women/#1b3983fb1f53.