Fawkes: “Cloaking” Your Photos to Elude Facial Recognition Systems

September 27, 2020

For too long we have lived under the fear brought by deepfakes. It is time, then, to meet the photo-editing tool Fawkes. Because deepfakes are generally built from images, Fawkes can fool the AI programs behind them by making subtle adjustments to our photos, preventing them from being used by facial recognition systems.

Figure 1: The Guy Fawkes mask, from which this technology takes its name. (Ahmed Zayan on Unsplash)

We are all aware of how much information a photo can contain. Unfortunately, we also know the risks we run when uploading personal photos to social networks. Items on social media, such as photos, may be co-owned by multiple users (Such et al. 3821). With the development of facial recognition technology, both the value of photo information and the risk of leakage and misuse have grown. The deepfake, which takes its name from an anonymous Reddit user who went by "deepfakes" (deep learning + fakes) (Kietzmann et al. 136), produces forgeries through deep learning and can fabricate scenes such as celebrities performing in pornography, with serious social consequences.

In 2019, the face-swapping application ZAO spread at thunderous speed, and it was pulled from the App Store just as quickly over privacy concerns. Earlier this year, Clearview AI was found to have scraped over three billion photos of people from the Internet and social media without their knowledge or permission, using them to build facial recognition models of millions of citizens. The privacy situation is becoming more urgent, and people are growing more and more anxious.

Some of the companies involved have responded; Reddit, the birthplace of the deepfake, now bans impersonation content, including deepfakes. But the problem has not been effectively solved. In such an urgent situation, the arrival of Fawkes is a promising start toward better protecting information security, and its emergence may bring about a turnaround in the current situation.

What is Fawkes?

Fawkes was developed by the SAND Lab at the University of Chicago. According to the developers, "Fawkes poisons models that try to learn what you look like, by putting hidden changes into your photos, and using them as Trojan to deliver that poison to any facial recognition models of you" (Shan et al.). Specifically, it exploits the oversensitivity of AI: Fawkes adds noise that has little effect on human vision but a large effect on the machine model, modifying the image's pixels slightly, for example by turning down the pixels around the eyes.
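To make the mechanism concrete, here is a minimal toy sketch of the idea in Python. It is emphatically not the Fawkes algorithm itself: the real system (Shan et al.) optimizes a perturbation against a deep feature extractor under a perceptual-similarity budget, whereas this sketch substitutes a random linear map as a stand-in "feature extractor" so the gradient can be written by hand. Every name and number here (features, budget, the image size) is illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for a face feature extractor: a fixed random linear map.
    # The real Fawkes system uses a deep network here; this is only a toy.
    W = rng.normal(size=(128, 64 * 64))  # flattened 64x64 image -> 128-d embedding

    def features(img):
        return W @ img.ravel()

    original = rng.uniform(size=(64, 64))  # stand-in for the user's photo
    target = rng.uniform(size=(64, 64))    # a dissimilar "target" identity

    delta = np.zeros_like(original)  # the cloak: a small additive perturbation
    budget = 0.03                    # max per-pixel change, keeps the cloak imperceptible
    lr = 1e-5                        # gradient step size

    for _ in range(200):
        # Gradient of ||features(original + delta) - features(target)||^2 w.r.t. delta
        # (analytic for a linear map; the real system would backpropagate instead).
        diff = features(original + delta) - features(target)
        grad = (W.T @ diff).reshape(original.shape)
        delta -= lr * grad                       # pull the embedding toward the target identity
        delta = np.clip(delta, -budget, budget)  # project back into the perceptual budget

    cloaked = np.clip(original + delta, 0.0, 1.0)
    print("max pixel change:", np.abs(cloaked - original).max())
    print("feature distance before:", np.linalg.norm(features(original) - features(target)))
    print("feature distance after: ", np.linalg.norm(features(cloaked) - features(target)))

Even in this toy version, the design point survives: the perturbation is optimized to drag the photo's embedding toward a different identity, while the clipping step keeps every pixel change too small for a human to notice.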

Users need only a few minutes to run a photo through the program before posting it on social media. The result looks exactly like you but, to a machine, is not you: facial recognition models cannot recognize it. Fawkes achieves this with tiny, pixel-level changes the developers call "image cloaking"; human eyes cannot tell the difference, but the changes give facial recognition technology a hard time. In this way, Fawkes protects the photos without affecting how the user looks.
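For readers who want to try it, the SAND Lab distributes Fawkes as desktop binaries and as a Python package. The two-line invocation below reflects my reading of the project's documentation at the time of writing; the exact package name, flags, and modes are assumptions that may differ between releases, so check the project page before relying on them.

    # Assumed invocation; verify the current flags on the Fawkes project page.
    pip install fawkes
    fawkes -d ./my_photos --mode low  # writes cloaked copies alongside the originals

Higher protection modes reportedly distort the image more in exchange for a stronger cloak, so a low mode is the sensible default for everyday photos.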

Figure 2: Photos of the research team; it is quite hard to tell the cloaked versions from the originals. (University of Chicago, SAND Lab)

Imagining a future with Fawkes

With Fawkes, processed photos can be considered quite safe: researchers found the technology was nearly 100 percent effective at blocking facial recognition models from Amazon, Microsoft, and other companies. And for users who have already uploaded many uncloaked photos, it is never too late to start, since data are aggregative; they pile up (Gitelman and Jackson 8). Every time you post a new cloaked photo, its processed features are learned by facial recognition systems, so once you have uploaded enough of them, they can eventually confuse the machine's perception of your appearance. We are turning machine learning against itself.

Most notably, using Fawkes does not take much time or effort. Just by running your photos through the program for a few minutes, we can safely post them on social media without worrying about unauthorized companies collecting them for other uses. At least for now, this looks like quite an ideal solution.

Existing obstacles

Fawkes has already been downloaded more than 100,000 times, but that is not yet enough to obfuscate the algorithms and fight deepfakes at scale. One reason is that not enough people know of Fawkes as a tool for blocking facial recognition: searching for "Fawkes" mostly returns results about Fawkes the phoenix in Harry Potter and the Guy Fawkes mask in V for Vendetta. Moreover, the search engines and social media platforms we rely on most to find new information are exactly the parties that benefit most from collecting our data, including our photos. Their algorithm-based recommendations amplify the content we already engage with, making news like this even harder to reach for people who seldom pay attention to it.

Moreover, as the team mentioned, there may never be a mobile app, because the program "requires significant computational power that would be challenging for the most powerful mobile devices" (Shan et al.). This, too, has hindered the program's wider adoption.

Figure 3: Facial Recognition Art Mural. (Creative Commons Search)

What happens next?

Deepfakes are drawing more and more public concern; the good news is that solutions keep appearing. With the help of Fawkes and other technologies, such as browser plugins that reveal which third parties obtain our data, we can protect our privacy to a certain extent. Change is happening, if at a slower pace than we would like. In the meantime, it is entirely reasonable to spend a tiny bit of time in exchange for information security and to resist those unauthorized models.

However, Fawkes cannot solve the problem of photo privacy for good, and it will take time for the technology to become ubiquitous. At the same time, we should not focus only on countermeasures to problems like deepfakes; establishing a sense of responsibility at the relevant companies is also an important part of solving privacy problems in the long run.

References

Cole, Samantha. “We Are Truly Fucked: Everyone Is Making AI-Generated Fake Porn Now.” Tech by Vice, 25 Jan. 2020, https://www.vice.com/en_us/article/bjye8a/reddit-fake-porn-app-daisy-ridley.

Feathers, Todd. “Researchers Want to Protect Your Selfies From Facial Recognition.” Tech by Vice, 10 Mar. 2020, https://www.vice.com/en_us/article/m7qxwq/researchers-want-to-protect-your-selfies-from-facial-recognition.

Gitelman, Lisa, editor. “Raw Data” Is an Oxymoron. The MIT Press, 2013.

“Guy Fawkes.” Wikipedia, 19 Aug. 2020, https://en.wikipedia.org/w/index.php?title=Guy_Fawkes&oldid=973894340.

Hasan, Haya R., and Khaled Salah. “Combating Deepfake Videos Using Blockchain and Smart Contracts.” IEEE Access, vol. 7, 2019, pp. 41596–606. doi:10.1109/ACCESS.2019.2905689.

Hill, Kashmir. “The Secretive Company That Might End Privacy as We Know It.” The New York Times, 18 Jan. 2020, https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html.

Hill, Kashmir. “This Tool Could Protect Your Photos From Facial Recognition.” The New York Times, 3 Aug. 2020, https://www.nytimes.com/2020/08/03/technology/fawkes-tool-protects-photos-from-facial-recognition.html.

Kietzmann, Jan, et al. “Deepfakes: Trick or Treat?” Business Horizons, vol. 63, no. 2, Mar. 2020, pp. 135–46. doi:10.1016/j.bushor.2019.11.006.

Marks, Paul. “Blocking Facial Recognition.” Communications of the ACM, 25 June 2020, https://cacm.acm.org/news/245844-blocking-facial-recognition/fulltext.

Shan, Shawn, et al. “Fawkes: Protecting Privacy against Unauthorized Deep Learning Models.” Proceedings of the 29th USENIX Security Symposium, 2020.

Shan, Shawn, et al. Image “Cloaking” for Personal Privacy. http://sandlab.cs.uchicago.edu/fawkes/#paper. Accessed 27 Sept. 2020.

Such, Jose M., et al. “Photo Privacy Conflicts in Social Media: A Large-Scale Empirical Study.” Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, ACM, 2017, pp. 3821–32. doi:10.1145/3025453.3025668.

UChicago CS. “UChicago CS Researchers Create New Protection Against Facial Recognition.” Department of Computer Science, The University of Chicago, 3 Aug. 2020, https://www.cs.uchicago.edu/news/article/fawkes-cloaking/.
