Coding queer bias: An artist’s response to the lack of diversity in machine learning

On: December 4, 2020

Machine learning technology is known not only to reinforce but even to deepen social bias, thereby fostering discrimination and suppressing diversity in multiple ways. Counterintuitively, in “Zizi – Queering the Dataset”, the artist Jake Elwes shows how machine learning can also afford ambiguity and fluidity. His work can be read as a critique of the lack of represented diversity within society and technology.

Machine learning and coded bias

Nowadays, it is well known that machine learning (ML) technology is prone to reinforcing social bias. Multiple studies have shown how this has already led to fatal consequences for minorities (Chiusi 2020; Eubanks 2018). Cases of racial discrimination have been reported in which people of colour were misidentified, or not detected at all, by facial recognition systems due to their underrepresentation in the training data (Sample 2019). Not only racism can be amplified in this way, but also ableism and sexism (Shew 2020; Zou and Schiebinger 2018), as well as transphobia: cases have been reported in which transgender Uber drivers were prevented from working because facial recognition systems could not match them correctly (Melendez 2018). These are only two out of many cases. Thinking their consequences further, one can imagine how they undermine efforts at creating a diverse and open society.

With respect to such societal concerns, one of the main flaws of machine learning technology is that its functioning relies heavily on data. And with the collection of data, dominant power structures are often reified (Deutch 2020, p. 5064). Since data are the result of a selective process, “data are always already ‘cooked’ and never entirely ‘raw’” (Gitelman 2013, p. 2).
One of the most prominent and controversial applications of ML is facial recognition. Once image material depicting people is stored in a database, these people can ideally be recognised within seconds. However, this is not always the case, since the categories these algorithms use for matching people emerge from the training data. By learning patterns of highly fine-grained and complex facial features from the given data samples, the algorithm builds its categories around the features that occur most dominantly in the data (Efron 2020). These categories thus become products of cultural and social bias (Benjamin 2019). And with the underrepresentation of minorities in training data, the application of automated recognition becomes problematic.
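To make this mechanism concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn and purely synthetic data; none of the numbers or names come from the systems discussed here). It trains a simple classifier on a dataset in which one class makes up only five percent of the samples, mirroring the underrepresentation problem described above:

```python
# Hypothetical illustration: how under-representation in training data skews a
# classifier. The data are synthetic stand-ins, not real facial features.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# 95% of samples belong to class 0 (the "majority"), 5% to class 1.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The headline accuracy looks good because the model has mainly learned the
# patterns that dominate the data; recall on the under-represented class is
# typically far lower.
print("overall accuracy:     ", clf.score(X_test, y_test))
print("minority-class recall:", recall_score(y_test, clf.predict(X_test)))
```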
Given that facial recognition systems have been deployed “at an alarming rate throughout Europe” this year, whereas they were nearly absent in 2019 (Chiusi 2020), it is high time to bring their diversity-threatening potential to public awareness, especially since the systematic oppression of minorities through automated systems often remains in the dark (Eubanks 2018).

An artist’s response to coded bias

“Zizi – Queering the Dataset” is the artist Jake Elwes’ response to the lack of diversity in the training data of facial recognition systems. For his ML artwork, he enriched normative data sets with “drag and gender fluid faces found online”. To implement the idea of a diversified ML algorithm with a gender-fluid bias, he uses Generative Adversarial Networks (GANs), which belong to the field of creative Artificial Intelligence (De Vries 2020). In contrast to facial recognition models that learn to classify, GANs are generative algorithms that learn to synthesise new images modelled on the data samples they were trained on. If a GAN is fed thousands of images of different apples, it is able to produce synthetic images of seemingly real and unique apples. A famous GAN application is “This Person Does Not Exist” (Karras), where one can generate an image of an artificial person with a single click. You are welcome to try how many times you need to click until a drag queen appears.
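To illustrate the basic idea, here is a generic, minimal sketch in PyTorch; it is not Elwes’ actual model, and all layer sizes are placeholder assumptions. A GAN couples two networks: a generator that turns random latent vectors into images, and a discriminator that learns to tell generated images from real ones:

```python
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 100, 64 * 64  # illustrative sizes, flattened grey images

# Generator: latent vector -> image. Discriminator: image -> real/fake logit.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial update; real_images has shape (batch, IMG_DIM)."""
    batch = real_images.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)
    fake = generator(torch.randn(batch, LATENT_DIM))

    # Discriminator: label real images as 1 and generated images as 0.
    loss_d = (bce(discriminator(real_images), ones)
              + bce(discriminator(fake.detach()), zeros))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: try to make the discriminator label its images as real.
    loss_g = bce(discriminator(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Trained on thousands of images, such as the drag and gender-fluid faces Elwes collected, a generator of this kind gradually learns to produce new faces in the style of that data.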

A fascinating feature of GANs is their so-called latent space. This latent space is an abstract mathematical representation of the features detected in the training data. From points in this space that hold feature information, artificial images can be generated. Because the latent space is high-dimensional, the number of possible combinations, and thus of possible images, is vast.
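As a rough, hypothetical sketch of what exploring such a space can look like in code (reusing the placeholder generator from the sketch above), one can pick two random latent vectors and walk along the line between them, generating one image per intermediate point:

```python
import torch

def latent_walk(generator, steps: int = 60, latent_dim: int = 100):
    """Yield synthetic images along a line between two random latent points."""
    z_start, z_end = torch.randn(1, latent_dim), torch.randn(1, latent_dim)
    for t in torch.linspace(0.0, 1.0, steps):
        z = (1 - t) * z_start + t * z_end  # a point "between" two faces
        yield generator(z)                 # one generated image per point
```

Played back frame by frame, such a walk produces the kind of smooth morphing between faces that the video described next makes visible.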
In his work “Zizi – Queering the Dataset”, Elwes gives a tour through this latent “space of queerness”, captured in a 135-minute video loop. What can one expect to see when a machine learning algorithm is trained to generate images after an ideal that “challenges gender and explores otherness” (Elwes 2019)?


“Zizi – Queering the Dataset, 2019 (30 second extract)”. Jake Elwes. Vimeo. 2 December 2020.

Watching the fluid transitions between artificially generated drag faces with blurry contours, I sense a certain freedom: the colourful shapes that appear on the screen for a few seconds glow like subliminally shimmering beings that come alive and present themselves, while staying fluid, intangible and ever-changing. It makes one wonder where such nuances and fluid transitions of gender are represented in society, where people are usually expected to decide between two options: being male or female. And it raises the question of why one should even stop this fascinating transition, as I catch myself wanting to see all of the possible transitions and new shapes hiding in the latent space generated by the GAN. It is an invitation to explore.

Machine learning and diversity: a medium-specific concern

Coming back to our present reality, we are dealing with facial recognition systems that are steered to capture every detail in order to classify and recognise. The openness, fluidity and diversity afforded by GANs are overwritten by the narrative of dominant facial recognition algorithms, which assign humans to categories that rely on biased ideas and imaginaries, such as binary gender or a dominant ethnicity.
The potential of GANs to generate spaces of variability and ambiguity, in which one can continuously interpolate to find new states and impressions, stands in opposition to facial recognition algorithms, which are trained to create spaces of order and stability. Where GANs can be seen as striving towards the future through the generation of new content, facial recognition algorithms still strive towards the past by reinforcing cultural bias in categorisation and recognition. Although both types of ML algorithms share much of their technical core, their medium-specificity gives them a contrasting relationship to diversity.

Although Elwes’ work can be read as a critique of the technical encoding of bias, showing how machine learning can be used as a means of diversification, I find that his critique rests just as much on social criticism as on a purely technical one. He explicitly challenges the normative ideas of our society by “interrupting” them with a drag and gender-fluid bias, leading to astonishing results of ambiguity and non-conformity. Through his work, Elwes makes visible that bias in data can also be coded for good: as a means to celebrate diversity, to generate creative visions and to tackle urgent questions which need to be answered not only for a better future, but for our here and now.

If you liked this article, I recommend seeing the current exhibition “The Coded Gaze” at Nxt Museum Amsterdam. It explores “personal frustrations” with facial recognition systems and is a great opportunity to dig deeper into the topic of coded bias.

References

Benjamin, Ruha. Race After Technology: Abolitionist Tools for the New Jim Code. Polity, 2019.

Chiusi, Fabio (2020). “Life in the automated society: How automated decision-making systems became mainstream, and what to do about it”. Web. AlgorithmWatch. 28 November 2020. https://automatingsociety.algorithmwatch.org

De Vries, Katja. “You never fake alone. Creative AI in action.” Information, Communication & Society (2020): 1-18.

Deutch, Jeff. “Image Activism After the Arab Uprisings | Challenges in Codifying Events Within Large and Diverse Data Sets of Human Rights Documentation: Memory, Intent, and Bias.” International Journal of Communication 14 (2020): 17.

Efron, James (9 March 2020). “How machine learning changed facial recognition technology”. ShuftiPro. 4 December 2020.
https://shuftipro.com/blog/how-machine-learning-changed-facial-recognition-technology.

Elwes, Jake (2019). “Zizi Queering the Dataset”. Web. 28 November 2020.
 https://www.jakeelwes.com/project-zizi-2019.html.

Eubanks, Virginia. Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press, 2018.

Gitelman, Lisa, ed. “Raw Data” Is an Oxymoron. MIT Press, 2013.

Karras, Tero. “This Person Does Not Exist.” Web. 28 November 2020.
https://thispersondoesnotexist.com.

Melendez, Steven (8 September 2018). “Uber driver troubles raise concerns about transgender face recognition”. Fast Company. 2 December 2020.
https://www.fastcompany.com/90216258/uber-face-recognition-tool-has-locked-out-some-transgender-drivers.

Nxt Museum Amsterdam (2020). “The Coded Gaze – Equitable and accountable AI”. 2 December 2020.
https://nxtmuseum.com/artist/the-coded-gaze/.

Sample, Ian (2019, July 29). “What is facial recognition – and how sinister is it?” The Guardian. 28 November 2020. https://www.theguardian.com/technology/2019/jul/29/what-is-facial-recognition-and-how-sinister-is-it

Shew, Ashley. “Ableism, Technoableism, and Future AI.” IEEE Technology and Society Magazine 39.1 (2020): 40-85. 

Zou, James, and Londa Schiebinger. “AI can be sexist and racist—it’s time to make it fair.” Nature 559 (2018): 324-326.
