Who gets previewed? Racial Bias and User Experiments within Twitter’s Algorithmic Image Cropping

September 27, 2020

Recently, Twitter users have been discovering the algorithmic bias of Twitter's image cropping system. How have they experimented, and what do we do with the results? Why is Twitter selecting white faces? Why does it prefer Ted Cruz with oversized anime breasts? Who can fix it?

A Horrible Experiment

On September 20th, 2020, Twitter user @bascule posted the tweet below. A static screenshot cannot capture the "experiment" it runs: the algorithmic result being tested here is the bias of Twitter's image preview.

What is Twitter’s problem?

When an image posted to Twitter is larger than the "image preview" window, Twitter's cropping algorithm selects a section of the image to preview at a 16:9 aspect ratio. To see the full image a user must click into it, so a user who is unaware of the crop may assume the cropped preview is the entire image. (That many users do not know to click through speaks to further issues.)
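To make the mechanics concrete, here is a minimal sketch (in Python, and emphatically not Twitter's code) of the geometry involved: given a focal point the algorithm wants to keep, center a 16:9 window on it and clamp that window to the image bounds. The function name and interface are hypothetical.

```python
def crop_box_16_9(img_w, img_h, focus_x, focus_y):
    """Center a 16:9 crop window on a focal point, clamped to the image.

    Hypothetical helper for illustration only; returns
    (left, top, width, height) in pixels.
    """
    # Use the full width and derive the 16:9 height; fall back to the
    # full height if the image is too short for that.
    crop_w, crop_h = img_w, int(img_w * 9 / 16)
    if crop_h > img_h:
        crop_h, crop_w = img_h, int(img_h * 16 / 9)

    # Center the window on the focal point, then clamp it so it stays
    # entirely inside the image.
    left = max(0, min(focus_x - crop_w // 2, img_w - crop_w))
    top = max(0, min(focus_y - crop_h // 2, img_h - crop_h))
    return left, top, crop_w, crop_h
```

For a tall 1000×3000 image with a focal point near the bottom, crop_box_16_9(1000, 3000, 500, 2700) returns a 1000×562 band around that point; everything outside the band stays invisible until the user clicks through.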

This image preview appears in the main Twitter feed, on individual tweet threads, and in replies to a tweet. When @bascule posted his tweet, he was observing which section of the images below the Twitter algorithm would select to preview:

Is Twitter racist?

While the tone of @bascule's tweet may suggest that Twitter has a specific political goal in how it previews images, the reality is that the platform does not regard the content of the image in such semiotic terms. Twitter's algorithm considers the digital object in terms of its material content. To explore why this is happening on Twitter, observe the thread below, which appeared almost 24 hours before @bascule's. While @bascule was concerned first and foremost with Twitter, @Colinmadland came to Twitter with a concern over a different instance of racial bias in AI programming.

While attempting to comment on Zoom's virtual-background AI failing to recognize black faces, @Colinmadland inadvertently uncovered one of Twitter's own AI bias issues. Due to image cropping on Twitter for mobile, the above image is previewed in the mobile app with a bias toward its right side.

Not only did Twitter prefer the right side of the image: when @Colinmadland reversed the image in a further tweet, the same face was still the selected preview area. Why is this?

Twitter uses "salience" as its metric for image preview selection, rather than facial recognition. Twitter's algorithm is informed by data obtained from "[a]cademics studying saliency by using eye trackers, which record the pixels people fixated with their eyes. In general, people tend to pay more attention to faces, text, animals, but also other objects and regions of high contrast" (Theis & Wang, 2018). The algorithm thus predicts what the user "wants" to see.

As Twitter posted in a 2018 infrastructure update, "The basic idea [of salience-based image preview selection] is to use these predictions to center a crop around the most interesting region". On the surface this seems an innocuous goal, but we must consider the inherent bias of the data used to construct the salience metric.
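Assuming a saliency model has already produced a two-dimensional map of per-pixel scores (the model itself is out of scope here), the "basic idea" Twitter describes reduces to very little code. The following is a sketch of that reduction, not Twitter's implementation, and most_salient_point is a hypothetical name.

```python
import numpy as np

def most_salient_point(saliency_map: np.ndarray) -> tuple[int, int]:
    """Return the (x, y) pixel with the highest predicted saliency.

    `saliency_map` is assumed to be a 2-D array of scores from a
    saliency model trained on eye-tracking data, as Twitter's post
    describes.
    """
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    return int(x), int(y)
```

Feeding that point into a crop routine like the crop_box_16_9 sketch above centers the preview on whatever region the model scores highest. The consequence is that any bias in the eye-tracking data, for instance whose faces drew fixations, flows straight through the argmax into the crop.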

When both @bascule's and @colinmadland's tweets gained traction, several Twitter staff members weighed in. After first considering factors other than race in responding to @colinmadland (namely, whether facial hair was a factor in preferring one face over the other), Twitter's Chief Design Officer Dantley Davis stated, "I am as frustrated as everyone else."

The nature of Twitter as an endlessly updated and edited platform means that such issues will keep being uncovered for as long as the platform keeps updating. This does not negate users' perceptions of racist AI (intentional or otherwise), nor does it remove responsibility from Twitter or its staff to act consciously on this racial bias. The fact that Twitter already performs bias testing on new features does not remove that responsibility when a new bias is unearthed or culturally perceived, as @Dantley acknowledges in a further tweet.

User Experiments with the Algorithm

Although salience-based selection was not intended to produce racial bias, the fact remains that this biased algorithm is (currently) available to observe, engage with, and manipulate. Users have provided further examples of bias and are now intentionally attempting to manipulate its results.

Algorithm vs. User-Manipulated Image

https://twitter.com/julinhacreicrei/status/1307776871124406283?s=20

Even when presented with a visually disturbing face pasted onto another face, the algorithm will sooner preview the white body with a black face superimposed than the unaltered image of the black person. Although the salience algorithm was not intended to enact racial bias, the preview it actually produces, and end users' perception of it as biased, are vital inputs to the feedback loop that will shape future Twitter algorithms.
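These user experiments all follow one informal protocol: perturb an image (swap faces, mirror it, distort it) and check whether the crop follows the content or stays put. A small harness makes that protocol explicit. Below, crop_fn stands in for whatever produces the preview box, since Twitter's model is not publicly callable, and preview_side and mirror_test are hypothetical names.

```python
from PIL import Image, ImageOps

def preview_side(img: Image.Image, crop_fn) -> str:
    """Report which half of the image a crop function favors."""
    left, _top, width, _height = crop_fn(img)  # (left, top, width, height)
    return "left" if left + width / 2 < img.width / 2 else "right"

def mirror_test(img: Image.Image, crop_fn) -> bool:
    """True if the favored side flips when the image is mirrored,
    i.e. the crop follows the content rather than a fixed position.
    @Colinmadland's reversed-image tweet was exactly this test,
    run by hand.
    """
    return preview_side(img, crop_fn) != preview_side(ImageOps.mirror(img), crop_fn)
```

A crop that tracks the lighter face will flip sides along with it, which is what users observed; a crop with a merely positional preference would not.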

Algorithm vs. @DistortBot

To move back to @bascule's tweet for a moment: given two images of Michael Jackson, Twitter chose to preview the lighter-skinned photograph. Twitter user @maranguigo then replied to the tweet tagging @DistortBot, an automated Twitter bot that "distorts" images it is tagged under and posts the results as replies.

https://twitter.com/DistortBot/status/1307475440714625024?s=20

Even with intense distortion removing any recognizable trace of a human from the image, the salience algorithm still previews the area that previously held the lighter-skinned face.

This isn’t new: what now?

Racial bias in technology, and specifically in AI technologies, has been a cause for concern among users, developers, and academics across generations of technology and media development, with documented examples in photography and film (1, 2), automatic soap dispensers, and facial recognition (to begin a long list). Historically, racial bias in new technology has been inadvertently enacted, noticed, and resolved, and has also been intentionally implemented. Is it the obligation of the individual to view the result on their screen with a critical lens and to hold platforms and institutions to account? Twitter may have been aware of the racial bias in the image preview, but without user outcry, would there be acknowledgement and promises of rectification?

Moving forward with this specific issue on Twitter, user @vinayprabhu has performed an initial experiment to analyse this user-perceived bias. They have also developed @cropping_bias in order to "run the complete experiment": an analysis of all images posted to Twitter, documenting the result of salience-based algorithmic cropping for each. At the time of writing, the @cropping_bias account is awaiting approval of developer credentials from Twitter.

https://twitter.com/vinayprabhu/status/1307497736191635458?s=20
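While @cropping_bias awaits credentials, the statistical core of such an experiment is easy to sketch: post image pairs in which the two faces swap positions, record which face wins the preview slot, and test whether the split departs from the 50/50 expected of an unbiased crop. The posting and crop-detection pipeline is elided below, the counts are invented for illustration, and the test itself is SciPy's standard binomial test.

```python
from scipy.stats import binomtest

def crop_bias_test(lighter_face_wins: int, trials: int, alpha: float = 0.05):
    """Two-sided binomial test against the unbiased null hypothesis p = 0.5.

    `lighter_face_wins` counts the paired trials in which the lighter
    face landed in the preview; the pipeline producing these counts is
    assumed, not shown.
    """
    result = binomtest(lighter_face_wins, trials, p=0.5)
    return result.pvalue, result.pvalue < alpha

# Hypothetical numbers: the lighter face winning 40 of 50 paired trials.
p_value, is_biased = crop_bias_test(40, 50)
print(f"p = {p_value:.2e}; biased at the 5% level: {is_biased}")
```

Under these made-up numbers the null of an unbiased crop would be rejected decisively; the value of @vinayprabhu's design is that it turns scattered anecdotes into exactly this kind of count.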

Bibliography

Ardizzone, Edoardo, et al. “Saliency Based Image Cropping.” Lecture Notes in Computer Science, www.academia.edu/35825403/Saliency_based_image_cropping.

“Introduction: Digital Media and Social Theory.” Media, Society, World: Social Theory and Digital Media Practice, by Nick Couldry, Polity, 2013, pp. 1–28.

Lewis, Sarah. "The Racial Bias Built Into Photography." The New York Times, 25 Apr. 2019, www.nytimes.com/2019/04/25/lens/sarah-lewis-racial-bias-photography.html.

Mehta, Ivan. "Why Twitter's Image Cropping Algorithm Appears to Have White Bias." The Next Web, 21 Sept. 2020, thenextweb.com/neural/2020/09/21/why-twitters-image-cropping-algorithm-appears-to-have-white-bias/.

Simonite, Tom. "The Best Algorithms Still Struggle to Recognize Black Faces." Wired, 22 July 2019, www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/.

Theis, Lucas, and Zehan Wang. "Speedy Neural Networks for Smart Auto-Cropping of Images." Twitter Engineering Blog, 24 Jan. 2018, blog.twitter.com/engineering/en_us/topics/infrastructure/2018/Smart-Auto-Cropping-of-Images.html.
