You Won’t Believe What These Young Liberal Students Did Their Project On

Fake news

When Pheidippides burst into the Athenian assembly to exclaim ‘nenikēkamen’ (‘we have won’), having run from the battlefield at Marathon to Athens, where the Greeks had just defeated the Persians, his death gave birth to the concept of the marathon sporting event, which prevails to this day. The problem is that we don’t actually know whether his name was Pheidippides. Plutarch, for example, gives him several different names, and other accounts claim that it was the returning Greek army, not a lone runner, that informed the undefended Athens of the victory. Even the distance is disputed: was it 40 km, or 35 km, or did he run the roughly 240 km to Sparta rather than to Athens at all?

Misinformation has been around for thousands of years and has given birth to many modern phenomena: the marathon, footnotes, Christmas, and Donald Trump’s presidency. ‘Fake news’ is one of the former reality-TV host’s favourite phrases, alongside ‘fake books’, ‘the fake dossier’, and ‘fake CNN’. However, like the distance from Marathon to Athens, our definitions of ‘fake news’ and ‘misinformation’ are changing and expanding every day. ‘Fake news’ no longer refers only to blatant misinformation but to a tactic, a style of writing and language, that is becoming ever harder to notice or even care about. Everyone ‘thinks’ they know what fake news is: a website with an unregistered domain, an article that pops up while you are on another website illegally watching The Office US, a headline such as ‘How Colleges Fail Young Trump Supporters’ or ‘You Won’t Believe the Secret in Traveling to Your First Half-Marathon, And Successfully Finishing’. Yet these last two headlines come from trusted news sources, The Republic and Forbes Magazine, according to our fake news plug-in Trusted News. Both articles are ‘true news’: they contain facts and quotes from reputable databases and sources. Still, fake news linguistic tactics were used in the wording of their titles (clickbait) to entice you to click, generate revenue for the website, and then leave once you were bored, feeding the wider ‘click-and-go’ economy. Talking to The Verge, Sarah Roberts, Assistant Professor of Information Studies at UCLA, said that ‘[i]n the era of abundant information, people need that expertise now more than ever’ to detect what makes fake news fake. Enter our application: Trumpeter.

Image 1: This statistic shows public opinion on tech companies restricting fake news online in the US in 2018. 42% stated that freedom of information should be protected, even if it means false information can still be published.  

 

Fake News Plug-ins

‘Fake news’ is a far-reaching phenomenon, and many industries have responded to it. The tech and AI industry has produced plug-ins such as Trusted News, B.S. Detector, SurfSafe, and many other browser add-ons that alert you to the trustworthiness of a news source. Some of these add-ons use PolitiFact, Snopes.com, and other fact-checking sites to measure ‘fakeness’. Others flag a news source as satirical or possibly fake, though they may accidentally flag art or feel-good news stories, or simply offer you the chance to ask the developers to fact-check the source. The journalism field itself has responded by urging readers to trust only its own sources, to use these plug-ins, or, as the BBC has done, by creating dedicated sections on their websites to educate readers on what to look for, with advice such as ‘does this seem trustworthy to you?’. The academic response has come from institutions such as Reuters and Stanford, which are building large databases of fake news and producing large quantities of data on how these stories are written and distributed.

Our intervention combines contemporary media with academic sources, such as the databases offered by the institutions above, to inform consumers of the lexical markers of fake news, so that we can both educate and empower readers.

Image 2, 3, 4: SurfSafe Plug-in, Trusted News, Fake News Detector.

 

Trumpeter

Our application, Trumpeter, is a browser add-on that scans articles on news sites and posts in social media streams and rates their reliability based on linguistic elements present in the text. When a user opens the Trumpeter plug-in window, the simple view pops up, showing the reliability of the article on a scale of 0-100%. The user can click through to the detailed view, which flags the linguistic elements that cause the reliability rating to go down. Both views include a hyperlink to the Trumpeter website, which explains the algorithms behind Trumpeter in detail. A user can choose to show or hide the plug-in window, making Trumpeter non-invasive and giving users the option to browse their favourite click-baity entertainment websites without being notified of the potential fakeness of it all.

Image 5: Demo of the simple view of Trumpeter when used on an article.

 

How does it work?

The output of the plug-in appears standard across many fake news, tracker, or cookie-related browser plug-ins: establish whether or not an article is fake news and alert the user to its status. However, our plug-in is not designed simply to highlight or flag something as fake news like other add-ons; as the examples above show, those tools do not react, for instance, to content created by sources designed purely for entertainment or opinion. Instead, Trumpeter equips the user to see what can be misleading or inflammatory about a piece by looking at its lexical markers. Other fake news plug-ins focus on IP addresses, photos, videos with text, pop-ups and the like, flagging an item based on code-level or visual clues. Our plug-in uses lexical algorithms to scan articles against fake news databases and then highlights the relevant passages for the user. By combining web scraping with a lexical algorithm fed by our database of words used most frequently in fake news articles, sourced from institutions like Reuters and the Stanford Media Lab, we can not only stay up to date with how media and fake news evolve, but also equip our users with tools to recognise this problem in their own media consumption.
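To make this concrete, below is a minimal TypeScript sketch of how a content script could implement this pipeline. The flagged-word list, the page selector, and the scoring formula are illustrative placeholders, not the actual Trumpeter database or algorithm.

```typescript
// Minimal sketch of the Trumpeter pipeline; FLAGGED_WORDS is a stand-in for
// the compiled fake news word database described above.
const FLAGGED_WORDS = new Set(["attack", "bleak", "destroy", "endless"]);

// 1. Scrape the article text from the page the user is reading.
function getArticleText(): string {
  const article = document.querySelector("article") ?? document.body;
  return article.textContent ?? "";
}

// 2. Score the text: the more flagged words per 100 words, the lower the rating.
function reliabilityScore(text: string): number {
  const words = text.toLowerCase().match(/[a-z']+/g) ?? [];
  const hits = words.filter((w) => FLAGGED_WORDS.has(w)).length;
  const density = words.length ? (hits / words.length) * 100 : 0;
  return Math.max(0, Math.round(100 - density * 10)); // illustrative formula
}

// 3. Report the result to the plug-in pop-up (here simply logged).
console.log(`Reliability: ${reliabilityScore(getArticleText())}%`);
```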

Image 6: Demo of the detailed view of Trumpeter when used on an article.

 

Our List

Our list was compiled from research drawing on open-source content provided by Reuters and Stanford, as well as studies by the Pacific Northwest National Laboratory, Birmingham University, MIT, and the Paul G. Allen School of Computer Science & Engineering at the University of Washington. While the full databases were not accessible, we were able to compile a workable list from the papers these groups published. Markers such as hyper-sensationalist words (Attack, Benghazi, Bleak, Damned, Destroy, Endless, Eviscerate), overuse of such words, spelling and grammar errors, and overused punctuation (!!) were compiled and then cross-referenced with news articles published on, or immediately after, the day President Donald Trump made his speech apologising to Brett Kavanaugh, who had been accused of sexual assault. Because we had the text of the speech itself and full audiovisual clips of the news event, we could verify what was reported and quoted, allowing a finer judgement of whether articles were legitimate or truthful.
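As an illustration of how such a marker list could be represented in the eventual plug-in, the sketch below encodes the sensationalist words quoted above and the repeated-punctuation pattern as rules; the data structure and the weights are our own illustrative assumptions, not the project's finished format.

```typescript
// Illustrative structure for the marker list; words are the examples from our
// compiled list, weights are placeholders.
interface Marker {
  pattern: RegExp;
  label: string;
  weight: number;
}

const markers: Marker[] = [
  {
    pattern: /\b(attack|benghazi|bleak|damned|destroy|endless|eviscerate)\b/gi,
    label: "hyper-sensationalist word",
    weight: 1,
  },
  { pattern: /!{2,}/g, label: "overused punctuation", weight: 1 },
];

// Count every marker occurrence in a piece of text, one point per hit.
function countMarkers(text: string): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const m of markers) {
    const hits = text.match(m.pattern)?.length ?? 0;
    if (hits > 0) counts[m.label] = (counts[m.label] ?? 0) + hits * m.weight;
  }
  return counts;
}
```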

 

Demo

To lay the groundwork for the plug-in, we ran a manual demonstration without the fully coded software that a browser plug-in like Trumpeter would ultimately require. With the example of Trump apologising to the now-Supreme Court Justice Brett Kavanaugh in mind, we gathered articles from the day of the incident (8 October 2018) or the day immediately after (9 October 2018) from news sources drawn from a public American poll of the most trustworthy news sites and, to some degree, from a list of trusted sites. The sites and articles we judged most pertinent to our demo were the BBC, Breitbart, BuzzFeed News, The Guardian, InfoWars, and Reuters; finally, to see whether the approach would work for smaller social media posts, we also included a post on the topic from the OccupyDemocrats Facebook page.


Image 7: The meme format somewhat defies the software, whose current limitations mean that image posts cannot be recognised as easily as raw text. Source: OccupyDemocrats Facebook page.

By manually listing every word in an Excel sheet and cross-referencing each article with the word list described above, we scored the sites and sources with a simple one-point-per-word system. Once finished, each source's points were added up to give a total. With the totals in hand, we found that our linguistic tracking approach gave many ‘trustworthy’ news sites, along with feel-good stories, scores much higher than average: The Guardian at 17 points and BuzzFeed News at 23 points earned the two highest scores. The OccupyDemocrats Facebook post scored unreasonably low by comparison (3 points), which we attribute to the lack of ‘incriminating’ words; the short, snappy meme format defies linguistic tracking. The two sources that scored in the middle of the range, Breitbart at 14 points and InfoWars at 12 points, were, upon a more qualitative and critical reading informed by our understanding of how fake news actually operates, the most likely to post fake news. It therefore appears that a potentially flag-worthy score may sit in the middle of the range. This is likely due to, first, the prominence of emotive ‘flag’ words in articles that try to attract readers or subscriptions through emotive or persuasive language, and second, the relatively slim pickings in terms of words available for the plug-in to analyse linguistically.
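The sketch below simply re-creates the manual Excel tally in code form: the totals are the hand-counted scores reported above (only the sources whose totals we quote are included), ranked from highest to lowest.

```typescript
// Hand-counted one-point-per-flagged-word totals from the manual demo.
const demoScores: Record<string, number> = {
  "BuzzFeed News": 23,
  "The Guardian": 17,
  "Breitbart": 14,
  "InfoWars": 12,
  "OccupyDemocrats (Facebook)": 3,
};

// Rank the sources from highest to lowest score for comparison.
const ranked = Object.entries(demoScores).sort(([, a], [, b]) => b - a);
for (const [source, score] of ranked) {
  console.log(`${source}: ${score} points`);
}
```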

It should be said, however, that while we did quantitative research by compiling as much as was available from the accessible academic sources, this amounts, at the present stage of development, to a drop in the ocean compared with the expansive body of research we could access with sufficient time and funding; such access would come with being established as a legitimate web browser plug-in. By using the facilities and academic sources available to us through the library of the University of Amsterdam, we lend legitimacy to our searches, yet we cannot fully build on them given our limited access at this moment. With further development we could reach out to these databases, or to news sites and information institutes, offering whatever incentives might persuade social media labs to share their findings with us or grant us access. With fully implemented software, data would be gathered significantly faster and would therefore be more readily available for analysis, allowing us to establish common trends among articles of similar length as well as similarities across all articles and sources. From there we could fine-tune our definitions further, to the point of a fully comprehensive list of ‘flag’ words for each variety of content format predominant online. Further categorising content could also widen the scope and effectiveness of the app by accounting for the nuances of the content it seeks to analyse.

 

The future of Trumpeter

Like the other fake news plug-ins, Trumpeter has its limitations. For example, Trumpeter will only function in English, as our data sources are in English and we do not have similar resources in other languages. Another concern is the ever-changing nature of journalism and the internet: as fake news continues to be analysed, fake news writers will find new methods of writing convincing fake news. This means we must stay up to date with such developments.

Currently, Trumpeter is proposed as a free add-on. Since it will continue to grow and will need continuous updating to remain accurate, we will need to monetize it somehow. This could take the form of paid versions of the plug-in, though the added features are still to be decided.

Trumpeter's potential also lies in growing over time to develop more complex algorithms that encompass not only traditional news articles and the standard formats for news online, such as news sites, messaging platforms and social boards, but also formats such as memes and social media posts whose style rests on a lighter, more conversational tone.

Trumpeter aims to not only to help people get to the truth, but it also aims to make the internet great again.

 

References

Keith, Tamara. “President Trump’s Description of What’s ‘Fake’ Is Expanding.” NPR, NPR, 2 Sept. 2018, www.npr.org/2018/09/02/643761979/president-trumps-description-of-whats-fake-is-expanding?t=1540323237329.

Meyer, Robinson. “The Grim Conclusions of the Largest-Ever Study of Fake News.” The Atlantic, Atlantic Media Company, 12 Mar. 2018, www.theatlantic.com/technology/archive/2018/03/largest-study-ever-fake-news-mit-twitter/555104/.

Vincent, James. “Browser Plug-Ins That Spot Fake News Show the Difficulty of Tackling the ‘Information Apocalypse.’” The Verge, The Verge, 23 Aug. 2018, www.theverge.com/2018/8/23/17383912/fake-news-browser-plug-ins-ai-information-apocalypse.

 

Introduction

Users who roam the European Internet on a daily basis will have come across them countless times. Each time a new website is visited, it pops up in the middle of the screen or rests in a corner of the page in the form of a banner: a cookie notice, asking for your consent to the use of cookies. What these cookies actually entail remains unknown to the majority of visitors. For most users, the cookie notice is just an annoying pop-up that blocks their view. Clicking the ‘I accept’ button as quickly as possible, and thus allowing these tracking cookies, has become just another part of the everyday browsing ritual.

The surge of cookie consent requests on nearly every website is a consequence of the General Data Protection Regulation (GDPR), which came into effect earlier this year. The regulation is there to protect user privacy in the digital realm: tracking cookies may only be used if the user consents first. In theory, this should give users more control over their own data. The problem, however, is that users still have no easy way to actually opt out of this practice. Most users blindly accept the cookies and continue with their browsing. The moment the cookie, a text file that ‘memorizes’ things about websites, is placed in the browser, first and third parties gain access to collect, store and analyze potentially every movement from that point on. Moreover, websites are able to track users across the web: social media sites such as Facebook generate enormous profits just by selling data extracted from their users. User-generated content and browser activity thus create value, but there is no monetary compensation for this. In a sense, the audience performs free digital labour: users now function as both consumers and creators of media products.
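To make the mechanism concrete, the snippet below is a purely illustrative sketch of how a tracking cookie is set and later read in the browser; the cookie name and value are invented for the example.

```typescript
// Purely illustrative: a site script sets a long-lived tracking cookie
// and reads it back. The cookie name and value are invented.
document.cookie = "visitor_id=abc123; Max-Age=31536000; Path=/";

// On every later request to the same domain, including requests made while it
// is embedded as a third party in other pages (subject to SameSite rules),
// the browser sends the cookie back, which is what enables cross-site tracking.
const cookies: Record<string, string> = {};
for (const pair of document.cookie.split("; ")) {
  const [name, value] = pair.split("=");
  if (name) cookies[name] = value ?? "";
}
console.log(cookies["visitor_id"]); // "abc123"
```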

With our project, we aim to make users of Web 2.0 aware that they are, in a sense, part of a digital labour system and that their personal data is being “exploited” by third parties through the use of cookies. There remains a lack of transparency within this practice, and general knowledge about the different types of cookies and what they do is still quite scarce. As Yeung points out, there exists what Nissenbaum has called a ‘transparency paradox’ within the practice of third-party tracking: people must be informed about what kind of data is being collected and for what purposes (Yeung, 126). However, to avoid information overload, the information provided is often not very detailed. Yeung emphasizes that most users within the Big Data environment are not fully aware of what they are actually sharing, but are focused on the convenience and efficiency that digital services have to offer (Yeung, 126). Our project, “Cookie Monster”, is designed to make users more aware of what the ‘I accept’ button means for their personal data and to assist them in making an informed decision about cookies via a plug-in and a website. The ultimate goal of our technological intervention is to bring awareness to how much of our data is being used without our knowledge and to show people ways to protect themselves from these forms of data exploitation.

First, we will frame the issue of third-party tracking by discussing the notion of free labour through the relevant literature. Secondly, the methodology section will elaborate on how the plug-in and website were designed and how the data for this was gathered. Then the Cookie Monster plug-in and website will be introduced, and finally a conclusion will be given.

 

Allowing cookies: a form of free labour?

The classic form of labour took place within the traditional environment of the firm or factory (Lazzarato, 4). However, the rise of the Internet has produced new forms of labour that take many different shapes and sizes. As opposed to the classic or traditional form of labour, immaterial labour in the digital media environment takes place in society, in a system of networks and flows (Lazzarato, 4). We can apply this shift to the “society-factory” that finds its roots in Italian Marxist theory (Gill & Pratt, 1). Terranova links the “society-factory” to the digital economy and broadly defines it as “a process whereby work processes have shifted from the factory to the society” (34). As Terranova argues, the Internet is not an empty space: it includes forms of cultural and technical labour that produce a continuous flow of labour inherent “to the flows of the network society at large” (34). This new economic system has important implications for the way labour is defined in advanced capitalist societies. In the age of the digital economy, the source of added value is generated by users (Terranova). In a sense, the personal behavioral data continuously created by users has become a currency in itself (Libert, 3544).

By surfing the web and generating content online, everyday users of the Internet produce large amounts of valuable data for companies. Scholz discusses waged and unwaged ‘digital labour’ and explains that the concept can cover activities from writing fanfiction to “game modding” (1-2) or, as in our case, ‘producing’ data for third-party advertisers. Personal information such as browser activity and user-generated content, but also user settings, information about previous purchases and so on, is turned into value. However, users are not compensated for the “labour,” i.e. the time and effort, they have put into this. The exploitation of personal data for commercial and profiling purposes has grown into a big part of the digital economy. A large quantitative analysis conducted in 2015 indicated that nearly nine out of ten websites leak user data to parties of which the user is most likely unaware, and that third-party cookies are placed on over six in ten websites (Libert, 3544). The labour being performed can be viewed as free labour, because it generates value. As Read has pointed out, (potential) labour in the age of neoliberal capitalism can be any activity that works towards desired ends (Read, 8).

 

Relevance

Our aim with this research is to create awareness and to make visible how much valuable data users produce every day and how little they are compensated for it. Users are usually unaware that their every movement is being monitored by third-party tracking across the entire web. Most users would, for example, be unaware that even when they visit health-related websites their browsing data is being passed on to third-party entities. We want to show users how essential audience labour power is to corporations, as users function as both consumers and creators of media products. Furthermore, we want to make everyday users of Web 2.0 aware of the existing technologies and methods they can implement to circumvent the tracking of their data in a simple way. By building this type of awareness, we are showing users ways in which they can regain some control over their personal data in a world where data mining is an ever-growing practice.

 

Methodology

The objective of Cookie Monster rests largely on a literature review of free labour. The literature was found in the Google Scholar database by searching for the keywords “free labour” and “immaterial labour”. Not every source explicitly connects its notion of labour to the digital realm, but this was deliberate, to provide a clear picture of the full academic debate around free labour. The plug-in for Cookie Monster was designed with the digital design application Sketch. The data shown in this plug-in is retrieved from two existing applications: Data Calculator by Datum and Ghostery by Evidon Inc. The Data Calculator, developed by Datum, draws on available information from both public and privately-owned companies, as well as information gathered at trade shows (Theo, n.pag.).

For the mock-up of the Cookie Monster website, we used the website-building platform Wix. The information about cookies was found through Google searches.

 

Cookie Monster

With the increasing computerization of our world, it is important to draw attention to the commodification and exploitation of users’ online data. More than three billion people have access to the Internet, and the number of people who roam the Web on a daily basis will most likely only continue to grow. Most of these people are either unaware that their data is being tracked and for which purposes, or ill-informed about this phenomenon (Tuunainen, Pitkänen & Hovi, 9). In the context of neoliberalism, the Internet has given people the freedom to participate online without showing them how these online activities feed the accumulation of capital for corporations. While social media provides space for open communication, the cultural capital produced by consumers is viewed in economic terms, motivated by the desire to accumulate money. Social media platforms such as Facebook and Twitter depend on their users to produce content that can be leveraged and sold to advertisers for profit. This too can be seen as a form of free labour.

Cookie Monster is designed to raise awareness of which cookies are present on the visited website, how they work and how much value is generated from them. It takes the form of a plug-in that can be added to your browser and shows how much your extracted data is worth and how many cookies have been placed (figure 1). By revealing the actual worth of users’ data, Cookie Monster aims to raise awareness of the free-labour aspect of sharing your data as a user. The plug-in also shows how many cookies have been placed, to show just how many there can be. In addition to the plug-in, there is a general website with basic information about cookies and their implications. This website contains information about the various types of cookies and their purposes, but also proposes options that can protect the user while roaming the Internet.
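A minimal sketch of the kind of summary the Cookie Monster pop-up could compute is shown below; the per-cookie ‘value’ figure is a placeholder for illustration only and does not come from Datum’s calculator or Ghostery.

```typescript
// Minimal sketch of the pop-up's summary; the per-cookie value is a
// placeholder, and only cookies readable via document.cookie are counted here.
const PLACEHOLDER_VALUE_PER_COOKIE_USD = 0.005;

function summariseCookies(cookieString: string) {
  const names = cookieString
    .split(";")
    .map((c) => c.trim().split("=")[0])
    .filter((n) => n.length > 0);
  return {
    cookieCount: names.length,
    estimatedValueUsd: +(names.length * PLACEHOLDER_VALUE_PER_COOKIE_USD).toFixed(3),
  };
}

console.log(summariseCookies(document.cookie));
```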

Cookie Monster aims not only to raise awareness about this issue but also to make the information more accessible to the general public. Much of the existing information about how cookies work and what their implications are is not always understandable to those who are not digital natives, and/or has a subjective tone.

The website includes the EU regulations, since Cookie Monster caters specifically to European users affected by the new cookie laws. Firstly, the website explains what a cookie is: a small text file placed on the user’s hard drive when they visit a website, used for storing information about the user. Secondly, the different types of cookies are explained. EU law distinguishes between two types of cookies: session and persistent. Session cookies are strictly required for website functionality and do not track user activity once the browser window is closed. Persistent cookies, however, do track user behavior even after a user has moved on from the site or closed the browser window. It is important that users know the difference, because persistent cookies can pose a greater risk than session cookies, since they can track your activity over time across multiple websites. Cookie Monster also explains what cookies are used for: how websites use cookies to store your preferences, such as your username and password, and how advertisers use the information stored in cookies to create unique profiles for targeting users (Penland, n.pag.). Finally, Cookie Monster gives users options to protect their privacy. These include different ways users can prevent cookies from tracking their data, for example links to internet security software, or the consolidated opt-out page that the company Evidon has created for the most common third-party cookies.
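The difference between the two cookie types can be illustrated with two lines of browser code; the cookie names and lifetime below are invented for the example.

```typescript
// Illustrative contrast between the two cookie types described above.

// A session cookie: no Expires or Max-Age attribute, so the browser discards
// it when the session ends.
document.cookie = "basket_id=42; Path=/";

// A persistent cookie: an explicit lifetime (here one year) keeps it on disk
// across sessions, which is what enables longer-term tracking.
const oneYearInSeconds = 60 * 60 * 24 * 365;
document.cookie = `ad_profile=xyz; Max-Age=${oneYearInSeconds}; Path=/`;
```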

 

Figure 1

Figure 2

 

 

 

Cookie Monster Website: https://danalamb3.wixsite.com/cookiemonster

 

Conclusion

The introduction of Web 2.0 has essentially allowed companies to profit off users’ activities online. With this project we wanted to evaluate how corporations are able to use cookies to identify people and track their online behaviour. Companies are now starting to be held accountable for their data collection practices, as the new European cookie laws make all platforms display notices on their sites when cookies are placed to track users’ movements across the web. While some of these notices may give users the opportunity to read more about what cookies are, this information is often presented in long and confusing text. Therefore, most people just accept cookie notices without understanding the implications.

By creating Cookie Monster, our hope is to make the public aware of their involvement in a process of unwaged labour in which everything they post online is monitored, analyzed in the form of data and then sold to third parties. As many users accept cookies without a second thought, we want to show them that these seemingly harmless text files may be invading their personal privacy. While not all cookies are dangerous, it is important for users to understand who has control over their personal data and how much of their lives is being tracked and exploited. Users should not feel as though they need to block cookies altogether; in fact, doing so could make it almost impossible to browse the web effectively. Rather, we want to pass along important knowledge about cookies in a succinct and easy-to-understand format: for example, demonstrating to users that only certain cookies involve tracking personal information, and that simply turning off third-party cookies can diminish the possibility of private data being spread to unintended parties. There are several ways to circumvent the threat that cookies pose to users’ privacy, and through Cookie Monster we want to teach the public how to keep their information safe.

Bibliography

Gill, Rosalind and Andy Pratt. “In the Social Factory? Immaterial labour, Precariousness and Cultural work.” Theory, Culture & Society 25.7-8 (2008): 1-30.

Lazzarato, Maurizio. “Immaterial Labor.” Radical Thought In Italy: A Potential Politics 1996 (1996): 133-47.

Libert, Timothy. “Exposing The Hidden Web: An Analysis of Third-Party HTTP Requests On One Million Websites.” International Journal of Communication 9. (2015): 3544-3561.

Penland, Jon. “What’s The Deal With Cookie Consent Notices.” WPMUDEV. 17 June 2016. 15 August 2018. <https://premium.wpmudev.org/blog/cookie-consent-notices/>

Read, Jason. “A genealogy of homo-economicus: neoliberalism and the production of subjectivity.” A Foucault for the 21st century: Governmentality, Biopolitics and Discipline in the New Millennium.  Ed. S. Binkley & J. Capetillo-Ponce. Newcastle Upon Tyne: Cambridge Scholar Publishers, 2016. 2-15.

Scholz, Trebor. “Why Does Digital Labor Matter Now?” Digital Labor: The Internet as Playground and Factory. Ed. Trebor Scholz. New York: Routledge, 2012. 1-9.

Terranova, Tiziana. “Free Labor.” Digital Labor: The Internet as Playground and Factory. Ed. Trebor Scholz. New York, NY: Routledge, 2012. 33–57.

Theo. “How Much Is Your Data Worth? Datum Calculator Answers Your Questions.” Datum Blog. Datum. 6 November 2017. 15 October 2018. <https://blog.datum.org/how-much-is-your-data-worth-datum-calculator-answers-your-questions-f8fb38575153>

Tuunainen, V. K., O. Pitkänen, and M. Hovi. “Users’ Awareness of Privacy on Online Social Networking sites-Case Facebook.” Bled Conference, June 14-17, Bled, Slovenia. 2009.

Yeung, Karen. “‘Hypernudge’: Big Data as a Mode of Regulation by Design.” Information, Communication & Society 20 (2017): 118-136.

 

 


Introduction

In 1994, the then thirty-year-old Jeff Bezos founded Amazon and, by using the emerging World Wide Web, was able to sell books globally from his garage. Although he could probably not have imagined that he would one day become the richest man on earth through his company, he already saw quite clearly the potential of the ubiquitous internet. Today Amazon sells pretty much anything: from books to groceries, from cloud computing services to video streaming. This is possible mainly because Amazon acts as an intermediary or ‘middleman’ for many small businesses.

By making optimal use of technology, existing infrastructure and labour, Bezos managed to make Amazon the biggest e-commerce store in the world. Last year, the company generated a revenue of 178 billion dollars (Annual Report 2017 25). Amazon’s goal is to serve customers in the best possible way and to fulfil their needs. By taking this as its starting position, Amazon has ensured that the consumer, rather than the manufacturer and the shopkeeper, now holds the power in the economy (Amazon zij met ons). With an Amazon Prime membership it becomes even easier and more lucrative for the customer to shop at Amazon. Furthermore, Amazon reinvests its sales back into the company, which makes it possible to keep prices low. In addition to the products sold on Amazon.com, the company also offers web services through its subsidiary Amazon Web Services (AWS). Through these services it can collect significant amounts of data about its consumers, which can be used for different purposes. For example, Amazon is now also trying to break into the markets for medicines, education and energy. Because it holds enormous datasets and has made a lot of progress in the field of Artificial Intelligence, government agencies in those sectors are eager to cooperate with it (Amazon zij met ons). The goal is to improve, for instance, health care and education. Sounds like a dream business, right?

Context

However, a company like Amazon causes problems in society on several levels. Especially given the enormous growth of the company, which shows no sign of slowing in the near future, these problems are only expected to get worse. The whole idea of shopping, the way we shop and the balance of power around it have been completely changed by platforms like Amazon. We are mainly concerned about the relationship Amazon has with its customers. Jeff Bezos’ idea and main goal for Amazon was, and still is, to become “earth’s most customer-centric company where customers can find and discover anything they might want to buy online, and endeavours to offer its customers the lowest possible prices” (Amazon Jobs). Knowingly and willingly, Amazon tries to influence customers and perhaps even manipulate them into purchasing more and more goods on its website.

The method of steering consumers, often without their awareness, into buying more products on Amazon can be regarded as nudging. Richard Thaler and Cass Sunstein describe nudging in their book as “any aspect of choice architecture that alters people’s behaviour in a predictable way without forbidding any options or significantly changing their economic incentives” (120). Purchasing goods at Amazon often does not end with buying only the good(s) you were initially looking for. The interface of Amazon is structured in a particular way that makes it very easy to find goods and purchase them. Furthermore, when consumers search for a specific product, Amazon provides many extra suggestions such as “customers also shopped”, “sponsored products related to this item” and “you may also like”. In doing so, Amazon’s design “makes a normative claim about its purpose” (Stanfill 1060). Amazon thus manipulates its users into acting in a particular way on its website, which can be seen as a modern form of power and control.

Amazon is able to do this because it gathers large amounts of data about its customers’ online behaviour. John Cheney-Lippold addresses this in his article “A New Algorithmic Identity”. He claims that we are entering an online world where our identities are largely made for us through inferences based on web use. This algorithmic identity is a “formation that works through mathematical algorithms to infer categories of identity on otherwise anonymous beings” (Cheney-Lippold 165). The technique is used for different purposes, but mostly in marketing and advertising: “mathematical algorithms allowed marketers to make sense out of these data to better understand how to more effectively target services, advertisement, and content” (Cheney-Lippold 168). For Amazon, this means it can approach customers more personally and individually. It gathers the data produced by its customers and categorizes it so that it can be used (against them) to nudge them towards more purchases. Cheney-Lippold warns us about the consequences of buying everything online: he argues we have no control over who we algorithmically are, as our identity is made useful not for ourselves but for someone else, in this case Amazon (178).
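As a toy illustration of the kind of inference Cheney-Lippold describes, the sketch below assigns a marketing category to an otherwise anonymous visitor from nothing more than page-view counts; the categories, keywords and counts are invented for the example and are not Amazon's actual method.

```typescript
// Invented page-view counts for an anonymous visitor.
const pageViews: Record<string, number> = {
  "running-shoes": 7,
  "protein-powder": 4,
  "garden-furniture": 1,
};

// Invented marketing categories and the page keywords that feed them.
const categoryKeywords: Record<string, string[]> = {
  "fitness enthusiast": ["running-shoes", "protein-powder"],
  "home improver": ["garden-furniture", "power-tools"],
};

// Pick the category whose keywords account for the most page views.
function inferCategory(views: Record<string, number>): string {
  let best = "uncategorised";
  let bestScore = 0;
  for (const [category, keywords] of Object.entries(categoryKeywords)) {
    const score = keywords.reduce((sum, k) => sum + (views[k] ?? 0), 0);
    if (score > bestScore) {
      best = category;
      bestScore = score;
    }
  }
  return best;
}

console.log(inferCategory(pageViews)); // "fitness enthusiast"
```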

The option of an Amazon Prime membership makes it even more lucrative to shop at Amazon and creates a psychological commitment to the company for the consumer. Almost everything you can think of is sold on Amazon, shipping is free, your order is delivered to the door as soon as possible and returns are free of charge. With a Prime membership, Amazon engages in a personal relationship with the customer, which results in greater shopping enjoyment. Brand loyalty and sensitivity play a major role in this and ensure that the customer continues to buy from Amazon (Pappas et al. 732).

In conclusion, through nudging and by making it very lucrative to shop on their website using a Prime Membership, Amazon strengthens the relationship between the consumer and itself and essentially perpetuates the vicious cycle.

Project aim

Considering that Amazon mostly reinvests its revenue back into the company in order to innovate and undercut competition and become even wealthier, this money is drawn from the ‘regular’ economy. This is expected to influence the market and will probably result in the disappearance of specific market segments. In the long term, the regular market will be disrupted while Amazon will probably have gained a monopoly position. Amazon can already be seen as a utility company that delivers everything and slowly encloses our society (Khan 754-755). In addition, its technological innovations and its aspiration to deliver everything as quickly and efficiently as possible mean that much human labor will be replaced by technology. The people who still work in the distribution centres or as couriers are paid minimum wage and work in bad labor conditions. Finally, the monopoly position Amazon seems to aspire to by constantly penetrating new markets can be called into question: is it justified to have one company in control of almost everything society has to offer?

The aim of this project is to make consumers more aware of the disadvantages of Amazon, since these are not common knowledge. When consumers continue to buy on Amazon and use its web services, they are unconsciously maintaining a system that will eventually be the end of a free market, which could have a detrimental effect on the economy and on society in the near future. By creating an activist-oriented website that lets consumers browse through the material, we want them to become more aware of what their online buying habits cause.

Relevance

Considering that our society is becoming more digitized each day, it is natural that this also influences how and where we buy our goods. Lisa Parks argues in her text “Stuff You Can Kick” that we need to focus more on the (physical) infrastructure of media and think more about what media are actually made of (356). Opening up invisible infrastructures is important because it is “our duty as infrastructural citizens/users to be aware of the systems that surround us and that we subsidize and use” (Mattern). This is exactly what this project is trying to do: opening up the invisible, or in this case unknown, infrastructure of Amazon, in particular its downside. Because the company is becoming so ubiquitous, it is necessary to look critically at its influence on local and global society. Amazon can be seen as a “technology within which a culture grows”, and because Amazon has made it so easy and lucrative to shop online, an entirely new culture is emerging. This new culture can influence different levels of society, such as politics and the economy, but also social life (Postman 10).

In his text “The Humanism of Media Ecology”, Neil Postman stresses the importance of research into the influence of certain media on our “ecology”. He states that the interaction between media and humans shapes a culture and ensures a certain balance in a society (Postman 11). Marshall McLuhan once suggested that in order to create a proportionate balance, people sometimes have to limit their use of some media (Postman 13). This also seems to apply in the case of Amazon. Postman suggested a number of questions we should ask ourselves in order to understand a certain medium, which should help us judge whether a medium has a good or bad influence on people and/or society. Some of the questions he asks are: to what extent does a medium contribute to the development of democratic processes; to what extent does a medium give greater access to meaningful information; and to what extent does a medium enhance or diminish our moral sense? (Postman 13-15). If we apply these questions to the case of Amazon, the answers are expected to be quite negative (except that Amazon does provide greater access to information). If Amazon achieves a monopoly position in the near future, it is very unlikely to contribute to the development of democratic processes, because there will eventually be nothing left to choose from. Furthermore, we can argue that Amazon diminishes our moral sense, seeing as many of the consequences associated with its growth seem undesirable for society. We must be cautious about the idea that technological innovation also means humane progress, “because science and technology proceed without a moral basis and they do not make the mind receptive to moral decency” (Postman 15).

Methodology

Our aim is to make our audience wary of the growing importance and presence of Amazon: a digital intervention of sorts. In our eyes, Amazon is slowly encroaching on more and more aspects of our everyday lives and, if we do not warn others, might surround us entirely. We chose a grassroots form of online activism in order to deliver our message to the public in the best way possible: a web interface that is easily accessible and takes the audience on a ‘guided tour’ that makes the significance of our claim visible. We chose this approach for many of the same reasons Amazon has used to become the global entity it is today: users spend more time online and can increasingly be targeted through the medium, using strategies that ensure the audience sees the severity of the situation. In this way “the online media is not only an instrument used in staging traditional activism, but also an environment changing the very character and possibilities of political activism” (Knudsen and Stage 149). We apply this by guiding the user through a multimedia selection of the negative aspects, which leaves little to no room for a negotiated meaning. Our goal is to create as small a gap as possible between what Stuart Hall calls the encoded meaning we attempt to convey and the decoded meaning received by our audience (306-310). Our hope is that users of our interface come out the other end conscious of the power of Amazon.

Role of New Media and our Solution

Through an activation-oriented website we want to make consumers aware of the downsides of the Amazon utopia. Through this story, people will gain insight into how Amazon really operates and what kind of impact it has on the economy, especially on smaller businesses, labor and other parts of society. When people open our website, they see an interface with a globe in the middle, surrounded by circles. It does not matter where visitors click, as they will always start with the first circle; the rest of the story follows with each subsequent click. Each of the circles represents a certain aspect of our argument. We decided on a storytelling approach, rather than letting users click freely through the material, to prevent confusion. We think there is a logical coherence between the arguments, and the interactive design contributes to the understanding of our vision on Amazon. The circles provide the user with information about the arguments and are often accompanied by different media such as images, articles and video clips.

 

Screenshot of our interface

 

The order of the circles will be as follows:

1 – Introduction of the project

In this circle there will be a brief introduction to what Amazon is, how it became such a huge company and how it negatively influences the economy. Besides the written information, there will be a video with an in-depth explanation of how Amazon became this big, and illustrations that explain how Amazon changes the economy.

Screenshot first circle

 

Illustrations on how Amazon changes the economy

 

 

2 – The consumer vs. Amazon

This circle describes the relationship between consumers and Amazon. The emphasis lies on how Amazon manipulates its customers into buying more products by using nudges and algorithmic identities. It will be explained how these concepts work, and concrete examples of nudges on Amazon’s website will be provided. Furthermore, more information about the influence of Amazon Prime will be shared, and for visitors who want more in-depth information, scientific articles will be provided.

Examples of nudges on the Amazon website

Online resources about Amazon Prime:

https://www.forbes.com/sites/louiscolumbus/2018/03/04/10-charts-that-will-change-your-perspective-of-amazon-primes-growth/#2aae4eab3fee

Suggestions for scientific articles:

Thaler and Sunstein – Introduction in the book Nudge

Cheney-Lippold – A New Algorithmic Identity

Postman – The Humanism of Media Ecology

Stanfill – The Interface as Discourse: The Production of Norms Through Web Design

 

3 & 4 – Work environment and technology at Amazon

In these circles we point out how Amazon makes optimal use of technology and labor and how it influences the labor market. To make it possible to deliver all packages as soon as possible, Amazon relies on technology rather than on human labor. Furthermore, the labor conditions of so-called ‘flex workers’ at Amazon will be described.

Article about working as a flex worker for Amazon: https://www.theatlantic.com/technology/archive/2018/06/amazon-flex-workers/563444/

Video technology taking over in distribution centers:

Videos Amazon Flex App:

 

5 – Amazon Web Services

AWS offers online storage space and services for cloud computing. In this circle we explain how Amazon (mis)uses the data derived from its Web Services in order to expand its business and reach a monopoly position.

Video about Amazon Web Services:

https://aws.amazon.com/products/

 

6 – Amazon as a utility company

In this circle we provide arguments why Amazon should not obtain a monopoly position and how Amazon exploits the existing physical infrastructure without contributing to it (Kovach and Pagano).

Video about Amazon avoiding taxes:

https://www.businessinsider.com/amazon-not-paying-taxes-trump-bezos-2018-4?international=true&r=US&IR=T

Suggestion for in-depth reading:

Khan – Amazon’s Antitrust Paradox

Interesting part of the Tegenlicht episode ‘Amazon zij met ons’ on this topic:

37:37 – 40:48

https://www.vpro.nl/programmas/tegenlicht/kijk/afleveringen/2018-2019/amazon-zij-met-ons.html

 

7 – Bezos

In this circle we shed light on Amazon’s founder and CEO, Jeff Bezos. With the help of video clips, we explain his questionable vision on running a business.

Interview with Jeff Bezos:

https://www.vpro.nl/programmas/tegenlicht/lees/bijlagen/2018-2019/amazon-zij-met-ons/interview-jeff-bezos.html

Interesting part of the Tegenlicht episode ‘Amazon zij met ons’ on this topic:

18:18 – 19:11

https://www.vpro.nl/programmas/tegenlicht/kijk/afleveringen/2018-2019/amazon-zij-met-ons.html

 

Analysis

With this project we have explored ways to make invisible infrastructures visible. It can be seen as a case study that contributes to an ongoing academic debate on how media users should deal with threats in this mediated and technologically saturated world.

Looking at our project, one of the biggest limitations is probably that many people are not interested in what is happening, because it is far removed from their own personal lives and does not affect them personally (yet). Consumers often care only about obtaining their products cheaply and having them delivered fast. Although people whose interest in the topic has already been sparked will probably visit the website, it is important that we also reach the people who are not yet aware. We know that reaching those particular people will not be an easy task. However, we believe that we can prevent the evolution towards this potentially harmful future, but only if we as a society are fully aware of what is going on and what is yet to come.

 

References

  • “Amazon zij met ons”. Tegenlicht. VPRO, NPO2. 30 September 2018.
  • Amazon Jobs. “Our DNA”. 16 October 2018. https://www.amazon.jobs/en/working/working-amazon
  • Amazon Web Services. “Cloud Products”. 16 October 2018. https://aws.amazon.com/products/
  • Bezos, Jeff. “Annual Report 2017”. Amazon. pp. 25-26.
  • Hall, Stuart. “Encoding/Decoding”. Media Studies: A Reader. Ed. Paul Marris and Sue Thornham. 2nd ed. New York: New York University Press, 2006. 306-310.
  • Khan, Lina. “Amazon’s Antitrust Paradox”. The Yale Law Journal 126.3 (2017): 710-805.
  • Knudsen, Britta Timm, and Carsten Stage. “Contagious Bodies: An Investigation of Affective and Discursive Strategies in Contemporary Online Activism”. Emotion, Space and Society 5.3 (2012): 148-155.
  • Kovach, Steve, and Alyssa Pagano. “How Amazon Gets Away With Not Paying Taxes”. Business Insider. 2018. 18 October 2018. https://www.businessinsider.com/amazon-not-paying-taxes-trump-bezos-2018-4?international=true&r=US&IR=T
  • Mattern, Shannon. “Infrastructural Tourism”. Places Journal (2013). 16 October 2018. https://placesjournal.org/article/infrastructural-tourism/
  • Pappas, Ilias, et al. “The Interplay of Online Shopping Motivations and Experiential Factors on Personalized E-commerce: A Complexity Theory Approach”. Telematics and Informatics 34.5 (2017): 730-742.
  • Parks, Lisa. “‘Stuff You Can Kick’: Toward a Theory of Media Infrastructures”. Between Humanities and the Digital. Ed. Patrik Svensson and David Theo Goldberg. Cambridge, MA: MIT Press, 2015. 355-373.
  • Postman, Neil. “The Humanism of Media Ecology”. Proceedings of the Media Ecology Association, Volume 1: 10-16.
  • Stanfill, Mel. “The Interface as Discourse: The Production of Norms Through Web Design”. New Media & Society 17.7 (2015): 1059-1074.
  • Thaler, Richard, and Cass Sunstein. “Introduction”. Nudge: Improving Decisions About Health, Wealth, and Happiness. New Haven: Yale University Press, 2008. 1-14.

A look at popular medical, health & fitness apps in the Google Play store (NL)

Figure 1: Google Play Store search results for “period” mobile applications

Introduction

This research project examines the gender dynamics at play in the ecosystems and visual interfaces of mobile applications. When searching for “women’s health” in the Google Play Store, one is overwhelmingly confronted with traditional and loaded symbols of femininity, as pictured above. Flowers, the colour pink, hearts and nature imagery recur, constructing an image of womanhood deeply embedded within traditional gender norms.

Applications are a ubiquitous technology within our technologically mediated lives. That they should reinforce and strengthen pre-existing gender norms at play within our culture should be problematised. The aim of the research is to investigate how new media objects are related to social practices and what forms our prejudices and stereotypes take. Specifically, this project asks: how are gender norms reconstructed through app store ecologies and visual interfaces? It focuses firstly on the algorithmic logic behind the Google Play Store and what gendered app ecologies it could create, and then on the visual interfaces of health and fitness as well as medical apps, categories where we could potentially see gender bias occur.

Context & Relevance

This topic lies within the debates surrounding algorithms’ influence on human agency, especially “black-box” algorithms. This term refers to algorithms “whose workings are mysterious; we can observe its inputs and outputs, but we cannot tell how one becomes the other”, and whose internal workings are therefore unknown (Pasquale 3). Debates around artefact ecologies that revolve around digital devices, applications and “when a new technology is infused into an artefact ecology” (Bødker) should, therefore, encompass the algorithms shaping and structuring the marketplace where those ecologies are being created.

App stores are considered to be an “online curated marketplace” (Jansen and Bloemendal) that structures individuals’ interactions with mobile applications. Mobile applications are a ubiquitous and popular form of technology that “construct and configure” human capacities and desires (Lupton). They are interactive artefacts that society uses daily, they extend into digital artefact ecologies, and they have the potential to influence their users. The aim of this research is to analyse gender dynamics and representation, as well as how new media mirror wider societal norms, stereotypes and prejudices. As new media objects, applications are born out of a culture with a certain set of associations and internalised hierarchies; looking at how these ideals are perpetuated through the objects we produce, and how they continue to institutionalise gender norms, is therefore especially relevant. Algorithms should be understood as architecturally structuring society’s interactions with new media, as recent scandals around algorithmic bias have shown, and should be looked at as human-developed tools whose imbued values can be revealed. With this, it is important to understand how gender bias and gender identity are constructed in society. Men have always held more economic, political and social power, and they have also had more influence on cultural and historical events (Beauvoir). This has led men to be the ones defining societal norms, which in turn leads to the construction of femininity by men and the other-ing of women (Beauvoir), making male the ‘neutral’ gender that defines what being a woman means. Women are thus only viewed in society in relation to men.

When looking at app stores as gatekeepers to these essential tools, app store ecologies should, in theory, be neutral (Petsas), and mobile app ecosystems should be easily readable and visible. In an attempt to uncover any potential gender bias, the study examines apps that could be categorized as gendered apps, specifically health-related apps. With women more likely to use mobile health apps than men (around 9 percent versus 4 percent), women’s health apps are a growing market with a clear target audience in mind (Derbyshire).

Methodology

For the purposes of the project, a mixed-method approach was used in order to provide multiple empirical perspectives. New media were essential to the research, as they automated the collection of relevant apps and data and allowed for collaborative contributions to the methods, observations and insights. The results of the following two methods were combined for the purposes of analysis and conclusions.

Method 1 – DMI Similar Apps Tool & Gephi

The top 60 apps from both the ‘Health & Fitness’ and ‘Medical’ categories of the Google Play App Store were extracted using the DMI Link Ripper tool. This resulted in URLs to 120 apps in the Google Play Store. The URL links to the apps were cleaned on Google Sheets to produce a list of 120 App IDs that were then entered into the DMI Google Play Similar Apps tool and processed. The results were downloaded as a .gexf file and uploaded to Gephi, a network visualisation program (Jacomy et al.). The resulting graph was edited and spatialized in terms of layout and appearance using the ForceAtlas2 algorithm as well as other aesthetic tweaks to optimise visualisation of the network. The network consisted of 2400 nodes that represented apps in the Google Play Store, connected by the ‘Similar Apps’ feature.

The nodes were sized by their ‘in-degree’, as this clearly displays which apps are most frequently found in the ‘Similar Apps’ feature of the app store. There were not many variables to work with to set the node colours, so they were coloured by the price of the app: green indicates a free app; cyan less than €1; dark blue €1 – €5; purple €5 – €10; and red more than €10. The final graph is shown without labels in Graph 1, and with labels in Graph 2.
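
The Gephi steps above were carried out interactively, but the same sizing and colouring logic can be approximated in code. The sketch below is only an illustration, assuming a hypothetical `similar_apps.gexf` export with a `price` attribute per node; it is not part of the DMI toolset itself.

```python
import networkx as nx

# Hypothetical export of the similar-apps network (assumed file and attribute names).
G = nx.read_gexf("similar_apps.gexf")
if not G.is_directed():
    # The 'Similar Apps' relation points from one app to another, so treat it as directed.
    G = G.to_directed()

# Node size proportional to in-degree: how often an app is listed as a 'Similar App'.
sizes = {node: 10 + 5 * G.in_degree(node) for node in G.nodes}

# Colour bins by price, mirroring the legend used in Graphs 1 and 2.
def price_colour(price_eur):
    if price_eur == 0:
        return "green"      # free
    if price_eur < 1:
        return "cyan"
    if price_eur <= 5:
        return "darkblue"
    if price_eur <= 10:
        return "purple"
    return "red"

# The 'price' node attribute is an assumption about the export format.
colours = {node: price_colour(float(G.nodes[node].get("price", 0)))
           for node in G.nodes}
```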

Method 2 – Coding Apps as Presented in the Google Play Store

For the second method, the apps were divided amongst the project’s collaborators and coded, with the results recorded in Google Sheets. They were coded based on the following categories, along with any additional comments that could be made in regard to the research question:

1. Logo Colour
2. Logo Shape
3. Keywords
4. Description
5. Language
6. Similar Apps

This resulted in a master spreadsheet documenting aspects of each app that could be interpreted and coded as ‘male,’ ‘female’ or ‘gender neutral’.
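
The coding itself was done by hand in Google Sheets, but the structure of the resulting master spreadsheet can be illustrated with a short sketch. The file name `coded_apps.csv` and the column names below are assumptions for the sake of the example, not the actual export.

```python
import pandas as pd

# Hypothetical CSV export of the master spreadsheet: one row per app,
# the six coding categories as columns, plus the resulting gender code.
df = pd.read_csv("coded_apps.csv")

expected_columns = ["app_id", "category", "logo_colour", "logo_shape",
                    "keywords", "description", "language", "similar_apps",
                    "gender_code"]  # 'male', 'female' or 'gender neutral'

# Tally how many apps in each Play Store category were coded per gender.
summary = df.groupby("category")["gender_code"].value_counts()
print(summary)
```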

Limitations

There were a number of limitations to this project. As the project took place in the Netherlands, the Similar Apps Tool could analyse only the Dutch app store, meaning that exploring popular apps in other countries was not an available option. This also created a language barrier in the study, as some of the applications were only available in Dutch, which limited our ability to analyse them.
The methodology used also had certain limitations. The DMI Similar Apps tool is only relevant to the Google Play Store, so it was not possible to include the Apple App Store in the research project, which might have offered new insights and valuable findings. When examining applications’ interfaces and coding them in the second method, multiple limitations were found. Firstly, the huge variety of application interfaces and intended uses made it difficult to propose consistent coding and conclusions. Secondly, the scope of the project limited the number of applications that could be analysed: it was limited to the top applications by popularity, which is subject to change on a daily basis. Examining a greater number of applications could widen the scope of the findings.
Crucially, it was also very challenging to determine what counted as ‘male’ or ‘female’ in the first place. For example, does a purple interface mean that an app is targeted at women? These apps were coded as ‘male’ or ‘female’ based entirely on the researchers’ preconditioned notions of what it means to be these genders; it is therefore arguable that classing these apps in this way is purely subjective and not entirely empirical.

Analysis & Results

Graph 1: Similar Apps Network

Graph 2: Similar Apps Network with labels

Graph 3: Main similar apps clusters

Female Cluster 1: Menstruation and Ovulation Tracker apps
Female Cluster 2: Birth Control Reminder apps
Female Cluster 3: Pregnancy apps

Graph 4: Female Cluster 1

Analysis:

Network analysis:

Graph 3 indicates the location of Female Cluster 1, the predominant cluster of female apps in the network. This cluster consists of menstruation and ovulation trackers that would typically be used by women. The cluster is fairly distinct from the rest of the network, with few ties to the center. This indicates that these apps are rarely classed as a ‘similar app’ outside their own app ecosystem. The secondary ‘female’ cluster, Female Cluster 2, again with its position in the network shown in Graph 3, was oriented around birth control reminder apps. It was positioned much closer to the center of the network due to its ties with more general medication reminder apps.

Graph 5: Female Cluster 2

What is significant is that despite there being at least these two distinct clusters of ‘female’ apps, there was no distinct ‘male’ cluster in the network. All of the apps that were determined to be ‘male’ by the second methodology were in the center of the network with essentially no links to other clusters. This might be expected given the app categories of this project, as women have more niche medical and health needs than men, especially when it comes to reproductive health.

However, it was found that a cluster of apps not directly related to gender, that of yoga and mindfulness apps, was gendered by the Similar Apps algorithm. This cluster was arranged around the “Daily Yoga App”, which is itself only connected to either ‘gender neutral’ or ‘female’ fitness apps. Daily Yoga also links to both CalorieenTeller and Abs Workout – Home Workout, Tabata, HIIT (Graph 4 – N.B. Abs Workout is small at the bottom). CalorieenTeller, as we shall soon see, plays a further role in gendering certain types of apps, and Abs Workout is a distinctly ‘female’ app (see Figure 2). No other distinctly ‘female’ apps bridge together a cluster in this way in the network. This makes yoga and mindfulness a gendered cluster even though the apps themselves are non-gendered. This was the most significant case found of the Google Play Store’s Similar Apps algorithm gendering apps.

Graph 6: Connections of Yoga & Mindfulness cluster

Application Interface Analysis

The analysis of the application interfaces brought insights regarding patterns of gendering. One of the most interesting clusters examined was the cluster of fitness apps found in the center of the network. Out of the 60 Health & Fitness applications, six were coded as ‘female’ and five were coded as ‘male’.

The majority of the ‘male’ applications, although offering products for men, were described as suitable for both men and women. ‘Female’ applications, however, repeatedly and explicitly stated that they were exclusively for women, as one application states: “each exercise is carefully considered by our trainer team and there will be differences between man and woman”.

Figure 2: Female fitness apps analysis

When analysing the language used by the developers of the apps, it was found that the language used in the ‘female’-coded applications was more emotionally charged and tended to rationalize the decision to download the app, often by referring to scientific information and focusing on health (Figure 2). In contrast, the language used in the ‘male’-coded apps emphasized strength and power. These apps also referred to the efficiency and ‘ease of use’ of the application instead of emphasizing health (Figure 3). It can be reasoned that this is a reflection of stereotypes associated with each gender and what is seen as important for them.

Figure 3: Male fitness apps analysis

While men are expected to have muscles and be strong, women are supposed to be thin and feminine. These ideals are thus reflected in the coding and language of these applications.

Figure 4: Leap Fitness Group applications for abs workout & network graph

By cross-referencing an app’s location in the network with interface analysis, some additional observations can be made. This was found in apps by the Leap Fitness Group and their connection to dieting apps. It could be observed that ‘female’ fitness apps are commonly connected to the Calorie Counter application (CalorieenTeller in Figure 4), whilst almost no ‘male’ fitness apps were connected to it. Such an observation emphasises a certain model of femininity associated with the stereotypical idea of women’s health and beauty norms (Englis et al.). It can be reasoned that the lack of a connection between ‘male’ apps and calorie counters suggests different social expectations of men. This might also be interpreted in the naming of certain applications: e.g. a ‘male’ app emphasising strength in “Six Pack in 30 Days” versus the ‘female’ version emphasising weight loss in “Lose Belly Fat in 30 Days” (Figure 4).

Applications that were coded as gender neutral during the analysis were in most cases step counters and running applications. However, even when an app’s functionality was marketed as gender neutral, e.g. Runtastic (Figure 5), the visual interface, in this case the logo and featured video, in some cases featured men.

Figure 5: Runtastic application in Google Play Store

Examining this from a gender theory point of view, it can be seen that the male-gendered applications were described as being for both genders because male is seen as the neutral gender, whereas women’s applications are coded explicitly as female, because the female is only seen in relation to the neutral male (Beauvoir). The same logic explains the lack of male-coded applications, as ‘male’ is coded as neutral while ‘female’ is coded as female, and it also accounts for the large number of female-gendered applications. It can therefore be seen that society’s understanding of ‘female’ and ‘male’ is reflected in the Google Play Store health, fitness, and medical categories.

Conclusion

This project looked for female clusters in the Medical and Health & Fitness categories, as these could yield the most results due to the specificity of women’s health needs. This gave rise to a number of female clusters, but no significant male clusters. Female clusters were coded mostly through their visual interfaces and function. However, another cluster was found that, though composed of apps that were visually coded as gender-neutral and whose function was unisex, was gendered through the Google Play Store’s “Similar Apps” recommendations.

Looking at the interface of fitness apps, the core cluster of these results, was especially relevant for seeing how visual cues and language are used to shape our interactions with these apps as gendered subjects. Our results have shown that it is hard to code apps as ‘male’, since ‘male’ is often coded as neutral while ‘female’ is coded as female. This could be seen in numerous gender-neutral-coded apps, such as Runtastic: while its logo and certain elements of its interface privileged the male body, the categories used to code apps through language, function and keywords found it to be, in effect, a gender-neutral app. Male is often seen as the neutral gender while female is seen as other, giving rise to several female clusters but no male clusters.

Going further, this research could interrogate other categories of app stores. Categories that are potentially more likely to present distinct male and female clusters could be areas of life that have traditionally been more gendered, such as the “Auto & Vehicles” and “Sports” categories. Categories such as “Family”, which represents children’s entertainment, would provide further insight into how children are socialised into gender norms that are later enforced through other technologies. These could represent a more divisive separation between gendered apps and their ecosystems.

Bibliography

Beauvoir, Simone de. The Second Sex. Knopf Doubleday Publishing Group, 1949. Open WorldCat, http://banq.lib.overdrive.com/ContentDetails.htm?id=00038A93-7B24-4653-94E6-9C4689DA09EA.

Bødker, Susanne, and Clemens Nylandsted Klokmose. ‘Dynamics in Artifact Ecologies’. Proceedings of the 7th Nordic Conference on Human-Computer Interaction: Making Sense Through Design, ACM, 2012.

Derbyshire, Emma, and Darren Dancey. ‘Smartphone Medical Applications for Women’s Health: What Is the Evidence-Base and Feedback?’. International Journal of Telemedicine and Applications, no. 9, 2013.

Englis, Basil G., et al. ‘Beauty Before the Eyes of Beholders: The Cultural Encoding of Beauty Types in Magazine Advertising and Music Television’. Journal of Advertising, vol. 23, no. 2, June 1994, pp. 49–64. Crossref, doi:10.1080/00913367.1994.10673441.

Hall, Miranda. ‘The Strange Sexism of Period Apps’. Motherboard, 2017, https://motherboard.vice.com/en_us/article/qvp5yd/the-strange-sexism-of-period-apps.

Jacomy, Mathieu, et al. ‘ForceAtlas2, a Continuous Graph Layout Algorithm for Handy Network Visualization Designed for the Gephi Software’. PLoS ONE, edited by Mark R. Muldoon, vol. 9, no. 6, June 2014, p. e98679. Crossref, doi:10.1371/journal.pone.0098679.

Jansen, Slinger, and Ewoud Bloemendal. ‘Defining App Stores: The Role of Curated Marketplaces in Software Ecosystems’. International Conference of Software Business, Springer, 2013.

Lupton, Deborah. ‘Apps as Artefacts: Towards a Critical Perspective on Mobile Health and Medical Apps’. Societies, vol. 4, no. 4, 2014, pp. 606-622.

Pasquale, Frank. The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press, 2015.

Petsas, Thanasis, et al. ‘Rise of the Planet of the Apps: A Systematic Study of the Mobile App Ecosystem’. Proceedings of the 2013 Conference on Internet Measurement Conference – IMC ’13, ACM Press, 2013, pp. 277–90. Crossref, doi:10.1145/2504730.2504749.

The social and interactive nature of web 2.0 (O’Reilly) and new media platforms provides a sense of freedom for users to produce or consume content according to their preferences. Platforms such as Facebook, Twitter and YouTube have, over the years, used this concept to help them achieve success and draw more users. In the midst of this competition to provide platforms for user interaction and user-generated content, Twitch.tv rose to popularity by providing access for ordinary internet users to stream their own content. Starting with games as its main interest, the platform has grown into a significant player in the web-based industry. From the start of 2011, Twitch’s popularity has kept rising (fig. 1). With such a quick rise to popularity and success, it is interesting to observe how Twitch shapes content on its platform. Does the platform operate consistently with the notion that streamers have freedom over their content? Or does the way it operates allow for more audience control?

Figure 1: Twitch’s popularity over the years (TwitchTracker 2018)

By researching interaction on the platform, this project is interested in answering the question of whether Twitch’s monetisation policy has changed the relationship between audience and streamer, and what this implies about who has more control over content. In attempting to answer this question, this paper observes Twitch’s monetisation policies and user-streamer interaction to understand how these mechanisms affect streamer-audience relations. The project also looks at audience numbers, using online tracking tools from Socialblade and SullyGnome, to identify their relation to the types of content that audiences are consuming. Findings from the two analyses are then discussed, in the context of content control, to identify the balance of control between audience and streamer over what content is being produced. Findings from this project can add to the discussion of users’ perceived freedom over their content on social media and other modern web-based platforms.

How Twitch Works

Twitch is a live streaming platform centered around video games that allows its users to watch content directly from one of their devices. It also allows them to create content and stream any video game they want. The popularity of the platform started to rise rapidly from the start of 2011, which led to Amazon buying Twitch for 970 million dollars in August 2014 (Kim). As of now, the number of viewers and viewing hours on Twitch continues to rise (fig. 1).

This growing interest makes Twitch a viable platform for content creators to be active on.

The result is a huge number of streamers playing a big and diverse selection of games. To help viewers navigate this wide selection, Twitch ranks content first by the most popular category and then by the popularity of the channel. This guides viewers to the more popular streams, which is essential for monetisation of the stream.

One way in which streamers can make money through Twitch is with subscriptions. A Twitch subscription requires users to pay a certain amount of money depending on the type of subscription. Although users can still watch content without subscribing, subscribing gives a user exclusive benefits like custom-made emotes. Another way to monetize streamer content is through a feature called “bits”. Bits are a cheering mechanism that allows users to interact with gameplay by cheering at a certain moment (fig. 2). Revenue from subscriptions and bits is shared between the content creator and Twitch, though the exact portion of the share is unclear.

Figure 2: Use of bits as cheering mechanism (Souppouris 2016)

In addition to the monetization policies that Twitch offers, streamers can also generate revenue in alternative ways. For instance, viewers can directly donate money to the stream. The donation usually shows up live in the stream, by text and/or voice. These donations are, most of the time, also tracked in the channel profile space. This profile space is a free space that streamers can fill in however they like, so it can also be used for sponsorships and merchandise. The revenue generated by these means goes entirely to the streamer.

Now that it is clear what Twitch is and how streamers can generate revenue, we can delve deeper into the relationship between the audience and streamers and how this affects the content that is being produced.

Role of Streamer-Audience Interactivity in Driving Content

Monetization has made online video creation a means to earn money, which makes it a viable career option (Johnson and Woodcock 1). This created a surge in YouTube celebrities and aspiring professionals (García-Rapp 1). And as with all jobs, professionalism is needed. But the future of streamers is still uncertain, because this career path is still new (14).

By being professional, a streamer has a higher chance of being successful. Professionalism shows in the technical quality of the broadcast, such as a high resolution and a good-quality microphone. These “specs” (technical specifications) are usually written down by the streamer on their channel profile (fig. 3). Some streamers are sponsored for using specific brands of equipment. The audience can use these specs to copy the streamer’s setup.

Figure 3: Streamer’s profile page often showcases their specs (summit1g 2018)

Aside from high-quality technical equipment, a streamer also needs to be understandable, entertaining and easy to follow (García-Rapp 15). To give a clear overview of what their Twitch channel is about, most streamers use a similar layout/template. This template consists of the profile and the presentation of the content, with extra information on the screen (fig. 4).

Figure 4: Streamer showcases extra information like donations in the stream (AnisaJohma  2018)

Part of the template is the aforementioned profile, which gives useful information: the previously mentioned specs, but also who the streamer is, when they will stream and what game they are playing. Aside from these technical descriptions, Twitch profiles also have room reserved for monetization. For example, there is a donate button and a list of the highest as well as the most recent donors. Popular streamers tend to mention sponsors or sponsored content.

During the livestream, the streamer often uses a facecam to make themselves visible whilst playing. This makes the whole stream interactive, because the audience is face to face with the streamer in real time. Moreover, most streamers have notifications that show who has donated the most or most recently. Notifications relating to money are mostly shown live during the stream. When a viewer donates or subscribes, that information is shared live by text and sometimes by the streamer themselves. By showing that they care about the donations, the streamer nudges viewers to donate more.

The streamer needs to combine this professional presentation with a more personal touch to be successful, because the audience prefers a strong bond with the streamer (García-Rapp 16). This interactivity is something that differentiates Twitch from traditional media. By sharing parts of their daily life, the streamer seems more real and can build a stronger personal bond with the audience. Interactivity is achieved using tools like the chat box and (direct) communication with the audience.

A chat box is a place where the audience can talk with each other and the streamer, and respond to the stream. Subscribed viewers can use custom emotes to show that they are more hard-core followers and that they know the norms of the community. More popular streamers also have moderators who monitor the behaviour of their audience. This shows that the chat box can create a community that is hierarchical and follows certain norms.

Aside from this, notifications and updates are also a way of interacting with the audience. The streamer communicates with their audience about the money they receive, and with a facecam the audience can actually gauge the streamer’s response.

However, to reach a strong personal bond, streamers need to be authentic. Without authenticity, they will lose their audience and thus their income (García-Rapp 4). Authenticity shows the audience that the social connection is real, and this authenticity and honesty are highly appreciated (4). An authentic streamer can establish a strong social bond with their community, which results in affective ties and feelings of trust between audience and streamer (2). It thus pays off for the streamer to show the more personal side of their life, so that their authenticity can be assessed.

Hence, the level of interactivity and the streamer’s authenticity matter for generating user interest as well as maintaining monetisation on their channel. By allowing the audience to engage in a more personal relationship, the streamer can monetise their content more easily and produce a community that supports the streamer regardless of how the content is presented.

Influence of Viewers Ratings over Content

As Gillespie argues, ‘platforms’ do not just allow code to be written or run, but afford an opportunity to communicate, interact or sell (351). Twitch, being a peer-to-peer platform operating in real time, gives audiences potential control over the content.

The sense of interactivity that Twitch’s design and monetisation policies provide raises one important question: who controls the content delivered on the platform? As Fogg states with regard to persuasive technologies, persuasion techniques are most effective when they are interactive, which is one of the main reasons behind Twitch’s success as a platform (6). In traditional media, it can be argued that because of TV’s and radio’s ad-oriented business models, viewers have a certain degree of control over what content sells, thanks to the ratings-and-share system (Herman and Chomsky 18).

On the contrary, Twitch’s monetisation policies seem to go against these traditional rating-based parameters. Channels are given the flexibility to determine how they want to earn money. Furthermore, streamers are seemingly more in control over what content they want to broadcast. Even though viewer ratings are important in determining success, streamers do not necessarily need to follow those numbers as a strict guideline. It is thus important not just to understand that relationships are activated online, but also how they are activated: by whom, for what purpose, and according to which mechanisms (Bucher 480). As Van Dijck points out, “what is important to understand about social network sites is how they activate relational impulses” (161).

That being said, the data suggest that content consistency matters (Track Twitch Analytics, Future Predictions, & Twitch Usage Graphs). In fact, the top ten streamers on Twitch all broadcast the same type of content, namely Fortnite (2017) and Black Ops 4 (2018). The number one streamer, Ninja, even mentioned the importance of consistency in a tweet back in July: he lost 40,000 subscribers by not streaming for 48 hours (Grayson). This suggests that Twitch viewers prefer consistency from their streamers. Audience interest is higher when he plays games that he is most known for, like Fortnite (2017), with his followers growing at a rate of 1,470.76 per hour compared to 1,069.13 per hour when he plays Black Ops 4 (2018) (Ninja – SullyGnome).

A similar trend can also be seen in channels such as Shroud, who is better known as a Black Ops 4 (2018) streamer. Shroud gains an average of 1,059.53 followers per hour when playing Black Ops 4 (2018) and 1,082.87 per hour when playing Fortnite (2017). Meanwhile, when he plays other games such as CSGO (2012), Rainbow Six Siege (2015) or Assassin’s Creed Odyssey (2018), his follower growth is less than 600 per hour (Shroud – SullyGnome). The trend is also apparent in a smaller channel such as TeosGame, where follower growth only goes past twenty per hour when he plays more mainstream games such as Fortnite (2017) and Rainbow Six Siege (2015) (TeosGame – SullyGnome).
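
These per-game figures come from SullyGnome’s channel pages; the sketch below only illustrates how such a ‘followers gained per hour streamed’ metric could be recomputed from a hypothetical per-channel CSV export (the file name `shroud_30days.csv` and its column names are assumptions, not SullyGnome’s actual format).

```python
import csv
from collections import defaultdict

followers_gained = defaultdict(float)
hours_streamed = defaultdict(float)

# Assumed columns: 'game', 'followers_gained', 'hours_streamed' over the last 30 days.
with open("shroud_30days.csv", newline="") as f:
    for row in csv.DictReader(f):
        followers_gained[row["game"]] += float(row["followers_gained"])
        hours_streamed[row["game"]] += float(row["hours_streamed"])

# Followers gained per hour streamed, per game (the metric cited above).
per_hour = {game: followers_gained[game] / hours_streamed[game]
            for game in followers_gained if hours_streamed[game] > 0}

for game, rate in sorted(per_hour.items(), key=lambda item: -item[1]):
    print(f"{game}: {rate:.2f} followers/hour")
```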

Thus, the data presented suggest that the audience does hold a certain degree of control over the presented content. Although viewer ratings do not operate as a mandatory guide in the way rating systems work in traditional media, following what audiences watch and what is popular can be a temptation for streamers in order to achieve monetary success. This implies that even though streamers such as Ninja and Shroud can stream any content that they want, audiences’ appetite for popular games such as Fortnite (2017) and Black Ops 4 (2018) is still important for these streamers to consider.

Control over Content

Thus far, we have discussed factors that are influential in determining who has more control over the streamer’s content. Looking at Twitch through Foucault’s conception of power as productive, we can see that by allowing the user certain interface affordances, the platform encourages certain behaviour (Stanfill 1060).

On the one hand, Twitch’s features and abilities allow interactivity in the stream and have given the streamer more control over communication with the audience. Features such as notifications, the chat box and the profile page help the streamer showcase their personality and establish their own community. This has made the content creator more visible and prominent. Aside from the creator, the audience also plays an important part in the stream. Donations are celebrated and monitored, and viewers can show their love for the stream and community in the chat by using the right emotes. This results in the audience becoming part of the broadcasted content.

Our visualization of Streamer-Audience relationship on Twitch

On the other hand, our research also concludes that the audience demonstrates its influence in determining what kind of content (and presentation) attracts viewers. This shows that there is a certain degree of pressure on streamers to follow the trend, because a bigger audience tends to mean more money. However, the streamer retains the freedom to ignore the trend.

Our analysis shows that the audience has a significant degree of control over the content, be it through interactivity or viewer ratings. The findings demonstrate that Twitch, as a new digital platform, provides a new type of interaction between audience and content providers. However, it is the streamer who profits financially from the monetization, and the moment a viewer stops giving money, they also lose a big part of their control.

Conclusion

In conclusion, this paper attempts to answer the question of whether Twitch’s monetisation policy has changed the relationship between audience and streamer, and what this implies about who has more control over content. In doing so, we have analysed how Twitch’s design and monetisation policies guide streamers to be interactive. We concluded that interactivity and authenticity play a huge part in the social bond between audience and streamer, and that this bond is needed to generate revenue. It is the celebration of, and need for, this relationship that differentiates Twitch from traditional media.

On the other hand, we have also looked at viewer ratings data and discussed their effect on influencing streamers’ decisions to stream certain popular content, and to do so in a professional way. On this point there seems to be no significant difference between Twitch and traditional media.

Ultimately, our analysis demonstrates that the audience controls what needs to be streamed and that the streamer needs to be personal and authentic. In response to this, the streamer conforms to the audience’s needs and consequently capitalizes on it.

We propose that further discussion of Twitch’s audience-streamer relationship should look into categories beyond gaming, as Twitch’s monetisation seemingly allows other categories, such as cooking and talk shows, to capitalise on and take advantage of the platform as well.

 

 

References

AnisaJohma. “BIGGEST DONATION ON TWITCH EVERRRRR”. Twitch. 2015. Accessed: 24 October 2018. <https://www.twitch.tv/videos/9509158>.
Assassin’s Creed Odyssey. Designed by Jordane Thiboust. Ubisoft, 2018.
Bucher, Taina. “The Friendship Assemblage: Investigating Programmed Sociality on Facebook.” Television & New Media 14.6 (2012): 479-493.
Call of Duty: Black Ops IIII. Activision, 2018.
CSGO. Valve Corporation, 2012.
Fogg, B.J. “Introduction: Persuasion in the Digital Age.” Persuasive Technology. San Francisco: Morgan Kaufmann Publishers, 2003. 1-13.
Fortnite. Designed by Darren Sugg. Epic Games, 2017.
Games played by Ninja in the past 30 days. 2018. SullyGnome. Accessed: 18 October 2018. <https://sullygnome.com/channel/ninja/30/games>
Games played by Shroud in the past 30 days. 2018. SullyGnome. Accessed: 18 October 2018. <https://sullygnome.com/channel/shroud/30/games>
Games played by TeosGame in the past 30 days. 2018. SullyGnome. Accessed: 18 October 2018. <https://sullygnome.com/channel/teosgame/30/games>
García-Rapp, F. “‘Come Join and Let’s BOND’: Authenticity and Legitimacy Building on YouTube’s Beauty Community.” Journal of Media Practice, 18.2 (2017): 1-20.
Gillespie, Tarleton. “The Politics of ‘Platforms’.” New Media & Society 12.3 (2010): 347-364.
Grayson, Nathan. “Ninja Takes Two-Day Break, Loses 40,000 Subscribers.” Kotaku. 13 June 2018. Accessed:18 October 2018. <https://kotaku.com/ninja-takes-two-day-break-loses-40-000-subscribers-1826813300>
Herman, Edward S., and Noam Chomsky. “Title Chapter.” Manufacturing Consent: The Political Economy of the Mass Media. New York: Random House, (2010): 14-18 .
Johnson, Mark R., and Jamie Woodcock. “‘It’s Like the Gold Rush’: The Lives and Careers of Professional Video Game Streamers on Twitch.tv.” Information, Communication & Society (2017): 1-16.
Kim, Eugene. “Amazon Buys Twitch For $970 Million In Cash.” Business Insider. 2014. Accessed 18 October 2018. <https://www.businessinsider.com/amazon-buys-twitch-2014-8?international=true&r=US&IR=T>
Rainbow Six: Siege. Designed by Daniel Drapeau. Ubisoft, 2015.
Souppouris, Aaron. “Twitch Introduces ‘Cheering’ Emotes for Tipping Streamers.” Engadget, 27 June 2016. Accessed 23 Oct. 2018.
<https://www.engadget.com/2016/06/27/twitch-cheering-beta-bits-currency-tips/>
Summit1g. “matchmaking, blackout, zombies, EVERYTHING [ @summit1g.” 2018. Twitch. Accessed: 18 October 2018. <https://www.twitch.tv/summit1g>
Stanfill, Mel. “The Interface as Discourse: The Production of Norms Through Web Design.” New Media & Society 17.7 (2015): 1059-1074.
Track Twitch Analytics, Future Predictions, & Twitch Usage Graphs – Social Blade. https://socialblade.com/twitch/. Accessed 23 Oct. 2018.
Twitch Statistics and Charts. 2018. TwitchTracker. Accessed: 20 October 2018. <https://twitchtracker.com/statistics>
Van Dijck, José. “Facebook as a Tool for Producing Sociality and Connectivity.” Television & New Media, vol. 13, no. 2, Mar. (2012): 160–76.

Imagine a world where your every move, online and offline, gets monitored and scored by big companies and the government. This idea becomes even more chilling when your online activities, behaviour, relationships and financial data ultimately determine whether you can travel by train, are accepted into certain schools, or can obtain a better interest rate on your loans.

This dystopian plot from one of Black Mirror’s most well-known episodes, Nosedive, is becoming reality in China. In 2014, the Chinese government introduced plans for its Social Credit System (SCS) which, however frightening the thought of having a number allotted to each of the almost 1.4 billion inhabitants of the PRC, will develop from its current voluntary basis into a mandatory scheme starting in 2020. According to the government, the system aims to enhance the “trustworthiness” of its citizens while creating a more “sincere” society (Creemers). Although this idea clashes with our Western mindset and values in every way, we are not far from a similar reality, as we already allow apps to track our location, health, online behaviour and purchases.

To let people monitor their own behaviour and get some insight into which data the government (already) possesses about its citizens, we introduce WIJ: an app through which a credit system similar to the SCS in China could be implemented in the Netherlands. For this app to work, the laws concerning data, as well as the norms and values dominating the liberal Netherlands, should be kept in mind. WIJ is our answer to the research question: “How could a system similar to China’s SCS be implemented in The Netherlands?”

Sci-Fi meets reality: China’s Social Credit System
Before creating a “Westernized” version of the controversial credit system, the original model should be analyzed. Through meticulous tracking, rating and ultimately ranking of the behavior of people and companies alike, regardless of preferences or personal will, the “Sesame Credit” grants individuals scores between 350 and 950 points, taking into account five factors: credit history, fulfilment capacity, personal characteristics, behavior and preference, and interpersonal relationships (Botsman). Behavior is not only investigated, according to Rachel Botsman, but shaped. Through the soon-to-be-implemented final version of the SCS, one’s online and offline presence merge into an “onlife”: “As our society increasingly becomes an infosphere, a mixture of physical and virtual experiences, we are acquiring an onlife personality – different from who we innately are in the ‘real world’ alone” (Botsman).
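
The exact formula and weights behind Sesame Credit are not public, so any reconstruction is necessarily speculative. Purely as an illustration of the mechanism described above, five factors combined into a single score on a 350–950 scale, a hedged sketch might look like this (the equal weights and the 0–1 factor scores are assumptions, not the real model):

```python
# Illustrative only: the real Sesame Credit weights and inputs are not public.
FACTORS = [
    "credit_history",
    "fulfilment_capacity",
    "personal_characteristics",
    "behavior_and_preference",
    "interpersonal_relationships",
]

def sesame_style_score(factor_scores, weights=None):
    """Map five factor scores in [0, 1] onto the published 350-950 range."""
    if weights is None:
        weights = {f: 1.0 / len(FACTORS) for f in FACTORS}  # assumed equal weights
    weighted = sum(weights[f] * factor_scores[f] for f in FACTORS)
    weighted = max(0.0, min(1.0, weighted))  # clamp, in case inputs fall outside [0, 1]
    return 350 + weighted * (950 - 350)

# A citizen scoring 0.7 on every factor would land at 770 under these assumptions.
print(sesame_style_score({f: 0.7 for f in FACTORS}))
```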

It can be considered that China’s population is nudged “away from purchases and behaviors the government does not like” (Botsman). However, when done consistently, built on a base of fear and a communist standardization of life, and constraining free will behind the bars of political extremism, the credit system undermines a set of universal ethical values of human rights and privacy that is essential for the development of a healthy, democratic and, most importantly, diverse society. The rewards are a very real component of the system, supporting the citizens who do not dare to deviate from China’s strived-for “trustworthiness”. More generous deadlines, loans and traveling terms, a heightened social status, or the chance at quality private education are the packaging in which constant surveillance is appealingly wrapped.

WeChat and our own Super-app
Tencent, a Chinese multinational investment conglomerate, founded WeChat in 2011 as a mobile messaging app. The platform has 902 million users, but its appeal lies in the fact that WeChat has developed the ability to add mini-apps through its in-app store. This, combined with several Chinese government-imposed laws and administrative regulations restricting access to Western websites such as google.com and Facebook, has created a situation where most internet transactions and practices done from a phone go through only a handful of government-approved applications. Since the government’s censoring of Western social media sites (e.g. Facebook in 2009 and WhatsApp in 2017), a vacuum appeared that was filled by government-backed companies such as Tencent, further aiding the oversight and centralization of social media platforms in China (“Tencent Launches a Social Credit System Similar to Alibaba’s | Business News”).

For a quick introduction to China’s current implementation of the SCS, take a look at the video below:

Exposing China’s Digital Dystopian Dictatorship | Foreign Correspondent

WeChat emerged as a messaging app and slowly integrated different features into what is now called ‘a super-app’. Today, the app allows one to send mobile payments, make video calls, play games, hail taxis, share locations, look for restaurants and leave reviews, order services, pay electronic bills and much more. As shown in the video above, governmental authorities not only monitor internet access but also what people do online; it is an accepted reality that officials censor and monitor users. What is more frightening is that, a few years ago, platforms such as WeChat and its competitor Alibaba received the green light from the government to test-drive social credit systems, all with the intention of gathering data for the construction of a national, integrated social credit system.

Demystifying China’s social credit system – Rogier Creemers – SMC050 July 2018

In the video above, Creemers illustrates that the problem of information dispersion among Chinese provinces can be solved by creating information-sharing mechanisms through the Sesame Credit, so that the law can be applied more efficiently on a nationwide level. One of the most pressing issues with the Social Credit System is that, while consistently using algorithms to monitor and rank its subjects, these means of control and data collection do not take context into account. Therefore, we aim to find creative ways to “embrace nuances, inconsistencies and contradictions” – elements inherent to human beings (Botsman) – and the ways in which they hold the power of mirroring real life.

With our own “super-app” we wish to demonstrate how a clear and strong interconnection between all aspects of human life – social, political, private, educational, etc. – can be possible, which inevitably alters the natural course of life. “We are entering an age where an individual’s actions will be judged by standards they can’t control and where that judgement can’t be erased” (Botsman). In order for such a social credit system to take form in the Netherlands, we must ensure a kind of transparency that allows citizens to trust the system and, as Botsman argues, the unknowns need to be reduced, alongside the opacity of the algorithms upon which our app is designed. At the same time, we need to limit the probability of hacking and cyber crimes taking place within the system.

WIJ
Taking all these theories and developments into account, we created a design for the app WIJ. When installing the app, the user has to accept the app’s privacy agreement, which follows EU and Dutch law concerning privacy and the gathering and distribution of data. The agreement specifies which data is gathered and how it is utilized. WIJ aims to communicate this complicated process in understandable language, in order to prevent users from simply clicking and accepting, a more than frequently occurring phenomenon (Yeung 125).

 

Fig. 1: Opening-screen of WIJ


When opening the app, the user is required to log in with their DigiD, which stands for “digital identity” and is a mandatory ‘tool’ used to sign in to governmental websites (e.g. the tax authority, benefits, study loans, etc.). Additionally, a large number of health insurance companies have made DigiD available as a login option for their online platforms. Because this ‘digital identity’ is linked to the government, certain personal data such as full name, address, loans and debts are automatically transferred into the app. When the login is successful, the user is directed to the main menu.

 

Fig. 2: Main Menu


When visiting the main menu, a few options are available to the user: profile, score, finance, governmental, social, shopping, entertainment, settings, and app store. The latter allows the user to download other apps into their WIJ account, such as their personal banking app, social media apps, investment apps and shopping apps. Below we display the design of the financial and governmental pages of the app.

Fig. 3: Financial Matters

Fig. 4: Governmental matters

Above, the mockups for two of the menu options of the app are displayed. As can be seen in Fig. 3, the app allows the installation of external applications within itself. We used the ING app (personal bank account) as an example, as well as the iDeal app (an app that handles online transactions).

The Importance of WIJ
Through this app, users can carry out various kinds of actions, such as payments that need to be made (e.g. taxes, loans, etc.), and keep track of their own score as well as their friends’. WIJ focuses on rewards in the form of discounts, free products, services or trips rather than relying on punishments, which makes it different from the SCS China is implementing. By nudging users through incentives, WIJ aims to increase good behaviour amongst Dutch citizens without using penalties as a threat. We believe that focusing on the positive consequences instead of the negative ones would make a Western society more accepting of such a system that keeps tabs on behaviour.

The past few years have shown that we look differently at the gathering and distribution of data: data has transformed from merely a convenient insight into people’s or customers’ behaviour and preferences into a true commodity for which companies are willing to pay large amounts of money. Rob Kitchin calls this a “data revolution” and connects it to the increase in domestic, work-related and public use of mobile devices and technology (Kitchin 15). When living in an era where your own data is worth so much, it is important to be aware of which data is gathered and what is being done with it. Laws such as the GDPR, passed in 2016, only standardize and set rules concerning the transparency of the processing of personal data by commercial organizations and governments; they do not stipulate everything that can or cannot happen with your data. Another, more worrying fact is that when pop-ups such as GDPR compliance or cookie collection notices appear, most people click yes by default in their impatient need to watch that cat video or read that sensational celebrity gossip. Most people do not know what they are agreeing to or how their data is further used.

Just like WeChat, if our app becomes all-encompassing and ubiquitous, there is no real possibility for you to disagree or not comply, for that might affect your communication or data exchange with others. As McLuhan famously said, “the medium is the message” (13): it is not that our phones are our lives, but that we experience life through our phones, and that is the core of why people would adhere to apps like these. Removing yourself from the app means that you are no longer able to take part in that life.

“There is power in standardization”

By building such a powerful technological network at a national level, we are creating an infrastructure meant to bridge the knowledge gap between national authorities and Dutch society. Considering the demographics and size of the Netherlands, installing the system, and WIJ in particular, as a governmental practice of surveillance and control is more feasible and could render much more accurate results through a concentrated focus, more advanced technology and more accurate algorithms. The app will therefore be an infrastructure that “makes visible the invisible” (Mattern 16), bringing awareness to citizens regarding the ubiquitous governmental power now exerted at every single level of one’s life.

“There is power in standardization,” Morozov (14) rightfully admits. The recently developed apps, games and technologies give a strong nod to the concept of “augmented reality” – “infusing our everyday environment with smart technologies” (20). Through the implementation of both the Social Credit System in China and WIJ in the Netherlands, life is gamified, creating a dependency between incentives and moral behaviour. Ruth Grant argues that “once they are removed, [the incentives’] effectiveness ends. Incentives treat symptoms, not causes; they are a superficial fix” (Grant in Morozov 208). The systems can be regarded as the solutionist approach of a government which aims to take full control of how its society is built and acts – gamification becomes the standardized means by which trust and general civic issues are solved, stripping, however, the idea of citizenship of much meaning, as Morozov admits (206). Citizens, through constant nudging, exerted power and data collection, are regarded or rather disregarded as consumers and players “who expect everything to be fun and based on reward schemes” (Morozov 205). The app can be fun for some and an unpleasant burden to keep track of for others, but once it is implemented and becomes part and parcel of one’s life, “there’s no going back”. “People’s expectations have been reset. This will be the new normal” (Morozov 205).

We take on the role of “choice architects”, meant to steer Dutch citizens’ behaviour towards a more uniform demeanour which can be easily categorized and rewarded or punished accordingly. However, there is an intriguing point in introducing such measures within the workings of a society. Citizens will engage in the “desired behavior” not because their actions will, at a certain point in time, mirror an immense civic spirit, but because collecting points, unlocking different advantages or boosting their social position will be “more fun” (Morozov 206). As Rachel Botsman brilliantly stated,

“The new system reflects a cunning paradigm shift. As we’ve noted, instead of trying to enforce stability or conformity with a big stick and a good dose of top-down fear, the government is attempting to make obedience feel like gaming. It is a method of social control dressed up in some points-reward system. It’s gamified obedience.”

Discipline as Power
There is a reason why we find these kinds of systems controversial and even dystopian. We have more than enough examples of state agencies or institutions misusing citizens’ data. As mentioned previously, laws are more than ambiguous when it comes to what can be done with large amounts of harvested data, leaving much of it to vague yet menacing companies to decide how to profit from it. We are past any naive ideas of the state or giant corporations not misusing our data, particularly when such extensive amounts of it are involved.

Foucault argues in “The Meshes of Power” that state power has changed from a disciplinary form to “individualization” (160). In the past, states (or monarchies) used to exert control by disciplinary means, with often violent outcomes. However, this kind of discipline is not all-encompassing and is often difficult to enact because of the necessary physical presence of the violent tools of the state. Our new form of control exertion is compatible with capitalism and is, most of all, cost-effective. Individualization makes subjects incorporate state-sanctioned norms and values into their behaviours. This way, people control themselves, a method which stands much more in line with governmentality and neoliberalism. In today’s world, power does not only lie centralized in the state, but spreads out into institutions, with data mining as an effective tool of control exertion for those institutions.

Conclusion
Our first instinct is to think that such forms of control are incompatible with our liberal form of Western democracy, but Foucault was not referring to socialist China when he spoke of bio-politics. The PRC wants to control its population by implementing a system that will change people’s behaviour and will ultimately mean that people control themselves by incorporating the preferred norms and values into themselves and their lives, effectively enacting this discipline without the constant oversight of the state or its institutions. It may seem Orwellian to think that such a system could be implemented in the Netherlands, but the state already exerts this control over our lives, be it in the form of taxes, finances, housing or movement. With the infrastructures for such a system already in place, our app WIJ is a realistic step in the direction of a Western credit system.

Due to limited resources and time, we were not able to take every aspect of the SCS into consideration. For further research, we recommend that the influence of the system be measured once it has been implemented and used for a longer period of time. Only then can one get a complete picture of what the implications are for society, government and individuals.


Bibliography

ABC News (Australia). Exposing China’s Digital Dystopian Dictatorship | Foreign Correspondent. YouTube, https://www.youtube.com/watch?v=eViswN602_k. Accessed 1 Oct. 2018.

Ma, Alexandra. “China has started ranking citizens with a creepy ‘social credit’ system — here’s what you can do wrong, and the embarrassing, demeaning ways they can punish you.” Business Insider, 8 Apr. 2018, http://www.businessinsider.com/china-social-credit-system-punishments-and-rewards-explained-2018-4. Accessed 12 Oct. 2018.

Botsman, Rachel. “Big Data Meets Big Brother as China Moves to Rate Its Citizens.” Wired UK, Oct. 2017, https://www.wired.co.uk/article/chinese-government-social-credit-score-privacy-invasion. Accessed 10 Oct. 2018.

Foucault, Michel. “The Meshes of Power.” Space, Knowledge and Power: Foucault and Geography, 2007, pp. 153–162.

Kitchin, Rob. The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences. Sage, 2014.

Mattern, Shannon. “Infrastructural Tourism.” Places Journal, July 2013. placesjournal.org, doi:10.22269/130701.

McLuhan, Marshall. Understanding Media: The Extensions of Man. MIT Press, 1994.

Morozov, Evgeny. To Save Everything, Click Here: The Folly of Technological Solutionism. Public Affairs, 2013.

Niewenhuis, Lucas. “Tencent Launches a Social Credit System Similar to Alibaba’s | Business News.” SupChina, 31 Jan. 2018, https://supchina.com/2018/01/31/tencent-launches-social-credit-system-similar-alibabas/. Accessed 12 Oct. 2018.

SMC050. Demystifying China’s Social Credit System – Rogier Creemers – SMC050 July 2018. YouTube, https://www.youtube.com/watch?v=GsIdUGWsXn8. Accessed 10 Oct. 2018.

Yeung, Karen. “‘Hypernudge’: Big Data as a mode of regulation by design.” Information, Communication & Society, May 2016, http://dx.doi.org/10.1080/1369118X.2016.1186713.

“The endless, continuously updated streams of information online are selected, processed and made available through recommendations calculated by complex algorithms. Am I explores how these algorithmic recommendations shape our everyday existence – and both reflect and shape our sense of self. Am I explores how Google gets to know us – and in turn determines what there is to know about ourselves, and how to know it.”

– Introduction text of the ‘Am I’ exhibition

Figure 1. Video clip of Am I exhibition (CLICK IT!)

The importance of algorithms in our datafied world has become a widely discussed topic. More and more, it is realised what a crucial role they play in selecting what we get to see online. Algorithms are useful, of course, indispensable even, but they are also a key feature of the information architecture of new media. Since ‘there is no such thing as “neutral” design’ (Thaler and Sunstein, 2008, p. 3), interrogating how algorithms work, the underlying values of their programming and the consequences these choices have in shaping our culture is of growing relevance and urgency. In the above introduction text, we included an observation from Tarleton Gillespie, who stresses exactly this point. He observes that algorithms ‘not only help us find information, they provide a means to know what there is to know and how to know it’ (Gillespie, 2014, p. 167). Gillespie offers us a starting point for understanding the cultural and political consequences of what he calls public relevance algorithms, by taking apart their specific knowledge logic. He observes several dimensions of this knowledge logic, such as the way algorithms try to anticipate users’ behavior and the promise of algorithmic objectivity. A dimension relevant in the context of our project is ‘the production of calculated publics’. This entails

‘… how the algorithmic presentation of publics back to themselves shape a public’s sense of itself, and who is best positioned to benefit from that knowledge’

– (Gillespie, 2014, p. 168).

This specific dimension of algorithmic knowledge production touches upon what John Cheney-Lippold has described as a process of algorithmic identification. His analysis first focuses on the ‘how’ of such a process, in which categorisation according to our user data plays a crucial role. Certain patterns of online behavior lead, for example, to the assignment of ‘52% female’. He emphasises that this identification process is continuously adapted on the basis of new data inputs. The relevance of this process, which happens behind the ‘computational curtains’, is that these identities inform the recommendations we get as users, such as ads. This way, our algorithmic identities present us with their interpretation of who we are – in a selection of what we get to see. This mechanism can be seen as a technology of power, a form of control:

‘Interpreting control’s mark on subjects as a guiding mechanism that opens and closes particular conditions of possibility that users can encounter. The feedback mechanism required in this guiding mechanism is the process of suggestion.’

– (Cheney-Lippold, 2011, p. 175)

By shaping the terms of our information, these suggestions shape the terms of our subjectivity; telling us who we are, what we want and who we should be. At the same time, the suggestions are based on our online behavior, on what we do, and are embedded within a broader cultural logic and knowledge system. This means that suggestions are both shaped by us and shape us; a beautiful example of the productive working of power as described by Foucault.

It is this process that we set out to research while developing Am I. And if one wants to interrogate the influence of algorithms on our subjectivity, what better place to start than one of the most used – and as much heralded as contested – algorithms: the Google search algorithm. There is much research into how the first search results are what most people click on, and how adapting these results can have huge consequences for how people think about certain issues and can reinforce bias (see for example Epstein and Robertson, 2015; Noble, 2018). A specific feature of the Google search algorithm is the autocomplete recommendation that appears while a user is typing in a query. By not even letting us finish our thought, it already suggests what we might be looking for. Google explains that this feature is a ‘huge time saver’ which ‘reduces typing by about 25 percent’ on average:

‘Cumulatively, we estimate it saves over 200 years of typing time per day. Yes, per day!’ – from: “How Google Autocomplete Works in Search.”

This means that the feature is indeed used a lot, and that its role in selecting and then suggesting what information we might be looking for has great significance. Google itself stresses that it does not intend to steer us in any direction with the autocomplete feature, explicitly noting that the results should be read as ‘predictions’:

‘You’ll notice we call these autocomplete “predictions” rather than “suggestions,” and there’s a good reason for that. Autocomplete is designed to help people complete a search they were intending to do, not to suggest new types of searches to be performed. These are our best predictions of the query you were likely to continue entering.’ – from: “How Google Autocomplete Works in Search.”

Though Google seems to distance itself from the effect of suggesting what we could search for, this play on words cannot, of course, limit its actual productive power. Autocomplete, whether intended or not, works as a form of control as described by Cheney-Lippold. What is furthermore interesting about the autocomplete feature is that it feeds back the information it collects from searches quite directly to us. There is definitely a process of selection and curation that we cannot see or evaluate – but it is still an interesting moment where the algorithm exposes its inner workings by showing us what, in its logic, qualifies as a ‘prediction’ of our behaviour.

For these specific characteristics – Google Search’s important place in the hierarchy of algorithms, autocomplete as a much-used feature, and the opening up of the feedback mechanism – it forms an excellent artifact for our research. Our central research question is:

‘How can we explore the shaping of subjectivity by the process of algorithmic identities through search recommendations from Google?’

Secondary questions we investigated were: How are we defined by what we search for – or, how do the search recommendations ‘conduct conduct’? What insights do the recommendations give us into the shaping of our subjectivity? What do the recommendations say about us and what do they say about the algorithm? How are the recommendations personalised, and how do they differ across contexts and languages?

The operationalisation

In order to focus our project on the shaping of algorithmic identities through Google Search, we look specifically at what people search for about themselves. To start simply: how does autocomplete ‘predict’ queries starting with ‘Am I…’? With this as a starting point, we created an Alphabet of Suggestions as the central element of our inquiry.

Because Google search results are contextual – within every country there are different search trends – autocomplete gives us the opportunity to see how the algorithmic selection of information is culturally embedded, by exploring which suggestions come up in different languages. Several friends were approached to type in the alphabet in their own country and their respective language; a rough sketch of how such a collection step could be automated follows below the figure.

Figure 2. Searching in four different languages
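For readers who want to experiment with the alphabet themselves, the sketch below shows one way the collection step could be automated in Python. We gathered our data by hand through friends’ browsers; this script instead queries Google’s unofficial suggest endpoint, so the URL, parameters and response format are assumptions based on commonly observed behaviour rather than a documented API, and the results lack the personalisation that logged-in, local searches carry.

# Rough sketch: fetch 'Am I <letter>' autocomplete predictions per language.
# The suggest endpoint is unofficial; it may change or be rate-limited.
import json
import string
import urllib.parse
import urllib.request

SUGGEST_URL = "https://suggestqueries.google.com/complete/search"

def autocomplete(prefix: str, language: str = "en") -> list:
    # Ask the suggest endpoint for predictions on a query prefix.
    params = urllib.parse.urlencode({"client": "firefox", "hl": language, "q": prefix})
    with urllib.request.urlopen(f"{SUGGEST_URL}?{params}") as response:
        charset = response.headers.get_content_charset() or "utf-8"
        # Response is JSON shaped like [query, [prediction, prediction, ...]]
        payload = json.loads(response.read().decode(charset, errors="replace"))
    return payload[1]

def alphabet_of_suggestions(language: str = "en") -> dict:
    # Build the 'Am I <letter>...' alphabet for one language setting.
    return {letter: autocomplete(f"am i {letter}", language)
            for letter in string.ascii_lowercase}

if __name__ == "__main__":
    for letter, predictions in alphabet_of_suggestions("en").items():
        print(letter, predictions)

Running it with different hl values (for example ‘nl’, ‘es’ or ‘zh-CN’) gives a quick, anonymised impression of how strongly the predictions differ per language setting.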

We wanted to present this Alphabet in a way that would magnify the controlling force of the suggestions, to take these recommendations from the private, individual experience into a public experience, and to offer a space for confrontation, contemplation and comparison. Exploring the Alphabet of Suggestions was fascinating. On the one hand it almost felt intrusive; many questions seemed very personal and private. On the other hand it seemed to offer a glimpse of more collective, cultural worries. This is how, parallel to the conceptual axis of the power of algorithmic recommendations in shaping our sense of self, a second axis arose: the exposure of the personal. Searches that we made in the privacy of our own home, showing our personal doubts and vulnerabilities, are now out in the open. We turn to Google to ask things that we would never dare to ask anybody; but Google sees it all and does not handle this information discreetly; it shows it to everybody through its recommendations. To combine these different themes (how the recommendations shape our subjectivity and culture while at the same time being embedded in that culture, the role of Google Search in intimate questions, and the exposure of private information) we designed a multi-media art installation with three different elements.

Figure 3. Official webpage of Am I exhibition (Link: https://joyshijing.wixsite.com/amiexhibition)

The main, most central element of the installation is six huge pillars with the Alphabet of Suggestions projected onto their sides. Each side gives space to the suggestions belonging to one letter. People can go inside the pillars to further explore the alphabet, and because the sides are semi-translucent, visitors turn into dark silhouettes behind the autocomplete sentences. This creates an impression of both intimacy and exposure, of a person behind the questions, but a person without a face, simultaneously anonymous and utterly visible. The second, more interactive element resides inside one of the pillars. There, visitors can find a personal search booth with a Google search interface and no possibility to go anywhere else: all other information they might want to access needs to be accessed through Google Search. There is a choice, however: one can search using either one’s own personal login or a pre-programmed profile. This offers the possibility to compare one’s own results with the alphabet, which is also displayed inside the search booth. While a visitor is typing queries on the inside, these are projected on the outside of the pillar, exposed to the public. Everyone outside can see what the person is searching for and which autocomplete suggestions she sees, just like the algorithm always ‘sees’ what we are doing and feeds it back into public suggestions.

Figure 4. Overview of Am I exhibition

The final element revolves around the cultural context of searches and autocomplete. It is a video room where the prerecorded searches in different languages are displayed simultaneously. Under every screen there is an English translation available to compare the different results.

The installation gives time and space to reflect on key characteristics of a specific set of search results. Below, we explore some of the common characteristics of the autocomplete suggestions and what they tell us about a broader context, as well as how they might shape our subjectivity.

After the installation: reflections on autocomplete, algorithmic subjectivity and culture

If you type the query ‘Am I a…’ into Google Search, you get a huge list of results. Strangely bragging about its performance, Google even clarifies that it found 485.000.000 results in 0,61 seconds. But only ten of these thousands of results are selected as autocomplete suggestions. In these autocomplete suggestions, some relevant results are highlighted while others are excluded. It is algorithms that offer predictions about what you should or would be more interested in knowing. While algorithms are often described as benign, neutral, or objective, they are anything but (Noble, 2018, p. 1). The search results can be influenced by geographic location, sponsors, national regulations and norms, and even linguistic modes of expression, both reflecting and reinforcing social logics (Stanfill, 2015). Therefore, on the one hand these personalized results are tightly bound to cultural constraints. On the other, however, these autocomplete suggestions apply their own mathematical formulations to store, process and transmit these cultural issues, giving them new meaning by selecting the more ‘relevant’ ones. The Google search engine, in this sense, is capable of prioritizing search results on the basis of a variety of topics that seem “objective” and “popular” (Noble, 2018, p. 24).

When entering our art exhibition, you are immediately confronted with this constructive power to “enable and assign meaningfulness, managing how information is perceived by users, the distribution of the sensible” (Langlois, 2012). To elaborate more specifically, we extract some examples from our alphabet. All examples were collected in an English language setting and in private browsing mode.

Am I: Beauty Matters

Throughout the whole alphabet, a large number of questions about appearance are asked via Google search. Rather than other personal qualities, it is the importance of appearance that is highlighted by Google’s autosuggestions. By constantly being shown these questions, an awareness – if not anxiety – about one’s appearance is created. It is worth mentioning that the suggested beauty standard is not inherent or a quality one is born with. Rather, every individual can make an effort to change their appearance throughout their lifespan. When one clicks one of the autocomplete suggestions, he or she receives countless results offering articles, videos, and pictures about how to use specific makeup, clothing styles, or services to improve one’s personal aesthetic image. By purchasing certain kinds of commodities, one is promised to become more attractive. In this way, the beauty standard is translated into a judgment of personal taste, which corresponds to certain commodities.

Am I attractive
Am I beautiful
Am I cool
Am I fat or skinny
Am I handsome

Am I: Know Yourself through Quizzes

Besides the articles, videos, and pictures one might encounter after clicking on a search recommendation, there is another form of providing specific knowledge that shows up directly in the autocomplete suggestions: the quiz or test. As the following list shows, the questions linked to a quiz can be very soft – for instance, ‘am I a good father’. There can be no definitive answer to such questions, yet the quizzes and tests often use a quantitative method, such as a questionnaire, to measure them anyway. Whether one believes the results or not, this gamified format opens up what can be quantitatively tested, which means that instead of a different context for each person, there is a standardized scale deemed suitable for everyone.

Am I a good father quiz
Am I a psychopath test
Am I addicted to cigarettes quiz
Am I beautiful quiz
Am I depressed quiz
Am I evil quiz
Am I fat quiz
Am I gay quiz
Am I handsome picture test
Am I in love quiz

 

Conclusion

According to Halavais, a search engine is a window into our own desires, which can have an impact on the values of society (Noble, 2018, p. 25). However, the purpose of the Am I exhibition is to reveal that our desires and subjectivities are gradually shaped in the process of autosuggestion by search engines. In other words, search engines like Google may to some extent define who we are and what content we get to see. Although Google stresses that its autosuggestion function is only a ‘huge time saver’ which ‘reduces typing by about 25 percent’ on average, it cannot be denied that the values of specific kinds of people – namely, the most powerful institutions in society and those who control them (Noble, 2018, p. 29) – are already embedded in our way of thinking, shaping our subjectivities whether we are conscious of it or not. As Lacan described it, the process of completing one’s sense of ‘self’ takes place through the reflection of a mirror, thus establishing the relationship between the Innenwelt and the Umwelt (Lacan, 1949, p. 97). We believe that today the search engine’s recommendations are increasingly shaping our subjectivities and have become a mirror reflecting people’s ‘algorithmic identities’ rather than a ‘window’ into our own desires.

Bibliography
Cheney-Lippold, John. “A New Algorithmic Identity: Soft Biopolitics and the Modulation of Control.” Theory, Culture & Society, vol. 28, no. 6, Nov. 2011, pp. 164–81.

Epstein, Robert, and Ronald E. Robertson. “The Search Engine Manipulation Effect (SEME) and Its Possible Impact on the Outcomes of Elections.” Proceedings of the National Academy of Sciences, vol. 112, no. 33, Aug. 2015, pp. E4512–21.

Gillespie, Tarleton. “The Relevance of Algorithms.” Media Technologies, edited by Tarleton Gillespie et al., The MIT Press, 2014, pp. 167–94.

“How Google Autocomplete Works in Search.” Google, 20 Apr. 2018, https://www.blog.google/products/search/how-google-autocomplete-works-search/. Accessed 20 Oct. 2018.

Lovink, Geert. The Society of the Query and the Googlization of Our Lives. p. 7.

Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, 2018.

Lacan, Jacques. “The Mirror Stage as Formative of the Function of the I as Revealed in Psychoanalytic Experience.” Écrits: A Selection, translated by Alan Sheridan, Routledge, 2001, pp. 1–7.

Langlois, Ganaele. “Participatory Culture and the New Governance of Communication: The Paradox of Participatory Media.” Television & New Media, vol. 14, no. 2, Mar. 2013, pp. 91–105. https://doi.org/10.1177/1527476411433519.

Society of the Query | Reflect and Act! Introduction to the Society of the Query Reader. http://networkcultures.org/query/2014/04/23/reflect-and-act-introduction-to-the-society-of-the-query-reader/. Accessed 1 Oct. 2018.