Can Smart Robots Manipulate Narratives On Twitter?

October 4, 2021
[Image: Twitter to label 'good' bot accounts - BBC News]

Misinformation on social media platforms has grown rapidly in recent years, and fabricated news is a significant example that has featured in many recent political campaigns across the globe. Misinformation serves to exploit vulnerable groups in the pursuit of maintaining power over social or political narratives. During the 2016 US election, for example, many fake news articles spread by smart robots were reshared and went viral (Shao et al., 2018).

I will focus primarily on how smart robots (or bots) spread misinformation on Twitter in relation to current affairs such as political campaigns and social movements. Twitter has been a dominant player in mediating political content, and it is also arguably the most bot-friendly platform.

I believe there is a large amount of evidence that political campaigns, such as Trump's 2016 election victory, have proliferated the use of misinformation through social bots on Twitter. These bots exacerbate political or social narratives by mimicking real people expressing their discontent, often evoking countercultures that oppose mainstream narratives. Observe a trending Twitter post about politics, for example, and you may find short rhetorical statements from accounts that have no followers or content at all.
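To make that pattern concrete, here is a minimal Python sketch of the kind of crude check a reader might apply; the account fields, numbers, and threshold values are hypothetical illustrations of my own, not a documented detection method.

```python
# A crude, hypothetical heuristic for the accounts described above:
# flag replies that come from accounts with no followers and no history.
# The fields and thresholds are illustrative assumptions only.
replies = [
    {"user": "concerned_voter_1", "followers": 0, "tweets": 1,
     "text": "Wake up, the election is rigged!"},
    {"user": "jane_doe", "followers": 250, "tweets": 4100,
     "text": "Interesting debate tonight."},
]

def looks_bot_like(account, min_followers=1, min_tweets=5):
    """Flag accounts with essentially no followers or posting history."""
    return account["followers"] < min_followers or account["tweets"] < min_tweets

for reply in replies:
    print(reply["user"], "->", "suspect" if looks_bot_like(reply) else "ok")
```

Real bot detection is far subtler than this, of course, but the point stands: the accounts pushing these rhetorical one-liners often have profiles this thin.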

Firstly, Shao et al. conducted a study on Twitter which found that social bots drew on low-credibility content sources, as identified by third-party fact-checking organisations, and pursued a strategy of publishing these articles around 100 times a week, far more frequently than fact-checked ones. The authors note that "fact-checking millions of individual articles is unfeasible" and conclude that the bots strategically targeted influential users to retweet fake news articles, prompting those users' followers to retweet them in turn and making the content go viral (Shao et al., 2018).
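As a rough illustration of the amplification strategy Shao et al. describe, the toy simulation below assumes a hypothetical follower graph and an arbitrary reshare probability; it sketches why targeting influential accounts pays off for a bot, and is not the authors' actual model.

```python
# Toy sketch: a bot's article is retweeted by a seed account, and each of
# that account's followers independently reshares it with fixed probability.
# Follower counts and probabilities are made up for illustration.
import random

random.seed(42)

followers = {
    "influencer": [f"user{i}" for i in range(1000)],  # high-follower target
    "regular": [f"user{i}" for i in range(20)],       # ordinary account
}

def cascade(seed_account, reshare_prob=0.05):
    """Return (followers exposed, follower reshares) for one seed retweet."""
    audience = followers[seed_account]
    reshares = sum(random.random() < reshare_prob for _ in audience)
    return len(audience), reshares

for target in ("influencer", "regular"):
    exposed, reshares = cascade(target)
    print(f"targeting {target}: {exposed} exposed, {reshares} reshares")
```

Even with an identical per-follower reshare probability, seeding through the high-follower account produces far greater exposure, which is precisely the asymmetry that makes influential users worth targeting.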

This study provides an insight into how social bots manipulate perceptions: by increasing the frequency of posts, they desensitise users and draw them towards these 'trending' posts. I would agree that low-credibility news sources are almost indistinguishable from fact-checked ones when taken at face value. Retweets from influential users are also a concern, so shouldn't platforms supervise what is and is not fact-checked?

Although many users will accept such content without question, I raise this point as a preliminary caveat because the study does not take into account the backgrounds of the individuals swept up by these narratives, such as their experiences or cultural contexts. If increasing the frequency of misinformation alone could seduce users into intensifying these political or social narratives, then the majority of the population would hold these views, when in fact views on social media are very diverse.

Furthermore, Rosemary Clark-Parsons's article on margins-as-methods (Clark-Parsons, 2020) describes how Nancy Fraser's concepts of counterpublics and countercultures act as explicit stances towards mainstream power structures. It elaborates that counterpublics "connect through a shared marginalised identity and experience subordination, while countercultures are not necessarily the product of social, political, or economic disenfranchisement" (Fraser, 1990).

If a group has experienced subordination and is able to find refuge online, then it is precisely this power imbalance that empowers these individuals to rebuke mainstream values. Should individuals see that many others like them are protesting in the same fashion, this only serves to validate their stance against mainstream positions. But what if these posts do not carry genuine intentions for the movement and instead seek to twist the narrative? A tweet about COVID-19 that uses misleading figures, for example, could thwart an individual's decision to protest the next day if it were deemed truthful.

I presume that bots also have the power to gaslight individuals into misconstruing these narratives, which can leave marginalised groups feeling wider contempt. Depriving a group of its sense of power can lead to a more drastic mentality, which is why I believe these bots should be regulated.

In addition, Marres and Gerlitz investigated the climate debate on Twitter and found that the platform relies on a 'frequency of mentions' measure to identify and promote trending topics. Hashtags relating to events or campaigns were sorted by a hierarchical algorithm that 'privileged' those tags, while 'associationist measures', such as the other hashtags attached to trending ones, mixed them in with further content (Marres and Gerlitz, 2016).
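A minimal sketch of the two measures being contrasted, using a handful of made-up tweets; the hashtags and counts are hypothetical, and this is my own illustration rather than Marres and Gerlitz's instrument.

```python
# Frequency measure: rank hashtags by raw number of mentions.
# Associationist measure: rank pairs of hashtags by co-occurrence,
# showing how a tag can ride along with a trending one.
from collections import Counter
from itertools import combinations

tweets = [  # hypothetical tweets reduced to their hashtags
    {"#climate", "#cop26"},
    {"#climate", "#hoax"},  # a bot attaching its tag to a trending one
    {"#climate", "#hoax"},
    {"#climate", "#cop26", "#act"},
]

frequency = Counter(tag for tweet in tweets for tag in tweet)
print("frequency:", frequency.most_common())

cooccurrence = Counter(
    pair for tweet in tweets for pair in combinations(sorted(tweet), 2)
)
print("association:", cooccurrence.most_common(3))
```

On the frequency measure alone, #hoax climbs simply by being posted often alongside #climate, which is exactly the kind of mixing that should give us pause.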

This investigation suggests how bots could become a growing concern if they mixed tags in order to reach wider audiences. A study by Bastos and Mercea examined retweeting rates during the Brexit campaign and found that a botnet was effective at creating 'retweeting cascades'. Comparing the retweeting rates of regular users and bots, they concluded that a "small share of bots was found to have triggered the most retweets" (Bastos and Mercea, 2019).
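That concentration can be illustrated with a toy calculation; the heavy-tailed cascade sizes below are simulated with invented parameters and are not Bastos and Mercea's data.

```python
# Toy illustration: if cascade sizes per bot are heavy-tailed, a small
# share of bots accounts for most retweets. Parameters are made up.
import random

random.seed(1)

# Hypothetical cascade size triggered by each of 100 bots.
cascades = [int(random.paretovariate(1.2)) for _ in range(100)]

total = sum(cascades)
top_decile = sorted(cascades, reverse=True)[:10]
print(f"top 10% of bots triggered {sum(top_decile) / total:.0%} of retweets")
```

In heavy-tailed runs like this, the top decile typically accounts for the bulk of all retweets, mirroring the asymmetry the study reports.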

Though we have considerable evidence that bots can distort narratives in an ominous manner, the question of why these platforms have been left unregulated for so long remains open, as does the search for a solution to prevent misinformation.

In conclusion, can smart robots manipulate narratives on Twitter? I would argue that the examples already provided show that they can: not only have bots been able to manipulate our perceptions of current affairs, but they have also made us more aware that actively questioning narratives online is a necessity.

Misinformation exists to a wide degree on social media as well as on other platforms such as YouTube and news comment sections, but little is known about the people who control the bots, which exist for economic or political benefit.

Bots posting at such scale to appeal to marginalised groups have successfully fed into these narratives around elections, referendums, public opinion and more. Although more research on the issue is needed, social media companies have started to clamp down on the bots and fake accounts that spread misinformation, and governments need stronger regulatory powers that approach technology with impartiality.

Bibliography:

Bastos, Marco T., and Dan Mercea. ‘The Brexit Botnet and User-Generated Hyperpartisan News’. Social Science Computer Review 37, no. 1 (February 2019): 38–54. https://doi.org/10.1177/0894439317734157.

Fraser, Nancy. ‘Rethinking the Public Sphere: A Contribution to the Critique of Actually Existing Democracy’. Social Text, no. 25/26 (1990): 56. https://doi.org/10.2307/466240.

Marres, Noortje, and Carolin Gerlitz. ‘Interface Methods: Renegotiating Relations between Digital Social Research, STS and Sociology’. The Sociological Review 64, no. 1 (February 2016): 21–46. https://doi.org/10.1111/1467-954X.12314.

Shao, Chengcheng, Giovanni Luca Ciampaglia, Onur Varol, Kai-Cheng Yang, Alessandro Flammini, and Filippo Menczer. 'The Spread of Low-Credibility Content by Social Bots'. Nature Communications 9, no. 1 (December 2018): 4787. https://doi.org/10.1038/s41467-018-06930-7.
