The iPhone nudge and the choice to budge

On: September 25, 2017
By Phil Creamer



Every time you reach into your pocket for your iPhone (sorry to exclude non-iPhone users), you are prompted for a security code, unless this feature has been disabled. Your options are entering a 4- to 6-digit numerical code or simply holding a finger to the home button, using the TouchID feature. The interaction between the metallic ring, the surface of the button, and your fingertip is matched against locally saved fingerprint scans. This grants users access to the phone as well as the ability to tie the feature to different apps and accounts.

With the announcement of the iPhone X, due to be released in Q4 of 2017, Apple will be doing away with TouchID and introducing FaceID, a form of identification through facial recognition, detected by the front-facing camera working in tandem with infrared sensors. The expectation is that acceptance of one identification measure will lead to acceptance of the next. These are also technologies that potentially encroach on privacy and enable surveillance, methods and procedures that slip into our lives without our realizing it.

Why is Apple so successful in applying these techniques, and why do the majority of its users so willingly divulge their physical, identifying traits to such young and under-developed technologies? This behavior can be explained through the concept of nudging (Thaler & Sunstein, 2008), and iPhone users can further situate, inform, and question themselves through the idea of invisible infrastructures (Parks, 2015). Also briefly covered are the concepts of privacy, data capture, discursive interface analysis, and grammars of action.

Behavioral economist Richard Thaler describes a nudge as “any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives… [Choice architects] are self-consciously attempting to move people in directions that will make their lives better. They nudge” (Thaler & Sunstein, 2008, p. 6).

According to Thaler, to push people into making the “right” choice, all you have to do is reverse the default. For example: instead of having people sign up for a service, enroll them automatically and then give them the chance to opt out.

“Merely slicing up apples and putting them into a plastic bag, greatly increased apple consumption by kids in school by nearly 70%… That’s because we’ve made [the choice] easy” (Thaler, NPR, 2016).
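The mechanics of a default-based nudge can be sketched in a few lines of code (a toy illustration, not drawn from Thaler’s text): because most people never act on the choice at all, whatever the choice architect pre-selects ends up determining the outcome.

```python
# Toy model of default-based nudging: a user who never acts
# inherits whatever the choice architect pre-selected.
def enrolled(default_enrolled: bool, user_acted: bool) -> bool:
    """Return whether a user ends up enrolled.

    Acting flips the default (opting out of an opt-out scheme,
    or opting in to an opt-in scheme); inaction keeps it.
    """
    if user_acted:
        return not default_enrolled
    return default_enrolled

# Suppose only 1 user in 10 bothers to act on the choice.
users = [False] * 9 + [True]

opt_out_regime = sum(enrolled(True, acted) for acted in users)   # 9 enrolled
opt_in_regime = sum(enrolled(False, acted) for acted in users)   # 1 enrolled
```

Under identical preferences and effort, flipping the default alone moves enrollment from 1 in 10 to 9 in 10, which is exactly the lever Thaler attributes to choice architects.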

This theory can be used for good, but it can also be used to manipulate and control. If we apply Thaler and Sunstein’s nudge theory to Apple’s iPhone, the nudges are the newly released features, and the choices are either to use those features or to stay with current methods. The choices are simply laid out (PIN codes vs. face recognition), the nudging is clear and apparent (staying technologically up to date vs. sticking with the old), and new technologies such as FaceID are then put into practice across society.

We begin to see the numerous nudges that Apple applies. Nobody is forced to buy the new iPhone X; the options are to stick with a current device, knowing that older models will soon phase out of production, or to buy the newest model, knowing the first batches will likely sell out. The nudges move consumers toward purchasing the new. No one is forced to use TouchID or FaceID, but some do simply because the feature is readily available and eases access to other applications such as ApplePay. The nudges are the choices between staying with older methods of security and adopting these new technologies for the sake of ease and consolidation.

Thanks to Apple, it is safe to assume that FaceID will slowly but surely integrate itself into our lives, despite potentially invasive and under-studied implications for society. It is therefore important for iPhone users to understand their role and their relationship to their devices. Understanding the idea of invisible infrastructures (Parks, 2015) empowers the user to make better-informed decisions.

According to Lisa Parks, infrastructures are “the material sites and objects that are organized to produce a larger, dispersed yet integrated system for distributing material value” (Parks, 2015, p. 355). She extends this understanding to (invisible) infrastructure imaginaries, which are “ways of thinking about what infrastructures are, where they are located, who controls them, and what they do… When viewing/consuming media we must think not only about what they represent and how they relate to a history of style, genre, or meaning but also think more elementally about what they are made of and how they arrived” (Parks, 2015, p. 357).

With this in mind, we can begin to unravel Apple’s intentions with the iPhone and the processes that occur as we interact with it. Much like the layout of a family tree, invisible infrastructures allow individuals to trace issues back to their source. Understanding the interconnectedness of these sources leads to further questions: How does FaceID work, and who develops it? Are facial imprints at risk of being stolen in such a hackable era? Have FaceID’s developers applied these technologies outside Apple’s interests, and to what ends?

A major issue becoming apparent with FaceID is the concern over facial recognition technologies and privacy. Apple claims that facial recognition data is stored locally and never uploaded to a ‘cloud’, but this is not the concern. The invisible infrastructures involved are the algorithms and artificial intelligence software developed to work with the data that users’ actions generate. Philip Agre describes the capture model of privacy as rooted in “the practices of information technologies, built upon linguistic metaphors and takes as its prototype the deliberate reorganization of industrial work activities to allow computers to track them in real time” (Agre, 1994, p. 1). This means that the information captured from our actions is collected and forms a grammar of action, a language that “represent[s] human activities” (Agre, 1994, p. 6). This newly developed “language” contributes to the development and wide acceptance of facial recognition and surveillance, applied on a mass scale.

In “The Interface as Discourse,” Mel Stanfill illustrates, through Foucault’s idea of power as productive (power as a source of social discipline and conformity) and the concept of functional affordances (uses of a medium that may benefit one user over another, depending on the user’s standpoint), how website interfaces dictate what users would and should do.

“Discursive interface analysis facilitates comprehending how technologies both arise from particular beliefs about what users ought to do and reinforce them by constraining the actions of site visitors” (Stanfill, 2015, p. 1071).

Much like a website’s interface, the iPhone’s features such as TouchID and FaceID are pushed upon users through identification measures that require submitting fingerprints or facial scans. The acceptance of TouchID will presumably lead to the acceptance of FaceID through the same mechanisms of power, prediction, and constraint.

With these examples, we can dive deeper into what goes on behind the scenes: How are facial data banks managed, and who controls them? Facial imprints are stored locally, but what happens when a third-party app demands the same type of security? Is the data then stored in that third party’s data banks? By simply asking these questions, we begin to see the associations that trickle down from our actions. We are able to unravel and understand connections already in place, ones that are not so obvious at first glance. With the notion of invisible infrastructures in mind, perhaps we can discover new associations and links; some will make themselves apparent only through a break or malfunction in the infrastructural system, such as a data leak.

Shannon Mattern describes how infrastructures reveal themselves: “Exploring the senses of infrastructures can reveal not only how those systems indicate their functionality for us but also their own operation modes and logic… Sound serves as a useful diagnostic tool; we can often hear infrastructural malfunctions” (Mattern, 2013, p. 9).

Further interpreting the meaning of facial recognition and FaceID, it is important to understand that, unlike TouchID, it no longer needs to ask the user’s permission (passwords, fingerprints); instead it captures the information it requires (structural facial imprints) with or without the user’s authorization. Such facial recognition technologies have already been deployed. According to Jennifer Granick, the Republican National Convention used facial recognition in 2012 to identify and predict potential troublemakers, and she states that “50% of American adults now have their facial imprint in a national database” (Granick, TEDxStanford, 2017), indicating that these applications are already being tested and applied.

This text is not intended to sway users away from Apple’s technologies but rather to inform, and to give a more critical explanation to those who may not have realized the control over information that Apple demands of its users. It is important to understand the relationships formed between users and technologies, and, in this case, to recognize the subtle direction in which Apple sways. The consolidation of personal details has become habitual for many iPhone users. The more one understands the structures involved in divulging such information, the more power and knowledge one has over the technology. Apple has a way of nudging its technology upon its users, but its users do not necessarily have to budge.



Agre, Philip E. “Surveillance and Capture: Two Models of Privacy.” The Information Society, vol. 10, no. 2, 1994, pp. 101–127.

Mattern, Shannon. “Infrastructural Tourism.” 2013, pp. 1–17.

Parks, Lisa. “‘Stuff You Can Kick’: Toward a Theory of Media Infrastructures.” 2015, pp. 1–20.

Thaler, Richard H., and Cass R. Sunstein. Nudge: Improving Decisions About Health, Wealth, and Happiness. Yale University Press, 2008.

Stanfill, Mel. “The Interface As Discourse: The Production of Norms Through Web Design.” New Media & Society, vol. 17, no. 7, 2015, pp. 1059–1074.

“What Is a Nudge?” NPR: TED Radio Hour, 24 June 2016. Accessed 22 Sept. 2017.

Granick, Jennifer. “How the US government spies on people who protest — including you.” TED, Apr. 2017. Accessed 22 Sept. 2017.
