AI IS WATCHING YOU
Artificial Intelligence is, and always has been, a hot topic of debate. In the computational sciences, AI is conceptualized as the study of intelligent agents: the computer perceives its environment and performs actions that are expected to maximize the chance of successfully accomplishing its goals (Russell & Norvig, 3). Its possibilities are nearly endless, and it has enabled immense technological progress in various domains. As a book by Trappl already noted in 1986, AI has made (and will continue to make) significant impacts in, inter alia, human psychology and neurology (Trappl, 64), robotics (65) and educational applications (66). In the medical world, too, the technologies are thriving and becoming more prevalent (The Economist, n.pag.).
A specific field where new ways of deploying AI are emerging is the prevention, detection and fighting of crime (Faggella, n.pag.). The technology can take many forms in this area: a company can deploy AI to detect anything from employee theft to insider trading, from fraud to money laundering, and to block forbidden content on the Internet (Quest et al., n.pag.). Another frequently used technology is facial recognition. This tool can itself take many forms as well: it is used to detect victims of human trafficking and sexual abuse in FaceSearch (Markowitz and Rice, n.pag.) and Spotlight (Woods & Hartsock, n.pag.), and even Google has recently developed a tool that automates the process of scanning through countless images, a form of AI that has not been seen before (Ghoshal, n.pag.). AI can even identify people in the street who appear on surveillance cameras, scan their facial expressions and predict their behaviour accordingly (Quain, n.pag.).
This emerging form of crime detection raises questions about safety, ethics and surveillance. Because the technology keeps improving, growing and becoming more ubiquitous, it is important to reflect on the possible implications of these advancements. The use of facial recognition is clearly increasing, with even Google now stepping in, and problems such as the spreading of “child sexual abuse material” are nowhere near being contained (Ghoshal, n.pag.).
There are two sides to every story, and the use of AI in crime is no exception. There are pros and cons, optimists and pessimists, supporters and opponents.
So let’s first talk about the good news.
First of all, AI in this crime domain is obviously deployed to enhance safety and decrease crime. HikVision, an application that uses facial recognition to detect potential criminals or missing people, claims to attain 99% accuracy. A study concerning HikVision showed that the software was able to detect an estimated 80% of threatening visuals, and it is even said to have reduced the crime rate by 65% in South Africa (Faggella, n.pag.), an undeniable improvement. Furthermore, AI is much quicker than humans and requires less human interaction (Ghoshal, n.pag.).
And now the bad.
As Quest et al. mention in their article, AI has great potential to eventually decrease crime, but the obvious precondition is that the technologies are managed well. And this managing is ultimately done by humans. The software, and therefore the properties of the medium, are developed by humans (Manovich, 32). They could build a certain bias into one of the AI’s algorithms, even without explicitly meaning to. This endangers the objectivity of the technology and might cause certain groups of people to be structurally prejudiced against and harmed (Rieland, n.pag.).
Because the actual deployment of AI is relatively new, it is often said that an ethical code concerning these issues is missing, and questions are being raised about the permitted uses of the technologies (Chen, n.pag.). Surely, the probable intention is to protect us citizens and enhance our quality of life, but at what cost does this come?
For now, these techniques are (hopefully) only used by the police or other governmental institutions with the pure intent of detecting victims of human trafficking or child pornography. But what if the collected data were obtained by third parties or used for other purposes? This is exactly what the people of New Orleans were afraid of when their mayor proposed a new surveillance plan that involved combining data mining of municipal cameras with the live feeds of privately owned webcams. The public fears that this data will be used by federal immigration officials to detect illegal immigrants and deport them (Quain, n.pag.). Would this situation put us in the panopticon, as described by Foucault? A situation where we, the people on the street, have no idea of where and when the government is watching us, creating a surveillance system that relies on permanent registration (196). But now we are not merely being watched; we are also being scanned, recognized and filed. How are we supposed to live our lives in peace when we experience the constant fear of being watched?
So what do we do?
As the crime issues are nowhere near being solved and technical leaps are being made by the minute, new technologies involving facial recognition will only continue to grow and become more prevalent in our society. As Deuze stated in his text ‘Media Life’, media have become ubiquitous in our society and have formed “building blocks for our constant remix of the categories of everyday life” (137). One example of these categories is private versus public (137), and as the notion of privacy is a cultural phenomenon (Agre, 740), our society may well alter its meaning of, and therefore its handling of, privacy. It will change our society for sure, but whether for better or worse is still undecided.
Literature
Agre, Philip E. “Surveillance and Capture: Two Models of Privacy.” The Information Society 10.2 (1994): 101-127.
Chen, Sophia. “AI Research Is in Desperate Need of an Ethical Watchdog.” WIRED. 2017. 20-09-2018. <https://www.wired.com/story/ai-research-is-in-desperate-need-of-an-ethical-watchdog/>
Deuze, Mark. “Media Life.” Media, Culture & Society 33.1 (2011): 137-148.
The Economist. “Artificial Intelligence Will Improve Medical Treatments.” The Economist. 2018. 20-09-2018. <https://www.economist.com/science-and-technology/2018/06/07/artificial-intelligence-will-improve-medical-treatments>
Faggella, Daniel. “AI For Crime Prevention and Detection – Current Applications.” TechEmergence. 2018. 20-09-2018. <https://www.techemergence.com/ai-crime-prevention-5-current-applications/>
Foucault, Michel. Discipline and Punish: The Birth of the Prison. Trans. Alan Sheridan. New York: Pantheon, 1977.
Ghoshal, Abhimanyu. “Google Unleashed Its New Image Detection AI On Child Abuse Content Online”. The Next Web. 2018. 20-09-2018. <https://thenextweb.com/artificial-intelligence/2018/09/04/google-sics-its-new-ai-on-child-abuse-images-online/>
Manovich, Lev. “Media After Software.” Journal of Visual Culture 12 (2013): 30-37.
Markowitz, Saul, and Livia Rice. “Pittsburgh-Based Tech Company Debuts First Facial Recognition Technology Designed To Halt Global Human Trafficking.” Marinus Analytics. 2017. 20-09-2018. <http://www.marinusanalytics.com/articles/2017/6/27/face-search-debut>
Quain, Robert. “Crime-Predicting AI Isn’t Science Fiction. It’s About To Roll Out In India.” Digital Trends. 2018. 20-09-2018. <https://www.digitaltrends.com/cool-tech/could-ai-based-surveillance-predict-crime-before-it-happens/>
Quest, Lisa, et al. “The Risks and Benefits of Using AI to Detect Crime.” Harvard Business Review. 2018. 20-09-2018. <https://hbr.org/2018/08/the-risks-and-benefits-of-using-ai-to-detect-crime>
Rieland, Randy. “Artificial Intelligence Is Now Used To Predict Crime. But Is It Biased?” Smithsonian. 2018. 21-09-2018. <https://www.smithsonianmag.com/innovation/artificial-intelligence-is-now-used-predict-crime-is-it-biased-180968337/>
Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach. Malaysia: Pearson Education Limited, 2016.
Trappl, Robert. Impacts of Artificial Intelligence. Amsterdam: North Holland Publishing Co., 1986.