
Ethical concerns of combating crimes with artificial intelligence surveillance and facial recognition technology

Artificial intelligence (AI) has been growing rapidly worldwide, with new applications being discovered every day. The definition of AI has been continuously evolving since the Dartmouth Artificial Intelligence Conference in 1956 (Dartmouth AI Conference 1956). This paper will use the following definition of AI from Encyclopedia Britannica: “AI is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings” (Copeland 2020). While AI has applications across many sectors, one area where it is commonly utilized is in AI surveillance and facial recognition technology to combat crime. As of 2019, at least seventy-five countries worldwide were actively using AI technologies for surveillance purposes, including smart city/safe city platforms, facial recognition systems, and smart policing initiatives (Feldstein 2019: 1). However, the widespread use of AI in the name of combating crime does not come without a cost; multiple ethical concerns have arisen in the past few years, which call into question the feasibility of implementing AI technology to combat crime. This paper will examine two prominent ethical concerns regarding AI in fighting crime: biases in facial recognition technology and authoritarian governments exploiting AI surveillance in the name of public safety.

Fueled by new research in AI, facial recognition technology has become more popular than ever; however, it is not always accurate in its findings. According to a recent study conducted by the National Institute of Standards and Technology (NIST), a government-funded, US-based research agency, facial recognition software exhibits certain biases regarding race, age, and sex. Patrick Grother, a NIST computer scientist, headed this first-of-its-kind study. Grother and his team evaluated 189 software algorithms from 99 developers to measure whether these algorithms exhibit demographic differentials, a term describing whether an algorithm’s ability to match images differs across demographic groups (NIST 2019). Using four collections of photographs containing 18.27 million images of 8.49 million people provided by various government agencies, the team evaluated these algorithms’ matching ability with respect to demographic factors. The results were astounding; though levels of inaccuracy differ between algorithms, most of them exhibited demographic differentials. In particular, Grother points out that Asian, African American, and Native American groups are 10 to 100 times more likely to be wrongly identified than Caucasians. Moreover, algorithms also struggle to identify women compared to men, and older adults compared to middle-aged adults (NIST 2019; Grother, Ngan and Hanaoka 2019). These findings are critical because they expose biases in facial recognition systems that hinder the safe implementation of these technologies. “One false match can lead to missed flights, lengthy interrogations, watch list placements, tense police encounters, false arrests or worse,” (Singer and Metz 2019) said Jay Stanley, an analyst at the American Civil Liberties Union. This reality of widespread demographic differentials in inherently discriminatory AI facial recognition systems remains a paramount ethical issue that needs to be addressed.
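The demographic differential that the NIST study measures can be illustrated with a small sketch: compute each group’s false match rate (the share of comparisons between different people that an algorithm wrongly declares a match) and compare the rates across groups. The data, group names, and rates below are synthetic, chosen only to illustrate the calculation; they are not drawn from the NIST evaluation.

```python
# Illustrative sketch of measuring a "demographic differential" in face
# matching. All numbers here are synthetic; real evaluations such as
# NIST's FRVT use millions of images and report per-group error rates.

def false_match_rate(results):
    """Fraction of non-mated comparisons (different people) that the
    algorithm wrongly declared a match."""
    false_matches = sum(1 for same_person, matched in results
                        if matched and not same_person)
    non_mated = sum(1 for same_person, _ in results if not same_person)
    return false_matches / non_mated if non_mated else 0.0

# (same_person, algorithm_said_match) pairs, grouped by demographic.
# Hypothetical groups: group_b is wrongly matched far more often.
synthetic_results = {
    "group_a": [(False, False)] * 98 + [(False, True)] * 2,
    "group_b": [(False, False)] * 80 + [(False, True)] * 20,
}

rates = {g: false_match_rate(r) for g, r in synthetic_results.items()}
# The ratio between the groups quantifies the differential (here ~10x,
# echoing the order of magnitude the study reports for some groups).
differential = rates["group_b"] / rates["group_a"]
print(rates, differential)
```

A uniform accuracy figure averaged over all users would hide exactly this disparity, which is why the study reports error rates per demographic group rather than in aggregate.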

Unfortunately, biases in facial recognition technology have already led to injustices in the United States. The first known example is the case of Robert Williams, an African American man arrested after a facial recognition system mistakenly matched his photo to surveillance footage of a shoplifter (Porter 2020). Williams ended up having his mug shot, fingerprints and DNA taken and was held overnight (Porter 2020). When shown an image from the surveillance video by a detective, Williams said, “No, this is not me, you think all black men look alike?” (Hill 2020). While Williams ended up being released, his experience was traumatic, and those around him, including his five-year-old daughter, can never unsee him being handcuffed and taken away (Porter 2020). Robert Williams’s story serves as a powerful testament to the harm that flawed facial recognition technology can do to society.

While liberal democracies such as the US struggle to use AI responsibly in safeguarding society, ethical concerns also arise from authoritarian governments exploiting AI surveillance in the name of combating crime. One such country is China; through the “New Generation Artificial Intelligence Development Plan” (AIDP), the nation delineated an overarching goal to make China the world leader in AI. The AIDP indicates China’s intention to use AI for defence, social welfare, and developing ethical standards (Roberts et al. 2020: 1–2). However, given the political culture of corruption and repression in China, some, such as Ross Anderson, a deputy editor of The Atlantic, argue that China’s pronouncements on AI have a sinister edge. Anderson believes that China wants to use AI to build an all-seeing digital system of social control, which would push China to the cutting edge of surveillance (Anderson 2020). This possibility of an all-knowing system fueled by AI-based surveillance presents ethical concerns because it grants governments absolute control at the expense of civil liberties.

Anderson is certainly not speaking without cause in his worries about an all-knowing digital system used for government control. According to Paul Mozur, a Hong Kong-based correspondent of The New York Times, the Chinese government uses AI surveillance to profile the Uighurs, a mostly Muslim minority group in China (Mozur 2019). This type of surveillance, according to Mozur, is the first known example of a government intentionally using AI for racial profiling. Through AI surveillance, the government exclusively looks for Uighurs based on their appearance and keeps records of their daily movements. This information is used to keep tabs on China’s 11 million Uighurs in Xinjiang province. Aided by this widespread integration of AI technology, authorities have put a million Uighurs in detention camps on suspicion of terrorism and other alleged crimes (Mozur 2019). Clare Garvie, an associate at the Center on Privacy and Technology at Georgetown Law, points out that the riskiest capabilities of AI will inevitably be used: “If you make a technology that can classify people by an ethnicity, someone will use it to repress that ethnicity.” (Garvie in Mozur 2019). The capability and implementation of mass AI surveillance by autocratic governments such as China remains an urgent ethical crisis for human rights activists and leaders worldwide.

While it is critical to address ethical concerns regarding the incorporation of AI in combating crime, it is also essential to acknowledge some of the benefits AI brings to the table. According to the Oliver Wyman Risk Journal, AI is used to detect crimes such as employee theft, cyber fraud, fake invoices, money laundering, and terrorist financing (Quest et al. 2018). These AI applications have already proved effective against financial crime. Specifically, banks have reduced false alerts by 50% while finding success with AI-driven tools to track criminals (Quest et al. 2018). Furthermore, the scope of AI’s application is virtually unlimited if utilized correctly. Future applications include detecting and tracking illegal goods, terrorist activities, and human trafficking (Quest et al. 2018). Delivery companies can use AI to screen parcels for illegal goods, shops can use AI to identify abnormal purchases, and law enforcement can use AI to combat human trafficking (Quest et al. 2018). It is worth noting that the source of this information, the Oliver Wyman Risk Journal, is neither peer-reviewed nor published by a leading AI research institution. Nevertheless, all of these AI applications display promising capabilities for enhancing society’s safety across the globe.
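The kind of screening described above, such as a bank flagging abnormal purchases, can be sketched in a deliberately simplified form: score each transaction by how far it deviates from an account’s typical spending and surface outliers for human review. The threshold, data, and single-feature approach below are hypothetical illustrations, not the methods of the cited journal; production systems use far richer features and models.

```python
# Simplified sketch of anomaly flagging for transaction screening.
# A single statistical feature (deviation from mean spend) stands in
# for the many signals a real fraud-detection system would combine.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return indices of transactions more than `threshold` standard
    deviations from the account's mean spend (hypothetical rule)."""
    if len(amounts) < 2:
        return []
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

# A made-up spending history: small routine purchases, then one spike.
history = [42.0, 38.5, 51.0, 47.2, 44.9, 39.8, 43.1, 5200.0]
print(flag_anomalies(history))  # the 5200.0 purchase is flagged
```

Notably, every flagged transaction would still go to a human analyst; the 50% reduction in false alerts the journal cites comes from ranking and filtering candidates for review, not from automating the final decision.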

The ethics involved in employing AI technology to combat crime will remain a critical issue debated among researchers, government authorities and the general population. While AI has potential for combating crime and increasing citizens’ safety across the globe, it is undeniable that there are ethical concerns regarding its implementation in fighting crime. Crucial issues include totalitarian regimes’ abuse of AI surveillance and any government’s use of fundamentally biased facial recognition systems. In response to emerging concerns with AI technology, multiple guidelines have been published in recent years. One such set of guidelines was published by Dr. David Leslie at the Alan Turing Institute, the UK’s national institute for data science and artificial intelligence. The report stresses the significance of AI ethics and explores platforms for the responsible delivery of AI technologies (Leslie 2019: 3). As AI is becoming a gatekeeper technology, humankind ultimately can choose which direction it goes, whether toward the exponential advancement of human well-being or toward significant risks (Leslie 2019: 73). The increasing integration of AI globally inevitably raises major ethical concerns, but if leaders and researchers are willing to crack down on unethical behaviours and follow appropriate guidelines, AI can be a powerful force towards a better future.


Anderson, R. (2020, September 9). The panopticon is already here. The Atlantic.

Artificial intelligence (AI) coined at Dartmouth. (n.d.). Dartmouth.

Copeland, B. J. (n.d.). Artificial intelligence (AI). Encyclopedia Britannica School. Retrieved December 23, 2020.

Feldstein, S. (2019). The global expansion of AI surveillance. Carnegie Endowment for International Peace. JSTOR.

Grother, P., Ngan, M., & Hanaoka, K. (2019). Face recognition vendor test (FRVT) part 3: Demographic effects. The Journal of Research of the National Institute of Standards and Technology.

Hill, K. (2020, June 24). Wrongfully accused by an algorithm. The New York Times.

Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute.

Mozur, P. (2019, April 14). One month, 500,000 face scans: How China is using A.I. to profile a minority. The New York Times.

NIST study evaluates effects of race, age, sex on face recognition software. (2019). The Journal of Research of the National Institute of Standards and Technology.

Porter, J. (2020, June 24). A black man was wrongfully arrested because of facial recognition. The Verge.

Quest, L., Charrie, A., & Roy, S. (2018). The risks and benefits of using AI to detect crime. Oliver Wyman Risk Journal, 8.

Roberts, H., Cowls, J., Morley, J., Taddeo, M., Wang, V., & Floridi, L. (2020). The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation. AI and Society.

Singer, N., & Metz, C. (2019, December 19). Many facial-recognition systems are biased, says U.S. study. The New York Times.

