Facial Recognition and Immigration Policies: Secure Technology or Misleading Tools?

Victória Costa / September 2, 2022

Victoria Costa[1]

Abstract

As societies evolve, so do the means by which daily activities are carried out. In immigration procedures, technology has steadily taken over many processes in the hope of facilitating the data management of large flows of people, making them less time-consuming and, some might argue, safer than decisions that rely on human judgment. As a pro-technology mindset continues to bloom, concerns arise about the extent to which artificial intelligence (AI) can be used for sensitive data analyses, such as asylum applications for refugees and other migration assistance.

One of the main AI features beginning to be implemented today is facial recognition. This article aims to outline its advantages, risks and current uses in immigration practice.

Keywords: Artificial Intelligence, Facial Recognition, Technology, Immigration, Images.


What is Artificial Intelligence (AI)?

Essentially, AI is the simulation of human intelligence processes by machines, especially computer systems (Ed Burns). In the words of Jeff Dean, the earliest approaches to building machines that could see and understand language and speech relied on hand-coded algorithms, but these turned out to work poorly. In the last 15 years, however, a single approach has unexpectedly advanced all these different problem spaces at once: neural networks.

Dean further explains that neural networks are loosely based on some of the properties of real neural systems: they are a series of interconnected artificial neurons that emulate those properties of real neurons. An individual neuron in one of these systems has a set of inputs, each with an associated weight, and the output of the neuron is a function of those inputs multiplied by their weights.

Lots of these neurons work together to learn complicated things, and this learning process consists of repeatedly making small adjustments to the weight values (e.g. strengthening the influence of some things and weakening the influence of others). Lastly, the computer scientist explains that by driving the overall system towards desired behaviors, these systems can be trained to carry out really complicated tasks, such as language translation and identification of objects in a photo, which brings us back to the main topic: facial recognition.
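The two ideas above, a weighted sum of inputs and repeated small weight adjustments, can be illustrated with a minimal sketch. Everything here (the sigmoid activation, the learning rate, the input values) is an invented toy example, not any particular production system:

```python
import math

def neuron_output(inputs, weights):
    """A single artificial neuron: a weighted sum of its inputs,
    squashed by a sigmoid activation into the range (0, 1)."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-weighted_sum))

def train_step(inputs, weights, target, learning_rate=0.1):
    """One 'small adjustment': nudge each weight so the neuron's
    output moves slightly towards the desired target value."""
    error = target - neuron_output(inputs, weights)
    return [w + learning_rate * error * x for w, x in zip(weights, inputs)]

# Repeatedly adjusting the weights drives the neuron towards
# the desired behavior, here an output close to 0.9.
weights = [0.5, -0.3]
for _ in range(1000):
    weights = train_step([1.0, 2.0], weights, target=0.9)
```

Real networks stack millions of such neurons in layers and use more sophisticated update rules, but the principle of strengthening some influences and weakening others is the same.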

To William Crumpler and James A. Lewis, facial recognition is a way of using software to determine the similarity between two face images in order to evaluate an identity claim, whereas facial characterization refers to the practice of using software to classify a single face according to its gender, age, emotion, or other characteristics. They stress that facial characterization is different from facial recognition, whose purpose is instead to compare two different faces; the two are often confused in popular reporting, but they are distinct technologies. The duo also point out that many claims about the dangers of facial recognition are actually about characterization.

According to other sources, facial recognition is a biometric tool that identifies a person based on specific aspects of their physiology. Although the software can vary, the process of facial recognition tends to follow three basic steps:

1. Your face is captured in a photo or video, whether you are alone or in a crowd;

2. The software then measures a variety of facial features, called “landmarks” or “nodal points”, on your face (these could include the distance between the eyes, the width of the nose, the depth of the eye sockets, etc.);

3. The information is converted into a mathematical formula representing your unique facial signature, which is compared to a database of known faces, all in a matter of seconds[4].
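The matching step can be sketched in a few lines. This is a toy illustration only: the landmark numbers, names and threshold are invented, and real systems use learned embeddings with hundreds of dimensions rather than three hand-picked measurements:

```python
def euclidean_distance(a, b):
    """Distance between two facial-signature vectors; smaller means more similar."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def best_match(signature, database, threshold=0.5):
    """Compare a facial signature against a database of known faces and
    return the name of the closest match within the threshold, or None."""
    best_name, best_dist = None, float("inf")
    for name, known_signature in database.items():
        d = euclidean_distance(signature, known_signature)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# Invented landmark measurements: eye distance, nose width, socket depth.
database = {
    "alice": [0.42, 0.61, 0.33],
    "bob":   [0.55, 0.47, 0.29],
}
result = best_match([0.43, 0.60, 0.34], database)  # closest to "alice"
```

The threshold is the crucial design choice: set it too loose and strangers match each other (false positives); set it too tight and the same person fails to match their own enrolment photo.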

Facial Verification vs. Facial Recognition

When analyzing Singapore’s pioneering use of facial scans for its national identity scheme, BBC News reporter Tim McDonald points out a key distinction in the vocabulary used for the technology in question. Although facial recognition and facial verification are often treated as synonyms, and both depend on scanning a subject’s face and matching it against a database of existing images to attribute an identity, their practical uses are completely different. In the words of Andrew Bud[5]: “Face Recognition has all sorts of social implications. Face verification is extremely benign”.

Whilst face verification requires the explicit consent of the user (who, in return, gets access to something such as their phone, banking apps or perhaps boarding a plane), facial recognition may scan large groups of people and warn the authorities whenever an image matches an irregularity, such as the photo of a criminal held in a database.
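The distinction comes down to one-to-one versus one-to-many matching, which can be made concrete with a short sketch. The signature vectors, names and threshold below are invented for illustration:

```python
def distance(a, b):
    """Euclidean distance between two facial-signature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def verify(probe, enrolled_template, threshold=0.5):
    """Facial verification (1:1): does the consenting user's live face
    match the single template they enrolled themselves?"""
    return distance(probe, enrolled_template) <= threshold

def recognise(probe, watchlist, threshold=0.5):
    """Facial recognition (1:N): scan a face captured from a crowd against
    an entire database and report every entry that matches."""
    return [name for name, template in watchlist.items()
            if distance(probe, template) <= threshold]

# Verification: one person, one template, consent implied by enrolment.
ok = verify([0.10, 0.20], [0.10, 0.25])

# Recognition: one face checked against everyone in a database.
hits = recognise([0.10, 0.20], {"suspect": [0.12, 0.18]})
```

The code also hints at why the social implications differ: `verify` touches only data the user enrolled, while `recognise` necessarily compares a face against every record it holds, whether or not those people consented.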

AI and Immigration Case Study: Hong Kong

Facial verification technology has slowly begun to be implemented in immigration services and is already used frequently in some countries for border control and other purposes. The most innovative use of the technology so far in immigration is the ‘contactless channels’ introduced in Hong Kong, featuring facial recognition technology.

Launched on December 1st, 2021, this system allows “faster, more convenient travel and a more hygienic immigration clearance service for residents”[6] and is available at the Hong Kong International Airport, Hong Kong-Zhuhai-Macao Bridge and Shenzhen Bay control points.

Residents aged 11 to 17 require parental consent to enrol in the e-Channel system; nevertheless, residents can still continue to use fingerprint verification technology for automated immigration clearance, or traditional counters. According to the Hong Kong Immigration Department, “eligible Hong Kong residents can use a smartphone with biometric authentication to download the ImmD’s Contactless e-Channel mobile application for enrolment. After successful enrolment with the mobile application, they can generate the encrypted QR code through the mobile application to enter the e-Channel, and then verify their identity with the facial verification technology for automated immigration clearance. Throughout the entire process, there will be no need for them to touch any shared equipment of the e-Channel”.

Brazil and the USA

Facial verification technology also plays a part in immigration control at some checkpoints in Brazil and in the United States.

In June 2021, Brazil became the first country to deploy a facial verification system in its airports, in partnership with IDEMIA[7][8]. As standard procedure, passengers only need to check in and provide their personal identification number and a photo for the verification to take place. Currently, 100% digital boarding is available at Congonhas airport in São Paulo and at Santos Dumont in Rio de Janeiro, allowing travel between the two states to be fully digital.

The digital boarding program uses MFACE technology, developed by IDEMIA, along with an app and a database of the Brazilian Federal Government. All the institutions involved in the program certify that data collection and storage are safe, and the executive secretary of the Infrastructure Ministry stated that the digital boarding system complies with “all precepts of the GDPR[9]”. IDEMIA further explains that the data collected for facial verification is protected by cryptography and that the actual pictures of users are not stored: the system only keeps a ‘template’ of dots created from each passenger’s face, which cannot be reused to replicate the original image[10].

In March 2022, the USA’s Transportation Security Administration (TSA) also introduced facial recognition technology at some security checkpoints of Los Angeles International Airport (LAX)[11]. TSA uses Credential Authentication Technology (CAT), which works similarly to the technologies described above. According to TSA’s local press information, ‘guests may be asked to insert their government-issued photo ID into the CAT unit, which is equipped with a camera that captures a photo of the guest. The CAT compares the guest’s facial features on their photo ID against the facial features from the in-person photo, confirming their identity’. In addition, TSA states that photos captured by CAT units will not be stored or used for any purpose other than identity verification at the security checkpoint, and guests who do not wish to participate in facial recognition verification can opt for an alternative identity verification process.

Surveillance by Facial Recognition Technology

Unlike the airport uses of facial recognition technology described above, the extension of this technology to surveillance purposes has raised concerns, since such uses can clash with GDPR standards.

Whether this technology should be further implemented in the EU remains a dilemma, since there have been issues with its functioning but also reasonable outcomes from its use. For instance, the DragonFly Project[12] in Hungary has, since its implementation in 2019, resulted in 6,000 matches, 250 stop-and-searches and 4 arrests by the police and the Hungarian secret services[13]; implementation costs amounted to roughly 160 million euros, not counting additional operating costs. The project consists of 35,000 cameras and 25,000 terabytes of monitoring data, with all recorded data stored for a period of 30 days, without the consent of the individuals concerned.

In Mannheim, Germany, police have deployed cameras that record individuals’ movement patterns, which are then analyzed by software designed to detect suspicious behavior. The system has, however, reported numerous false positives, mistaking hugs for suspicious activity[14].

Although article 9 of the GDPR states that biometric data should not be used to identify a person unless the individual has provided explicit consent, the dilemma persists because articles 6 and 10 of the Regulation override consent whenever deemed reasonable by law enforcement, without specific clarification of the circumstances in which that derogation applies.

It is surely impossible to foresee and establish the exact situations in which law enforcement should be allowed to surveil an individual or group of individuals without their consent. Nevertheless, it is possible to weigh the proportionality of such acts and thereby reach a compromise that allows authorities to access that data for security purposes.

Human Rights and the use of AI Technology

Alongside biometrics, other technological means such as Big Data and AI lie detectors are also used at airports by private companies to manage migration. According to Petra Molnar, the collection of vast amounts of data on particular groups also presents issues around data-sharing and access. While exchanging data on humanitarian crises or biometric identification is often presented as a way to increase efficiency and inter-agency and inter-state cooperation, the benefits of that collection do not accrue equally. Molnar further raises questions with regard to the idea of “consent” in data sharing, since consent cannot be truly freely given under coercion, such as when personal data is exchanged for asylum or food, even if the coercive circumstances masquerade as efficiency and promise improved service delivery[15].

Essentially, data exchange should never be a pure sine qua non condition, although such a compromise may be acceptable in a human rights context when it relates to safety. The real question is: to what extent can security justify the collection of personal data, and what limitations does the law impose? Scholarly arguments typically revolve around the disproportionate and unjustifiable collection of data, especially in vulnerable situations such as asylum applications. Once again, Molnar notes that algorithms are vulnerable to the same decision-making concerns that plague human decision-makers: transparency, accountability, discrimination, bias and error. According to the author, the opaque nature of immigration and refugee decision-making creates an environment ripe for algorithmic discrimination. This is especially concerning when it comes to interoperability and the criteria used in data-sharing between systems.

Conclusions

Regarding possible bias in facial recognition, William Crumpler and James A. Lewis state that demographic differences in the technology’s accuracy rates are well documented, but the evidence suggests this problem can be addressed if sufficient attention is paid to improving both the training process for algorithms and the quality of captured images. As for accuracy, they point out that facial recognition is improving rapidly; while algorithms can achieve very high performance in controlled settings, many systems perform worse when deployed in the real world.

Lastly, the authors affirm that facial recognition should be used to aid human decision-making rather than replace it, since human oversight helps mitigate the risk of errors. Operators therefore need to understand how system performance can be affected by deployment conditions in order to put in place the right safeguards and manage the trade-offs between accuracy and risk.

Improving the algorithms, together with proportionate safeguards for fundamental rights, seems to offer a better balance between the application of new technologies and data management by the government. Furthermore, in light of the examples studied, the purpose of the technology installed and its actual outcomes should not be disregarded; keeping both in view also helps avoid financial waste and unnecessary privacy intrusion.

Complementary Bibliography:

What is Artificial Intelligence (AI)? Definition, Benefits and Use Cases (techtarget.com)

Curious Inspiration

210610_Crumpler_Lewis_FacialRecognition.pdf (csis-website-prod.s3.amazonaws.com)

https://www.scmp.com/news/hong-kong/transport/article/3157920/hong-kong-introduces-contactless-immigration-channels?module=perpetual_scroll_0&pgtype=article&campaign=3157920

Immigration Department introduces Contactless e-Channel service for Hong Kong residents | Immigration Department (immd.gov.hk)


[1] The author is a fourth-year student in the Bachelor of Laws and has been a member of the Núcleo de Estudantes Internacionais of NOVA School of Law since 2018. In the 2021-2022 academic year she was part of the “Migration and Digital Transformation” research line of the NOVA Refugee Clinic.

[2] Available at: Curious Inspiration

[3] Ibidem, note 1.

[4] Available at: How does facial recognition work?

[5] Founder and Chief Executive of iProov.

[6] Immigration Department. Immigration Department introduces Contactless e-Channel service for Hong Kong residents | Immigration Department, 30 Nov 21, available at: https://www.immd.gov.hk/eng/press/press-releases/20211130b.html

[7] IDEMIA. The leader in identity technologies, 1 Aug 22, available at https://www.idemia.com/

[8] TECMUNDO. Brasil é pioneiro no uso de reconhecimento facial em aeroportos, 15Jun 21, available at: https://www.tecmundo.com.br/mobilidade-urbana-smart-cities/219316-brasil-usa-reconhecimento-facial-ter-aeroportos-embarque-digital.htm

[9] General Data Protection Regulation.

[10] Ibidem, note 7.

[11] Transportation Security Administration. TSA launches cutting-edge passenger identification technology at LAX security checkpoints, 18 Mar 22, available at https://www.tsa.gov/news/press/releases/2022/03/18/tsa-launches-cutting-edge-passenger-identification-technology-lax

[12] Hungary Today. CCTV: Is It Big Brother or the Eye of Providence?, 18 Jan 2019, available at https://hungarytoday.hu/cctv-is-it-big-brother-or-the-eye-of-providence/

[13] Greens/EFA. Facial Recognition in European Cities, 15 May 22, available at https://www.greens-efa.eu/opinions/facial-recognition-in-european-cities-what-you-should-know-about-biometric-mass-surveillance/

[14] Ibidem, note 13.

[15] Molnar, Petra. “Technology on the margins: AI and global migration management from a human rights perspective”. Cambridge International Law Journal, University of Toronto, Canada, 2019.


HOW TO CITE THIS BLOG POST:

Costa, Victoria. “Facial Recognition and Immigration Policies: Secure Technology or Misleading Tools?”. NOVA Refugee Clinic Blog, September 2022, available at: <https://novarefugeelegalclinic.novalaw.unl.pt/?blog_post=facial-recognition-and-immigration-policies-secure-technology-or-misleading-tools>


About Victória Costa

She is a fourth-year student in the Bachelor of Laws and has been a member of the Núcleo de Estudantes Internacionais of NOVA School of Law since 2018.