Facing the Law: A Closer Look at AI and Facial Recognition Technology Part 3

In the previous two parts of Facing the Law, we noted the increasingly widespread use of facial recognition technology (“FRT”) by governments and large companies alike, and its legal implications. In this piece, we dive deeper into the legal issues surrounding government use of FRT in particular: the exceptions in privacy laws that allow it, and the various legal challenges that have arisen against it.


CHINA AND THE ORWELLIAN EYE

Imagine a country where an all-seeing eye monitors every action of its citizens; collects data on their online activities, including the shopping sites they visit, the movies they watch, and their political opinions; and assigns them credit scores based on behaviours deemed good or bad. This score then determines whether citizens are granted or denied certain privileges, and whether they are regarded as ‘desirable citizens’. A state in which round-the-clock algorithmic surveillance has been normalised in the name of public security.

You probably wouldn’t need to stretch your imagination because, by all accounts, China is already such a state. Consider the app “WeChat,” used by nearly the entire 1.4 billion population of China, which has evolved from a basic messaging platform like WhatsApp into an all-in-one application. It facilitates everything from bill payments and loan approvals to divorce filings, all through a single interface. This convenience enables the government to collect extensive data on its citizens and take action accordingly. For instance, the state regularly censors users without their knowledge or consent, especially over content that is politically charged or otherwise deemed controversial. The government has also employed FRT to publicly shame individuals for minor infractions, such as jaywalking, by displaying their names, addresses, and parts of their ID numbers on billboards.

In a 2024 paper on FRT regulation in China, the researchers acknowledged that the country’s extensive state-enabled surveillance and data collection mechanisms have led to violations of privacy rights, and that citizens have limited legal recourse in such cases. In recent years, however, China has also been demonstrating how ‘authoritarian’ states can attempt to balance digital surveillance with the protection of citizens’ personal data. For instance, the Civil Code of the People’s Republic of China (2020) marked a major shift in the regulatory landscape for the protection of personal information, including biometric data. The Civil Code dedicates a new chapter to privacy and treats personal information as a basic civil right. Article 1035 even establishes general data protection principles similar to those envisaged in other major data protection laws, such as purpose limitation and the requirement of data subjects’ informed consent to the processing of their personal information.

China has, however, also adopted the public security exception discussed in Part 2 of Facing the Law. Although this is an internationally recognised data protection principle, the challenge in China is that the concept of public security has been interpreted expansively, according to the needs of the state. Moreover, since China’s legal system primarily reinforces state authority, citizens’ avenues for challenging surveillance and other government actions have been limited compared to other countries.


USA, UK, AND EUROPE

In the USA and Europe, legal challenges to government FRT use arise more frequently than in China, and they serve as reminders of the complexity of the ethical, moral, and legal issues involved in widespread FRT use. In the US, various cities have enacted bans or restrictions on FRT use by law enforcement, citing risks of surveillance overreach and racial bias. San Francisco, for instance, became the first major city to prohibit its police from using FRT, prompting similar measures in other jurisdictions. In Europe, a number of countries have seen lawsuits aimed at restricting the use of FRT by public authorities, focusing on issues such as consent, data retention, and the absence of clear guidelines governing its deployment.

USA

In June of this year, a landmark settlement was reached in the case of Robert Williams v. City of Detroit, where the plaintiff, Mr. Williams, had been arrested and detained by Detroit law enforcement in early 2020 based on a faulty FRT match. Mr. Williams was accused of stealing expensive watches from a Detroit store, but the only evidence of the theft was low-resolution CCTV footage of an African-American man shoplifting. Based on FRT analysis of that footage, the police arrested Mr. Williams despite his strong alibi for the time of the theft. The American Civil Liberties Union (ACLU) of Michigan then filed a lawsuit on his behalf, and in 2024 a settlement was finally reached under which the City of Detroit compensated Mr. Williams. Under the settlement terms, the Detroit police can no longer rely solely on FRT results to make arrests, and law enforcement personnel must undergo mandatory training on the use of FRT and the risks associated with it. The settlement also requires an audit of all cases dating back to 2017 in which the police used FRT to make arrests, to ensure that no other misidentifications were made.

In another recent case, T-Mobile was fined heavily for violating the Illinois Biometric Information Privacy Act and New York’s 2019 Biometric Identifier Information Act. T-Mobile was found to have been using FRT to collect facial images and voice samples of every individual who entered its stores, in order to deter potential theft. Since this information was obtained without the customers’ knowledge or consent, lawsuits were filed against the company under the aforementioned Acts. As of August 2024, T-Mobile has been fined USD 60 million for the data privacy breach.

UK

In 2020, the Court of Appeal of England and Wales overturned a previous ruling on South Wales Police’s use of an automated FRT system in the case of Edward Bridges v. South Wales Police. The automated system, known as “AFR Locate,” captures facial images from live camera feeds and compares them against a watchlist of wanted individuals and persons of interest. Mr. Bridges was scanned by this system in 2017 and 2018, and his image was recorded and stored without his knowledge or consent. The Divisional Court initially found no violation of his rights, but the Court of Appeal later ruled that the system lacked clear guidelines for police use and had not been assessed for potential discrimination on the basis of gender or race. It also found the data protection impact assessment required under the UK’s Data Protection Act 2018 to be deficient.

In July 2024, the UK Home Office issued a tender for new national facial recognition software for policing, citing the increasing use of the technology by the country’s law enforcement agencies and further reflecting its willingness to introduce FRT on a large scale in the name of the public good.

Europe

Germany has recently begun to push back against the EU’s Artificial Intelligence Act (which restricts FRT) by introducing a draft bill that would allow police deployment of FRT to identify criminals and terrorists. If enacted, the bill would permit German police to use facial recognition software to compare photos or screenshots from video footage with images posted on social media, so as to determine the whereabouts of suspects or identify unknown offenders. This follows the controversial use of live FRT in the German state of Saxony earlier this year, where law enforcement reportedly deployed a surveillance system called the Personal Identification System (PerIS) without notifying the German data protection authority.


CONCLUSION

The widespread use of FRT by governments has raised privacy concerns, as the collection and processing of biometric data can infringe individual rights and liberties. As governments increasingly integrate FRT into public safety and law enforcement initiatives, legal cases are on the rise, with individuals claiming violations of their civil rights. These lawsuits often centre on issues such as unlawful surveillance, violation of privacy rights, and a lack of transparency regarding how FRT data is collected, stored, and used.

The EU, through the Artificial Intelligence Act (AI Act), has imposed restrictions on FRT by classifying it as a high-risk AI application. Accordingly, FRT systems are subject to the compliance requirements set out in Chapter III, Section 2 of the AI Act, including technical documentation, risk management systems, and measures to ensure accuracy, robustness, and cybersecurity. The Act also mandates that any deployment of FRT undergo an impact assessment to evaluate its potential effects on privacy and other fundamental rights. Moreover, the use of FRT for real-time remote biometric identification by law enforcement is heavily regulated, with strict limitations placed on its use in public spaces, particularly for surveillance purposes; even then, deployment generally requires prior authorisation from a judicial authority or an independent administrative authority of the member state concerned. More broadly, Article 2 of the AI Act excludes matters of ‘national security’ from its scope, giving nations the flexibility to implement FRT under the ambit of that exception. Accordingly, a country like Germany may in fact continue to use the FRT systems it has in place, but to do so it must ensure compliance with the AI Act’s specific system requirements.

Authors: Varun Alase, Shantanu Mukherjee, Sreyosi Roy