Facing the Law: A Closer Look at AI and Facial Recognition Technology Part 2

In Part 1 of Facing the Law, we discussed the recent sanctions imposed by the Dutch Data Protection Authority against Clearview AI and explored the varied regulatory landscape for Facial Recognition Technology (FRT) across different regions. In this part, we examine how the use of FRT has amplified concerns about data privacy globally, leading to a wave of legal action and regulatory crackdowns.

TECH GIANTS AND PRIVACY CONCERNS

Over the past decade, major tech companies have become central to debates about data privacy, especially regarding their use of FRT. Prominent players such as Apple, Google, and Meta have embedded FRT into various products and services, resulting in the collection of vast amounts of biometric data. Privacy policies and terms of use are essential in this context, as they are where companies disclose how facial data is collected and used.

For example, Apple highlights in its privacy disclosures that facial recognition data is stored locally on users’ devices, and it offers end-to-end encryption for photos through ‘Advanced Data Protection.’ In contrast, Google Photos does not offer end-to-end encryption, so Google retains the ability to access and review users’ photos. This has led to numerous instances of users’ photos being flagged and accounts being disabled. Another example is Mastercard, which stated in its 2024 privacy notice that it may process biometric information via facial recognition as part of new authentication programmes, but only where users voluntarily enrol.

While most of these companies provide options to disable facial recognition features, data protection and privacy laws generally require explicit consent before sensitive biometric information such as facial data is processed. Where such consent is not obtained, individuals can seek legal recourse for privacy violations and enforce their rights.
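To make the consent requirement concrete, below is a minimal sketch of how an application might gate facial-data processing behind explicit, revocable user consent, in the spirit of what laws such as BIPA and the GDPR demand. It is an illustration under stated assumptions, not any company’s actual implementation; the names (ConsentStore, process_face_template) are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone

class ConsentRequired(Exception):
    """Raised when biometric processing is attempted without valid consent."""

@dataclass
class ConsentStore:
    # Maps user_id -> timestamp of the explicit opt-in, if any.
    grants: dict = field(default_factory=dict)

    def record_opt_in(self, user_id: str) -> None:
        # Consent must be an affirmative act (e.g. a written release under
        # BIPA), recorded with a timestamp so it can be evidenced later.
        self.grants[user_id] = datetime.now(timezone.utc)

    def revoke(self, user_id: str) -> None:
        # Withdrawing consent should be as easy as granting it (GDPR art. 7(3)).
        self.grants.pop(user_id, None)

    def has_consent(self, user_id: str) -> bool:
        return user_id in self.grants

def process_face_template(user_id: str, image: bytes, consent: ConsentStore) -> bytes:
    # Refuse to touch biometric data unless the user has explicitly opted in.
    if not consent.has_consent(user_id):
        raise ConsentRequired(f"no biometric consent on record for {user_id}")
    # Placeholder for a real feature-extraction step.
    return b"face-template:" + image[:16]

store = ConsentStore()
store.record_opt_in("user-42")
template = process_face_template("user-42", b"\x00" * 64, store)  # allowed
store.revoke("user-42")
try:
    process_face_template("user-42", b"\x00" * 64, store)  # now blocked
except ConsentRequired as err:
    print("Blocked:", err)

The design point is that the consent check sits in front of the processing step itself, so a revocation takes effect immediately rather than depending on downstream features remembering to check.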

RISING LITIGATION IN THE USA

Companies like Google and Meta have had to defend their use of FRT against legal challenges in the USA on several occasions. For instance, in a class-action lawsuit filed in Illinois against Google, it was alleged that the company violated the state’s Biometric Information Privacy Act (BIPA). Specifically, the plaintiffs claimed that the Google Photos application analysed people’s facial structure for its face grouping feature without meeting the statutory requirements of BIPA – providing information, obtaining written consent, and publishing data retention policies. Google ultimately paid $100 million to settle claims that it had used individuals’ biometric information for the feature.

Meta has faced even more censure and legal backlash over its handling of facial recognition data. In another case under BIPA, Meta was made to pay $650 million for the use of FRT in its facial tagging features on Facebook. The class covered about 7 million Facebook users in Illinois whose face templates Facebook had created and stored, and over 20% of eligible users filed claims. Similarly, TikTok agreed to pay a $92 million settlement over its alleged use of FRT to collect users’ biometric data, and was restricted from collecting and storing such data without user consent.

In the face of increasing scrutiny from courts and regulators, Meta decided to shut down the facial recognition system on Facebook and delete the individual facial recognition templates of over 1 billion people. It also recently settled a lawsuit filed by the state of Texas, which accused the company of unlawfully using FRT to collect biometric data from millions of Texans without their consent, in breach of Texas’s Capture or Use of Biometric Identifier Act, which prescribes obligations to be met by those handling biometric data. Under the settlement, Meta agreed to pay $1.4 billion for illegally using the sensitive personal information of Texans.

Another Meta product, made in collaboration with Ray-Ban – the Ray-Ban Meta Smart Glasses – is eyewear with integrated smart features that lets users take photos and videos and make phone calls hands-free. These smart glasses were recently repurposed by two Harvard students in a project demonstrating the privacy risks this technology brings. The glasses were used to livestream to Instagram, and an application developed by the students, called I-XRAY, monitored the live feed. Faces of members of the public captured from the stream were processed through services like PimEyes, which match them to publicly available images on the internet. I-XRAY then cross-referenced this data with people-search sites to uncover personal details such as addresses and phone numbers.

REGULATORY ACTION IN EUROPE

Apart from Clearview AI, other companies have faced regulatory pressure in Europe over their use of FRT. For example, the European Centre for Digital Rights (also called NOYB) filed a complaint against low-cost airline Ryanair in 2023 before the Spanish Data Protection Authority. This was in response to the company’s requirement that travellers who book through third parties, such as online travel agencies, undergo facial recognition as a step in the online booking process. In its complaint, NOYB described Ryanair’s facial recognition requirement as unnecessary and as presenting an unacceptably high privacy risk for the airline’s customers. The controversy resurfaced in 2024 when the EU Travel Tech Association filed complaints with the Belgian and French data protection authorities against Ryanair, again alleging that the company was using automated facial recognition to identify passengers who had booked without Ryanair customer accounts. The investigations are still underway, but fines for violations of the GDPR are anticipated.

In general, the regulation of FRT has been a bone of contention among institutions in Europe. Even though the general shift has been towards prohibition, countries like Belgium have a history of using FRT developed by companies like Clearview AI for policing, a practice the Belgian Data Protection Authority deemed unlawful. Governments, however, have largely avoided the regulatory pressure applied to private companies, which has created a new dimension of legal issues.

GOVERNMENT FRT USE

The use of FRT by governments around the world, especially in public places, has increased substantially over the past few years: China has deployed FRT cameras on a massive scale, India has embraced around 170 FRT systems for large-scale surveillance and crime prevention, and 18 of 24 surveyed US federal agencies reportedly deploy FRT systems, mostly for domestic law enforcement purposes.

Such government use of FRT generally enjoys legislative protection. For example, although the EU’s Artificial Intelligence Act (AI Act) restricts the use of biometric categorisation systems and the creation of facial recognition databases, it envisages exceptions permitting the deployment of real-time FRT where judicial or administrative authorisation has been obtained. Beyond the AI Act, India’s newly introduced Digital Personal Data Protection Act, 2023 and the aforementioned laws in Illinois and Texas all carve out exceptions where processing furthers the prevention, detection, or investigation of crime.

A 2024 study enumerated several countries whose governments have had a relatively free hand in the use of FRT. For example, Brazil’s General Data Protection Law makes data protection a fundamental right in the country, but it does not apply to data collection carried out for the purposes of public safety, national security, or the investigation or prosecution of criminal offences. In Australia and Singapore, even though there is no AI-specific legislation, governments use FRT widely with limited restrictions.

Hence, authorities have been given leeway to use FRT for policing and border control under the justifications of law enforcement and state security. This raises the question: even if individuals are able to exercise their rights against private companies, to what extent can they exercise the same rights against their governments?

A significant case in this regard was Glukhin v. Russia, where the compatibility of FRT with human rights was brought into question. The applicant had staged a solo demonstration in the Moscow underground and was arrested after he was identified from CCTV footage using live facial recognition. In its ruling, the European Court of Human Rights found that processing the applicant’s biometric data through FRT had violated his right to privacy. Given how innocuous the applicant’s actions were, even the ‘law-enforcement purpose’ exception could not justify the intrusion.

Even in the UK, where the use of FRT in law enforcement has generally been accepted, South Wales Police’s use of live facial recognition faced a legal challenge. The claimant alleged that the force had captured sensitive facial biometric data from around 500,000 people without their consent. Although the High Court initially ruled in favour of the police, the Court of Appeal overturned that ruling, holding that the appellant’s right to privacy had been breached.

Authors: Varun Alase, Shantanu Mukherjee