Did Minority Report, the Spielberg sci-fi classic released in 2002, predict the future? Certainly: among other things, it predicted wearable technologies, voice activation, driverless cars, and gestural and multi-touch interfaces. In fact, the film’s production designer once said that over 100 patents had been issued for ideas first presented in the film.
INTRODUCING CLEARVIEW
Clearview AI, an American company specializing in facial recognition software, enables users to identify individuals by matching facial images against a vast database consisting of more than 30 billion facial images. This database is built by scraping publicly available images from the internet, including social media sites, news articles, and other online sources. The company’s services are primarily marketed to law enforcement agencies and security services, positioning its technology as a tool for solving crimes, locating missing persons, and enhancing public safety.
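Systems like Clearview's typically work by encoding each face image into a numeric vector (an "embedding") with a deep neural network, then searching a database for the closest stored vector. The sketch below is purely conceptual – random vectors stand in for real face embeddings, and the database size, dimensions, and threshold are illustrative assumptions, not details of Clearview's actual system:

```python
# Conceptual sketch of embedding-based face matching.
# Random vectors stand in for real face embeddings; in practice a deep
# neural network would produce these vectors from face images.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "database" of 1,000 face embeddings (128-dim each),
# normalized to unit length so cosine similarity reduces to a dot product.
db = rng.normal(size=(1000, 128))
db /= np.linalg.norm(db, axis=1, keepdims=True)

def identify(probe, database, threshold=0.7):
    """Return (index, score) of the best match, or (None, score) if no
    stored embedding is similar enough."""
    probe = probe / np.linalg.norm(probe)
    scores = database @ probe          # cosine similarity to every entry
    best = int(np.argmax(scores))
    return (best if scores[best] >= threshold else None), float(scores[best])

# A probe that is a slightly noisy copy of entry 42 (e.g. a different
# photo of the same person) should match entry 42.
probe = db[42] + 0.05 * rng.normal(size=128)
idx, score = identify(probe, db)
```

The privacy concerns discussed below stem from how such a database is populated: each row corresponds to a real person's face, scraped without their knowledge or consent.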
Clearview AI has, on more than one occasion, faced criticism regarding privacy, data protection, and ethical issues due to the sensitive nature of its AI system, leading to legal challenges and regulatory scrutiny across various jurisdictions. In its most recent run-in with the law, Clearview AI was subject to heavy fines in the Netherlands for illegally processing residents’ personal data.
IF IT’S TUESDAY, THIS MUST BE HOLLAND
On September 3rd, the Dutch Data Protection Authority, known as the Autoriteit Persoonsgegevens (AP), announced a fine of €30.5 million against Clearview AI.
The AP highlighted the company’s disregard for data protection rules, which require that personal data be processed transparently and, especially for sensitive technologies like facial recognition, with explicit consent. In its order, the AP found that Clearview AI had processed the personal data of data subjects in the Netherlands without any legal basis to do so, thereby violating Article 5 (principles of processing – lawful, fair and transparent) and Article 6(1) (lawfulness of processing) of the GDPR. The order followed an investigation by the AP, which revealed that Clearview AI had gathered images from internet sources without obtaining proper consent. Clearview AI’s services also violated Article 9(1) of the GDPR, which prohibits the processing of special categories of personal data, including biometric data; its failure to inform data subjects of this processing further transgressed their rights under Article 12(1). Moreover, as a data controller not established in the EU, Clearview AI was required to designate a representative in the EU under Article 27 of the GDPR, which it failed to do.
The AP also issued orders backed by penalties for non-compliance: it has mandated compliance periods within which Clearview AI must end the actions that violate the GDPR, failing which the company will incur additional penalties of up to €5 million.
Clearview AI released a statement claiming the decision “is unlawful, devoid of due process and is unenforceable,” but has not appealed it. The AP has also warned Dutch companies that using Clearview’s services is prohibited going forward.
REPEAT OFFENDER
Clearview AI has previously been sanctioned in Europe for breaching the GDPR. Regulators in Italy, Greece, and France each fined the company €20 million, all on the same grounds – unlawful processing of personal data, lack of consent and transparency, and breach of data subjects’ rights. The Austrian data protection authority likewise found Clearview AI’s processing illegal, deeming the company’s technology illicit in its jurisdiction and ordering the erasure of the personal data it had obtained. In the UK, the Information Commissioner’s Office fined Clearview AI around $9.4 million, though the company successfully appealed by arguing that the agency lacked jurisdiction to take enforcement action.
GLOBAL REGULATION OF FRT
European Union
The EU’s stance on FRT is well established through both the GDPR and the Artificial Intelligence Act (EU AI Act). While the GDPR places a conditional ban on the processing of biometric data due to its sensitive nature, the EU AI Act regulates the use of biometric identification systems, including FRTs, to prevent pervasive surveillance. It classifies AI systems by risk level and imposes stricter requirements on those deemed high-risk. Since most FRTs fall into the high-risk category under the EU AI Act, they are subject to stringent technical standards and must undergo a fundamental rights impact assessment before their initial use.
United States
In the USA, the California Consumer Privacy Act (CCPA) treats the processing of biometric information (including facial recognition data) for the purpose of uniquely identifying a consumer as sensitive personal information, and numerous other state-level Acts govern the use of biometric data in their respective states. At the federal level, however, there is still no broad regulation dealing exclusively with FRT, even though the US is, after China, the most heavily surveilled country in the world. A Bill is pending in Congress – the Facial Recognition and Biometric Technology Moratorium Act of 2023 – which would prohibit biometric surveillance by the Federal Government without explicit statutory authorization.
United Kingdom
In the UK, papers published by the Information Commissioner’s Office show that the country is attempting to craft guidelines for the safe and effective use of FRT rather than over-regulating it. The ICO has endorsed the use of live facial recognition – which allows real-time identification of individuals based on their facial characteristics – for surveillance, marketing, and other uses. The UK’s pro-innovation approach to AI, set out in a government white paper, indicates that it will not take too heavy a hand in dealing with facial recognition.
India
India has no law specific to FRT. However, the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (SPDI Rules) define biometric information to include ‘facial patterns,’ and classify it as sensitive personal data or information. These Rules, which apply to body corporates in India, regulate the collection, storage, use, disclosure, and transfer of SPDI (Rules 5, 6, and 7), as well as the security practices and procedures for handling it (Rule 8). Any company in the FRT space would need to comply with these.
The Digital Personal Data Protection Act, 2023 (DPDP Act) defines personal data broadly, leaving room to interpret sensitive information (such as biometric data) as falling within its ambit. As in other data privacy laws around the world, the essential requirements for processing are consent and legitimate purpose, and numerous rights are conferred on the data principal to protect their personal information. Interestingly, however, Section 17 of the DPDP Act disapplies these rights and requirements where processing is in the interest of the prevention, detection, investigation, or prosecution of any offence. This suggests that widespread use of state-enabled FRT in India, especially in public places, may well be legally defensible on law-enforcement grounds.
While the SPDI Rules are specific regulations under the Information Technology Act, 2000 (IT Act) outlining security and consent requirements for sensitive personal data or information, the DPDP Act provides a more comprehensive framework for the protection of all types of personal data. It has extraterritorial application, imposes a wider range of obligations on entities, and even establishes the Data Protection Board of India. However, the DPDP Act is not yet in force in its final form, as many implementing Rules are yet to be notified to give effect to its provisions. Upon enforcement, the DPDP Act will replace the SPDI Rules and Section 43A of the IT Act, forming a single consolidated data protection legislation.
CONCLUSION
In the wake of FRT’s burgeoning applications, several global tech companies have been sued in the US and Europe for collecting and using FRT and biometric data. In Part 2 of Facing the Law, we take a closer look at these cases.
Authors: Varun Alase, Shantanu Mukherjee