AI in Healthcare Part 3: A Global Overview of AI Legislation

By Shantanu Mukherjee and Anushka Iyer

 

INTRODUCTION

In Part 2 of our series on AI in Healthcare, we examined the key legal and ethical challenges arising from the increasing use and adoption of AI, including issues related to data privacy, transparency, algorithmic bias, and product liability.

In the months since the launch and sensational success of ChatGPT, there has been a frenzy of activity in the AI space.

OpenAI, which had already moved from a non-profit to a "capped-profit" structure, secured a reported USD 10 billion investment from Microsoft and quickly launched GPT-4, a more advanced version of the LLM behind ChatGPT.

Microsoft quickly began deploying OpenAI’s models across its consumer and enterprise products, including the Azure OpenAI Service and Bing (which briefly, and infamously, developed a personality and began cajoling at least one user to leave his wife).

Google launched its chatbot competitor, Bard, which failed to impress in early demos. Meta launched LLaMA and announced that AI, not the metaverse, would be its “single largest investment”.

Virtually every tech company now has an AI product or products, from behemoths like NVIDIA, Salesforce, and Amazon to fledgling startups, all looking to solve problems large (drug development) and small (generating professional headshots from selfies).

But many have concerns about this runaway, unregulated explosion of advanced AI models. Large-scale deployment of AI poses real threats, including misinformation, risks to privacy and cybersecurity, and job displacement through automation, among others.

On March 22, 2023, the Future of Life Institute released an open letter, since signed by more than 50,000 signatories including tech celebrities such as Elon Musk and Steve Wozniak, calling for an immediate six-month moratorium on the development of advanced AI systems.[1] The letter points out that AI systems now have human-competitive intelligence, which ‘could represent a profound change in the history of life on Earth’, but that there is no commensurate level of planning and management of this technology, even as an “out-of-control race” is underway to develop new “digital minds” that not even their creators can understand or predict.

The open letter was criticised by a group of AI ethicists at the Distributed AI Research (DAIR) Institute (whose founders are known for having been pushed out of Google over a paper critical of large language models) for indulging in ‘longtermism’ and failing to address the more immediate harms arising from AI systems today. They suggest that:

“What we need is regulation that enforces transparency. Not only should it always be clear when we are encountering synthetic media, but organizations building these systems should also be required to document and disclose the training data and model architectures.”

Meanwhile, the Center for AI and Digital Policy (CAIDP), an AI-focused research organization, filed a complaint with the Federal Trade Commission (FTC) seeking to hold OpenAI liable for violating the FTC Act, which prohibits unfair and deceptive trade practices. CAIDP argues that OpenAI released GPT-4 for commercial use despite knowing the risks it poses – including potential bias, AI hallucinations,[2] and threats to data privacy.

So how do we regulate AI? What are governments around the world doing about it? Which jurisdictions appear closest to implementing meaningful and comprehensive AI legislation?

In this piece, we present an overview of how certain jurisdictions – namely the EU, US, UK, India, Singapore, and China – are looking to regulate AI, in a bid to track legislative trends in relation to this complex and rapidly evolving technology.

 

EUROPEAN UNION

Europe is one of the first jurisdictions to pursue comprehensive regulation of AI systems, and aims to serve as a template for AI legislative frameworks in other countries.

1. Draft EU AI Act

On April 21, 2021, the European Commission published the first draft of the EU AI Act – a first-of-its-kind regulation proposed to govern AI systems. Thereafter, following public feedback, the Council of the EU adopted a compromise text of the EU AI Act on December 6, 2022. The European Parliament is expected to hold a key committee vote on the draft by the end of April 2023, a step towards its adoption as a regulation.[3] We discuss a few key measures adopted by the EU AI Act below.

Extra-territorial applicability

Similar to the General Data Protection Regulation (GDPR), the EU’s primary data protection legislation, the EU AI Act has extra-territorial reach and applies to:

a. Providers placing AI systems on the EU market, regardless of whether they are physically located within the EU itself;

b. Users of AI systems located within the EU;

c. Providers and users of AI systems located anywhere outside the EU, if the output generated by the AI system is used within the EU.

Risk-based classification and requirements

The EU AI Act classifies AI systems according to the level of risk they pose, as follows:

a. Minimal or low-risk

b. High-risk

c. Unacceptable risk

AI Sandboxes

To encourage innovation, the EU AI Act provides for ‘AI Sandboxes’: controlled environments that mimic real-world conditions, in which AI systems can be developed, trained, tested, and validated with reduced or no regulatory obligations or liability.

Penalties

While EU member states have the flexibility to lay down state-specific rules on penalties and administrative fines, the EU AI Act defines an elaborate system of penalties for specific severe kinds of infringement.

Applicability of other provisions

The EU AI Act is not intended to operate in isolation but alongside a set of harmonized legislation and rules applicable based on the type of AI system – including, without limitation, the GDPR to govern aspects related to privacy, and the Medical Devices Regulation (MDR) to govern AI systems intended to perform medical functions.

2. AI Liability Directive and Product Liability Directive

On February 19, 2020, the EU published a White Paper on AI,[4] identifying the specific challenges AI poses to the EU’s existing liability rules. After a public consultation on adapting liability rules to AI, the European Commission adopted two proposals – the first, the AI Liability Directive, and the second, a revision of the EU’s existing Product Liability Directive – to harmonize the rules on liability arising from the use of AI systems.

AI Liability Directive[5]

Some of the key features of the AI Liability Directive include:

a. Access to evidence: Whereas an underlying defect in a traditional product may be readily apparent, it is often difficult to pinpoint the defect in an AI product’s underlying algorithm. To address this, the Directive allows courts to order providers of high-risk AI systems to disclose relevant and necessary evidence relating to such systems;

b. Presumption of causality: Subject to certain conditions set out in the Directive, courts may presume a causal link between the provider’s fault and the failure or defective output of the AI system;

c. Who can be sued?: In most cases, the provider of the AI system, and in some cases its users, may be sued for damages arising from the use of the AI system.

Product Liability Directive

The revised Product Liability Directive (PLD) adapts existing product liability rules to address new types of products and services, such as software, AI systems, and advanced machinery. Under the PLD:

a. software or AI systems are explicitly named as “products”;

b. a person has the right to bring a claim for damages against the manufacturer of such an AI product in cases of death, personal injury (including psychological harm), damage to property, or loss or corruption of data;

c. in cases where the court considers that a person bringing a claim faces difficulty in proving a defect or causality between defect and damage due to technical or scientific complexity, the burden of proof may be reduced (though the person sued can contest this). This proposal to shift the burden of proof aligns the proposed Product Liability Directive with the AI Liability Directive;[6]

d. claim thresholds and caps on compensation levels have been removed;

e. parties such as importers, authorised representatives, service providers or distributors may be held liable for AI products manufactured outside the EU in cases where a manufacturer cannot be identified.

 

UNITED KINGDOM

White Paper on AI Regulation

On March 29, 2023, the UK Government published a white paper on the regulation of AI systems titled ‘A pro-innovation approach to AI regulation’.[7] The white paper outlines five principles that should be at the center of any AI regulation, namely:

a. safety, security, and robustness;

b. transparency and explainability;

c. fairness;

d. accountability and governance;

e. contestability and redress.

The white paper makes certain other recommendations including:

a. Adoption of a context-specific, layered approach focused on the output generated by AI systems and the context in which they are intended to be used. In other words, the UK intends to regulate the use of AI systems rather than the AI systems themselves;

b. Establishment of regulatory sandboxes and test-beds, and technical standards;

c. Periodic compliance assessments throughout the life-cycle of an AI system;

d. Compliance with the UK’s existing laws, e.g. laws governing equality or data privacy.

The white paper, however, lacks a clear position on the allocation of liability arising from the use of AI systems, stating only that this is an area that is ‘complex and rapidly evolving’. The paper is open to public comments until June 21, 2023.

 

UNITED STATES

1. Federal Policies, Laws, and Regulations

AI Bill of Rights

On October 4, 2022, the White House Office of Science and Technology Policy (OSTP) released a white paper titled the ‘Blueprint for an AI Bill of Rights’,[8] aimed at providing practical guidance to government agencies, companies, researchers, and other stakeholders in building AI systems that are human-centric. It identifies five non-binding principles intended to act as a backstop and prevent or minimize the potential risks posed by AI systems.

National Institute of Standards and Technology (NIST) Risk Management Framework

On August 18, 2022, NIST released a second draft of its AI Risk Management Framework (AI RMF) together with an accompanying Playbook; version 1.0 of the AI RMF was published in January 2023.[9] The AI RMF provides guidance for managing risks in the design, development, use, and evaluation of AI systems. It is non-binding and intended for voluntary use to incorporate trustworthiness considerations into AI products, services, and systems.[10]

Algorithmic Accountability Act of 2022

The Algorithmic Accountability Act of 2022, introduced on February 3, 2022, would require technology companies to perform bias impact assessments of automated decision-making systems involved in making critical decisions in sectors including employment, financial services, healthcare, housing, and legal services.

FTC Guidelines

The FTC released guidelines on the use of AI and algorithms[11] which emphasize the benefits of AI while identifying its potential risks. In the guidelines, the FTC highlights five key elements that must be taken into consideration when designing AI systems:

a. Transparency;

b. Explainability;

c. Fairness;

d. Robustness;

e. Accountability.

2. State Laws and Regulations to Protect Against Unfair Discrimination

A number of US jurisdictions, such as Washington, D.C. and Colorado, have sought to address bias and prevent unfair discrimination arising from the use of AI systems.

In the District of Columbia, a pending bill titled the Stop Discrimination by Algorithms Act of 2021 (SDAA) seeks to prohibit the discriminatory use of algorithmic decision-making in sectors such as employment, housing, healthcare, and financial lending.

In July 2021, Colorado enacted a law titled ‘Protecting Consumers from Unfair Discrimination in Insurance Practices’ intended to protect consumers from unfair discrimination in insurance rate-setting mechanisms. Similar laws and policies have since been proposed in Indiana, Oklahoma, Rhode Island, New Jersey, and California.

3. AI for Employment Decisions

NY City AI Law

New York City’s AI law restricts employers from using Automated Employment Decision Tools (AEDTs) in hiring and promotion decisions unless the tool has undergone and cleared an independent bias audit in the year immediately preceding its use. The law also imposes certain posting requirements and notices to be given to applicants and employees.

New Jersey Bill to regulate the use of AI tools in hiring decisions

On December 5, 2022, New Jersey lawmakers introduced a bill, similar to the NY City AI Law, to regulate the use of AEDTs in hiring decisions and minimize discrimination in employment. It adopts audit and candidate-notification requirements similar to those set out in the NY City AI Law.

Unlike the EU and Singapore, the US does not yet have a single framework governing the use of AI systems. Instead, much like its privacy regime, the US AI regulatory framework consists of a patchwork of current and proposed rules, laws, and policies adopted for different sectors or by different states.

 

INDIA

In India, NITI Aayog, a Government of India policy think tank, has published two papers on AI – the National Strategy for AI and Responsible AI.

1. National Strategy for AI

The National Strategy for AI (NSAI)[12] was published in 2018 and identified five focus areas for AI intervention: healthcare, agriculture, education, smart cities and infrastructure, and smart mobility and transportation.

It also identified the key challenges to the adoption of AI in India such as lack of enabling data ecosystems, low intensity of AI research, inadequate availability of AI expertise, high resource cost and low awareness, and unclear privacy, security, and ethical issues, among others.

Recommendations

It recommended greater self-regulation, collaboration between industry stakeholders and lawmakers, and the adoption of adequate accountability and liability mechanisms. The NSAI also set out actionable steps for the Government, including, without limitation, setting up AI research centers, updating the IP framework to safeguard AI innovation, introducing AI/ML courses in universities, and implementing an extensive data privacy and protection framework in line with global standards.

2. Responsible AI

Responsible AI[13] is a two-part working paper published by NITI Aayog which explores the ethical considerations involved in the use of AI systems, such as bias, privacy and security risks, and lack of transparency, among others. It offers a comparative analysis of the global regulatory landscape governing AI and prescribes a non-binding self-assessment guide for all stakeholders to evaluate their AI governance procedures and policies. It also recommends the adoption of legal, regulatory, and technological measures to achieve compliance with the Responsible AI principles of safety and reliability, transparency, equality, inclusivity, privacy and security, and accountability.

3. Ethical Guidelines for the Application of Artificial Intelligence in Biomedical Research and Healthcare

The Department of Health Research (DHR) and the Indian Council of Medical Research (ICMR) recently released a set of working guidelines governing the use of AI in biomedical research and healthcare in India, titled the Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare.[14] The guidelines are non-binding and cover ethical principles for medical AI, guiding principles for stakeholders, an ethics review process, governance of AI use, and the requirements for valid informed consent. They are addressed to all stakeholders involved in AI-based biomedical research and healthcare, including creators, developers, researchers, clinicians, ethics committees, institutions, sponsors, and funding organizations.

Earlier in March, Union Minister of State for Electronics and IT, Rajeev Chandrasekhar, promised that the Digital India Act (which is proposed to replace the country’s existing technology laws) will include guardrails for the responsible use of AI.

 

SINGAPORE

Model Artificial Intelligence Governance Framework

On January 23, 2019, Singapore introduced the Model Artificial Intelligence Governance Framework[15] (Model Framework) at the World Economic Forum (WEF) annual meeting, and released a revised version of the Model Framework the following year. The key features of the Model Framework are:

a. Key Principles; and

b. Guidance Areas.

Key Principles

The Model Framework is centered around two key principles:

a. The decisions made by an AI system should be explainable, transparent, and fair;

b. AI systems must be human-centric, i.e. they should focus on the safety and well-being of consumers.

Guidance Areas

The Model Framework develops the key principles into four areas of guidance as follows:

a. Internal governance structures and measures: This involves adapting or establishing governance structures to incorporate values, risks, and responsibilities relating to AI decision-making.

b. Level of Human Involvement: Organizations must identify acceptable risks of the AI system and determine the level of human involvement in decision-making accordingly.

c. Operations management: Lays out factors that organizations must consider when developing, selecting, and maintaining AI models, including data management practices, transparency, auditability, and good governance.

d. Stakeholder communication: This includes strategies for cross-stakeholder communication, i.e. creating a policy on the details to be shared with AI users, giving users the option to opt out of the use of AI systems, and establishing a feedback and grievance redressal channel.

The Singapore model recommends a rules- and risk-based management approach to addressing the risks associated with AI, which broadly aligns with other global frameworks, such as the EU AI Act.[16]

AI Verify

Following the Model Framework, Singapore also launched AI Verify, the world’s first AI governance testing framework and toolkit, to enable companies to test and demonstrate the safety and reliability of their AI systems. The toolkit was developed jointly by Singapore’s Personal Data Protection Commission and the Infocomm Media Development Authority. For now, AI Verify functions as a Minimum Viable Product (MVP), akin to a beta release: organizations can obtain early access to the product while it undergoes further testing and development.

 

CHINA

China’s Draft AI legislation

On April 11, 2023, the Cyberspace Administration of China (CAC) released a draft of the “Management Measures for Generative AI Services” – China’s new draft AI law.[17] A few key features include:

a. Extra-territorial applicability: The measures apply to any AI product providing services to the public in China. This means that, similar to the EU AI Act, China’s draft AI law envisages extra-territorial applicability;

b. Adherence to socialist values and other laws: AI products and any content they generate must reflect core socialist values, respect intellectual property rights, business ethics, and privacy rights, and must not result in unfair discrimination;

c. Security Assessment: A security assessment must be undertaken before any AI product is released for use by the public;

d. Obligations of the provider: Providers of AI products must ensure compliance with the relevant laws throughout the life-cycle of the AI product, provide users with the necessary operating and other instructions, appropriately label AI-generated content, and establish a grievance redressal mechanism.

China’s draft AI law comes after a number of Chinese tech giants, including Alibaba, Baidu, and SenseTime, launched advanced AI products ranging from chatbots to image generators. The draft is open for public comments until May 10, 2023.

A number of other countries have also moved to introduce AI rules or legislation, including Brazil, which has published an AI report and a draft AI law, and Canada, which has tabled the Artificial Intelligence and Data Act (AIDA).

 

CONCLUSION

ChatGPT’s sensational success has taken the tech motto ‘move fast and break things’ to another level. Every company, from Microsoft (leading the charge with OpenAI’s suite of products) to NVIDIA, Google, and Meta, is jostling to secure a toehold in this new market by deploying AI products as quickly as it can, sometimes at the cost of a botched launch (as with Google Bard) or of having to tighten access to the product (Microsoft’s ChatGPT-powered Bing, or ‘Sydney’, as it called itself) when something goes wrong. While AI has the potential to revolutionize industries and improve our lives, it also has the potential to do significant harm – e.g., large-scale automated theft of data and intellectual property, patient and consumer harm from inaccurate or malfunctioning AI systems, and algorithmic bias. There is therefore a pressing need to implement regulations that take a human-centric approach and sufficiently address the potential risks of AI while paving the way for innovation.


REFERENCES

[1] https://futureoflife.org/open-letter/pause-giant-ai-experiments/

[2] AI hallucinations are the phenomenon where AI systems produce outputs that are fabricated, false, or unpredictable, e.g. when Bing claimed to have spied on its developers, or when Google’s Bard answered a question incorrectly in its first demo.

[3] https://www.euractiv.com/section/artificial-intelligence/news/ai-act-european-parliament-headed-for-key-committee-vote-at-end-of-april/

[4] https://commission.europa.eu/document/d2ec4039-c5be-423a-81ef-b9e44e79825b_en

[5] https://commission.europa.eu/system/files/2022-09/1_1_197605_prop_dir_ai_en.pdf

[6] https://www.allenovery.com/en-gb/global/blogs/digital-hub/european-commission-proposes-ai-liability-directive-and-modernised-product-liability-directive

[7] https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1146542/a_pro-innovation_approach_to_AI_regulation.pdf

[8] https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf

[9] https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf

[10] https://www.nist.gov/itl/ai-risk-management-framework

[11] https://www.ftc.gov/business-guidance/blog/2020/04/using-artificial-intelligence-and-algorithms

[12] https://niti.gov.in/sites/default/files/2019-01/NationalStrategy-for-AI-Discussion-Paper.pdf

[13] https://www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf; https://www.niti.gov.in/sites/default/files/2021-08/Part2-Responsible-AI-12082021.pdf

[14] https://main.icmr.nic.in/content/ethical-guidelines-application-artificial-intelligence-biomedical-research-and-healthcare

[15] https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.pdf

[16] https://www.globalgovernmentforum.com/singapores-ai-governance-framework-insights-governments/.

[17] http://www.cac.gov.cn/2023-04/11/c_1682854275475410.htm.