AI in Healthcare Part 4: A Global Overview of AI Legislation

Shantanu Mukherjee, Anushka Iyer, Shruti Gupta, Gaea Sukumar

 

BACKGROUND

In Part 3 of our series on AI in Healthcare, we looked at AI legislation in various stages of development in the EU, US, UK, Singapore, India, and China. In this piece, we continue our review of current and proposed AI regulations in the EU, US, South Korea, Japan, and Australia.

 

EUROPEAN UNION

On June 14, 2023, the European Parliament adopted its negotiating position on the EU AI Act, which seeks to regulate AI systems and foundation models, such as the one underlying ChatGPT. If agreement on the final text of the EU AI Act is reached before January 2024, it may come into force as early as June 2024.

 

UNITED STATES

Automated Decision Making Systems

On June 7, 2023, the US Senate introduced a draft text of the Transparent Automated Governance (TAG) Act, which seeks to impose transparency obligations on US government entities using Automated Decision-Making Systems (“ADS”) to make critical decisions.

The proposed obligations for US agencies under the TAG Bill include:

  • Informing individuals about their interactions with an ADS;
  • Establishing a grievance redressal mechanism for individuals aggrieved by the output of the ADS;
  • Providing users with the option of an alternative review by an individual without the assistance of an ADS; and
  • Ensuring the accuracy, reliability, and accountability of the decision made by the ADS.
In addition to these attempts at the federal level, states such as New York, New Jersey, Massachusetts, Connecticut, Rhode Island, and California have also introduced legislation to govern ADS.

California[1]

California’s AB 331, a bill to regulate the use of Automated Decision Tools (“ADT”), proposed various requirements, including an impact assessment, a private right of action for users, and a notice informing users that an ADT is being used in decision-making. However, on May 18, 2023, the California legislature suspended the bill, meaning it will not be considered for the rest of the year.

Connecticut[2]

The Connecticut bill proposes the categorization of “automated systems” into three types:

    • Automated decision systems – which are used to make, inform, or materially support state agencies in critical decision-making.

    • Automated decision support systems – which are systems that merely provide material information to inform an individual’s decision on behalf of a state agency.

    • Automated final decision systems – which are systems that independently make decisions on behalf of a state agency.

The bill proposes AI governance measures, safeguards to ensure fairness, and the appointment of AI Officers, who are required to periodically examine automated systems and deactivate those that are inconsistent with the then-existing legal framework. It also seeks to establish an AI Advisory Board to provide guidance and periodically update the legislation in line with technological advancements.

Massachusetts[3]

On February 16, 2023, Massachusetts introduced draft legislation governing the use of ADS in employment-related decisions. The bill entitles employees to notice of the use of an ADS and the right to request information, including whether their data is being used as an input to the ADS and the output generated from such data.

Notably, since 2021, US lawmakers have introduced close to 200 bills regulating AI, of which only 12 have been passed to date.

 

SOUTH KOREA[4]

Act on Fostering AI and Establishing a Trust Base

On February 14, 2023, the National Assembly’s Science, ICT, Broadcasting and Communications Committee proposed a bill whose title translates to the “Act on Fostering AI and Establishing a Trust Base”. The bill aims to become the primary law regulating AI in South Korea. It defines AI as the implementation of human-like intellectual abilities, such as learning, reasoning, perception, judgment, and language comprehension, through electronic means.[5] It also requires stakeholders to adhere to binding principles, including ensuring safety and reliability throughout the life cycle of AI systems, promoting accessibility, and fostering innovation through self-regulatory measures. Further, the Ministry of Science and ICT (MSIT) has been granted broad powers to issue guidelines on risk management, certification, assessment of technical feasibility, protection of user interests, and compliance with existing laws.

Strategy to realize trustworthy AI

More recently, on May 13, 2023, at a general meeting of the Presidential Committee, the MSIT presented the Strategy to realize trustworthy AI, which consists of three strategies and ten action plans, including the design and development of trustworthy systems and the conduct of risk assessments and trust certification. The MSIT intends to implement these in phases through 2025.

NIS Security Rules for use of ChatGPT and other AI-powered chatbots

On June 11, 2023, the National Intelligence Service (NIS), South Korea’s spy agency, announced that it plans to release security guidelines for the use of ChatGPT and other AI chatbots.

 

JAPAN[6]

Social Principles of Human-centric AI

In March 2019, the Japanese Government published the “Social Principles of Human-centric AI”, which set out seven basic principles, including human-centric AI systems, establishing AI literacy, safeguarding privacy and security, ensuring fair competition, and making AI systems fair, accountable, and transparent. Japan’s AI regulatory policy is based on these Social Principles.

METI Guidelines

In July 2021, the Ministry of Economy, Trade and Industry (METI) published the non-binding Governance Guidelines for Implementation of AI Principles, which lay down action targets for implementing the Social Principles, with practical examples.

METI has also published the Contract Guidelines on Utilization of AI and Data, which explain key legal issues in contracts for data transfer in relation to AI development and include model clauses.

The AI White Paper

On April 13, 2023, the Japanese Government published the “AI White Paper: Japan’s National Strategy in the New Era of AI”, a working white paper first published in 2019 and updated annually to reflect the evolving AI policy landscape. Against this backdrop, in April 2023, the Japanese Government also established a new Strategy Council to act as the central command center for deliberating national AI strategies and providing primary policy direction.

AI and Copyright

On May 30, 2023, Japan’s Agency for Cultural Affairs (“ACA”) issued a statement titled “About the relationship between AI and copyright”, clarifying that AI may be freely used for educational, research, and other non-commercial purposes. If, however, a copyrighted work is used for commercial purposes without authorization from the copyright holder, it may constitute copyright infringement. In such cases, the copyright holder may claim damages or seek an injunction, and the infringer may even be subject to criminal penalties.

 

AUSTRALIA[7]

AI Ethics Principles

In November 2019, Australia’s Department of Industry, Science, and Resources introduced a voluntary national AI Ethics Framework for the development of AI governance, comprising eight tenets:

  • The well-being of humans, society, and the environment;
  • Human-centric AI;
  • Fairness;
  • Privacy and security;
  • Reliability and safety;
  • Transparency and explainability;
  • A grievance redressal mechanism to challenge AI outputs; and
  • Accountability.

Discussion Paper on Safe and Responsible AI[8]

More recently, on June 1, 2023, the Australian Government released a discussion paper titled ‘Safe and Responsible AI in Australia’. The high-level paper surveys the existing framework in Australia and international developments, identifies potential gaps in the current AI governance landscape, and seeks feedback on the governance and regulatory measures needed to address them. While it does not cover every aspect, such as the impact on the labor market, national security, and IP, it recommends, similar to the EU AI Act, adopting a risk-based approach to AI governance.

 

CONCLUSION

From a bird’s-eye view, global approaches to AI can be broadly classified into the hard-law approach taken by jurisdictions such as the EU, Canada, and China, and the soft-law approach taken by jurisdictions such as the UK and Japan. With a handful of state-level laws already passed, the US seems likely to fall into the former class once bills such as the Algorithmic Accountability Act and the TAG Act are passed.


REFERENCES

[1] https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20230424-california-proposes-artificial-intelligence-legislation

[2] https://www.cga.ct.gov/2023/fc/pdf/2023SB-01103-R000228-FC.pdf (SB 1103)

[3] Bill H.1873 https://malegislature.gov/Bills/193/H1873

[4] https://www.lexology.com/library/detail.aspx?g=fa073ec6-81a1-44fd-87ce-c8d3f5f7a706

[5] https://www.kimchang.com/en/insights/detail.kc?sch_section=4&idx=26935

[6] https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20220128_2.pdf, https://practiceguides.chambers.com/practice-guides/artificial-intelligence-2023/japan/trends-and-developments

[7] https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles

[8] https://storage.googleapis.com/converlens-au-industry/industry/p/prj2452c8e24d7a400c72429/public_assets/Safe-and-responsible-AI-in-Australia.pdf