Policy Brief: Harmonised Standards for the EU AI Act

The EU AI Act and Healthcare: Shaping Standards for Safe and Trustworthy AI

The EU has taken a pioneering step with the EU AI Act (“Act”), the world’s first comprehensive legal framework for Artificial Intelligence (“AI”), which entered into force on August 1, 2024. The Act underscores the EU’s commitment to fostering safe, trustworthy, and innovation-friendly AI across all sectors, including healthcare, where the stakes for patient safety and public trust are high.

The Act classifies healthcare as a high-risk sector, mandating that AI systems meet essential requirements around risk management, data quality, transparency, human oversight, and cybersecurity. These technologies, from AI-assisted diagnostics to patient monitoring systems, must therefore undergo rigorous assessments designed to protect the health, safety, and rights of individuals. By harmonising standards, the Act gives healthcare AI providers a clear path through the regulatory landscape, reducing compliance burdens while keeping safety and ethical integrity in place.


Requirements for providers of high-risk AI systems (Articles 8-17 of the Act)

High-risk AI system providers must implement comprehensive measures to ensure safety and ethical use. These include establishing a robust risk management system, applying rigorous data governance practices, and maintaining detailed technical documentation. They must also design systems that enable record-keeping, provide clear instructions for use, allow effective human oversight, and achieve high levels of accuracy, robustness, and cybersecurity. Finally, a robust quality management system is essential to ensure ongoing compliance with the Act.
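To make the record-keeping obligation concrete, below is a minimal, illustrative Python sketch of an audit trail that logs each inference event with a timestamp, model version, input reference, and output. The helper name `log_inference_event` and the record fields are assumptions for illustration only; the Act and the forthcoming standards do not prescribe any particular implementation.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical sketch of record-keeping for a high-risk AI system:
# each inference event is logged with enough context to support
# post-hoc risk identification. All names and fields below are
# illustrative assumptions, not requirements of the Act.

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit_trail")

def log_inference_event(model_version: str, input_ref: str,
                        output: str, confidence: float) -> None:
    """Append a structured, timestamped record of one AI decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # traceability of the deployed system
        "input_ref": input_ref,          # a case reference, not raw patient data
        "output": output,
        "confidence": round(confidence, 4),
    }
    audit_log.info(json.dumps(record))

# Example: recording a single diagnostic suggestion
log_inference_event("triage-model-1.3.0", "case-0042",
                    "refer to radiologist", 0.87)
```

In a real deployment, such records would feed the provider’s risk management and post-market monitoring processes rather than a plain application log.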


What’s New for Healthcare AI?

On October 24, 2024, the Joint Research Centre of the European Commission published a policy brief on ensuring the safe and ethical development of AI in the EU through standardised practices.

The policy brief outlines the key requirements for high-risk AI systems under the EU AI Act and explains the role of technical standards in defining how to meet them in practice. From August 2026, these systems must adhere to strict standards for risk management, data quality, transparency, human oversight, and cybersecurity. To ensure compliance, providers must establish a robust quality management system and undergo rigorous conformity assessments before placing their AI products on the market.

The EU AI Act sets out the essential safety requirements for high-risk AI systems. To make them operational, European standardisation organisations are developing technical standards that provide practical guidelines and best practices for meeting the legal requirements. Once assessed and published in the Official Journal of the EU, these harmonised standards will give providers of high-risk AI systems a presumption of conformity with the relevant legal obligations, thereby simplifying compliance.

AI standards are developed through a collaborative process involving various stakeholders, including small and medium-sized enterprises and societal groups. While creating new standards from scratch can be time-consuming, the EU can leverage existing international standards from organisations like ISO and IEC to expedite the process.


Key Focus Areas in AI Standards:

  • Risk Management: AI providers must actively identify and mitigate risks to health, safety, and fundamental rights throughout the entire AI system lifecycle. These measures must be demonstrably effective, with thorough testing and evaluation protocols.
  • Data Governance and Quality: High standards of data quality are essential to prevent bias and ensure accuracy in AI-driven healthcare solutions. The Act emphasizes robust data governance to manage data throughout an AI system’s lifecycle, especially in data-intensive fields like machine learning.
  • Transparency: Transparency is critical, with requirements for clear information on AI system functionality, limitations, and risks. This transparency enables users and healthcare providers to make informed, confident decisions on AI utilisation.
  • Record Keeping: AI providers must maintain accurate records on AI operations and performance, essential for continuous risk identification and mitigation.
  • Human Oversight: Human oversight measures are a cornerstone of the Act, ensuring human intervention is possible when necessary. This is particularly vital in healthcare to allow healthcare professionals to intervene in critical decisions.
  • Accuracy: Standards specify precise accuracy metrics, setting thresholds for acceptable performance and requiring reliable, consistent measurement and reporting, which is vital in diagnostics and treatment recommendations (a minimal sketch of such a threshold check, paired with a human-oversight gate, appears after this list).
  • Cybersecurity: With the sensitive nature of healthcare data, robust cybersecurity is essential. The Act mandates security measures to guard AI systems against cyber threats, ensuring patient data and operational integrity are protected.
  • Robustness: AI systems in healthcare must be resilient to errors, faults, and inconsistencies to prevent adverse effects. Specific measures ensure that the AI system performs safely, even under challenging conditions.
  • Quality Management: Effective quality management systems ensure ongoing compliance with the Act, supporting healthcare AI systems throughout their lifecycle.
  • Conformity Assessment: A structured conformity assessment process will verify that AI systems meet all legal requirements before entering the healthcare market, setting a trusted benchmark for safe deployment.
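To illustrate the Accuracy and Human Oversight items above, here is a minimal Python sketch that checks measured sensitivity against a declared performance threshold and routes low-confidence outputs to clinician review. The threshold values, function names, and toy data are hypothetical assumptions; neither the Act nor any harmonised standard fixes these numbers.

```python
# Illustrative only: the thresholds below are assumed values a provider
# might declare in its technical documentation, not figures from the Act.

DECLARED_SENSITIVITY = 0.95  # assumed documented performance claim
REVIEW_CONFIDENCE = 0.80     # below this, a clinician must confirm the output

def sensitivity(y_true: list[int], y_pred: list[int]) -> float:
    """True-positive rate on a labelled validation set (1 = positive case)."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        raise ValueError("validation set contains no positive cases")
    return sum(p for _, p in positives) / len(positives)

def meets_declared_accuracy(y_true: list[int], y_pred: list[int]) -> bool:
    """Check measured sensitivity against the documented threshold."""
    return sensitivity(y_true, y_pred) >= DECLARED_SENSITIVITY

def route_prediction(label: str, confidence: float) -> str:
    """Human-oversight gate: uncertain outputs go to a clinician."""
    if confidence < REVIEW_CONFIDENCE:
        return f"flag for clinician review: {label} ({confidence:.2f})"
    return f"auto-report: {label} ({confidence:.2f})"

# Example usage on a toy validation set
print(meets_declared_accuracy([1, 1, 0, 1], [1, 1, 0, 0]))  # False: 2/3 < 0.95
print(route_prediction("malignant", 0.62))                   # routed to human review
```

The design point is that accuracy is measured and reported against a declared threshold, while oversight is built in as a routing decision rather than bolted on afterwards.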

Together, these standards mark a step forward: they allow healthcare providers and developers to build AI solutions that safeguard patient welfare and foster innovation across borders while remaining accountable, transparent, and of high quality. In this way, the EU AI Act lays a solid foundation of trust for future-proof AI applications in healthcare and beyond.


Road to Implementation

Obligations for high-risk AI systems take full effect in August 2026, and significant groundwork is underway to develop the supporting standards. The European Commission has tasked CEN-CENELEC with developing harmonised standards that draw on existing global frameworks and respond to the fast pace of change in AI. For healthcare, these standards will cover the entire lifecycle of AI systems, from design through deployment and continuous monitoring, ensuring that AI applications are held to the highest standards of safety and efficacy.


What This Means for Healthcare Innovators

The EU AI Act establishes a level playing field through a uniform regulatory environment. Streamlined, harmonised standards make it easier to integrate new AI solutions across EU borders, smooth compliance processes, and strengthen patient and stakeholder trust. The healthcare sector stands at the forefront of realising the benefits of the Act’s harmonised standards. With clear, actionable guidelines in place, healthcare providers and developers can leverage AI more effectively, safeguarding patient care and setting a precedent for ethical AI use across other critical domains.

Authors: Roshni Rajani, Shantanu Mukherjee