ARTICLE
18 December 2025

Bringing The EU AI Act To Life In A Multinational MedTech Organization

FABIAN PRIVACY LEGAL GmbH

Introduction

The EU Artificial Intelligence Act (AI Act) introduces a risk-based regulatory framework that will significantly impact medical technology (MedTech) organizations operating in Europe and beyond. As artificial intelligence (AI) systems increasingly underpin medical devices and interconnected digital health solutions, compliance with both the AI Act and existing frameworks such as the EU Regulation on medical devices (MDR) and the EU Regulation on in vitro diagnostic medical devices (IVDR) presents unique challenges.

This article explores the applicability of the AI Act in the MedTech context, outlines key definitions and obligations, and maps the timelines for compliance. It further examines how MedTech organizations can prepare for implementation through governance frameworks, classification of AI systems, gap assessments, and vendor management. Practical guidance is provided for aligning AI Act requirements with MDR obligations, ensuring readiness ahead of the upcoming 2025, 2026, and 2027 milestones1.

MedTech organizations and devices

MedTech organizations range from large global manufacturers to highly specialized digital health companies. They develop products that support prevention, diagnosis, monitoring, and treatment of disease, from diagnostic imaging software and AI-powered pathology tools to wearable devices for chronic disease management, robotic surgical systems, and connected health platforms that combine sensors, cloud analytics, and clinical decision support. Increasingly, these technologies are powered by AI to deliver greater accuracy, efficiency, and improved patient outcomes. As a result, MedTech organizations sit at the intersection of innovation and regulation, facing growing responsibilities to ensure that their AI-enabled solutions meet strict safety, data protection, and compliance standards.

Under the MDR and the IVDR, a medical device is broadly defined as any instrument, apparatus, appliance, software, implant, reagent, material, or other article intended by the manufacturer to be used for human beings for medical purposes, such as diagnosis, prevention, monitoring, treatment, or alleviation of disease. Importantly, software can itself be a medical device, particularly where it provides diagnostic or therapeutic functionality. For example, software that analyses radiological images to detect tumors would qualify as a medical device under the MDR.

Alongside stand-alone devices, the ecosystem increasingly includes interconnected devices and platforms, where multiple systems exchange data and interact with one another. This raises additional regulatory challenges, as compliance must account not only for the safety of individual components but also for the functioning of the integrated system as a whole.

Regulatory landscape of MedTech

The AI Act sets out obligations for AI systems depending on their risk category and the role of the responsible party (such as provider, deployer, importer, or distributor), with high-risk systems subject to the most stringent requirements. MedTech is a priority sector due to the potential impact of AI-enabled devices on patient safety and public health. Multinational MedTech companies face the challenge of applying the AI Act alongside sector-specific regulations such as the MDR and the IVDR, while also ensuring cross-border compliance. In addition, MedTech organizations must consider the EU Data Act (Data Act), binding since September 12, 2025, which regulates access, sharing, and interoperability of device-generated data. While the Data Act forms an important part of the broader compliance landscape, this article focuses on the implementation of the AI Act in the MedTech sector, outlining the context, regulatory requirements, and practical steps for compliance.

High-level comparative overview of relevant laws for MedTech and AI

MDR/IVDR
Scope in MedTech: Applies to medical devices and in vitro diagnostic devices placed on the EU market. Includes hardware, software, and digital health applications.
Key Obligations:
  • Conformity assessment with notified bodies.
  • CE marking.
  • Clinical evaluation and post-market surveillance.
  • Risk management and technical documentation.
Overlap with Other Frameworks: Overlaps with the AI Act for AI-enabled devices (Annex I systems). Requires integration of AI Act requirements into MDR conformity processes.

GDPR
Scope in MedTech: Applies to all personal data, including sensitive health data processed by MedTech products and services.
Key Obligations:
  • Lawful basis for processing.
  • Data minimization and purpose limitation.
  • Security and Privacy-by-Design.
  • Data subject rights.
Overlap with Other Frameworks: Intersects with the Data Act (data access and sharing must also comply with data protection). Relevant to the AI Act (transparency and human oversight often involve personal data).

Data Act
Scope in MedTech: Applies to data generated by connected devices and services (personal and non-personal). Covers access, use, sharing, and interoperability.
Key Obligations:
  • Obligation for manufacturers/service providers to enable user access to device-generated data.
  • Data-sharing obligations (business-to-business (B2B), business-to-government (B2G)).
  • Fair contract terms and interoperability standards.
Overlap with Other Frameworks: Overlaps with the GDPR (access rights cannot conflict with data protection). Intersects with the AI Act (availability of training and performance data for AI systems).

AI Act
Scope in MedTech: Applies to AI systems placed on the EU market or whose output is used in the EU, including AI-enabled medical devices and corporate AI tools (e.g., human resources (HR) systems).
Key Obligations:
  • Risk-based obligations (prohibited, high-risk, limited-risk, minimal-risk).
  • Providers: conformity assessments, technical documentation, CE marking, post-market monitoring.
  • Deployers: proper use, human oversight, and incident reporting.
Overlap with Other Frameworks: Overlaps with MDR/IVDR (Annex I high-risk medical devices). Intersects with the Data Act (data access for training/monitoring). Must comply alongside the GDPR where AI systems process personal data.

Taken together, these laws mean that MedTech organizations must simultaneously address requirements from sectoral product regulation, general product and safety regulation, data protection law, data governance law, and the AI Act. The interplay between these instruments is particularly relevant where AI-enabled devices both generate and process large volumes of data, requiring compliance with multiple overlapping legal regimes.

Cross-border compliance

The AI Act has extraterritorial scope: its requirements apply not only to providers and deployers established in the EU, but also to those outside the EU whenever their AI systems are placed on the EU market or their outputs are used within the EU. For multinational MedTech companies headquartered outside the EU, this creates direct compliance obligations, e.g., when diagnostic software developed abroad is made available to EU hospitals or its results are provided to EU healthcare providers.

Cross-border compliance significantly increases complexity. Organizations must align EU obligations with other legal frameworks, such as the Food and Drug Administration (FDA) regulations in the US, the UK Medical Devices Regulation, or the emerging AI and data governance laws in Asia. Internal compliance systems must be capable of addressing overlapping but distinct obligations, such as reconciling the AI Act's transparency and risk-management requirements with MDR conformity assessments, while also ensuring compliance with the Data Act's rules on access and sharing, and the GDPR's requirements on data protection and security.

For MedTech companies already operating under the MDR/IVDR, the challenge lies in integrating AI Act and Data Act requirements into existing quality management, product development, and data governance frameworks. A harmonized approach helps avoid fragmented compliance silos and creates efficiency across global operations.

AI Act requirements and roadmap for MedTech

Key concepts

The AI Act establishes obligations not only by risk category but also by the role an organization plays in the lifecycle of an AI system. The following four roles are central.

  • Provider: The entity that develops an AI system and places it on the market or puts it into service under its own name or trademark. In the MedTech context, this typically refers to manufacturers of AI-enabled medical devices or diagnostic software.
  • Deployer: The entity that uses an AI system under its authority, other than in a personal, non-professional capacity. Hospitals, clinics, or MedTech companies using AI recruitment software fall within this category.
  • Importer: An entity established in the EU that places on the market an AI system developed outside the EU.
  • Distributor: An entity in the supply chain, other than the provider or importer, that makes an AI system available on the EU market.

These roles are conceptually distinct from those under the GDPR (controller and processor). For example, a company may act as a controller of patient data under the GDPR while at the same time being a provider of an AI-based medical device or a deployer of an HR tool under the AI Act. Understanding these distinctions is crucial, as obligations differ significantly by role.

The definition of an 'AI system' is the cornerstone of the AI Act's applicability, defined as a 'machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.'

This broad definition extends well beyond traditional notions of 'AI' and may capture many technologies already used in MedTech, provided they use inferential processes rather than purely fixed rules. Examples include diagnostic imaging software or clinical decision-support tools that use algorithms to predict or recommend outcomes, as well as recruitment systems that rely on algorithmic inference. Whether applied in core medical devices or in corporate operations, such systems may trigger different obligations depending on their risk category and the company's role.

When is a medical device an AI system?

Not every medical device falls within the scope of the AI Act. A device is considered an AI system under the AI Act only if it incorporates a machine-based component that infers from input data to generate outputs — such as predictions, recommendations, or decisions. Devices that operate solely through fixed, rule-based programming without such inferential capability do not qualify as AI systems.

For example, a connected insulin pump that delivers insulin based on pre-programmed rules is a medical device under the MDR, but not an AI system. By contrast, if the pump uses a machine learning algorithm that analyses glucose data and predicts insulin needs to adjust dosing in real time, it becomes an AI-based medical device, triggering compliance with both the MDR and the AI Act.

In practice, a machine-based system may qualify as an AI system under the AI Act if it infers from input data to generate outputs such as predictions, recommendations, or decisions that can influence people or processes. In the MedTech sector, this includes diagnostic imaging software that uses algorithms to detect patterns, clinical decision-support tools that recommend treatments, or wearable monitoring algorithms that predict health risks. Similar obligations may also apply to AI-enabled systems in corporate functions, such as HR or supply chain management. Purely rule-based or deterministic software, however, would not fall within the AI Act's definition.
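To illustrate how this screening logic might be operationalized during an internal inventory exercise, the following sketch encodes the definitional elements discussed above as a first-pass triage check. It is a minimal illustration only: the attribute names and the decision rule are simplifying assumptions, and the output is a prompt for legal review, not a legal determination.

```python
# Illustrative first-pass screen for the AI Act's "AI system" definition.
# Attribute names are assumptions made for this example; the result is a
# starting point for legal review, not a legal determination.
from dataclasses import dataclass

@dataclass
class SystemProfile:
    name: str
    machine_based: bool        # a software/hardware-based system
    infers_from_input: bool    # derives outputs (predictions, recommendations,
                               # decisions) from input data, rather than only
                               # executing fixed, human-defined rules
    purely_rule_based: bool    # deterministic, pre-programmed logic only

def may_be_ai_system(p: SystemProfile) -> bool:
    """Flags systems that warrant classification under the AI Act."""
    return p.machine_based and p.infers_from_input and not p.purely_rule_based

# The insulin pump example from this article: rule-based vs. predictive dosing.
basic_pump = SystemProfile("insulin pump (pre-programmed rules)", True, False, True)
ml_pump = SystemProfile("insulin pump (ML glucose prediction)", True, True, False)

print(may_be_ai_system(basic_pump))  # False -> MDR only, on these assumptions
print(may_be_ai_system(ml_pump))     # True  -> MDR plus AI Act review
```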

Timeline of obligations under AI Act

The AI Act is being phased in over several years. While the first obligations are already in effect, the next milestones in 2026 and 2027 are critical, particularly for MedTech organizations.

By February 2, 2025, all actors — providers, deployers, importers, and distributors — were required to comply with the first set of obligations. These included the removal of prohibited AI practices and the introduction of AI literacy measures to ensure that staff and relevant third parties are equipped to understand and manage AI-related risks. Although detailed documentation duties will only apply later, organizations were expected to begin creating an inventory of their AI systems as a foundation for classification and compliance. Companies that did not act in time should now urgently address these gaps.

By August 2, 2026, the AI Act will apply in full to Annex III high-risk AI systems, such as AI tools used in HR, recruitment, and workforce management. Providers will need to comply with the requirements set out in Articles 8–15 and their role-based obligations under Articles 16–22, including risk management, data governance, technical documentation, conformity assessment, CE marking, and registration of systems in the EU database. Deployers will have to ensure proper oversight, use AI systems in accordance with instructions, monitor performance, and conduct vendor due diligence. All actors will also be required to meet transparency obligations, for example in cases where AI interacts directly with individuals. Organizations should already be preparing for this milestone to avoid last-minute compliance challenges.

August 2, 2027, is the critical deadline for MedTech and other regulated sectors listed in Annex I: from this date, high-risk AI systems such as AI-enabled medical devices will be subject to full compliance. Providers must integrate AI Act obligations directly into their MDR/IVDR conformity processes, ensuring that technical documentation, risk management, and post-market monitoring are aligned under both frameworks. Deployers, in turn, must demonstrate correct use of systems, establish effective human oversight, and implement incident reporting mechanisms that complement their existing obligations under the MDR/IVDR and related product safety laws.
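For planning purposes, the phased deadlines above can be tracked in a simple internal compliance calendar. The sketch below uses only the milestone dates stated in this article; the data structure and helper function are illustrative assumptions.

```python
# Minimal compliance-calendar sketch for the AI Act's phased deadlines as
# summarized in this article. The dates come from the Act's phase-in schedule;
# the structure itself is an illustrative assumption.
from datetime import date

AI_ACT_MILESTONES = {
    date(2025, 2, 2): "Prohibited AI practices removed; AI literacy measures "
                      "in place; AI system inventory under way (all actors)",
    date(2026, 8, 2): "Full compliance for Annex III high-risk systems "
                      "(e.g., HR and recruitment tools)",
    date(2027, 8, 2): "Full compliance for Annex I high-risk systems "
                      "(e.g., AI-enabled medical devices under MDR/IVDR)",
}

def upcoming(today: date) -> list[tuple[date, str]]:
    """Return the milestones that still lie ahead of the given date."""
    return sorted((d, m) for d, m in AI_ACT_MILESTONES.items() if d >= today)

for deadline, obligation in upcoming(date(2025, 12, 18)):
    print(deadline.isoformat(), "-", obligation)
```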

Specifics for MedTech organizations

The AI Act has particular relevance for MedTech, where companies often act simultaneously in multiple roles (e.g., as providers of medical devices and deployers of corporate AI tools). Key considerations include the following.

  • Annex III systems: Many MedTech organizations use high-risk AI in corporate functions such as HR and recruitment. For these systems, obligations differ depending on the role; providers must comply with design, documentation, and conformity requirements, while deployers must focus on oversight, use, and contractual due diligence when acquiring systems from third parties.
  • Annex I systems: AI-enabled medical devices can qualify as high-risk systems either as safety components (for example, AI monitoring algorithms designed to prevent device malfunction) or as stand-alone products (diagnostic software that analyses medical images). Here, providers face obligations to prepare technical documentation, manage risks throughout the lifecycle, and undergo conformity assessments, while deployers are responsible for ensuring correct use, oversight, and reporting.
  • Conformity assessment processes: For MedTech providers, conformity assessments will primarily follow MDR/IVDR procedures, but these procedures must also demonstrate compliance with the AI Act. Under Article 43(3) and Recital 124, AI Act requirements are integrated into MDR/IVDR conformity assessments to avoid duplication. While there is overlap in areas such as risk management and post-market surveillance, the AI Act introduces additional obligations, including transparency, human oversight, and lifecycle data governance. Annex I and Annex III systems differ in their assessment routes: Annex I systems, such as AI-enabled medical devices, undergo MDR/IVDR conformity assessments with AI Act requirements embedded, while Annex III systems, such as HR or recruitment tools, follow the AI Act's own conformity procedures. This requires careful coordination across legal, regulatory, and technical functions to ensure consistency and compliance.
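The two assessment routes can be expressed as a simple routing rule. The following sketch is a hypothetical decision helper reflecting the Article 43(3) logic described above; the labels and return strings are illustrative assumptions.

```python
# Illustrative routing sketch for conformity-assessment paths: Annex I systems
# go through MDR/IVDR conformity assessment with AI Act requirements embedded,
# while Annex III systems follow the AI Act's own procedures. Labels are
# assumptions for this example.
def conformity_route(annex: str) -> str:
    routes = {
        "Annex I": "MDR/IVDR conformity assessment with AI Act requirements "
                   "embedded (single, integrated procedure)",
        "Annex III": "AI Act's own conformity procedure "
                     "(standalone from MDR/IVDR)",
    }
    try:
        return routes[annex]
    except KeyError:
        raise ValueError(f"Unknown annex category: {annex!r}")

print(conformity_route("Annex I"))    # e.g., AI-enabled medical device
print(conformity_route("Annex III"))  # e.g., HR/recruitment tool
```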

Preparing for implementation

For multinational MedTech organizations, preparing for compliance with the AI Act requires a structured and proactive approach. The following steps form a practical roadmap.

Step one: Establish an AI strategy and governance framework

Effective compliance starts with strong governance. Although the AI Act refers to 'AI governance' only once, its requirements are embedded throughout the rules for high-risk systems and the obligations of providers and other actors. Strong AI governance ensures that high-risk AI systems are properly identified and managed, accountability is clear across functions, risks such as bias or malfunction are mitigated, and trust is built with regulators, users, and the public.

'AI governance' means establishing structures, processes, and responsibilities to ensure AI systems are developed, deployed, and monitored in a compliant, transparent, and accountable way. This includes:

  • assigning clear roles for AI compliance;
  • maintaining risk and quality management systems;
  • implementing data governance and bias controls;
  • ensuring documentation, record-keeping, and transparency;
  • providing staff training and AI literacy;
  • enabling human oversight;
  • completing conformity assessments and CE marking;
  • registering high-risk AI systems in the EU database; and
  • setting up post-market monitoring and incident response mechanisms.

Governance also extends to contracts and supply chain management.

Governance boards or steering committees should include representatives from legal, compliance, research and development (R&D), IT, HR, and clinical teams. Executive or board-level oversight is critical for accountability and resource allocation. Importantly, governance should not stand alone but should be integrated into existing MDR/IVDR, GDPR, and cybersecurity frameworks for consistency.

Step two: Map and classify AI systems

A comprehensive inventory of all AI systems used or developed by the organization is the foundation for compliance. Each system must be classified according to the AI Act's categories:

  • prohibited systems – banned under the AI Act and must be removed;
  • high-risk systems – covered by Annex I (AI-enabled medical devices and safety components) or Annex III (e.g., HR, recruitment, workforce management); and
  • low-risk/minimal-risk systems – subject to limited obligations.

Classification determines whether a system must be eliminated, undergo full compliance measures, or only meet lighter requirements. Misclassification risks under-compliance (and penalties) or unnecessary costs.

Therefore, key actions include mapping all AI use cases, applying a classification matrix (prohibited, Annex I, Annex III, low risk), and for Annex I systems, comparing requirements with MDR/IVDR to identify overlaps and gaps.

For example, a MedTech company developing AI diagnostic imaging software acts as a provider of an Annex I high-risk system, triggering both MDR and AI Act obligations. The same company using an AI recruitment tool is a deployer of an Annex III high-risk system, with obligations focused on oversight and vendor due diligence.
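One way to operationalize such a classification matrix is a structured inventory in which each use case carries its risk category and, anticipating step three below, the company's role. The sketch below is a minimal illustration; the field names and enum values are assumptions, and the classification itself remains a legal judgment.

```python
# Minimal AI-system inventory sketch combining step two (classification) and
# step three (role determination). Field names and categories are assumptions
# for illustration; classification decisions remain a legal judgment.
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited (must be removed)"
    HIGH_RISK_ANNEX_I = "high-risk, Annex I (AI-enabled medical device)"
    HIGH_RISK_ANNEX_III = "high-risk, Annex III (e.g., HR/recruitment)"
    LOW_RISK = "low/minimal risk (limited obligations)"

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AISystemRecord:
    name: str
    category: RiskCategory
    role: Role

# The article's example company holds both roles at once.
inventory = [
    AISystemRecord("diagnostic imaging software",
                   RiskCategory.HIGH_RISK_ANNEX_I, Role.PROVIDER),
    AISystemRecord("recruitment screening tool",
                   RiskCategory.HIGH_RISK_ANNEX_III, Role.DEPLOYER),
]

for rec in inventory:
    print(f"{rec.name}: {rec.category.value}; company role: {rec.role.value}")
```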

Step three: Determine company role per system

Once systems are classified, companies must determine their role for each system: provider, deployer, importer, or distributor. These roles define the specific obligations under the AI Act.

A MedTech company may be a provider for an AI-enabled medical device it develops, while also a deployer for an AI recruitment tool it uses internally. Many organizations operate in multiple roles at once, making early role assignment essential to avoid gaps or overlaps in accountability.

Step four: Identify applicable laws and frameworks

Compliance cannot be seen in isolation. Organizations must consider the interplay between the AI Act and other legal frameworks, as well as broader initiatives such as International Organization for Standardization (ISO)/IEC 42001 (AI management systems), the US National Institute of Standards and Technology (NIST) AI Risk Management Framework, and the Organisation for Economic Co-Operation and Development (OECD) AI Principles. For global organizations, these frameworks can serve as reference points to harmonize practices across jurisdictions.

Take the example of an AI-enabled insulin pump. It must meet the MDR safety requirements. If the pump includes an algorithm that predicts insulin needs, the AI Act also applies, requiring dataset quality, transparency, and human oversight. Operational data from the pump may fall under the Data Act, while processing patient health data must comply with the GDPR.

Step five: Perform a gap analysis

A gap analysis identifies where MDR/IVDR compliance already meets AI Act requirements and where new measures are needed.

Although the MDR and the AI Act overlap in areas such as risk management and post-market surveillance, the AI Act introduces additional requirements: bias monitoring, explainability, human oversight, and enhanced transparency. Without a gap analysis, organizations risk either leaving critical gaps unaddressed or duplicating compliance efforts.

Comparing MDR/IVDR obligations with AI Act requirements highlights overlaps and gaps. The outcome should be an integrated plan to expand existing quality management systems to include AI-specific measures.

For example, a provider of AI-based diagnostic software may already comply with MDR risk management and post-market surveillance but must add AI Act requirements such as dataset quality checks, explainability for clinicians, and bias monitoring.
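At its core, the gap analysis is a set comparison: the AI Act requirements applicable to a system, minus those already satisfied by existing MDR/IVDR processes. The sketch below illustrates this with the requirements mentioned in this article; the labels and the assumed MDR coverage are illustrative, not an exhaustive checklist.

```python
# Gap-analysis sketch: AI Act requirements not yet covered by existing
# MDR/IVDR quality-management processes. Requirement labels and the coverage
# mapping are illustrative assumptions, not an exhaustive legal checklist.
AI_ACT_REQUIREMENTS = {
    "risk management",
    "post-market monitoring",
    "technical documentation",
    "dataset quality and data governance",
    "bias monitoring",
    "explainability / transparency for users",
    "human oversight",
}

# Requirements a mature MDR/IVDR QMS is assumed (for this example) to cover.
COVERED_BY_MDR_QMS = {
    "risk management",
    "post-market monitoring",
    "technical documentation",
}

# Each remaining item feeds the integrated implementation plan (step six).
for item in sorted(AI_ACT_REQUIREMENTS - COVERED_BY_MDR_QMS):
    print("gap:", item)
```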

Step six: Develop implementation plans

Gap analysis results must be translated into concrete implementation plans, tailored by role and system risk level. Structured planning ensures timely compliance with the AI Act's phased deadlines of August 2026 for Annex III systems and August 2027 for Annex I systems.

Key actions include updating documentation, establishing transparency measures, implementing oversight protocols, strengthening monitoring, and rolling out AI literacy training.

For example, a MedTech company using an AI recruitment tool must implement oversight and vendor controls by 2026, while its AI diagnostic device must integrate AI Act requirements into MDR conformity processes by 2027.

Step seven: Strengthen vendor and contract management

Many AI systems, particularly for HR and workforce management, are acquired from third parties.

Even when procuring third-party AI, deployers remain responsible under the AI Act. Vendor and contract management is therefore critical to ensure compliance.

It is therefore essential to conduct vendor due diligence, require conformity documentation, and update contracts to allocate responsibilities on data quality, transparency, and risk management. Monitoring must continue throughout the system's lifecycle.

For instance, a MedTech company deploying an AI workforce management tool should require the vendor to demonstrate compliance with the AI Act, share technical documentation, and agree to contractual terms on transparency and incident reporting.
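Deployers may find it useful to track vendor due diligence as a standing checklist. The following sketch mirrors the actions described above; the checklist items and structure are illustrative assumptions rather than a complete contractual framework.

```python
# Vendor due-diligence checklist sketch for deployers procuring third-party
# AI systems. Items mirror the actions described in this step; the structure
# and wording are illustrative assumptions.
VENDOR_CHECKLIST = [
    "AI Act conformity documentation provided",
    "Technical documentation shared or accessible on request",
    "Contractual terms on data quality and transparency agreed",
    "Incident-reporting responsibilities allocated in the contract",
    "Lifecycle monitoring arrangements for the system defined",
]

def open_items(status: dict[str, bool]) -> list[str]:
    """Return checklist items not yet confirmed for a given vendor."""
    return [item for item in VENDOR_CHECKLIST if not status.get(item, False)]

# Example: a vendor that has so far only shared technical documentation.
status = {VENDOR_CHECKLIST[1]: True}
for item in open_items(status):
    print("open:", item)
```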

Together, these steps form a coherent roadmap for MedTech companies preparing for the AI Act. By embedding AI Act obligations into existing MDR/IVDR, data protection, and data governance structures, organizations can avoid fragmented compliance and build trust in their use of AI across both medical and corporate functions.

Conclusion

The AI Act is a turning point for MedTech. With the first obligations already in effect and the major deadlines approaching, early preparation is critical to ensure compliance and to position AI solutions as safe, effective, and trustworthy.

For MedTech companies, compliance means working through a dual lens: the established MDR/IVDR requirements and the new AI Act obligations. In parallel, organizations must also be mindful of the Data Act, which introduces further rules on access, sharing, and interoperability of device-generated data. Together, these frameworks create a multi-layered compliance environment where product safety, AI governance, and data governance must be integrated into one coherent system.

Beyond meeting legal requirements, strong AI governance and alignment with international standards offer an opportunity to build patient trust, support innovation, and demonstrate leadership in trustworthy AI for healthcare.

Now is the time to act. By embedding AI Act requirements into governance, operations, and product development, MedTech organizations can not only achieve compliance but also turn regulatory readiness into a competitive advantage.

Footnote

1. Please note that this article was written before the publication of the EU Commission's Digital Omnibus proposal, which may adjust the timeline and some implementing arrangements for the AI Act and related digital legislation. However, the analysis in this article focuses on the substance and interplay of the frameworks and remains valid irrespective of the exact date on which specific obligations become applicable.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
