ARTICLE
2 January 2026

Navigating EU Compliance For AI-Enabled Wearable Medical Devices: MDR, AI Act, GDPR And Data Act Interplay

FABIAN PRIVACY LEGAL GmbH


We are a boutique law firm specializing in data, privacy and data protection laws and related issues, information security, data and privacy governance, risk management, program implementation and legal compliance. Our strengths are the combination of expert knowledge and practical in-house experience as well as a strong network with industry groups, privacy associations and experts around the world.

Abstract

AI-enabled wearable medical devices are reshaping healthcare by enabling continuous physiological monitoring, real-time clinical insights and increasingly personalized interventions. Their transformative potential, however, is matched by a complex and multilayered regulatory landscape. A single wearable can simultaneously trigger compliance obligations under the Medical Device Regulation (MDR), the EU Artificial Intelligence Act (AI Act), the General Data Protection Regulation (GDPR) and the EU Data Act and may also fall within the scope of the NIS 2 Directive, depending on the roles and services involved. Each framework governs a different dimension of the device's lifecycle, from clinical safety and AI governance to data protection, data access and cybersecurity, yet all must be satisfied coherently.

This article examines the regulatory interplay using a concrete case study: a wearable ECG sensor powered by an AI-based arrhythmia-detection algorithm and connected cloud ecosystem. It outlines the key obligations arising under each legal regime, analyzes areas of synergy and regulatory tension, and explains the extraterritorial reach that brings non-EU manufacturers into scope when their devices are placed or made available on the EU market, or used by patients or healthcare professionals in the Union.

Building on insights from "Bringing the AI Act to Life in a Multinational MedTech Organization", the article proposes a pragmatic integrated compliance pathway for MedTech companies. This includes harmonizing MDR and AI Act conformity processes, embedding GDPR and Data Act requirements into a unified quality and compliance management system, consolidating risk assessments and technical documentation, and establishing cross-functional governance to overcome organizational silos. The analysis demonstrates that integrated compliance is not merely operationally efficient; it is essential to ensuring that AI-enabled medical devices are safe, transparent, trustworthy and ready for the EU's evolving digital regulatory environment.

I. Introduction

AI-enabled wearable medical devices are at the center of a profound transformation in healthcare. They enable continuous physiological monitoring, real-time clinical insights and personalized interventions, all powered by interconnected sensors, cloud platforms and machine learning models. This convergence of medical technology, artificial intelligence and data-driven healthcare promises better treatment outcomes for patients but also leads to unprecedented regulatory complexity.

A single AI-enabled wearable medical device can simultaneously trigger multiple EU legal frameworks, including in particular:

  • Medical Device Regulation (MDR) – governing safety, performance, conformity assessment, CE marking and post-market surveillance.
  • EU Artificial Intelligence Act (AI Act) – classifying AI systems used for medical purposes as high-risk and imposing lifecycle-based governance, transparency and risk management obligations.1
  • General Data Protection Regulation (GDPR) – regulating processing of sensitive health data generated and analyzed by the device.
  • EU Data Act – mandating data access, interoperability and fair data-sharing rules for connected products and related services.

In addition, the NIS 2 Directive may apply where the manufacturer or a supporting service provider, such as a cloud provider, qualifies as an essential or important entity, introducing risk-management and incident reporting obligations.

For developers and compliance teams, the challenge is therefore no longer to understand each regulatory regime in isolation, but to integrate their requirements into a coherent, scalable and operationally efficient compliance architecture. This article builds on the earlier publication by FABIAN PRIVACY LEGAL GmbH "Bringing the AI Act to Life in a Multinational MedTech Organization", and extends its insights to the rapidly evolving domain of AI-enabled wearable medical devices.

The objective of this article is to:

  • explain how the relevant EU regulatory regimes apply to a realistic wearable medical device case study;
  • outline the core obligations under each framework;
  • analyze overlaps, synergies and points of regulatory tension; and
  • propose a pragmatic pathway toward integrated, cross-regulatory compliance.

II. Case Study Description: AI-Enabled Wearable for Real-Time Cardiac Monitoring

A MedTech company is developing a wearable ECG sensor that continuously records a patient's heart rhythm and transmits the data to a companion mobile application and secure cloud backend. At the heart of the system is an AI-based system that analyzes ECG patterns in real time to detect possible arrhythmias. When the model identifies an irregularity, the system automatically notifies both the patient and the treating physician, allowing for rapid clinical assessment without the need for an in-person appointment.

Although the device may appear simple from the patient perspective, it operates within a sophisticated ecosystem of interconnected components and actors. The manufacturer is responsible not only for the physical device but also for the AI system, the mobile interface and the backend infrastructure. Each of these elements must be designed with appropriate controls for data governance, cybersecurity and risk management. Patients interact with the wearable continuously and receive clinically relevant notifications. Physicians, meanwhile, act as deployers of the AI system under the AI Act and as professional users of the medical device. They must be able to trust the accuracy, robustness and transparency of the AI-generated alerts that support their clinical decision-making.

Cloud infrastructure plays a central role in the system's operation. It enables real-time AI inference, manages the storage of sensitive health data and maintains logs required for regulatory traceability, auditing and post-market surveillance. Where part of the AI system is provided by an external vendor, the manufacturer must ensure that contractual and technical arrangements are in place to secure cooperation, documentation quality and ongoing transparency throughout the system's lifecycle. For companies based outside the EU, authorized representatives, importers and distributors add further layers of responsibility to ensure that the device meets all requirements for placement on the EU market and remains compliant once it is in use.

The data flows involved in this system highlight the level of connectivity typical of modern digital health solutions. The wearable collects raw ECG signals, which the mobile application preprocesses and encrypts before transmitting them securely to the cloud. The AI model performs the arrhythmia detection, while detailed system logs document its functioning and support regulatory oversight. Physicians access the risk alerts and any relevant context, and patients may review historical data or export it, depending on their applicable legal rights.
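The pipeline described above can be sketched in simplified form. The following Python snippet is purely illustrative: the function names, the salted-hash pseudonymization step, the toy preprocessing and the threshold-based "detection" are assumptions standing in for the manufacturer's actual (and far more sophisticated) components, and real deployments would additionally encrypt data in transit.

```python
import hashlib
from dataclasses import dataclass, field

def pseudonymize(patient_id: str, salt: str) -> str:
    """Replace the direct identifier with a salted hash before upload."""
    return hashlib.sha256((salt + patient_id).encode()).hexdigest()[:16]

def preprocess(raw_ecg: list[float]) -> list[float]:
    """Toy preprocessing on-device: normalize the signal to zero mean."""
    mean = sum(raw_ecg) / len(raw_ecg)
    return [s - mean for s in raw_ecg]

def detect_arrhythmia(signal: list[float], threshold: float = 1.5) -> bool:
    """Placeholder for the AI model: flags large signal excursions."""
    return any(abs(s) > threshold for s in signal)

@dataclass
class AuditLog:
    """System log supporting traceability and post-market surveillance."""
    entries: list = field(default_factory=list)

    def record(self, pseudonym: str, alert: bool) -> None:
        self.entries.append({"subject": pseudonym, "alert": alert})

# One pass through the pipeline: collect, preprocess, infer, log.
log = AuditLog()
pid = pseudonymize("patient-001", salt="device-salt")
signal = preprocess([0.1, 0.2, 2.4, 0.1])
alert = detect_arrhythmia(signal)
log.record(pid, alert)
print(alert, len(log.entries))
```

The point of the sketch is the separation of concerns the article describes: identifying data are transformed before leaving the device, the inference step operates on the prepared signal, and every decision leaves a log entry for regulatory oversight.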

Cross-border data flows are often unavoidable, particularly where cloud hosting or AI development activities take place outside the EU. In such cases, international transfers of health data fall under the GDPR's strict requirements. The AI Act and the Data Act also have extraterritorial reach, meaning they apply to non-EU companies whenever AI systems are placed or made available on the EU market, or where their outputs or connected products are used by individuals in the EU. This requires robust contractual safeguards, technical security measures and a clear allocation of responsibilities to ensure regulatory compliance across all jurisdictions involved.

Within this ecosystem, the regulatory challenges are substantial. The manufacturer must align MDR requirements on device safety, performance, clinical evaluation, risk management and post-market surveillance with the AI Act's expectations for transparency, data governance, robustness, logging and human oversight. At the same time, the GDPR imposes strict obligations for the processing of health data, including data minimization, security measures, lawful processing and the full range of data-subject rights. The Data Act introduces additional duties regarding user access to device-generated data, interoperability and fair contractual conditions. Cybersecurity requirements, potentially including obligations under NIS 2, add further layers of complexity, particularly with respect to incident reporting across medical-device vigilance, AI serious-incident reporting and personal data breaches. Ensuring that documentation, risk assessment and monitoring processes remain aligned across these frameworks requires an integrated and proactive regulatory approach.

Why all these EU laws apply even if the manufacturer is located outside the EU

EU regulations governing medical devices, AI systems, personal data and connected products are intentionally designed to protect users in the Union, regardless of where the technology is developed or operated. For this cardiac-monitoring wearable, all major EU laws apply because the device, its associated digital services and the AI-generated alerts are intended for use by patients and healthcare professionals in the EU.

The MDR applies whenever a medical device is placed on the EU market, irrespective of the manufacturer's place of establishment. The AI Act likewise has broad territorial reach: it applies not only to AI systems marketed in the EU but also to systems whose outputs, such as arrhythmia alerts, are used by individuals in the Union. The GDPR applies whenever personal data of individuals in the EU are processed, including continuous ECG monitoring, even if the controller or processor is established outside the EU. The Data Act applies to connected products and related digital services made available in the EU and requires that users have fair access to the data generated through their use. Finally, NIS 2 may apply if the manufacturer or a supporting service provider, such as a cloud provider, qualifies as an essential or important entity serving the EU market.

Taken together, these extraterritorial mechanisms ensure that EU patients benefit from consistent protection with regard to safety, transparency, cybersecurity and data privacy, regardless of where the technology is designed, developed or hosted.

III. Applicable Legal Frameworks and Key Obligations

AI-enabled wearable devices sit at the intersection of several major EU regulatory regimes. Each framework governs a different dimension of the product lifecycle, including clinical safety, AI governance, data protection, data access, and cybersecurity, yet all must be satisfied simultaneously. The following overview explains how these laws apply to the cardiac-monitoring wearable introduced in the case study.

1. Medical Device Regulation (MDR)

The wearable ECG sensor and its companion app qualify as medical devices under the MDR because they continuously monitor physiological parameters and support diagnostic decision-making. Under Rule 11 of Annex VIII MDR, software that analyzes physiological data to detect arrhythmias typically falls into Class IIa or IIb, requiring the involvement of a notified body and a structured conformity-assessment procedure.

To demonstrate compliance, the manufacturer must prepare and maintain comprehensive technical documentation in accordance with Annex II and Annex III MDR, covering the device's design and intended purpose, risk-management processes, cybersecurity controls, clinical evaluation and post-market surveillance system. These activities must be supported by a quality management system meeting the requirements of Article 10(9) MDR. Before the device can be placed on the EU market, it must undergo the appropriate conformity assessment and bear the CE marking pursuant to Article 20 MDR. Once the device is in use, the manufacturer must operate a robust post-market surveillance and vigilance system, including periodic safety update reports and mandatory reporting of serious incidents.

A key development for AI-enabled medical devices is Article 43(3) of the EU AI Act, which provides that high-risk AI systems forming part of MDR-regulated medical devices are assessed within the MDR conformity-assessment framework, rather than through a separate standalone AI Act procedure. The intention is to align both sets of requirements and avoid duplication, allowing AI-specific obligations, such as data governance, transparency, robustness, logging and human-oversight measures, to be evaluated as part of the existing MDR process.

However, this legal integration does not mean that all MDR notified bodies are already fully equipped to assess compliance with AI Act requirements. In practice, notified bodies will require additional designation, accreditation and expertise in areas such as machine learning, data quality, AI risk management and algorithmic transparency. The development of harmonized standards, common specifications and guidance for the AI Act is still underway, and the capacity of notified bodies to perform fully integrated assessments may vary significantly during the implementation phase.

As a result, while the MDR provides the procedural framework for conformity assessment, the effective operational integration of AI Act requirements remains an evolving challenge that manufacturers, notified bodies and regulators will need to address collaboratively as the AI Act becomes fully applicable.

2. EU AI Act

The arrhythmia-detection AI system qualifies as a high-risk AI system under Article 6(1) of the AI Act because it forms part of a medical device regulated under the MDR, which is one of the Union harmonization acts listed in Annex I. As its provider, the manufacturer must comply with extensive lifecycle obligations aimed at ensuring trustworthy, human-centric and safe AI. These include a documented risk-management system, rigorous data governance and data-quality controls, detailed technical documentation and logging mechanisms, and clear transparency and human-oversight measures. High-risk AI systems must also meet requirements relating to accuracy, robustness and cybersecurity and operate under a supporting quality management system aligned with both the AI Act and the MDR.

For AI systems integrated into medical devices, conformity assessment under the AI Act is performed as part of the MDR conformity procedure in accordance with Article 43(3) AI Act. Compliance is documented through the MDR technical file and EU declaration of conformity, and the CE marking is affixed via the MDR process. Due to the adaptive and frequently updated nature of AI systems, compliance obligations under both the MDR and the AI Act are particularly concentrated in the post-market phase, requiring continuous monitoring, change management, and reassessment of risks following model updates.

Once deployed, the manufacturer must operate a post-market monitoring system and report any serious incidents within the timelines set out in Article 73 AI Act.

Deployers, such as physicians or healthcare institutions, also have obligations. They must use the AI system in accordance with the provider's instructions, apply meaningful human oversight, monitor system performance and report suspected serious incidents. In clinical contexts, AI-generated outputs serve as decision-support tools rather than as a replacement for professional medical judgment, ensuring that final clinical decisions remain subject to human assessment and responsibility.

Overall, the AI Act significantly raises expectations for governance, transparency and accountability in AI-enabled medical devices, requiring manufacturers and deployers to embed risk management, oversight and documentation throughout the product lifecycle.2

3. GDPR

Because the wearable continuously collects and analyzes sensitive health data, the GDPR applies. In the context of this device, the manufacturer typically acts as the data controller, as it determines the purposes and essential means of the processing, including which ECG data are collected, how the AI-supported analysis is performed, retention periods and security measures. External partners such as cloud providers or AI vendors operate only on the manufacturer's documented instructions and therefore act as processors.

Physicians also process patient data when using the alerts and information generated by the device in their clinical practice. For these processing activities, they act as independent controllers, as they decide how to interpret and use the data for diagnosis, treatment and follow-up care. They do not, however, determine how the wearable collects data or how the AI system performs its analysis; those aspects remain under the manufacturer's control. This results in a typical digital-health structure involving separate controllership rather than joint controllership.

As controller for the processing inherent in the device ecosystem, the manufacturer must ensure that all processing of health data is based on a valid legal ground and complies with the GDPR's core principles, including necessity, proportionality, purpose limitation and data minimization. Compliance with MDR or AI Act requirements does not replace or override these obligations. The system must be designed with privacy by design and by default in mind, ensuring that only data required for arrhythmia detection and safe device functioning are processed.

Patients and healthcare professionals must receive clear and accessible information about how their data are processed, who is responsible for which processing operations and how long the data are retained. Because the device involves continuous monitoring of sensitive health data, the manufacturer must conduct a Data Protection Impact Assessment to identify and mitigate risks to individuals' rights and freedoms. Where external service providers process data on the manufacturer's behalf, these relationships must be governed by robust data-processing agreements.

Patients retain all data-protection rights, including access, rectification and, where technically feasible, data portability. These rights must be supported through intuitive mechanisms within the mobile application and backend systems.

International data transfer and cross-border data access

Given the cloud-based architecture of the wearable and the potential involvement of non-EU service providers, international data transfers are a structural feature of the system. Where personal data are transferred outside the EU, the manufacturer must comply with the GDPR's international transfer regime, including reliance on adequacy decisions where available, appropriate safeguards such as standard contractual clauses, and, where required, supplementary technical and organizational measures ensuring essentially equivalent protection.

The Data Act does not create an independent transfer mechanism and does not override the GDPR. Where users request that device-generated data be shared with third-party recipients located outside the EU, such sharing remains subject to applicable data-protection laws and to safeguards protecting trade secrets, confidentiality and security. Public-sector access to data under the Data Act is similarly constrained by strict requirements of necessity, proportionality and confidentiality.

In practice, international data access and transfers must therefore be assessed jointly under the GDPR, the Data Act and applicable cybersecurity requirements and embedded into contractual arrangements, system architecture and access-control mechanisms from the outset.

4. EU Data Act

Because the wearable cardiac-monitoring device is a connected product, the Data Act applies to the manufacturer as the data holder and to patients as the users. Users have the right to access the data generated through their use of the device and its associated digital service. This includes raw sensor data, such as ECG readings, and certain derived data resulting directly from the device's normal functioning, but not data created through additional analysis or enrichment performed solely by the manufacturer.

The manufacturer must make these data available easily, securely and, where feasible, in real time, in a structured and machine-readable format. At the user's request, the manufacturer must also transmit the data directly to a third-party data recipient designated by the user, under equivalent technical conditions.
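A minimal sketch of such an export might look as follows. The field names and, in particular, the split between "device-generated" and "manufacturer-derived" data are illustrative assumptions for this example, not a statement of the Data Act's exact scope; where the boundary lies for a given product requires legal analysis.

```python
import json

# Hypothetical device record combining raw sensor data, a directly
# derived reading, and a manufacturer-only enrichment.
device_record = {
    "ecg_samples_mv": [0.12, 0.15, 0.11],   # raw sensor data
    "heart_rate_bpm": 72,                    # directly derived reading
    "proprietary_risk_score": 0.87,          # manufacturer-only analysis
}

# Assumed scope of user-accessible data under the Data Act.
SHAREABLE_FIELDS = {"ecg_samples_mv", "heart_rate_bpm"}

def export_for_user(record: dict) -> str:
    """Return only device-generated data, in a machine-readable format."""
    payload = {k: v for k, v in record.items() if k in SHAREABLE_FIELDS}
    return json.dumps(payload, sort_keys=True)

export = export_for_user(device_record)
print(export)
```

The same export function could serve both a user download and, at the user's request, direct transmission to a designated third-party recipient under equivalent technical conditions.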

Where data are shared with third parties in a business-to-business context, the Data Act requires fair, reasonable and non-discriminatory contractual terms. At the same time, the manufacturer must implement measures to prevent unauthorized access and protect trade secrets, which may only be shared to the extent strictly necessary and subject to confidentiality safeguards.

The Data Act operates alongside the GDPR. Because ECG data constitute personal data, any access or sharing under the Data Act must fully comply with data-protection requirements. The Data Act therefore complements, but does not replace, GDPR rights.

5. NIS 2 Directive (if applicable)

The NIS 2 Directive establishes the EU's updated framework for cybersecurity and incident reporting for sectors considered essential or important to societal and economic functioning. Its applicability within the wearable ecosystem depends on the nature of the activities performed by the relevant actors.

NIS 2 does not automatically apply to medical device manufacturers, as manufacturing of medical devices is not listed in Annex I or II. The manufacturer in this case is therefore not in scope by default, unless it is designated by a Member State as critical or performing additional activities captured by the Directive.

By contrast, the cloud service provider supporting the wearable's backend is likely to fall within the scope, as cloud computing services are listed as essential entities. Healthcare providers using the device may also be subject to NIS 2 if they meet the applicable criteria.

Entities subject to NIS 2 must implement appropriate and proportionate cybersecurity risk-management measures and comply with a phased incident-reporting regime. Even where the manufacturer is not directly subject to the Directive, close coordination with NIS 2-regulated partners remains essential, as cybersecurity incidents affecting the device ecosystem may have regulatory implications across multiple frameworks.

IV. Interplay and Comparison of Legal Frameworks

1. Overlaps and reinforcing synergies

Several obligations imposed by these laws align closely, enabling manufacturers to build unified processes that satisfy multiple frameworks simultaneously. The MDR and the AI Act share a strong emphasis on risk management, technical documentation, transparency, human oversight and post-market surveillance. In the context of the wearable, the MDR's quality management system provides a natural backbone for incorporating the AI Act's lifecycle controls, particularly for logging, traceability and continuous performance monitoring of the arrhythmia-detection AI system.

The GDPR, while focused on the protection of personal data, reinforces many of the AI Act's requirements. The GDPR's mandates on data quality, privacy by design and accountability complement the AI Act's expectations for robust data governance, reproducibility and clear information for deployers such as physicians. For this wearable, privacy-by-design measures introduced to meet GDPR obligations, such as data minimization and access control, can also enhance the AI system's reliability and reduce the risk of biased or inappropriate outputs.

The relationship between GDPR and the Data Act is similarly complementary. Each strengthens the individual's control over device-generated data, albeit with different objectives: the GDPR focuses on protecting individual rights and freedoms, while the Data Act ensures that users can access and share the data generated through their use of the device. For the cardiac wearable, these rights operate in parallel: patients may request access to their ECG data under either framework, but any onward sharing remains subject to GDPR safeguards.

Interoperability requirements under the Data Act can also support MDR objectives by facilitating the generation of real-world evidence and enabling more effective post-market surveillance. Meanwhile, all frameworks converge on the importance of cybersecurity and incident reporting. MDR vigilance, AI Act serious-incident reporting, GDPR breach notifications and NIS 2 reporting duties collectively underscore the need for a coordinated, well-governed incident-response structure spanning the entire device ecosystem.

2. Potential tensions

Despite these points of alignment, several areas of regulatory tension may arise. The AI Act's emphasis on transparency, including meaningful information to deployers, must be balanced against GDPR requirements for data minimization and the protection of trade secrets. Transparency measures, such as logs used to trace AI decisions, must therefore be designed to support clinical oversight without revealing unnecessary personal data or sensitive algorithmic details protected under both the AI Act and the GDPR.
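One way to resolve this tension in practice is a decision log that records what the AI system did without storing the health data or identity it acted upon. The following sketch is a design illustration under stated assumptions: the field names are hypothetical, and the input fingerprint approach (hashing the ECG window rather than storing it) is one possible minimization technique, not a regulatory requirement.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version: str, ecg_window: list[float], alert: bool) -> dict:
    """Build a traceability log entry that omits raw health data.

    The entry supports auditing (which model version produced which
    output, on which exact input) while keeping the ECG signal and
    any direct identifiers out of the log itself.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Fingerprint lets auditors match the exact input during an
        # investigation without the log holding the health data.
        "input_sha256": hashlib.sha256(
            json.dumps(ecg_window).encode()
        ).hexdigest(),
        "alert": alert,
    }

entry = log_decision("arrhythmia-net-1.4.2", [0.12, 0.85, 0.11], True)
print(sorted(entry.keys()))
```

The log remains useful for clinical oversight and post-market monitoring, yet contains no field from which the patient or the raw signal could be directly reconstructed.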

The Data Act's user-access rights can also create tension with clinical safety obligations under the MDR. Patients may request access to raw ECG data or algorithmic outputs that, if misinterpreted, could lead to unnecessary anxiety or inappropriate self-diagnosis. While the Data Act grants users broad access rights, it also recognizes that access mechanisms must not compromise the safety or proper functioning of the product. This aligns with the MDR, which allows manufacturers to apply proportionate limitations where necessary to protect patient safety and device performance. Access pathways must therefore be carefully designed to respect user rights while remaining clinically responsible.

Finally, data-retention expectations diverge across frameworks. The AI Act requires lifecycle logs to support traceability and post-market monitoring, and the MDR imposes extended retention obligations for post-market surveillance. By contrast, the GDPR mandates that personal data be stored only for as long as necessary. The manufacturer must therefore justify retention periods in a way that satisfies the operational and safety needs of the MDR and AI Act, while ensuring compliance with the GDPR's proportionality and minimization principles.
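A retention schedule that reconciles these frameworks can be made explicit and machine-enforceable. In the sketch below, every period and legal-basis label is a placeholder: actual retention periods must be determined and justified per data category against the specific MDR, AI Act and GDPR provisions that apply to the device.

```python
# Illustrative retention policy: each data category carries its own
# justified period and the framework driving it. The numbers here are
# placeholders, not legally validated periods.
RETENTION_POLICY = {
    "raw_ecg": {"basis": "GDPR storage limitation", "days": 90},
    "ai_decision_logs": {"basis": "AI Act logging/traceability", "days": 365 * 2},
    "vigilance_records": {"basis": "MDR post-market surveillance", "days": 365 * 10},
}

def is_expired(category: str, age_days: int) -> bool:
    """True if data of this category has exceeded its justified period."""
    return age_days > RETENTION_POLICY[category]["days"]

# Raw signals age out quickly; vigilance records persist far longer.
print(is_expired("raw_ecg", 120), is_expired("vigilance_records", 120))
```

Encoding the schedule this way forces the organization to document, per category, which framework justifies the period, which is precisely the justification exercise the GDPR's proportionality principle demands.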

3. Complementarities and opportunities

Despite these challenges, the interplay of the frameworks presents opportunities for a coherent and efficient compliance architecture. A unified quality and risk-management system can serve as a central structure for meeting MDR and AI Act obligations, while also incorporating GDPR data-protection measures and Data Act user-access requirements. For the wearable device, such an integrated system allows the manufacturer to document risk controls, data flows, AI system performance and clinical safety within a single, coordinated governance environment.

While a GDPR Data Protection Impact Assessment (DPIA) does not replace the AI Act's risk-management process, it can provide valuable input, particularly regarding risks to individuals' rights and the context of personal data processing, that can be integrated into the AI Act's broader lifecycle-oriented analysis. Conversely, AI Act documentation and post-market monitoring can support elements of MDR clinical evaluation and post-market surveillance. Likewise, Data Act interoperability and portability requirements can be leveraged to enhance transparency and improve the quality of real-world data used for safety monitoring and algorithm refinement.

Taken together, the regulatory frameworks are best understood not as competing or isolated requirements, but as a mutually reinforcing ecosystem. When approached holistically, they enable manufacturers to build AI-enabled medical devices that are not only compliant but also demonstrably safe, trustworthy, transparent and respectful of patient rights, qualities that are central to the long-term adoption of digital health technologies.

4. Liability and enforcement interplay

Each regulatory regime applicable to AI-enabled wearable medical devices operates through its own enforcement and liability logic, yet in practice these mechanisms interact and may reinforce one another. For manufacturers, this means that compliance failures, incidents, or even regulatory disclosures under one framework can have consequences across public enforcement, civil liability, contractual disputes, and reputational risk.

Under the GDPR, controllers and processors are subject to administrative fines, corrective measures, and binding orders by data-protection authorities, alongside potential civil liability. The AI Act introduces a comparable administrative enforcement model for high-risk AI systems, including significant fines, corrective actions, and, in severe cases, withdrawal of non-compliant systems from the market. MDR enforcement relies on market-surveillance authorities and notified bodies, with powers ranging from corrective measures and recalls to suspension or withdrawal of CE-marked devices. By contrast, the Data Act focuses on enforcement through access rights, contractual fairness obligations, and remedies for unlawful refusals or misuse of shared data.

For AI-enabled medical devices, these regimes accumulate rather than displace one another. A single malfunction, cybersecurity incident, or governance failure may therefore trigger parallel scrutiny under multiple frameworks. An AI-related safety incident reported under MDR vigilance or AI Act serious-incident obligations may also prompt data-protection investigations, contractual disputes, or civil claims where patient harm or service disruption occurs. Regulatory findings under one regime may further inform proceedings under another, even where legal standards differ.

A critical dimension for manufacturers is the evidentiary role of compliance documentation. The AI Act requires extensive technical documentation, logging, and post-market monitoring for high-risk systems. While essential for demonstrating conformity, these materials also create detailed records of design choices, risk awareness, and mitigation measures, which may be relied upon in enforcement actions or civil litigation to assess knowledge, foreseeability, and control. Integrated compliance therefore requires not only comprehensive documentation, but careful governance of how risks and residual uncertainties are recorded.

Finally, while human oversight is central to both GDPR and AI Act compliance, it is not a complete liability shield. Courts and authorities may examine whether oversight is effective in practice or whether clinical workflows result in de facto reliance on AI outputs. Ensuring that oversight mechanisms are operational, documented, and supported by training is therefore essential not only for compliance, but also for mitigating liability risk.

5. Automated decision-making and human oversight

The cardiac-monitoring use case illustrates the close interaction between GDPR safeguards relating to automated decision-making and the AI Act's human-oversight requirements. While the AI system automatically detects arrhythmias and generates alerts, clinical decisions remain with healthcare professionals. This design choice is essential to avoid reliance on solely automated decisions producing legal or similarly significant effects under the GDPR and to satisfy the AI Act's requirement for effective human oversight of high-risk AI systems.

In practice, meaningful human oversight requires more than the formal possibility of intervention. Physicians must receive clear, actionable information about the AI system's role, limitations and confidence levels, and must be able to override or disregard AI outputs where clinically appropriate. Aligning GDPR safeguards with AI Act oversight mechanisms therefore strengthens patient protection while preserving clinical accountability.

V. A Pragmatic Integrated Compliance Pathway

Meeting the combined obligations of the MDR, AI Act, GDPR, Data Act and, where applicable, NIS 2 requires a compliance approach that is both holistic and operationally realistic. The wearable cardiac-monitoring device exemplifies a broader trend in MedTech: traditional quality management alone is no longer sufficient, and fragmented, silo-based compliance structures lead to inefficiencies, duplicative documentation and inconsistent risk governance. A pragmatic strategy therefore focuses on integration, clear allocation of responsibilities and coordinated oversight across the entire lifecycle of the device and its AI system.

1. Role allocation and system mapping

The starting point is a clear mapping of regulatory roles and system interactions. For the wearable device, the manufacturer typically acts simultaneously as MDR manufacturer, AI Act provider, GDPR controller and Data Act data holder, while healthcare professionals function as AI Act deployers and independent controllers for clinical use. Cloud providers, AI vendors and other partners must be classified as processors, service providers or distributors depending on their functions.

Documenting these roles, together with up-to-date inventories of AI systems, device components and data flows, creates a shared reference model. This enables teams to understand where obligations sit, how responsibilities interact across frameworks, and where accountability ultimately lies.
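The shared reference model described above is essentially a structured mapping from each actor in the ecosystem to its role under each framework. A minimal, purely illustrative sketch (all actor names and role assignments are hypothetical examples, not legal determinations) might look like this:

```python
from dataclasses import dataclass, field

# Hypothetical shared reference model: each actor in the device
# ecosystem is tagged with the regulatory role it plays under each
# framework, so teams can see at a glance where obligations sit.
@dataclass
class Actor:
    name: str
    roles: dict[str, str] = field(default_factory=dict)  # framework -> role

ECOSYSTEM = [
    Actor("DeviceCo", {"MDR": "manufacturer", "AI Act": "provider",
                       "GDPR": "controller", "Data Act": "data holder"}),
    Actor("Hospital", {"AI Act": "deployer", "GDPR": "controller"}),
    Actor("CloudVendor", {"GDPR": "processor"}),
]

def roles_under(framework: str) -> dict[str, str]:
    """Return every actor's role under a given framework."""
    return {a.name: a.roles[framework] for a in ECOSYSTEM
            if framework in a.roles}

print(roles_under("GDPR"))
# e.g. shows that both the manufacturer and the hospital act as
# controllers, while the cloud vendor is a processor
```

Even a simple inventory like this makes cross-framework queries trivial, which is the practical point of the mapping exercise.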

2. A unified quality and compliance management system

The MDR's quality management system provides a natural foundation for integrating obligations across legal regimes. Rather than operating separate governance structures for AI, data protection, cybersecurity and data access, the manufacturer can extend the MDR quality management system (QMS) into a broader quality and compliance management system (QCMS).

Such a system integrates:

  • the AI Act's lifecycle requirements, including risk management, transparency, logging and data governance,
  • GDPR obligations such as privacy by design, DPIAs and controller-processor agreements,
  • Data Act mechanisms for user access, interoperability and fair contractual terms, and
  • NIS 2 elements relating to cybersecurity governance and incident handling, where applicable.

This approach avoids parallel processes and ensures that design, development, deployment and post-market activities remain aligned across regulatory expectations.

3. Coordinated risk-assessment framework

Although each legal framework mandates its own form of risk analysis, these assessments overlap substantially. MDR risk management focuses on clinical and patient safety; the AI Act addresses risks to health, safety and fundamental rights arising from AI behavior; GDPR DPIAs evaluate risks to individuals' rights and freedoms; the Data Act adds risks linked to access, reuse and misuse of device-generated data; and NIS 2 introduces cybersecurity and supply-chain risks.

Bringing these perspectives together into a single risk register creates a coherent view of the risk landscape and reduces duplication. A GDPR DPIA does not replace AI Act risk management, but it can inform it by identifying rights-related risks and contextual factors. Conversely, AI Act lifecycle documentation can support MDR clinical evaluation and post-market surveillance. Coordinated risk management helps ensure that mitigation measures are consistent, proportionate and mutually reinforcing.
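The single risk register described above can be thought of as one data structure in which each risk entry carries the frameworks under which it must be assessed, so a single entry can feed the MDR risk file, the AI Act risk management system and a GDPR DPIA instead of being duplicated across silos. A hedged sketch (all risk entries and mitigations are hypothetical examples, not a compliance checklist):

```python
from collections import defaultdict

# Illustrative unified risk register: one entry per risk, tagged with
# every framework under which it must be assessed and mitigated.
RISK_REGISTER = [
    {"id": "R1", "risk": "False-negative arrhythmia detection",
     "frameworks": ["MDR", "AI Act"],
     "mitigation": "clinical validation, human oversight"},
    {"id": "R2", "risk": "Unauthorized access to ECG data",
     "frameworks": ["GDPR", "NIS 2", "Data Act"],
     "mitigation": "encryption, access controls"},
    {"id": "R3", "risk": "Model drift after deployment",
     "frameworks": ["AI Act", "MDR"],
     "mitigation": "post-market monitoring, retraining gates"},
]

def view_by_framework(register):
    """Group risk IDs by framework to generate framework-specific
    extracts (e.g. the AI Act risk file) from the single register."""
    view = defaultdict(list)
    for entry in register:
        for fw in entry["frameworks"]:
            view[fw].append(entry["id"])
    return dict(view)

print(view_by_framework(RISK_REGISTER))
```

Framework-specific views are then generated from the register rather than maintained separately, which is what keeps mitigation measures consistent across regimes.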

4. Integrated technical documentation

A unified documentation approach further streamlines compliance. Building on MDR Annexes II and III, manufacturers can integrate AI Act technical documentation and logs, GDPR documentation (such as DPIAs and technical and organizational measures), and Data Act evidence relating to user access, interoperability and contractual fairness. Where relevant, organizational and cybersecurity documentation supporting NIS 2 obligations can be included.

Maintaining this material in a single, controlled technical master file simplifies interactions with notified bodies, market-surveillance authorities and data-protection authorities, while providing internal teams with a clear and consistent evidence base.

5. Data governance and user-access controls

Effective data governance must reconcile GDPR access and portability rights, Data Act user-access rights, MDR safety requirements and AI Act transparency obligations. Mechanisms for accessing raw ECG data or AI system outputs must enable lawful access while protecting user safety and avoiding misinterpretation. At the same time, transparency measures must avoid exposing unnecessary personal data or proprietary algorithmic information.

Clear user interfaces, proportionate safeguards and well-defined contractual terms help ensure that data sharing, portability and clinical responsibility remain aligned.

6. Alignment of conformity assessment and incident response

Given the interconnected nature of the regulatory frameworks, conformity-assessment processes should be aligned from the outset. AI-enabled medical devices follow the MDR conformity route while incorporating AI Act requirements, enabling a single, coherent submission package.

Incident reporting similarly benefits from integration. MDR vigilance, AI Act serious-incident reporting, GDPR personal data breach notifications and NIS 2 cybersecurity reporting involve different triggers and timelines. A central incident-response workflow ensures timely escalation, consistent messaging and reduced risk of regulatory gaps.
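A central workflow of this kind typically starts with a single triage step: the incident is classified once, and every potentially applicable reporting channel is derived from that classification. The sketch below is a simplified illustration; the timelines shown reflect headline deadlines in the respective regimes (such as the 72-hour GDPR breach notification and the 24-hour NIS 2 early warning), but exact triggers and deadlines must always be verified against the legal texts for the concrete case.

```python
# Simplified incident triage: classify the incident once, then derive
# every reporting channel that may be engaged. Real triage logic is
# far more nuanced; this only illustrates the single-entry-point idea.
def reporting_channels(incident: dict) -> list[str]:
    channels = []
    if incident.get("patient_harm"):
        channels.append("MDR vigilance report (serious incident)")
    if incident.get("ai_malfunction") and incident.get("patient_harm"):
        channels.append("AI Act serious-incident report")
    if incident.get("personal_data_breach"):
        channels.append("GDPR breach notification "
                        "(72h to supervisory authority)")
    if incident.get("network_disruption"):
        channels.append("NIS 2 early warning (24h), "
                        "incident notification (72h)")
    return channels

# A hypothetical AI malfunction causing patient harm and exposing
# personal data triggers three parallel reporting obligations.
incident = {"patient_harm": True, "ai_malfunction": True,
            "personal_data_breach": True, "network_disruption": False}
print(reporting_channels(incident))
```

The value of the single entry point is that no regime is overlooked when one team handles the incident: the triage output drives escalation to the teams that own each channel.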

7. Contractual alignment and supplier governance

Manufacturers depend on cloud providers, AI vendors, analytics partners and other suppliers. These relationships must reflect the combined expectations of the MDR, AI Act, GDPR, Data Act and NIS 2. Modular contract templates can allocate responsibilities clearly (for example, AI Act cooperation duties, GDPR processor obligations, Data Act access and confidentiality requirements, MDR supplier controls and NIS 2 cybersecurity expectations) without duplicating or contradicting obligations.

8. Overcoming organizational silos through cross-functional governance

One of the most significant practical challenges for MedTech companies is organizational fragmentation. Quality, regulatory, privacy, cybersecurity, AI governance, clinical and compliance teams often operate with different priorities, terminologies and processes, leading to inconsistent documentation and unclear risk ownership.

To address this, companies should establish a cross-functional AI and Data Governance Board. This forum should bring together the person responsible for regulatory compliance, regulatory affairs and quality assurance leads, the data protection officer, AI risk and model governance owners, IT and cybersecurity specialists, clinical safety experts and legal counsel. The board serves as a decision-making and coordination mechanism, reviewing risk assessments, aligning documentation across regulatory frameworks, coordinating incident response and ensuring a shared understanding of regulatory obligations. Regular training, transparent communication and structured decision-making help foster a unified compliance culture across functions.

VI. Final Considerations

AI-enabled wearable medical devices mark a transformative moment in digital health. They combine continuous physiological monitoring, real-time analytics and clinical decision support in ways that promise earlier diagnosis, more personalized care and improved patient outcomes. These advances, however, come with a regulatory landscape that is inherently interdisciplinary: medical-device safety, AI governance, data protection, cybersecurity and data-access rights intersect throughout the lifecycle of the device.

As this article has shown, compliance with the MDR, AI Act, GDPR, Data Act and, where relevant, NIS 2 cannot be approached through isolated workstreams. Each framework introduces obligations that interact with and reinforce one another, from transparency and human oversight to interoperability, privacy by design, risk management and post-market monitoring. For the manufacturer of an AI-enabled cardiac-monitoring wearable, these frameworks collectively shape the design, development, deployment and ongoing operation of the system.

The emerging best practice is therefore an integrated compliance architecture built around a unified quality and governance system, coordinated risk assessments, consolidated technical documentation and cross-functional oversight. Such an approach not only reduces duplication and addresses the organizational silos that often hinder implementation, but also supports coherent, evidence-based decision-making throughout the product lifecycle.

Critically, an integrated approach is also the most efficient path to future readiness. With high-risk AI obligations becoming fully applicable in a phased manner under the AI Act, and enforcement expectations increasing across privacy, cybersecurity and data-sharing regimes, organizations that operationalize harmonized processes today will be better positioned to navigate conformity assessments, market access and regulatory scrutiny in the years ahead.

Ultimately, integrated compliance is more than a regulatory necessity. It is a foundation for trustworthy innovation. By embedding safety, privacy, transparency and accountability into the core of AI-enabled medical technologies, manufacturers can strengthen patient confidence, support clinical adoption and contribute to a more resilient and trustworthy digital-health ecosystem across the EU.

This article was written within the scope of the Cybersecurity Advisors Network (CyAN) Mentorship Program.

Footnotes

1. At the time of writing, the EU is discussing a so-called "Digital Omnibus" framework, which may adjust the timeline and some implementing arrangements for the AI Act and related digital legislation. The analysis in this article focuses on the substance and interplay of the frameworks and remains valid irrespective of the exact date on which specific obligations become applicable.

2. Note on Fundamental Rights Impact Assessment (FRIA): While the AI Act introduces a mandatory FRIA for certain Annex III high-risk AI systems, this obligation does not apply to AI systems that are regulated exclusively under Annex I as part of medical devices subject to the MDR. For AI-enabled medical devices, fundamental-rights considerations are instead addressed through the combined operation of MDR clinical risk management, AI Act risk management, and GDPR data-protection safeguards.

Disclaimer: This article is provided for general informational purposes only and does not constitute legal advice. It is based on the applicable regulatory framework at the time of writing. No warranty is given as to the accuracy, completeness, or continued validity of the information. Readers should consult the official legal texts and seek independent professional advice before taking any decisions based on this content.
