Artificial intelligence (AI) is rapidly transforming the healthcare and life sciences industry, from accelerating drug discovery to personalizing treatment plans and optimizing clinical trials. AI can also support fraud detection and prevention, for example by flagging anomalies in billing and claims data. But as AI becomes embedded in critical compliance and operational workflows, the need for responsible and ethical implementation has never been more urgent.
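As a rough illustration of that fraud-detection use case, the sketch below flags unusual billing claims with an unsupervised anomaly detector. The column names, feature choices, and contamination setting are hypothetical, and any flagged claim would still warrant human review rather than automated action.

```python
# Illustrative sketch only: flagging unusual billing claims with an
# unsupervised anomaly detector (Isolation Forest). Features and
# thresholds are hypothetical; flagged claims go to a reviewer.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical claims extract: one row per claim.
claims = pd.DataFrame({
    "billed_amount": [120.0, 95.5, 4300.0, 110.0, 130.0, 89.0],
    "num_procedures": [1, 1, 14, 1, 2, 1],
    "patient_age": [34, 51, 47, 29, 62, 45],
})

# Fit the model and mark claims it scores as outliers (label -1).
model = IsolationForest(contamination=0.1, random_state=0)
claims["flagged"] = model.fit_predict(claims) == -1

# Route flagged claims to human review, not automated denial.
print(claims[claims["flagged"]])
```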
Navigating the Regulatory Landscape
AI systems and innovation are driving measurable improvements in patient care, operational efficiency, and scientific discovery. But they also raise critical questions about data privacy and regulatory compliance. AI systems in healthcare must comply with a complex set of regulations that vary with the type and geography of the entity; these include, but are not limited to:
- HIPAA and FDA Guidelines (U.S.): The Health Insurance Portability and Accountability Act (HIPAA) governs the use of Protected Health Information (PHI). AI tools must adhere to the "minimum necessary" standard and ensure robust de-identification protocols (a simplified de-identification sketch follows this list). The Food and Drug Administration (FDA) regulates AI-enabled medical devices, requiring explainability, validation, and post-market surveillance.
- Section 1557 of the Affordable Care Act (U.S.): Prohibits covered entities from discriminating on the basis of race, color, national origin, sex, age, disability, or any combination thereof through the use of patient care decision support tools, and requires identification and mitigation of risks of discrimination associated with the use of such tools in health programs or activities.
- GDPR and AI Act (EU): The General Data Protection Regulation (GDPR) requires transparency, data minimization, and explicit consent when processing personal health data. The EU AI Act requires classification of AI systems by risk level; registration of high-risk AI systems, including conformity assessments and fundamental rights assessments; and implementation of a quality management system (QMS), including continuous monitoring, data governance measures, vendor contract management, and training and awareness.
- Colorado AI Act (U.S.): Requires the use of reasonable care to prevent AI systems from causing unlawful differential treatment, and validation of AI tools to ensure they do not produce biased outcomes. Deployers must also conduct AI impact assessments, provide transparency and consumer rights, and manage vendor contracts.
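To make the HIPAA de-identification point above more concrete, here is a minimal sketch of a simplified de-identification pass in the spirit of the Safe Harbor method. The field names are hypothetical, and the code does not cover all 18 Safe Harbor identifier categories (free-text notes, ages over 89, and other elements need additional handling), so it should be read as an illustration rather than a compliant implementation.

```python
# Illustrative sketch only: a simplified de-identification pass.
# Not a complete HIPAA Safe Harbor implementation.
import pandas as pd

DIRECT_IDENTIFIERS = ["name", "ssn", "phone", "email", "street_address"]

def deidentify(records: pd.DataFrame) -> pd.DataFrame:
    # Drop direct identifiers that are present in the extract.
    out = records.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in records])
    if "zip_code" in out:
        # Safe Harbor generally permits only the first three ZIP digits.
        out["zip_code"] = out["zip_code"].astype(str).str[:3]
    if "date_of_birth" in out:
        # Reduce dates of birth to year only.
        out["date_of_birth"] = pd.to_datetime(out["date_of_birth"]).dt.year
    return out
```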
Ethical Imperatives
Beyond legal compliance, ethical and responsible AI deployment is essential to maintaining public trust and delivering equitable care. Key considerations include:
- Bias Mitigation: AI models trained on non-representative datasets can perpetuate disparities; commercial algorithms, for example, have been found to underestimate the health needs of patients in certain demographic groups (see the disparity-check sketch after this list).
- Transparency: Black-box models hinder clinical oversight. Explainable AI is critical for informed consent and clinician trust.
- Health Equity: AI tools must be accessible across socioeconomic and geographic boundaries. Deploying diagnostic AI only in well-resourced settings risks widening disparities.
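As a concrete illustration of the bias-mitigation point above, the sketch below computes a simple disparity ratio: the lowest group-level positive-prediction rate divided by the highest. The group labels, data, and the informal 0.8 review threshold are assumptions for illustration; real bias audits use multiple metrics, clinically meaningful outcomes, and domain review.

```python
# Illustrative sketch only: a basic disparity check across groups.
import pandas as pd

def selection_rate_disparity(preds: pd.DataFrame) -> float:
    """preds has columns 'group' and 'predicted_positive' (0/1)."""
    rates = preds.groupby("group")["predicted_positive"].mean()
    return rates.min() / rates.max()

preds = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "predicted_positive": [1, 1, 0, 1, 0, 0],
})
ratio = selection_rate_disparity(preds)
print(f"Disparity ratio: {ratio:.2f}")  # ratios well below ~0.8 warrant review
```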
Ethical frameworks should guide every stage of AI development and testing, from data collection and model training to deployment and continuous monitoring.
Building a Responsible AI Strategy
To implement AI systems responsibly and in compliance with regulatory considerations, healthcare organizations should:
- Conduct AI-specific risk assessments tailored to data flows, model behavior, and access points.
- Establish governance structures that define accountability and oversight for AI decisions.
- Ensure models are trained on diverse, representative datasets to reduce bias and improve generalizability and validity.
- Maintain audit trails and documentation to support regulatory reviews and internal transparency (a simple logging sketch follows this list).
- Engage cross-functional teams, including clinicians and technologists, to align AI tools with patient-centered care.
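For the audit-trail item above, a minimal sketch of an append-only prediction log is shown below. The field names and JSON-lines storage are assumptions for illustration; the idea is simply to record enough metadata (timestamp, model version, a hash of the inputs rather than raw PHI, and the output) for a later regulatory or internal review to reconstruct what the model did.

```python
# Illustrative sketch only: an append-only audit record per prediction.
import hashlib
import json
from datetime import datetime, timezone

def log_prediction(model_version: str, inputs: dict, output,
                   path: str = "audit_log.jsonl") -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the inputs so the record is traceable without storing raw PHI.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_prediction("risk-model-v1.2", {"age": 54, "a1c": 7.1}, output=0.83)
```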
Ankura has supported organizations with implementing responsible and ethical AI governance programs. See our thought leadership here on using the National Institute of Standards and Technology's AI Risk Management Framework (NIST AI RMF) to support the development of an AI governance program.
Conclusion
AI offers unprecedented opportunities to improve healthcare outcomes and support compliance activities, but its power must be matched with responsible and ethical use. Given the patchwork of regulations and varying standards across jurisdictions, organizations developing and deploying AI must navigate differences in data privacy requirements and legal liability. They must also address data governance challenges around consent and de-identification of sensitive data, and manage bias and equity by ensuring algorithmic fairness.
By establishing a standard or following a framework such as the NIST AI RMF to guide the strategic implementation of a responsible and ethical AI governance program, healthcare and life sciences organizations can innovate confidently while safeguarding patient trust and societal well-being.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.