7 October 2025

Healthcare AI In The United States — Navigating Regulatory Evolution, Market Dynamics, And Emerging Challenges In An Era Of Rapid Innovation

Jones Walker

The use of artificial intelligence (AI) tools in healthcare continues to evolve at an unprecedented pace, fundamentally reshaping how medical care is delivered, managed, and regulated across the United States. As 2025 progresses, the convergence of technological innovation, regulatory adaptation (or lack thereof), and market shifts has created remarkable opportunities and complex challenges for healthcare providers, technology developers, and federal and state legislators and regulatory bodies alike.

The rapid proliferation of AI-enabled medical devices represents perhaps the most visible manifestation of this transformation. With nearly 800 AI- and machine learning (ML)-enabled medical devices authorized for marketing by the US Food and Drug Administration (FDA) in the five-year period ending September 2024, the regulatory apparatus has been forced to adapt traditional frameworks designed for static devices to accommodate dynamic, continuously learning algorithms that evolve after deployment. This fundamental shift has prompted new approaches to oversight, such as the development of predetermined change control plans (PCCPs) that allow manufacturers to modify their systems within predefined parameters and without requiring additional premarket submissions.
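
By way of illustration, the sketch below (in Python) shows how a manufacturer might check whether a retrained model remains inside a predefined performance envelope before deploying the update under a PCCP. The metric names and thresholds are hypothetical assumptions used for illustration, not values drawn from FDA guidance.

```python
# Illustrative sketch only: checks whether a retrained model's metrics stay
# within the performance envelope a PCCP might predefine. Thresholds and
# metric names are hypothetical, not FDA-prescribed values.
from dataclasses import dataclass

@dataclass
class PccpEnvelope:
    min_sensitivity: float
    min_specificity: float
    max_auc_drop: float          # allowed drop versus the cleared baseline

def within_pccp(baseline_auc: float, new_metrics: dict, envelope: PccpEnvelope) -> bool:
    """Return True if the updated model may be deployed without a new submission."""
    return (
        new_metrics["sensitivity"] >= envelope.min_sensitivity
        and new_metrics["specificity"] >= envelope.min_specificity
        and (baseline_auc - new_metrics["auc"]) <= envelope.max_auc_drop
    )

# Example: a modest retraining change that stays inside the predefined envelope.
envelope = PccpEnvelope(min_sensitivity=0.90, min_specificity=0.85, max_auc_drop=0.02)
print(within_pccp(0.93, {"sensitivity": 0.92, "specificity": 0.88, "auc": 0.94}, envelope))
```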

Regulatory Frameworks Under Pressure

The regulatory environment governing healthcare AI reflects the broader challenges facing federal agencies as they attempt to balance innovation and patient safety. The FDA's approach to AI-enabled software as a medical device (SaMD) has evolved significantly, culminating in the January publication of comprehensive draft guidance addressing life cycle management and marketing submission recommendations for AI-enabled device software functions. This guidance represents a critical milestone in establishing clear regulatory pathways for AI and ML systems that challenge traditional notions of device stability and predictability.

The traditional FDA paradigm of medical device regulation was not designed for adaptive AI and ML technologies. This creates unique challenges for continuously learning algorithms that may evolve after initial market authorization. The FDA's January 2021 AI/ML-based SaMD Action Plan outlined five key actions based on the total product life cycle approach: tailoring regulatory frameworks with PCCPs, harmonizing good ML practices, developing patient-centric approaches, supporting bias elimination methods, and piloting real-world performance monitoring.

However, the regulatory landscape remains fragmented and uncertain. The Trump administration's rescission of the Biden administration's Executive Order (EO) 14110, "Safe, Secure, and Trustworthy Artificial Intelligence," and its issuance in January of its own AI EO, "Removing Barriers to American Leadership in Artificial Intelligence," have created additional uncertainty regarding federal AI governance priorities. While EO 14110 has been rescinded, its influence persists through agency actions already underway, including the April 2024 Affordable Care Act (ACA) Section 1557 final rule on nondiscrimination in health programs, issued by the US Department of Health and Human Services (HHS) Office for Civil Rights, and the final rule on algorithm transparency. Consequently, enforcement priorities and future regulatory development remain uncertain.

State-level regulatory activity has attempted to fill some of these gaps, with 45 states introducing AI-related legislation during the 2024 session. California Assembly Bill 3030, which specifically regulates generative AI (gen AI) use in healthcare, exemplifies the growing trend toward state-specific requirements that healthcare organizations must navigate alongside federal regulations. This patchwork of state and federal requirements creates particularly acute challenges for healthcare AI developers and users operating across multiple jurisdictions.

Data Privacy and Security: The HIPAA Challenge

One of the most pressing concerns facing healthcare AI deployment involves the intersection of AI capabilities and healthcare data privacy requirements. The Health Insurance Portability and Accountability Act (HIPAA) was enacted long before the emergence of modern AI systems, creating significant compliance challenges as healthcare providers increasingly rely on AI tools for clinical documentation, decision support, and administrative functions.

The use of AI-powered transcription and documentation tools has emerged as a particular area of concern. Healthcare providers utilizing AI systems for automated note-taking during patient encounters face potential HIPAA violations if proper safeguards are not implemented. These systems often perform best with access to comprehensive patient information, yet HIPAA's minimum necessary standard requires that AI tools access and use only the protected health information (PHI) strictly necessary for their purpose, creating a direct tension between compliance obligations and model performance.
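
As a rough illustration of how a covered entity might operationalize the minimum-necessary constraint, the sketch below filters a patient record down to an allow-list of fields before it is sent to a documentation tool. The field names and allow-list are hypothetical; an actual implementation would follow the organization's own minimum-necessary analysis and its vendor agreements.

```python
# Illustrative sketch: pass an AI documentation tool only the fields it needs.
# Field names and the allow-list are hypothetical; a real system would follow a
# documented minimum-necessary analysis and the vendor's BAA.
ALLOWED_FIELDS = {"encounter_id", "chief_complaint", "transcript", "medications"}

def minimum_necessary(record: dict) -> dict:
    """Drop every field not on the allow-list before sending data to the AI vendor."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

patient_record = {
    "encounter_id": "E-1001",
    "patient_name": "REDACTED-AT-SOURCE",   # identifiers the tool does not need
    "ssn": "REDACTED-AT-SOURCE",
    "chief_complaint": "shortness of breath",
    "transcript": "...visit audio transcript...",
    "medications": ["lisinopril"],
}
print(minimum_necessary(patient_record))   # identifiers never leave the covered entity
```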

The proposed HHS regulations issued in January attempt to address some of these concerns by requiring covered entities to include AI tools in their risk analysis and risk management compliance activities. These requirements mandate that organizations conduct vulnerability scanning at least every six months and penetration testing annually, recognizing that AI systems introduce new vectors for potential data breaches and unauthorized access.

Business associate agreements (BAAs) have become increasingly complex as organizations attempt to address AI-specific risks. Healthcare organizations must ensure that AI vendors processing PHI operate under robust BAAs that specify permissible data uses and required safeguards and that account for AI-specific risks related to algorithm updates, data retention policies, and other ML processes.

Algorithmic Bias and Health Equity Concerns

The potential for algorithmic bias in healthcare AI systems has emerged as one of the most significant ethical and legal challenges facing the industry. A 2024 review of 692 AI- and ML-enabled FDA-approved medical devices revealed troubling gaps in demographic representation, with only 3.6% of approved devices reporting race and ethnicity data, 99.1% providing no socioeconomic information, and 81.6% failing to report study subject ages.

These data gaps have profound implications for health equity, as AI systems trained on nonrepresentative datasets may perpetuate or exacerbate existing healthcare disparities. Training data quality and representativeness significantly — and inevitably — impact AI system performance across diverse patient populations. The challenge is particularly acute given the rapid changes in federal enforcement priorities regarding diversity, equity, and inclusion (DEI) initiatives.

While the April 2024 ACA Section 1557 final rule regarding HHS programs established requirements for healthcare entities to ensure AI systems do not discriminate against protected classes, the current administration's opposition to DEI initiatives has created uncertainty about enforcement mechanisms and compliance expectations. Given the rapid turnabout in executive branch policy toward DEI and antidiscrimination initiatives, it remains to be seen how federal healthcare AI regulations with respect to bias and fairness will be affected.

Healthcare organizations are increasingly implementing systematic bias testing and mitigation strategies throughout the AI life cycle, focusing on validating the technology, promoting health equity, ensuring algorithmic transparency, engaging patient communities, identifying fairness issues and trade-offs, and maintaining accountability for equitable outcomes. AI system developers have, until recently, faced increasing regulatory pressure to ensure training datasets adequately represent diverse patient populations. And most healthcare AI developers and practitioners continue to maintain that relevant characteristics, including age, gender, sex, race, and ethnicity, should be appropriately represented and tracked in clinical studies to ensure that results can be reasonably generalized to the intended-use populations.

However, these efforts often occur without clear regulatory guidance or standardized methodologies for bias detection and remediation. Special attention must be paid to protecting vulnerable populations, including pediatric patients, elderly individuals, racial and ethnic minorities, and individuals with disabilities.
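
In the absence of standardized methodologies, one common starting point is a simple subgroup performance comparison that flags groups whose sensitivity lags the best-performing group. The sketch below illustrates the idea; the column names, threshold, and data are hypothetical assumptions only.

```python
# Illustrative sketch: compare a model's sensitivity (true-positive rate) across
# demographic subgroups to flag possible performance gaps. Column names,
# threshold, and data are hypothetical examples.
import pandas as pd

def sensitivity_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Sensitivity per subgroup, computed over records whose true label is positive."""
    positives = df[df["label"] == 1]
    return positives.groupby(group_col)["prediction"].mean()

audit = pd.DataFrame({
    "label":      [1, 1, 1, 1, 1, 1, 1, 1],
    "prediction": [1, 1, 1, 0, 1, 0, 0, 1],
    "race":       ["A", "A", "A", "A", "B", "B", "B", "B"],
})
rates = sensitivity_by_group(audit, "race")
print(rates)                               # A: 0.75, B: 0.50
print(rates[rates < rates.max() - 0.10])   # subgroups lagging the best group by >10 points
```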

Professional Liability and Standards of Care

The integration of AI into clinical practice has created novel questions about professional liability and standards of care that existing legal frameworks struggle to address. Traditional medical malpractice analysis relies on established standards of care, but the rapid evolution of AI capabilities makes it difficult to determine what constitutes appropriate use of algorithmic recommendations in clinical decision-making.

Healthcare AI liability generally operates within established medical malpractice frameworks that require the establishment of four key elements: duty of care, breach of that duty, causation, and damages. When AI systems are involved in patient care, determining these elements becomes more complex. While a physician must exercise the skill and knowledge normally possessed by other physicians, AI integration creates uncertainty about what constitutes reasonable care.

The Federation of State Medical Boards' April 2024 recommendations to hold clinicians liable for AI technology-related medical errors represent an attempt to clarify professional responsibilities in an era of algorithm-assisted care. However, these recommendations raise complex questions about causation, particularly when multiple factors contribute to patient outcomes and AI systems provide recommendations that healthcare providers may accept, modify, or reject based on their clinical judgment.

When algorithms influence or drive medical decisions, determining responsibility for adverse outcomes presents novel legal challenges not fully addressed in existing liability frameworks. Courts must evaluate whether AI system recommendations served as a proximate cause of patient harm as well as the impacts of the healthcare provider's independent medical judgment and other contributing factors.

Documentation requirements have become increasingly important, as healthcare providers must maintain detailed records of AI system use, including the specific recommendations provided, the clinical reasoning for accepting or rejecting algorithmic guidance, and any modifications made to AI-generated suggestions. These documentation practices are essential for defending against potential malpractice claims while ensuring that healthcare providers can demonstrate appropriate clinical judgment and professional accountability.
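
A hypothetical sketch of such a record appears below: a structured entry capturing the recommendation, the clinician's action, and the stated reasoning, suitable for appending to an audit log or the medical record. The schema is illustrative, not a regulatory requirement.

```python
# Illustrative sketch: a structured record of one AI-assisted decision, capturing
# the recommendation, the clinician's action, and the stated reasoning. The
# schema and system name are hypothetical examples.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AiUseRecord:
    encounter_id: str
    model_name: str
    model_version: str
    recommendation: str
    clinician_action: str       # "accepted", "modified", or "rejected"
    clinical_reasoning: str
    timestamp: str

record = AiUseRecord(
    encounter_id="E-1001",
    model_name="sepsis-risk-model",        # hypothetical system name
    model_version="2.3.1",
    recommendation="flagged elevated sepsis risk; suggested lactate draw",
    clinician_action="modified",
    clinical_reasoning="Ordered lactate; deferred antibiotics pending culture results.",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))   # append to the audit log or EHR note
```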

AI-related malpractice cases may require expert witnesses with specialized knowledge of medical practice and existing AI technology capabilities and limitations. Such experts should have the experience necessary to evaluate whether healthcare providers used AI systems in an appropriate manner and whether algorithmic recommendations met relevant standards. Plaintiffs in AI-related malpractice cases face challenges proving that AI system errors directly caused patient harm, particularly when healthcare providers retained decision-making authority.

Market Dynamics and Investment Trends

Despite regulatory uncertainties, venture capital investment in healthcare AI remains robust, with billions of dollars allocated to startups and established companies developing innovative solutions. However, investment patterns have become more selective, focusing on solutions that demonstrate clear clinical value and regulatory compliance rather than pursuing speculative technologies without proven benefits.

The American Hospital Association's early 2025 survey of digital health industry leaders revealed cautious optimism, with 81% expressing positive or cautiously optimistic outlooks for investment prospects and 79% indicating plans to pursue new investment capital over the next 12 months. This suggests continued confidence in the long-term potential of healthcare AI despite near-term regulatory and economic uncertainties.

Clinical workflow optimization solutions, value-based care enablement platforms, and revenue cycle management technologies have attracted significant funding, reflecting healthcare organizations' focus on addressing immediate operational challenges while building foundations for more advanced AI applications. The increasing integration of AI into these core healthcare functions demonstrates the technology's evolution from experimental applications to essential operational tools.

Major technology corporations are driving significant innovation in healthcare AI through substantial research and development investments. Companies such as Google Health, Microsoft Healthcare, Amazon Web Services, and IBM Watson Health continue to develop foundational AI platforms and tools. Large health systems and academic medical centers lead healthcare AI adoption through dedicated innovation centers, research partnerships, and pilot programs, often serving as testing grounds for emerging AI technologies.

Pharmaceutical companies increasingly integrate AI throughout drug development pipelines, from target identification and molecular design to clinical trial optimization and regulatory submissions. These investments aim to reduce development costs and timelines while improving success rates for new therapeutic approvals.

Large healthcare technology companies increasingly acquire specialized AI startups to integrate innovative capabilities into comprehensive healthcare platforms. These acquisitions accelerate technology deployment while providing startups with the resources necessary for large-scale implementation and regulatory compliance.

Emerging Technologies and Integration Challenges

The rapid advancement of gen AI technologies has introduced new regulatory and practical challenges for healthcare organizations. As of late 2023, the FDA had not approved any devices relying on purely gen AI architectures, creating uncertainty about the regulatory pathways for these increasingly sophisticated technologies. Gen AI's ability to create synthetic content, including medical images and clinical text, requires new approaches to validation and oversight that traditional medical device frameworks may not adequately address.

The distinction between clinical decision support tools and medical devices remains an ongoing area of regulatory clarification. Software that provides information to healthcare providers for clinical decision-making may or may not constitute a medical device depending on the specific functionality and level of interpretation provided.

Healthcare AI systems must provide sufficient transparency to enable healthcare providers to understand system recommendations and limitations. The FDA emphasizes the importance of explainable AI that allows clinicians to understand the reasoning behind algorithmic recommendations. AI systems must provide understandable explanations for their recommendations, which healthcare providers in turn use to communicate with patients.
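
As a simplified illustration, the sketch below ranks the features that contributed most to a linear risk score and renders them as plain-language statements. The feature names and weights are hypothetical; production systems typically rely on validated explanation methods tailored to the specific model.

```python
# Illustrative sketch: for a simple linear risk score, surface the features that
# contributed most to a recommendation. Feature names and weights are
# hypothetical placeholders, not a clinically validated model.
WEIGHTS = {"age": 0.03, "systolic_bp": -0.02, "creatinine": 0.8, "on_dialysis": 1.2}

def explain(features: dict, top_n: int = 3) -> list[str]:
    """Rank feature contributions (weight * value) by absolute magnitude."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return [f"{name}: contribution {value:+.2f}" for name, value in ranked[:top_n]]

patient = {"age": 72, "systolic_bp": 110, "creatinine": 2.1, "on_dialysis": 1}
for line in explain(patient):
    print(line)   # top drivers of the score, in plain language for the clinician
```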

The integration of AI with emerging technologies such as robotics, virtual reality, and internet of medical things (IoMT) devices creates additional complexity for healthcare organizations attempting to navigate regulatory requirements and clinical implementation challenges. These convergent technologies offer significant potential benefits but also introduce new risks related to cybersecurity, data privacy, and clinical safety that existing regulatory frameworks struggle to address comprehensively.

AI-enabled remote monitoring systems utilize wearable devices, IoMT sensors, and mobile health applications to continuously track patients' vital signs, medication adherence, and disease progression. These technologies enable early intervention for deteriorating conditions and support chronic disease management outside traditional healthcare settings, but they face unique regulatory challenges related to device performance, user training, and clinical oversight.
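
A minimal sketch of the alerting layer such systems often include appears below: a threshold check on incoming vital-sign readings that routes out-of-range values to the care team. The limits shown are placeholders, not clinically validated thresholds.

```python
# Illustrative sketch: threshold-based early-warning check on vitals streamed
# from a wearable or IoMT device. Limits are hypothetical placeholders; a real
# system would use clinically validated ranges and escalation policies.
VITAL_LIMITS = {"heart_rate": (40, 130), "spo2": (92, 100), "systolic_bp": (90, 180)}

def check_vitals(reading: dict) -> list[str]:
    """Return a human-readable alert for every vital outside its allowed range."""
    alerts = []
    for vital, value in reading.items():
        low, high = VITAL_LIMITS[vital]
        if not (low <= value <= high):
            alerts.append(f"{vital}={value} outside [{low}, {high}]")
    return alerts

print(check_vitals({"heart_rate": 48, "spo2": 89, "systolic_bp": 128}))
# ['spo2=89 outside [92, 100]'] -> route to the care team for early intervention
```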

Cybersecurity and Infrastructure Considerations

Healthcare data remains a prime target for cyberattacks, with data breaches involving 500 or more healthcare records reaching near-record numbers in 2024 and continuing an alarming upward trend. Patient data commands a high price on black markets, and the critical nature of healthcare operations makes organizations more likely to pay ransoms.

The integration of AI systems, which often require access to vast amounts of patient data, further complicates the security landscape and creates new vulnerabilities that organizations must address through robust security frameworks. Healthcare organizations face substantial challenges integrating AI tools into existing clinical workflows and electronic health record systems. Technical interoperability issues, user training requirements, and change management processes require significant investment and coordination across multiple departments and stakeholders.

The Consolidated Appropriations Act of 2023's requirement for cybersecurity information in premarket submissions for "cyber devices" represents an important step in addressing these concerns, but the rapid pace of AI innovation often outstrips the development of adequate security measures. Medical device manufacturers must now include cybersecurity information in premarket submissions for AI-enabled devices that connect to networks or process electronic data.

Healthcare organizations must implement comprehensive cybersecurity programs that address not only technical vulnerabilities but also the human factors that frequently contribute to data breaches. Strong technical safeguards must be implemented when using de-identified data for AI training, including access controls, encryption, audit logging, and secure computing environments, and should address both intentional and accidental reidentification risks throughout the AI development process.
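
The sketch below illustrates one narrow slice of those safeguards: replacing direct identifiers with keyed one-way hashes and logging each access before records enter a training pipeline. The field names and key handling are simplified assumptions; a real program would pair this with access controls, encryption at rest and in transit, and formal de-identification review.

```python
# Illustrative sketch: pseudonymize direct identifiers with keyed hashes and log
# access before records enter an AI training pipeline. Field names and key
# handling are simplified assumptions, not a complete de-identification program.
import hashlib, hmac, json, logging

logging.basicConfig(level=logging.INFO)
SECRET_KEY = b"rotate-and-store-in-a-vault"   # placeholder; never hard-code a real key

def pseudonymize(record: dict, id_fields: set[str]) -> dict:
    """Replace direct identifiers with keyed hashes; leave clinical fields intact."""
    out = {}
    for field, value in record.items():
        if field in id_fields:
            out[field] = hmac.new(SECRET_KEY, str(value).encode(), hashlib.sha256).hexdigest()[:16]
        else:
            out[field] = value
    logging.info("pseudonymized record released for training: %s", out.get("mrn"))
    return out

row = {"mrn": "123456", "age": 67, "diagnosis_code": "E11.9"}
print(json.dumps(pseudonymize(row, {"mrn"}), indent=2))
```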

A significant concern is the lack of a private right of action for individuals affected by healthcare data breaches, leaving many patients with limited recourse when their sensitive information is compromised. While many states have enacted laws more stringent than federal legislation, enforcement resources may be stretched thin.

Human Oversight and Professional Standards

In most federal and state regulatory schemes, ultimate responsibility for healthcare AI rests with the people and organizations that implement it rather than with the AI system itself. Healthcare providers must retain final authority over clinical decisions even when using AI-powered decision support tools, and healthcare AI applications must require meaningful human involvement in decision-making processes rather than defaulting to fully automated systems.

AI systems must provide healthcare providers with clear, easily accessible mechanisms to override algorithmic recommendations when clinical judgment suggests alternative approaches. Healthcare providers using AI systems must be provided with the tools to achieve system competency through ongoing training and education programs. At the organization level, hospitals and health systems must implement robust quality assurance programs that monitor AI system performance and healthcare provider usage patterns.

Medical schools and residency programs are beginning to incorporate AI literacy into their curricula, while professional societies are developing guidelines for the responsible use of these tools in clinical practice. For digital health developers, these shifts underscore the importance of designing AI systems that complement clinical workflows and support physician decision-making rather than attempting to automate complex clinical judgments.

The rapid advancement of AI in healthcare is reshaping certain medical specialties, particularly those that rely heavily on image interpretation and pattern recognition, such as radiology, pathology, and dermatology. As AI systems demonstrate increasing accuracy in reading X-rays, magnetic resonance images, and other diagnostic images, some medical students and physicians are reconsidering their specialization choices. This trend reflects broader concerns about the potential for AI to displace certain aspects of physician work, though most experts emphasize that AI tools should augment rather than replace clinical judgment.

Conclusion: Balancing Innovation and Responsibility

The healthcare AI landscape in the United States reflects the broader challenges of regulating rapidly evolving technologies while promoting innovation and protecting patient welfare. Despite regulatory uncertainties and implementation challenges, the fundamental value proposition of AI in healthcare remains compelling, offering the potential to improve diagnostic accuracy, enhance clinical efficiency, reduce costs, and expand access to specialized care.

Success in this environment requires healthcare organizations, technology developers, and regulatory bodies to maintain vigilance regarding compliance obligations while advocating for regulatory frameworks that protect patients without unnecessarily hindering innovation. Organizations that can navigate the complex and evolving regulatory environment while delivering demonstrable clinical value will continue to find opportunities for growth and impact in this dynamic sector.

The path forward demands a collaborative approach that brings together clinical expertise, technological innovation, regulatory insight, and ethical review. As 2025 progresses (and beyond), the healthcare AI community must work together to realize the technology's full potential while maintaining the trust and confidence of patients, providers, and the broader healthcare system. This balanced approach will be essential to ensuring that AI fulfills its promise as a transformative force in American healthcare delivery.

