Liability of Healthcare AI Providers in the EU: How to Navigate Risks in a Shifting Regulatory Ecosystem

Written By

Dr. Nils Lölfing

Senior Counsel
Germany

As Senior Counsel in our Technology & Communications Sector Group at Bird & Bird, I provide pragmatic, solution-driven advice on all aspects of data and information technology law. With over a decade of experience as a Tech, AI, and Data Lawyer, I have a strong focus on and extensive expertise in projects involving AI, particularly generative AI, advising clients across a range of industries, including life sciences, healthcare, automotive, and technology.

This article was co-authored by Nils Lölfing and Niranjan Nair Reghuvaran.

Healthcare AI has the potential to revolutionise patient care, from early disease detection and personalised treatment plans to breakthroughs in drug discovery and streamlined hospital workflows. Yet, despite its promise, adoption of AI across the healthcare sector in Europe remains slow. A recent European Commission study identified legal uncertainty around liability as one of the key barriers hindering the development and deployment of AI-based tools. This lack of clarity not only creates hesitation among developers, device manufacturers, and digital health companies but also risks stifling innovation, ultimately delaying the societal benefits of healthcare AI.

Such challenges could not come at a worse time. European healthcare systems are already under immense pressure due to an ageing population, a rise in chronic conditions, and a projected shortage of 4.1 million healthcare workers by 2030 (see study). These issues, compounded by persistent healthcare inequalities, underscore the urgent need for innovative, AI-driven solutions. Realising the full potential of AI, whether through improved diagnostics, personalised treatments, or optimised resource allocation, requires a legal framework that fosters trust and clarity, rather than uncertainty.

This article examines how the revised EU Product Liability Directive (Directive (EU) 2024/2853, hereafter PLD) aims to address the long-standing ambiguity surrounding liability for healthcare AI systems. The PLD is particularly relevant for healthcare compared to other sectors because it explicitly covers damages such as death, personal injury, and medically recognised psychological harm—key risks in the deployment of AI in an industry where patient safety is paramount. 

Against this background, we explore the key liability considerations for healthcare AI providers, focusing on how the revised Directive will shape accountability in the future. While the Directive introduces mandatory provisions that cannot be derogated from, we also discuss practical strategies to help providers navigate and minimise liability risks in this evolving regulatory landscape.

1. New EU product safety and liability regime: increased liability risks for healthcare AI

In the EU, an AI healthcare system used in patient care is typically regulated both as a medical device under the Medical Device Regulation (MDR) and as a high-risk AI system under the forthcoming requirements of the EU AI Act. Both regimes place stringent obligations on providers who place products on the market or put them into service in the EU: ensuring safety, efficacy, transparency, and ongoing monitoring.

1.1 High-risk AI systems, MDR, and liability

The MDR is listed as Union harmonisation legislation in Annex I of the EU AI Act, meaning that any AI product that undergoes a third-party conformity assessment under the MDR is also a high-risk AI system under the EU AI Act. The regulation of AI-powered medical devices in the EU will therefore be shaped by a combination of the MDR and the EU AI Act, which apply in parallel. The EU AI Act's requirements for these high-risk AI systems (those covered via Annex I) are set to take effect on August 2, 2027.

Even though third-party conformity assessment procedures apply only to medical devices classified as Class IIa or higher, a large portion (if not all) of standalone AI medical software is likely to be classified as at least Class IIa under the MDR.

Being high-risk amplifies liability. High-risk AI must have detailed risk management plans, high-quality training data, transparent documentation, human oversight mechanisms, and continuous monitoring after deployment. If a provider fails to meet any of these requirements, it not only breaches the EU AI Act (which can trigger heavy fines) but also creates an avenue for a finding of “defect” under Article 7(2)(f) PLD. Certain supply chain aspects also affect liability here, compounded by some legal uncertainty (see the PLD supply chain considerations below).

1.2 The revised Product Liability Directive

The revised PLD, set to be transposed into EU Member State law by December 2026, equips claimants harmed by healthcare AI, especially those suffering personal injury (including medically recognised damage to psychological health), with powerful remedies. The regime focuses on certain harm caused by defective products, including AI systems that do not comply with relevant product safety requirements (such as those of the EU AI Act), regardless of fault. It has been specifically designed to address the challenges claimants face when harm is caused by innovative technologies, which are often complex or opaque (see below). The relevance for healthcare AI providers is heightened for the following reasons:

  • It removes caps on liability for personal injury, including medically‑recognised psychological damage, and for loss or corruption of personal data, eliminating the previous €85 million ceiling and other structural barriers to recovery. Victims in healthcare contexts (e.g. misdiagnoses via AI) can thus claim full compensation for physical, mental, and data harms, without artificial ceilings. Additionally, the standard 10‑year liability period can extend up to 25 years for latent injuries, meaning liability for delayed‑onset damage is better covered.
  • The new regime grants claimants stronger rights to disclosure and legal presumptions that alleviate or shift the burden of proof. Once a claimant presents a plausible case (“facts and evidence sufficient to support the plausibility of the claim”), defendants must disclose relevant evidence that is necessary and proportionate (for AI systems, this could include logs, upgrades, risk assessments, etc.), failing which the product is presumed defective (an illustrative logging sketch follows below). In complex black‑box AI cases, where it may be difficult for claimants to prove defectiveness or a causal link due to technical or scientific complexity, courts are empowered to presume defectiveness and/or causation if it is likely that the product is defective or that there is a causal link between the defectiveness of the product and the damage.
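By way of illustration only, the minimal Python sketch below shows one way a provider might structure per-inference audit records (model version, hashed input, output summary, confidence, and a pointer to the applicable risk assessment) so that evidence of this kind could be produced quickly if disclosure is ordered. All names and fields (AuditRecord, log_inference, etc.) are hypothetical assumptions for illustration, not requirements drawn from the PLD or the EU AI Act.

```python
# Hypothetical sketch: structured audit logging of AI inference events, meant to
# illustrate the kind of records a provider might retain to respond to a
# disclosure order. Field names and the AuditRecord class are illustrative
# assumptions, not legal requirements.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    timestamp: str            # when the inference was produced (UTC, ISO 8601)
    model_version: str        # exact model/software version in use
    input_fingerprint: str    # hash of the input, so no patient data sits in the log
    output_summary: str       # e.g. predicted class or risk score
    confidence: float         # model confidence for this output
    risk_assessment_ref: str  # pointer to the applicable risk-assessment document

def log_inference(model_version: str, model_input: bytes, output_summary: str,
                  confidence: float, risk_assessment_ref: str,
                  log_path: str = "inference_audit.log") -> AuditRecord:
    """Append one audit record per inference to a JSON-lines log file."""
    record = AuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        input_fingerprint=hashlib.sha256(model_input).hexdigest(),
        output_summary=output_summary,
        confidence=confidence,
        risk_assessment_ref=risk_assessment_ref,
    )
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")
    return record
```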

2. Liability for healthcare AI providers along the supply chain

This regime applies across sectors. However, certain complications within the AI value chain interact with the revised PLD in ways that raise additional concerns for healthcare AI providers.

2.1 Joint and several liability in complex AI ecosystems

Liability in practice is often distributed across a complex chain of actors. For instance, an AI model might be developed by one company, integrated by a second into a broader clinical software system, and embedded by a third into a CE-marked diagnostic device that is then sold to hospitals. In this context, joint and several liability may apply under the revised PLD. This means that if damage is caused, multiple actors in the chain could be liable to the injured party (Article 12(1) PLD).

For example, if a false negative diagnosis results from an AI model embedded into an ultrasound device, and that model underperforms due to a training data flaw, the device manufacturer may be held strictly liable under the PLD. However, the manufacturer could seek compensation from the upstream AI provider if the error stemmed from the latter’s development process (Article 14 PLD). This right of recourse can generally not be contractually excluded, to ensure that liability is ultimately allocated to the party responsible for the defectiveness. There is only one exception to this rule, intended to support the innovative capacity of microenterprises and small enterprises that manufacture software, including AI: these upstream providers can contractually agree with the manufacturer who placed the end product on the market that the latter will not seek such recourse from them.

Consider a reverse scenario: an AI developer provides an AI diagnostic support tool, and the hospital hires an external IT vendor to set up the system and ensure it meets strict cybersecurity standards. However, after installation, the AI system does not work properly because of compatibility issues caused by the way cybersecurity was configured, not because the AI itself malfunctioned.

Although the AI system met regulatory standards initially, the causal chain between the defect and the harm may still exist if the integration misconfiguration resulted from a lack of clear and transparent information from the AI provider. Providing such information is a requirement under the EU AI Act for third parties, such as an AI provider, that supply an AI system, tools, services, components, or processes that are used in or integrated into a high-risk AI system like a medical device (Article 25(4) EU AI Act). If the AI violated product safety legislation in this respect, defectiveness will be presumed (Article 10(2)(b) PLD), and the AI provider would be liable. If, on the facts, the causal chain was broken (because the AI provider did provide transparent information), the AI provider would not be liable.

The revised PLD, which expands strict liability to standalone AI software, allows compensation not only for personal injury but also for privacy-related harms, such as those resulting from cybersecurity incidents, and introduces rebuttable presumptions of defectiveness where AI systems fail to meet safety expectations. These changes reflect the reality that harm from AI systems often results not from obvious malfunction, but from subtle misalignments with user expectations or clinical practice.

2.2 AI software updates and post-deployment control

One of the most complex aspects of liability arises from AI software updates and post-deployment control. Under the 1985 Product Liability Directive, the “producer” was not liable if the defect did not exist when the product was put into circulation (i.e., placed on the market) or if it came into being only afterwards. This was because the product was considered outside the manufacturer’s control once placed on the market: the manufacturer could no longer introduce defective changes to it.

The new PLD, though, recognises that many products remain within the manufacturer’s control even after being placed on the market, due to their digital nature. AI software or related services are considered to be within the manufacturer’s control where they are supplied by that manufacturer, or where that manufacturer authorises or otherwise consents to their supply by a third party. In such cases, both the manufacturer and the AI provider (as the manufacturer of a defective component, Article 8(1)(b) PLD) could be held jointly liable for any damages under the PLD. Conversely, where the software was supplied outside the manufacturer’s control, only the AI provider is liable, provided it is responsible for the defectiveness and subject to the PLD (Recital 36 PLD).

For example, if a diagnostic imaging device is marketed as including an AI-based diagnostic application, but the healthcare provider must download the AI software from the manufacturer’s platform after purchasing the device, the device is still considered to be under the manufacturer’s control (Article 4(5) PLD).

This becomes particularly problematic when the AI model provider is different from the system integrator or device manufacturer. Manufacturer’s control as described in Article 4(5) specifically includes situations in which a third party introduces defects; the factor that determines liability is the authorisation or consent of the manufacturer. Consider a variation of the example above: the AI software is downloaded not from the manufacturer’s platform but from a third party’s platform (having been developed by that third party) after the device is purchased. Provided the manufacturer authorised or consented to that supply, it should remain liable for any defects in the AI application (as opposed to the physical medical device), alongside the AI software developer, even if the defect only manifested after the device was placed on the market.

This situation introduces many uncertainties; in many cases, the models themselves will be provided by one party, the high-risk AI system by another, and the physical device by a third. The final product may still be the compliance responsibility of the end-device manufacturer, but managing changes, and the liability that flows from them, can become quite complex, particularly when the end-device manufacturer does not adequately establish control over updates.

2.3 Balancing human oversight and AI provider liability 

AI-generated recommendations are ultimately actioned by human clinicians. Overreliance on AI risks diminishing clinicians' decision-making skills, especially in varied clinical settings or when AI fails. To mitigate this, AI must serve as a supportive tool with strong human oversight, involving clinicians in its design, testing, and monitoring. One might assume this human-in-the-loop role breaks the causal chain, especially under the PLD. However, the reality is more nuanced. 

While strict liability under the PLD attaches to the defective product itself, hospitals and clinicians may still face fault-based professional liability claims. If it can be shown that a clinician should have recognised an erroneous AI output and failed to act accordingly, they could be held at fault and therefore liable under national contractual and tort law regimes. For AI providers, though, the exposure does not end there: even where hospitals compensate patients, they may seek recourse from the AI provider under contractual or tort principles if there was a defect for which the provider is liable under the PLD (Article 13(1) PLD). Clear disclosure of foreseeable risks and information on operating the system, provided to healthcare providers, can help AI providers limit such contractual or tortious exposure to a certain extent.

At the same time, AI providers cannot evade liability under the PLD by listing all conceivable side effects of the system and attributing any defect to misuse by the healthcare provider. Recital 31 provides that reasonably foreseeable use also encompasses misuse that is “not unreasonable” under the circumstances of use of the product. Although not mentioned in the articles, the recital’s interpretation could be followed by courts when applying Article 7(2)(b) and (c).

This could include, for example, a clinician inputting incomplete patient data during a high-workload shift or a nurse misinterpreting on-screen alerts due to alarm fatigue. These may be situations that a manufacturer could reasonably foresee and design safeguards against, possibly using parameters such as shift times, number of patients treated, active logging, and so on.

In AI, especially for systems that generate direct clinical outputs or recommendations, courts may treat developers as quasi-clinical actors with their own duty to warn about risks, limitations, and use constraints. This logic is already reflected in the mandatory information provisions of the EU AI Act, and a liability regime building on it is within the realms of possibility, meaning liability for the AI provider if it fails to inform properly about the risks and limitations of its AI-based outputs. The EU AI Act’s provisions on human oversight (Article 14 EU AI Act) clarify the interaction between healthcare providers and AI providers. High-risk AI systems must be designed with tools enabling effective human oversight to minimise risks to health, safety, or fundamental rights, even under foreseeable misuse. Oversight measures should align with the system’s risks, autonomy, and context of use, and the absence of appropriate tools designed into the AI system by the AI provider will further increase liability.

3. Mitigation strategies for AI providers

In this complex legal and technical landscape, contracts play a key role in allocating risk. However, they are not the sole mitigation strategy: transparency, clear communication with healthcare providers, and enabling effective human oversight are equally critical. Once a defect and harm occur, escaping liability becomes extremely challenging. Therefore, it’s essential to consider not only contractual approaches but also other proactive mitigation strategies.

Contracts can be particularly useful in addressing risks inherent to AI systems, especially those beyond a provider’s control, while keeping in mind the concept of defect under Article 7(1) and (2) of the PLD. For instance, an AI system used for triage in emergency care may generate probabilistic risk scores that classify patients into urgency categories based on symptoms and electronic health record (EHR) data. These outputs, being statistically driven, are inherently subject to error margins, particularly for borderline cases. Such margins should not automatically qualify as defects, provided they are clearly addressed in the system’s communications, documentation, and contracts.
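As a purely illustrative sketch, assuming invented category thresholds and a documented error margin of ±0.05, the Python snippet below shows how a probabilistic triage output might surface its documented uncertainty and flag borderline cases, so that the statistical error margins referred to above are visible in the output rather than left implicit.

```python
# Illustrative sketch only: reporting a probabilistic triage score together with
# its documented error margin and a borderline flag. The thresholds and the
# +/-0.05 margin are invented assumptions, not values from any cited source.
from dataclasses import dataclass

URGENCY_THRESHOLDS = {"high": 0.75, "medium": 0.40}  # assumed category cut-offs
DOCUMENTED_ERROR_MARGIN = 0.05                        # assumed, stated in the IFU/contract

@dataclass
class TriageResult:
    risk_score: float
    urgency: str
    error_margin: float
    borderline: bool   # True if the margin straddles a category boundary

def classify(risk_score: float) -> TriageResult:
    """Map a risk score to an urgency category and flag borderline cases."""
    if risk_score >= URGENCY_THRESHOLDS["high"]:
        urgency = "high"
    elif risk_score >= URGENCY_THRESHOLDS["medium"]:
        urgency = "medium"
    else:
        urgency = "low"
    # Flag cases where the documented error margin could change the category.
    borderline = any(
        abs(risk_score - threshold) <= DOCUMENTED_ERROR_MARGIN
        for threshold in URGENCY_THRESHOLDS.values()
    )
    return TriageResult(risk_score, urgency, DOCUMENTED_ERROR_MARGIN, borderline)

print(classify(0.42))  # e.g. medium urgency, flagged as borderline
```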

That said, under the PLD, AI providers and device manufacturers cannot contract out of liability for personal injury, as this is non-negotiable. Instead, they should focus on avoiding unnecessary liability risks stemming from unclear or poorly designed contracts. 

Contracts with device manufacturers, platform integrators, or hospital clients should include:

  • Clear scope and disclaimers: Define the AI system’s intended use and limitations very precisely. If the AI is probabilistic (e.g. triaging urgency levels rather than giving a definitive diagnosis), this should be documented. Any known error rates or uncertainties should appear in the manual or contract. Overpromising (e.g. calling the AI “100% accurate” in communications) could prove problematic.
  • Warranty limitations: Specify that the AI provider is not responsible for third-party negligence (like an integrator’s misconfiguration) or for uses outside the agreed scope. For example, an AI used only in adult patients should not be covered for paediatric use unless separately agreed. Include clauses stating that if the hospital alters the system (e.g. changes thresholds, integrates untested new data feeds), the AI provider’s warranty does not cover resulting errors.
  • SLAs: Service level agreements with upstream AI providers, clarifying update responsibilities and model performance standards. SLAs should address change management mechanisms, detailing retraining and versioning procedures for evolving AI systems. They should also specifically address manufacturer control and consent mechanisms for updates.

Non-contractual mitigation methods could include:

  • Detailed performance disclosures: Include accuracy metrics, sensitivity/specificity, error margins, and known limitations in the instructions for use and accompanying technical documentation. Provide context-specific guidance (e.g., where the system performs best with certain imaging modalities or patient populations).
  • Dynamic risk communication: Issue periodic performance reports and risk notices when post-market monitoring reveals new failure modes, biases, or performance degradation. Use structured formats (e.g., standardised risk bulletins) to support consistent interpretation by healthcare providers.
  • Scenario-based guidance: Provide case-based examples of appropriate and inappropriate uses. Outline foreseeable misuse patterns and corresponding mitigation steps.
  • Clear escalation protocols: Furnish guidance on when clinicians may override AI recommendations (e.g., low confidence levels, out-of-distribution patient data), while taking care that such communication does not unduly constrain physicians’ autonomy. Use soft warnings versus hard warnings to indicate risk severity and encourage critical review.
  • Implement appropriate human oversight tools: Incorporate user interfaces that display confidence intervals, highlight anomalies, or flag borderline cases where human judgment is critical. Dashboards showing historical AI recommendations, and instances where those recommendations were overridden, can also be provided (see the sketch after this list).
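The following minimal Python sketch illustrates one possible way to combine the confidence-based escalation and soft/hard warning ideas above. The thresholds, field names, and the oversight_gate function are illustrative assumptions, not mechanisms prescribed by the EU AI Act, the MDR, or the PLD.

```python
# Hypothetical sketch of a human-oversight gate: outputs below a confidence
# threshold, or flagged as out-of-distribution, are routed for mandatory
# clinician confirmation with a "hard" warning; mildly uncertain outputs carry
# a "soft" warning inviting critical review. Thresholds are invented.
from dataclasses import dataclass
from typing import Optional

SOFT_WARNING_THRESHOLD = 0.85   # assumed value
HARD_WARNING_THRESHOLD = 0.60   # assumed value

@dataclass
class OversightDecision:
    show_recommendation: bool
    warning_level: Optional[str]       # None, "soft", or "hard"
    require_human_confirmation: bool

def oversight_gate(confidence: float, out_of_distribution: bool) -> OversightDecision:
    """Decide how an AI recommendation is presented to the clinician."""
    if out_of_distribution or confidence < HARD_WARNING_THRESHOLD:
        # Hard warning: recommendation withheld pending clinician confirmation.
        return OversightDecision(False, "hard", True)
    if confidence < SOFT_WARNING_THRESHOLD:
        # Soft warning: shown, but flagged for critical review by the clinician.
        return OversightDecision(True, "soft", True)
    return OversightDecision(True, None, False)

print(oversight_gate(confidence=0.72, out_of_distribution=False))  # soft warning
```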

While many of these steps are already needed for compliance with the MDR or the EU AI Act, such compliance does not insulate providers from civil litigation if harm occurs due to a defect. AI providers should not view regulatory compliance as a silver bullet for all legal responsibility. Steps must be taken to ensure that these measures are effective and appropriate to the context of deployment. The assistance provided to deployers may therefore go beyond what the EU AI Act or the MDR prescribes, but in many cases it will be warranted to limit harm.

4. Outlook

Regulatory compliance, at the end of the day, is not the ceiling but the floor: it sets the baseline for trust and accountability in any industry. In the context of healthcare, where the stakes are uniquely high, a robust legal and organisational framework is not a mere checkbox; it is a competitive advantage. AI providers that can demonstrate rigorous compliance, clarity of communication, and sound liability management will be preferred partners for hospitals and device makers who share the risk. In such a high-stakes environment, regulators and customers alike will gravitate toward AI solutions backed by meticulous risk management. In this way, foresight in law and contract becomes not just a legal necessity but a critical enabler of AI adoption in healthcare, fostering safer and more effective AI-driven care.
