AI as a digital asset

Civil Liability Regime for AI

Latest Developments

On 28 September 2022, the European Commission published a proposal for a revision of the Directive on liability for defective products (“Revised Product Liability Directive”) in order to accommodate emerging digital technologies. On the same day, the European Commission published a proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (“AI Liability Directive”).

Revised Product Liability Directive – The Council reached a general approach on 14 June 2023. The European Parliament adopted a common position in October 2023, and the committee decision to enter into interinstitutional negotiations (trilogue) was confirmed by plenary.

AI Liability Directive – The European Parliament and the Council of the EU are still in the process of reaching their common positions.


The proposal for the Revised Product Liability Directive will apply to product liability claims for damages in the form of physical harm, property damage and loss/corruption of data (on the condition that this property/data is not used exclusively for professional purposes). The proposal specifically provides that AI systems and AI-enabled goods are ‘products’ within the meaning of the Directive and provides a basis to claim compensation not only from hardware manufacturers but also from software providers and providers of digital services that affect the functioning of the product. This includes parties who integrate AI systems into other products and parties who are responsible for changes to AI systems already on the market (including changes triggered by software updates and machine learning).

Compensation is available when defective AI systems cause damage, without the injured party having to prove fault, just as for any other product. The Directive provides a basis on which to claim disclosure of evidence from the defendant when the claimant has presented facts and evidence sufficient to support the plausibility of the claim. The Directive also lightens the claimant’s burden of proof by introducing presumptions of defectiveness and causality, e.g. in complex cases, when defendants fail to comply with an order to disclose evidence, and when products fail to comply with safety requirements, such as those in the (Draft) Artificial Intelligence Act. The proposal further removes the current minimum claim value threshold of EUR 500.

The proposal for the AI Liability Directive will apply to all other forms of non-contractual (civil) liability, where the injured party does have to prove fault. Compensation is available in any form available pursuant to national laws (including non-material damages resulting from, e.g., discrimination and privacy breaches). The proposal introduces a right to disclosure of evidence when the claimant has presented facts and evidence sufficient to support the plausibility of a claim for damages caused by a high-risk AI system (within the meaning of the Artificial Intelligence Act). The proposal also introduces a presumption of causality (between the fault of the defendant and the output, or failure to produce an output, of the AI system) in case of non-compliance with a legal duty of care intended to protect against the damage in question. However, this presumption only applies if it can be considered reasonably likely that the fault influenced the output of the AI system, or its failure to produce an output, and the claimant has demonstrated that this output or failure gave rise to the damage. For a high-risk AI system, the presumption applies when the defendant has failed to comply with an order for disclosure of evidence or where the system is non-compliant with several requirements laid down in the Artificial Intelligence Act. For a non-high-risk AI system, the presumption only applies where a national court considers it excessively difficult for the claimant to prove causality.

How could it be relevant for you?

Both proposals create a stricter liability regime than currently exists in most Member States and would be relevant to all manufacturers, importers, distributors and operators of AI systems in the EU.

Next steps

The Council of the EU and the European Parliament will provide their input, after which voting will take place according to the ordinary legislative procedure.

*Information is accurate up to 27 November 2023
