Infringing AI Patents: Who’s Liable? A UK Perspective

This article was first published by Law360 and was co-authored by Alexander Korenberg of Kilburn & Strode.

Introduction

The past year has seen a steep rise in private investment and impressive technical achievements in AI and machine learning, according to the 2022 AI Index. [1] At the same time, patent filings worldwide have grown apace, increasing by 75% in 2021 to some 30 times their 2015 level. While patenting activity is increasing, there is thus far little guidance from the courts as to how claims to AI technology will be construed. With the ever-increasing commercial activity in this field, it is only a matter of time until we see more AI-related patent litigation. Potential plaintiffs and defendants, and their advisors involved in filing and litigating AI patents, need to be prepared.

What forms of claim are commonly granted in Europe?

Under the European Patent Office’s criteria, AI technology can be patented based either on its application to a technical purpose or on its technical implementation, i.e. an implementation designed with the internal workings of a computer in mind. While other jurisdictions, such as the UK and the US, frame the test differently, the result can broadly be expected to be similar. The most common way to claim AI technology is therefore as a method of using the AI technology for a specified purpose, which may be a specific technical purpose or a more generic one if the implementation itself can be considered technical. Such method claims naturally go together with corresponding claims to systems configured to implement the method and to computer program products, such as computer-readable media carrying instructions which implement the method.

The EPO guidelines acknowledge that training a machine learning model for a technical purpose, and obtaining the training data used to train it, may also contribute to the technical character of an invention. In practice, the EPO allows applicants to have claims to training the model and to obtaining the training data in addition to a use claim. The same applies at the UK Intellectual Property Office. Claims to training a neural network may prove particularly interesting if they protect the trained model as a direct result of the training process.
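
To make concrete what these two claimed activities typically look like in practice, the following is a minimal, hypothetical Python (PyTorch) sketch; the data, model architecture and purpose are invented placeholders rather than any particular claimed invention.

```python
# Minimal, hypothetical sketch of the two activities discussed above:
# (1) obtaining training data and (2) training a model on that data.
import torch
import torch.nn as nn

# (1) "Obtaining training data": synthetic data standing in for
# measurements gathered for some technical purpose.
inputs = torch.randn(100, 4)
labels = (inputs.sum(dim=1) > 0).float().unsqueeze(1)

# (2) "Training the model": fit a small classifier to that data.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()

# The learned parameters of `model` are the direct result of the
# training process, i.e. the "trained model" referred to in the text.
```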

Are developers or users liable for infringement of AI patent claims?

The claim types commonly used for AI patents and the way AI technology is developed and deployed give rise to questions regarding the scope of protection offered by the claims, and importantly who will be liable for infringement of those claims.

English law prohibits the making, importation, use, keeping, disposal or offer for disposal in the UK of products which fall within the scope of a patent claim (s60(1)(a) Patents Act 1977). Infringement also arises from using a patented method in the UK or, subject to certain knowledge requirements, offering the method for use in the UK (s60(1)(b)). Method claims are further infringed by the importation, use, keeping, disposal or offer for disposal in the UK of products obtained directly by means of the claimed process (s60(1)(c)). Finally, supplying or offering to supply in the UK “means relating to an essential element” of an invention, with the knowledge that the means are suitable for putting, and are intended to put, the invention into effect in the UK, is also an infringing act (s60(2)).

Indirect liability can also arise for the infringing acts of others. This includes the vicarious liability of a principal for the infringements of their agents, or liability as a joint tortfeasor for a party which has procured an infringement or entered into a common design with another party which has given rise to an infringement.

We consider below how these principles of direct and indirect liability are likely to apply to developers and users of AI technology.

AI Developers

AI developers will often be at the highest risk of infringing AI patent claims, and will clearly be in the frame as primary infringers of claims covering the training of an AI model for a particular purpose if they carry out the training activities in the UK. As a computationally demanding activity, AI training often takes place in the cloud, which can give rise to situations where the individuals setting up and running the training process are based in the UK while the training itself takes place on servers outside the UK. Does this qualify as using a method “in the UK”?

A key factor will be whether the benefits of the method are directly realised in the UK. In Illumina v Premaitha in 2017, [2] samples were taken from patients in the UK for processing in Taiwan. A method of diagnostic testing was nonetheless held to be ‘used in the UK’ because the test results were delivered to the patients in the UK. Applying this logic to a method of training an AI model, individuals in the UK who set up and run a training process on servers outside the UK would be likely to infringe a claim to a method of training if the training process resulted in the trained model being obtained in the UK.
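
By way of illustration, the following hypothetical sketch (host name and file paths are invented for the example) shows the scenario just described: an engineer in the UK launches training on a server located abroad and then retrieves the trained model to a machine in the UK, so that the result of the method is obtained here.

```python
# Hypothetical sketch: operator in the UK, compute outside the UK,
# trained model delivered back to the UK. Host and paths are invented.
import subprocess

REMOTE = "trainer@eu-server.example.com"  # server outside the UK

# Start the training run on the remote machine.
subprocess.run(["ssh", REMOTE, "python", "train.py"], check=True)

# Retrieve the trained model to the UK-based machine, i.e. the
# result of the training method is obtained in the UK.
subprocess.run(["scp", f"{REMOTE}:model.pt", "./model.pt"], check=True)
```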

AI developers may also supply their customers with software for the development of AI models, e.g. pre-trained or partly trained models for use in transfer learning. If use of the software would result in the use of a claimed method of training an AI model, offering the software for use in the UK could give rise to infringement. Alternatively, supplying (or offering to supply) the software for use in the UK could make the developer liable for contributory infringement.
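
For context, transfer learning typically means the customer takes the supplier’s pre-trained model and trains only a small part of it on their own data. A minimal, hypothetical sketch, assuming a recent PyTorch/torchvision install (the customer data here is a synthetic placeholder):

```python
# Hypothetical transfer-learning sketch: freeze the supplier's
# pre-trained weights and train only a new final layer.
import torch
import torch.nn as nn
from torchvision import models

# Model supplied by the developer, pre-trained on a large dataset.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False  # keep the supplied weights fixed

# Replace the classification head for the customer's own two-class task.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Customer data (synthetic placeholder).
images = torch.randn(8, 3, 224, 224)
targets = torch.randint(0, 2, (8,))

for step in range(5):
    optimizer.zero_grad()
    loss = loss_fn(backbone(images), targets)
    loss.backward()
    optimizer.step()
```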

AI developers can also incur liability if they offer a service based on a trained model to customers based in the UK, e.g. where a customer is given access to a platform that allows them to submit input data and receive the output of the AI system. Offering this sort of service could potentially infringe claims covering a method of using a trained model for its intended purpose. While it is the customer who uses the trained model for its intended purpose, the developer could incur liability by offering the method for use in the UK.
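
From the user’s side, such a service usually looks like a simple request/response exchange. A hypothetical sketch (the endpoint URL and payload schema are invented for illustration and are not a real API):

```python
# Hypothetical sketch of the hosted-inference pattern described above:
# a user in the UK submits inputs and receives the model's output.
import json
import urllib.request

ENDPOINT = "https://api.example.com/v1/predict"  # invented endpoint

payload = json.dumps({"input": [0.2, 1.4, -0.7, 3.1]}).encode("utf-8")
request = urllib.request.Request(
    ENDPOINT,
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    result = json.loads(response.read())

print(result)  # e.g. {"output": ...}, received by the user in the UK
```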

Hosting the AI solution on servers outside the UK would not necessarily avoid liability. A court may well consider that offering users the ability to submit inputs and receive outputs in the UK is sufficient to qualify as offering the method for use in the UK, notwithstanding the intermediate processing steps that occur elsewhere, as demonstrated in Illumina.

AI Users

AI users who procure an “off the shelf” AI solution will be exposed to AI patents which claim a method of using a trained model for a particular purpose, or a system configured to implement such a method.

A more complex question is whether an AI model can be a product obtained directly by means of a claimed process for training a model for an intended purpose. If it can, the user of the model in the UK would themselves incur liability for direct patent infringement under s60(1)(c). This would provide the AI patent holder with a powerful right to prohibit the distribution and use of AI systems trained using their claimed method.

A first challenge to this infringement theory is that an AI model is a data structure, not a tangible object. Is it therefore meaningful to speak of importing, keeping, or disposing of an AI model? The closest the English courts have come to addressing this question is Halliburton v Smith [2005] EWHC 1623 (Pat), where a CAD file for a drill bit was considered the immediate result of a claimed simulation process, and the drill bit itself a direct product of the claimed method. The reasoning adopted by the court suggests that the CAD file should be considered a “product” of the method. However, the court was not required to answer this question directly or to consider the other potential consequences of finding that a data structure is a product, e.g. would transferring the CAD file to someone in the UK have given rise to an infringement under s60(1)(c)? It remains an open question whether an AI model can be considered a “product” for the purposes of s60(1)(c).
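
The parallel with the CAD file is easy to see in practice: a trained model is routinely serialised to, and loaded from, an ordinary file. A minimal, hypothetical PyTorch sketch (the file name and model are invented for the example):

```python
# Hypothetical sketch: a trained model is, in practice, a data
# structure that can be written to and read from a file, much like
# the CAD file considered in Halliburton v Smith.
import torch
import torch.nn as nn

model = nn.Linear(4, 1)  # stands in for a trained model

# Serialise the model's learned parameters to a file...
torch.save(model.state_dict(), "trained_model.pt")

# ...which can then be copied, transferred or "imported" like any file.
restored = nn.Linear(4, 1)
restored.load_state_dict(torch.load("trained_model.pt"))
```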

Assuming an AI model is a “product”, a second challenge to the infringement theory is establishing that the AI model has been “obtained directly” from the claimed process. An AI model will typically be one element of a wider software package, and the model may be refined and developed further after the initial training process. The English courts establish whether a product is “obtained directly” from a process by identifying the product which emerges at the stage the claimed process is complete, and then considering whether any further processing results in a loss of identity between that immediate product and the resulting product. A loss of identity is found where the resulting product has lost the essential characteristics of the immediate product, assessed with reference to the characteristics which arise from the inventive concept of the patent. Whether an AI model that has undergone further processing after a claimed training process remains “obtained directly” from that process will therefore be a question of fact and degree.

For patents claiming a method of training an AI model, the solution to this conundrum may be to include a product claim to a computer-readable medium containing a model trained in accordance with the claimed method. A product claim of this nature would eliminate questions regarding the status of an AI model as a product and whether it had been obtained directly from the claimed process. Instead, a court would “only” have to assess whether the computer-readable medium contained an AI model which had indeed been trained using the claimed training method, and whether that medium had been imported, used, kept, disposed of, or offered for disposal in the UK. While this may pose evidentiary challenges for a claimant, it would not suffer from the conceptual difficulties discussed above in relation to s60(1)(c).

Where an AI user is exposed to patent infringement liability through their use of an “off the shelf” AI model, the approaches to managing that liability (e.g. through contractual warranties and indemnities from the supplier) will be similar to those for many other types of software solution. However, where an AI system is developed by combining customer data with supplier technology, a customer may be at greater risk from claims that they are jointly liable for the development process itself, e.g. they could face claims that they have acted pursuant to a common design with the AI developer. If the training process has been conducted using the customer’s IT infrastructure, there is also a possibility that they could themselves be directly liable for infringing acts. AI users in this position may therefore want to pay particular attention to the wording of any IP warranties or indemnities offered by the AI supplier, to ensure they are sufficiently protected against third-party patent infringement claims. Suppliers, for example, will commonly seek to limit warranties and indemnities to infringement of third-party copyrights or trade secrets; this would not offer sufficient protection where a customer is concerned that their role in developing an AI system could expose them to liability for patent infringement.

Conclusion

The patent infringement risk in relation to AI is in many ways no different from the risks associated with the use of other forms of software. That said, there are a number of open questions, in particular regarding cloud-based AI technology and the fact that much AI innovation may be captured by claims relating to training AI models. To date there are no infringement cases in which AI technology has been specifically considered, nor has the validity of AI patents been seriously tested in litigation. Considering the commercial importance of AI, it can only be a matter of time before this changes.

[1] The State of AI in 9 Charts (stanford.edu).

[2] Illumina v Premaitha [2017] EWHC 2930 (Pat) at [500]-[508].