Contracting for generative AI in the Financial Services sector – what in-house lawyers need to know

The financial services (FS) sector is increasingly seen as being at the forefront of the adoption of disruptive digital technologies. So much so that many FS companies see themselves as technology companies that happen to operate in a regulated sector.

So, it is no surprise that the FS sector is (alongside the life sciences sector) among the leading adopters of generative AI services (AI services). Currently, many FS clients are trialling AI services to help launch chatbots for limited use cases (internal and/or external facing).

There are a number of legal issues to consider before implementing AI services. Some are similar to the legal issues that arise when implementing a cloud service (assuming the AI services are underpinned by a software-as-a-service (SaaS) model), but others are specific to the procurement of AI services. Given the pace at which AI services are developing, it can often be difficult to stay up to date with what is market practice (and, arguably, there is no market practice yet as the technology is still so new).

This short article sets out some key legal issues to consider when implementing AI services in the FS sector. There are other issues to consider but we think this is a helpful starting point!

Understand the set-up

It is important to spend time with the business to understand the technical set-up of the AI services being procured. Questions to consider include:

  • Are the AI services delivered on a SaaS basis (e.g. the AI service provider separately hosts its own AI model and uses it to deliver the AI services, which are provided over the internet via a browser or an API integration)?
  • Will the AI services be based on one “instance” of the AI model made available to many customers, or will they be based on a private “instance” made available only to one customer (in each case, on a SaaS basis)?
  • Will the customer instead install its own copy of the AI model and host it itself in order to provide the AI services?
  • Is the AI model built using open-source software or is it proprietary?
  • Does the customer need to train and/or fine-tune the AI model using customer data prior to using the AI services?
  • Will the customer be vectorising any customer data (vector data) and providing access to that vector data (hosted separately by the customer) in order to train/fine-tune the AI model for a specific purpose prior to using the AI services (see the sketch after this list)?
  • Will the AI service provider agree not to re-use the learnings (including machine learnings) acquired by the relevant AI model through its use of any customer data or vector data?
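
By way of illustration, the short sketch below shows (in simplified Python) what calling a SaaS-hosted AI model to “vectorise” customer data might look like in practice. The provider name, URL and endpoint are hypothetical placeholders rather than any real service, and actual deployments will differ.

```python
# Illustrative sketch only. "api.example-ai-provider.com" and its
# "/v1/embed" endpoint are hypothetical placeholders for whatever API
# the AI service provider actually exposes.
import json
import requests

def vectorise(texts: list[str]) -> list[list[float]]:
    """Convert customer documents into numeric vectors (embeddings)
    by calling the provider's (hypothetical) embedding endpoint."""
    resp = requests.post(
        "https://api.example-ai-provider.com/v1/embed",  # hypothetical URL
        json={"inputs": texts},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["embeddings"]

# The customer keeps the resulting vector data on its own systems and
# grants the provider access to it, rather than handing over the raw
# documents themselves.
customer_docs = ["Q3 credit policy ...", "complaints handling manual ..."]
with open("customer_vector_store.json", "w") as f:
    json.dump(vectorise(customer_docs), f)
```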

Understanding the answers to these questions is important. For example:

  • If the AI services are delivered on a SaaS basis, then the customer will need to consider issues such as security of the solution, access to data, business continuity and service levels in relation to availability of the AI services.
  • If the AI model is built using open-source software, the customer will need to be aware of the open-source software licence terms governing its use and any related restrictions imposed by such terms.
  • If training data is being used to train the AI model, who is responsible for procuring such data and for ensuring that its use (including to generate outputs) will not infringe any third party’s IP?

Implementation

Often the AI service provider will have a pre-trained, pre-existing AI model which is then trained and/or fine-tuned further using the data provided by customers (Input(s)) prior to the AI services being provided (based on the trained AI model).

However, before using the AI services in a live production environment, it is important that the customer has the opportunity to test the trained AI model to ensure it meets the relevant criteria in relation to explainability, compliance with laws, accuracy of outputs, freedom from bias, etc. Only once these tests have been completed should the customer “accept” the trained AI model for use in a live production environment (and, ideally, only once this acceptance has occurred should the customer pay for the implementation services and be on the hook to pay for use of the AI services in a live production environment).
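
As a simplified illustration of what such acceptance testing might look like, the sketch below runs an agreed set of test prompts against the trained AI model and checks the results against an accuracy threshold. The endpoint, helper names and threshold are all hypothetical assumptions for illustration; equivalent checks for bias, discrimination and explainability would sit alongside this.

```python
# Illustrative sketch only: pre-acceptance checks a customer might run
# before "accepting" the trained AI model. The endpoint is a hypothetical
# placeholder and the threshold is a stand-in for whatever acceptance
# criterion the parties actually agree in the contract.
import requests

ACCURACY_THRESHOLD = 0.95  # hypothetical contractual acceptance criterion

def query_model(prompt: str) -> str:
    """Send a test prompt to the provider's (hypothetical) endpoint."""
    resp = requests.post(
        "https://api.example-ai-provider.com/v1/generate",  # hypothetical
        json={"prompt": prompt},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["output"]

def run_acceptance_tests(test_cases: list[dict]) -> bool:
    """Return True only if the trained model meets the agreed accuracy
    criterion on the agreed acceptance test set."""
    correct = sum(
        1 for case in test_cases
        if query_model(case["prompt"]).strip() == case["expected"]
    )
    accuracy = correct / len(test_cases)
    print(f"Accuracy on acceptance set: {accuracy:.2%}")
    return accuracy >= ACCURACY_THRESHOLD
```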

IP and confidentiality

The AI model/trained AI model will receive Inputs and then generate outputs in response to those Inputs (Output(s)).

Sometimes the Inputs may comprise proprietary or confidential information of the customer that it does not want the AI service provider to be able to re-use for the benefit of other customers. In addition, the customer may not want the trained AI model (trained on its data) to be used for the benefit of other customers.

A good position for the customer is:

  • As between the AI service provider and the customer, the customer owns the rights (including any IP) in the Inputs.
  • As between the AI service provider and the customer, the customer owns the rights (including any IP) in the Outputs – this gives the customer the freedom to fully use the Outputs. However, if the Outputs are not based on Inputs provided by the customer (but instead might be based on inputs provided by the AI service provider), then consider whether owning the rights (including any IP) in the Outputs will be acceptable to the AI service provider. If the AI service provider is to own the Outputs, then ensure the customer has an appropriate licence to use the Outputs as it requires, with any necessary restrictions preventing the AI service provider from using commercially sensitive Outputs with competitors / other third parties.
  • The customer grants a licence to the AI service provider to use the Inputs for the term of the agreement to train the AI model and then, following completion of such training, to provide the AI services (which will include the creation of the relevant Outputs).
  • The customer grants a licence to the AI service provider to use the Outputs for the term of the agreement to provide the AI services / continually train the AI model.
  • On termination/expiry of the agreement, the Inputs are returned to the customer (or at its election, deleted) and the AI service provider can no longer use the Inputs including to further train its AI model, save that the AI service provider can use the learnings (including machine learnings) acquired by it from the use of the Inputs during the term of the agreement, provided that the AI service provider continues to comply with its confidentiality obligations under the agreement.

Where the customer is uploading Inputs received from third parties that comprise proprietary information / IP or are otherwise based on such third parties’ confidential information, the customer should ensure that it has the necessary consents from those third parties to sublicense the Inputs to / share such confidential information with the AI service provider for the purposes envisaged by the relevant agreement with the AI service provider.

Finally, it is important that the IP and confidentiality clauses are correctly aligned. For example, if the Inputs are likely to include confidential information and the licence to use the Inputs is for the term of the agreement, then it should be clear in the confidentiality clause (or consequences of termination clause) that any customer confidential information should be returned to the customer or deleted on termination/expiry of the agreement.

IP Indemnity

The AI service provider may request that the customer indemnifies it for any loss it suffers in connection with a claim from a third party that the use of the Inputs (provided by the customer) by the AI service provider infringes that third party’s IP. Before accepting such an indemnity, the customer should consider if it can back off this liability to any third parties from whom it has received the Inputs (or data incorporated into such Inputs).

The customer should consider asking for an IP indemnity from the AI service provider in the event that the use of the AI services, including the generation and use of any Outputs, infringes a third party’s IP, save to the extent the claim arises from the use of the Inputs provided by the customer (in which case the IP indemnity does not apply). This will then protect the customer in respect of its use of the AI services and any Outputs (to the extent the Outputs are not based on its Inputs).

Fees

Often AI services based on a SaaS model will require that the fees are paid in advance.

If fees are paid in advance then, if possible, it should be made clear that, if the agreement terminates early, the customer can recover a pro-rata refund of any fees for the AI services relating to the period during which such AI services were not provided.
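
The arithmetic itself is simple; by way of a worked example (with invented figures), a pro-rata refund might be calculated as follows:

```python
# Illustrative arithmetic only: the figures are invented, and the actual
# refund mechanism should be spelled out in the fees and termination
# clauses of the agreement.
def pro_rata_refund(prepaid_fee: float, days_in_term: int, days_used: int) -> float:
    """Refund the portion of a prepaid fee covering the unused period."""
    unused_days = days_in_term - days_used
    return prepaid_fee * unused_days / days_in_term

# e.g. a £120,000 annual prepayment, terminated after 90 of 365 days:
print(round(pro_rata_refund(120_000, 365, 90), 2))  # 90410.96
```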

Sometimes this position is not possible as the AI service provider states that fees are non-cancellable and non-refundable, except for some limited carve-outs. If this is the case, then the aforementioned carve-outs should be clearly set out. For example, if the agreement terminates for any reason (other than due to the customer’s default), then any pre-paid fees should be refunded.

Security

If the AI services are based on a SaaS model, then the customer will want to understand the security set-up that the AI service provider has in place, including to prevent unauthorised access to or use of, or loss or corruption of, its Inputs. In addition, the customer should consider whether it should require the AI service provider to provide warranties in respect of the use of anti-virus software to prevent the introduction of malicious software from the AI services (including via the Outputs) into the customer’s systems. The customer may also wish to have audit rights to check the security of the relevant systems underpinning the AI services.

Service levels and support services

For AI services based on a SaaS model, check that the AI service provider has appropriate service levels in place in relation to the availability of the AI services and the support services provided (e.g. responsiveness of a helpdesk, commitments in relation to resolving faults raised with the AI service provider). In addition, the customer should check that service credits are provided in respect of such service levels and determine whether they are the sole and exclusive remedy for a service level failure or a non-exclusive remedy (the latter being more customer-friendly).
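
By way of illustration only, the sketch below shows how a monthly availability service level and a tiered service-credit regime are typically calculated. The target, tiers and credit percentages are invented placeholders, not market figures.

```python
# Illustrative sketch only: the availability target, tiers and credit
# percentages below are invented placeholders, not market standards.
def availability_pct(total_minutes: int, downtime_minutes: int) -> float:
    """Availability over the measurement period, as a percentage."""
    return 100 * (total_minutes - downtime_minutes) / total_minutes

def service_credit(avail: float, monthly_fee: float) -> float:
    """Credit owed to the customer under a hypothetical tiered regime."""
    if avail >= 99.5:              # target met: no credit
        return 0.0
    if avail >= 99.0:
        return 0.05 * monthly_fee  # 5% credit
    if avail >= 98.0:
        return 0.10 * monthly_fee  # 10% credit
    return 0.20 * monthly_fee      # 20% credit

# e.g. 300 minutes of downtime in a 30-day month (43,200 minutes),
# against a £10,000 monthly fee:
avail = availability_pct(43_200, 300)        # ≈ 99.31%
print(avail, service_credit(avail, 10_000))  # credit of £500
```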

Performance standards

The customer should consider what commitments it wants from the AI service provider, including in relation to accuracy of Outputs, freedom from bias and discrimination, and the explainability of the trained AI model. It is unlikely that the AI service provider will agree to absolute obligations in respect of these commitments. However, most AI service providers will be aware that customers will want some commitment and so may agree to “reasonable endeavours”-type obligations.

Compliance with laws

The text of the EU AI Act (AI Act) is not yet finalised, as the AI Act is currently under trilogue negotiations, which are likely to continue until the end of this calendar year or even into Q1 2024. However, it may be prudent (especially for long-term agreements) to consider the scope of the AI Act and include appropriate obligations on the AI service provider, given that (based on the current draft of the AI Act as at the date this article was first published) the AI Act will apply to certain AI service providers and/or the use cases underpinning the AI services. For example, a customer could consider including a statement that “the AI service provider will perform its obligations in accordance with, and ensure the [AI services] comply with, applicable law, including, where applicable, the regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)”.

Outsourcing regulation

Financial institutions will also need to consider compliance with regulatory outsourcing requirements in relation to the use of the AI services. For example, if the AI services comprise an outsourcing which is “critical or important” or “material”, then customers will need to consider including certain rights and obligations in their contracts with service providers, which can be onerous (including broad audit and access rights over the AI service provider for the customer, its auditors and regulators, and commitments in relation to exit and business continuity). In relation to AI services, data security is also likely to be an important issue. We also think that UK regulators will expect customers to have appropriate oversight and control to help understand the training and development of the machine learning algorithms, so as to help manage the risk of biases that lead to customer detriment or unfairness.

Data protection

The procurement and implementation of AI services that involve the processing of personal data requires careful consideration of various data protection and privacy issues. The technical complexities of AI systems and the unique risks involved can make data protection issues particularly challenging to deal with. The use of AI to process personal data often triggers the need for a data protection impact assessment and information from the AI service provider may be needed to complete this.

Data protection regulators are increasing their focus on the use of AI. The UK Information Commissioner’s Office (ICO), for example, included actions in its ICO25 strategic plan to tackle AI-driven discrimination on the basis that this is an issue that can have damaging consequences for people’s lives. The ICO has published guidance on AI and data protection as well as an AI and data protection risk toolkit.

Recital 78 GDPR says that producers of AI solutions should be encouraged to take into account the right to data protection when developing and designing their systems, and to make sure that controllers and processors are able to fulfil their data protection obligations. However, those procuring an AI solution will still need to ensure that the implementation and use of any solution is compliant. There is still a need for an independent evaluation of the AI service and data protection compliance at the procurement stage (rather than afterwards). Since new risks and compliance considerations may arise during the course of deployment, those procuring AI solutions should regularly review compliance and be able to modify the service, or switch to another provider, if its use is no longer compliant.

Where a third-party provider of AI services processes personal data, businesses will need to consider the role of the third-party provider (controller, joint controller or processor) to understand which data protection obligations apply and what compliance steps need to be taken.

Conclusion

AI services are developing at break-neck speed and there is a lot for lawyers to get up to speed on as more and more FS businesses seek to adopt AI to help streamline processes and reduce costs. The above list of issues is a helpful starting point for any lawyer tasked with the review of contracts for AI services.

For more information, please get in touch with the Fintech team.
