AI in FS: risks, opportunities, regulation

After decades of gradual evolution, AI is having its breakthrough moment. Or should we say “moments”, given the sheer number of breakthroughs and innovations being publicised. These breakthroughs have been facilitated by three key factors: the greater availability of vast datasets with which to train AI systems, greater computational power and new AI architectures (e.g. transformers), and the wider availability of AI talent and resources. A key recent development has been the combination of machine learning with natural language, referred to as “generative AI”, and the creation of large language models (LLMs) such as OpenAI’s GPT-4 and Google’s Bard.

AI has the potential to transform the financial services (FS) sector by automating routine tasks that require the analysis of vast amounts of data, potentially leading to faster and more accurate outputs. For example, it was recently reported that Morgan Stanley, an international investment bank, had been training an advanced generative AI chatbot, built on OpenAI’s latest LLM technology, on selected data vetted by the bank, with the aim of creating a tool to assist its financial advisors when advising wealth management clients. The tool would provide these advisors with insights at a far greater scale and speed than a human could, potentially transforming the wealth management industry.

But the opportunities for firms authorised in the FS sector (Firms) need to be balanced against the risks. For example, how do we deal with bias in the data used to train AI systems, with copyright infringement issues and with data protection concerns? How do we deal with the risk that these AI systems’ capabilities outrun their creators’ understanding, resulting in systemic risks to the financial system and detrimental outcomes for consumers? How does existing FS regulation apply to this new technology? Is the existing regulatory framework sufficient to address AI risks, or is a bespoke framework required?

There is a lot to unpack, so we’ve broken this article down into two parts:

  • In Part 1 we examine what AI is, how it works and some potential use cases for the FS sector.
  • In Part 2 we examine some of the practical legal, regulatory and commercial risks in-house lawyers working in the FS sector need to think about.

Read the full article here
