When AI goes wrong – part 1: trading algorithms

Written By

Will Bryson

Partner
UK

As a Partner in the Tech Transactions team, I primarily advise clients on technology contracts across the Technology and Defence sectors. I focus on emerging and cutting-edge technology (Artificial Intelligence in particular), more 'traditional' defence contracting, and the intersection of the two, helping clients navigate the rapidly evolving Defence Tech sector.

Artificial Intelligence applications are appearing everywhere, promising to transform our lives in many ways. But what happens when things go wrong? In this series of two articles, we consider the question of who should be responsible when an AI makes a mistake. We’ll do this by exploring two current or near-future examples of the technology: a trading algorithm and an autonomous car.

The examples we’ve taken are not “general” or “strong” AIs (i.e. a computer system that could potentially take on any task it is faced with – a technology which, for the time being at least, remains the stuff of science fiction). Instead, we are considering “narrow” or “weak” AIs: computer systems that analyse data in order to take actions that maximise their chance of success at an identified goal.

Our example AI application for this article is a trading algorithm.
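To make the idea of a "narrow" AI concrete, the sketch below shows what a very simple, rule-based trading decision might look like. It is not taken from the article and all names, thresholds and prices are hypothetical; real algorithmic trading systems are far more sophisticated and typically learn their behaviour from data rather than following a fixed rule.

```python
# A purely illustrative sketch of a "narrow" trading rule: compare a short-term
# and a long-term moving average of recent prices and emit a buy/sell/hold signal.
# All names, windows and prices here are hypothetical.

from statistics import mean

def moving_average(prices, window):
    """Average of the most recent `window` prices."""
    return mean(prices[-window:])

def trading_signal(prices, short_window=5, long_window=20):
    """Return 'buy', 'sell' or 'hold' based on a moving-average crossover."""
    if len(prices) < long_window:
        return "hold"  # not enough history to decide
    short_avg = moving_average(prices, short_window)
    long_avg = moving_average(prices, long_window)
    if short_avg > long_avg:
        return "buy"   # recent prices trending above the longer-term average
    if short_avg < long_avg:
        return "sell"  # recent prices trending below the longer-term average
    return "hold"

# Example: feed in a (hypothetical) price history and act on the signal.
prices = [100.0, 101.2, 99.8, 102.5, 103.1, 104.0, 103.6, 105.2,
          106.0, 105.5, 107.1, 108.3, 107.9, 109.0, 110.2, 109.8,
          111.0, 112.4, 111.9, 113.0, 114.2]
print(trading_signal(prices))  # -> 'buy' in this upward-trending example
```

Even a toy rule like this raises the questions the article goes on to explore: when such a system trades badly, is the fault with the rule's designer, the firm deploying it, or the data it was given?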

View the full article on digitalbusiness.law >
