When AI goes wrong – part 1: trading algorithms

Written By

Will Bryson

Partner
UK

As a Partner in the Tech Transactions team, I primarily advise clients on technology contracts across the Technology and Defence sectors. I focus on emerging and cutting-edge technology (Artificial Intelligence in particular), more 'traditional' defence contracting, and the intersection of the two - helping clients navigate the rapidly evolving Defence Tech sector.

Artificial Intelligence applications are appearing everywhere with promises to transform our lives in many ways. But what happens when things go wrong? In this series of two articles, we consider the question of who should be responsible when an AI makes a mistake. We’ll do this by exploring two current or near-future examples of the technology: a trading algorithm and an autonomous car.

The examples we’ve taken are not “general” or “strong” AIs (i.e. a computer system that could potentially take on any task it is faced with – a technology which, for the time being at least, remains the stuff of science fiction). Instead, we are considering “narrow” or “weak” AIs: a computer system which can analyse data in order to take actions that maximise its chance of success at an identified goal.

Our example AI application for this article is a trading algorithm.
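By way of illustration only (and not drawn from the full article), a "narrow" trading algorithm can be as simple as a rule that compares short-term and long-term price trends and acts on the result. The sketch below uses invented names, windows and data purely to show the shape of such a system; real trading algorithms are, of course, far more sophisticated.

# Hypothetical, highly simplified illustration of a "narrow" AI trading rule:
# a moving-average crossover that decides whether to buy, sell or hold.
# All names, windows and prices here are invented for illustration only.

from statistics import mean

def decide_trade(prices, short_window=5, long_window=20):
    """Return 'buy', 'sell' or 'hold' based on a moving-average crossover."""
    if len(prices) < long_window:
        return "hold"  # not enough history to act on
    short_avg = mean(prices[-short_window:])  # recent trend
    long_avg = mean(prices[-long_window:])    # longer-term trend
    if short_avg > long_avg:
        return "buy"   # recent prices trending above the longer-term average
    if short_avg < long_avg:
        return "sell"  # recent prices trending below the longer-term average
    return "hold"

# Example with a made-up, steadily rising price series
prices = [100 + 0.5 * i for i in range(25)]
print(decide_trade(prices))  # prints 'buy' for this rising series

Even at this toy scale, the question the article explores is visible: the rule acts autonomously on data, and responsibility for a bad trade could plausibly sit with the developer, the firm deploying it, or the party supplying the data.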

View the full article on digitalbusiness.law >
