When AI goes wrong – part 1: trading algorithms

Written By

Will Bryson

Partner
UK

As a Partner in the Tech Transactions team, I primarily advise clients on technology contracts across the Technology and Defence sectors. I focus on emerging and cutting edge technology (Artificial Intelligence in particular), more 'traditional' defence contracting, and the intersection of the two - helping clients navigate the rapidly evolving Defence Tech sector.

Artificial Intelligence applications are appearing everywhere, promising to transform our lives in many ways. But what happens when things go wrong? In this series of two articles, we consider the question of who should be responsible when an AI makes a mistake. We’ll do this by exploring two current or near-future examples of the technology: a trading algorithm and an autonomous car.

The examples we’ve taken are not “general” or “strong” AIs (i.e. a computer system that could potentially take on any task it is faced with, a technology which, for the time being at least, remains the stuff of science fiction). Instead, we are considering “narrow” or “weak” AIs: a computer system which can analyse data in order to take actions that maximise its chance of success at an identified goal.

Our example AI application for this article is a trading algorithm.
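To make the idea of a “narrow” AI concrete, the sketch below is a deliberately simplified, hypothetical trading rule written in Python. The function name, window sizes and prices are invented for illustration only and bear no relation to any real trading system or to the specific example discussed in the full article; it simply shows a system that analyses data and takes an action intended to further a single, identified goal.

```python
from statistics import mean

def moving_average_signal(prices, short_window=5, long_window=20):
    """Toy moving-average crossover rule (hypothetical, for illustration only):
    compare a short-term and a long-term average of recent prices and emit
    a trading action aimed at a single goal (profit)."""
    if len(prices) < long_window:
        return "hold"  # not enough data to form a view
    short_avg = mean(prices[-short_window:])
    long_avg = mean(prices[-long_window:])
    if short_avg > long_avg:
        return "buy"   # recent prices trending above the longer-run average
    if short_avg < long_avg:
        return "sell"  # recent prices trending below the longer-run average
    return "hold"

# Example: the algorithm "decides" purely from the data it is given.
recent_prices = [100, 101, 99, 102, 104, 103, 105, 107, 106, 108,
                 110, 109, 111, 113, 112, 114, 116, 115, 117, 119]
print(moving_average_signal(recent_prices))  # -> "buy"
```

Even in a toy rule like this, each individual trade follows from the data and parameters rather than from a human instruction given trade by trade, which is precisely what makes the question of responsibility for a mistake worth examining.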

View the full article on digitalbusiness.law >
