When AI goes wrong – part 1: trading algorithms

Written By

Will Bryson

Partner
UK

As a Partner in the Tech Transactions team, I primarily advise clients on technology contracts across the Technology and Defence sectors. I focus on emerging and cutting-edge technology (Artificial Intelligence in particular), more 'traditional' defence contracting, and the intersection of the two, helping clients navigate the rapidly evolving Defence Tech sector.

Artificial Intelligence applications are appearing everywhere with promises to transform our lives in many ways. But what happens when things go wrong? In this series of two articles, we consider the question of who should be responsible when an AI makes a mistake. We’ll do this by exploring two current or near-future examples of the technology: a trading algorithm and an autonomous car.

The examples we’ve taken are not “general” or “strong” AIs (i.e. a computer system that could potentially take on any task it is faced with – a technology which, for the time being at least, remains the stuff of science fiction). Instead, we are considering “narrow” or “weak” AIs: a computer system which can analyse data in order to take actions that maximise its chance of success at an identified goal.

Our example AI application for this article is a trading algorithm.
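To make the idea of a “narrow” AI concrete, the sketch below shows a deliberately simplified, hypothetical trading rule in Python: it analyses recent price data and returns a buy, sell or hold decision aimed at a single goal. It is an illustration only; the function name and parameters are our own, and real AI-driven trading systems of the kind discussed in the full article rely on far more sophisticated statistical or machine-learning models than this fixed rule.

```python
# A deliberately simplified, hypothetical sketch of an automated trading rule,
# illustrating the idea of a "narrow" AI: a system that analyses data (recent
# prices) and takes an action (buy/sell/hold) aimed at a single identified goal.
# Real AI-driven trading systems are far more sophisticated than this fixed rule.

from statistics import mean


def moving_average_signal(prices: list[float],
                          short_window: int = 5,
                          long_window: int = 20) -> str:
    """Return 'buy', 'sell' or 'hold' based on a moving-average crossover."""
    if len(prices) < long_window:
        return "hold"  # not enough data to act on
    short_avg = mean(prices[-short_window:])  # average of the most recent prices
    long_avg = mean(prices[-long_window:])    # average over a longer horizon
    if short_avg > long_avg:
        return "buy"   # recent prices trending above the longer-term average
    if short_avg < long_avg:
        return "sell"  # recent prices trending below the longer-term average
    return "hold"


# Example usage with made-up price data
recent_prices = [100 + 0.5 * i for i in range(25)]  # a steadily rising series
print(moving_average_signal(recent_prices))  # -> "buy"
```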

View the full article on digitalbusiness.law
