Artificial Intelligence applications are appearing everywhere, with promises to transform our lives in many ways. But what happens when things go wrong? In this series of two articles, we consider the question of who should be responsible when an AI makes a mistake. We’ll do this by exploring two current or near-future examples of the technology: a trading algorithm and an autonomous car.
The examples we’ve taken are not “general” or “strong” AIs (i.e. a computer system that could potentially take on any task it is faced with – a technology which, for the time being at least, remains the stuff of science fiction). Instead, we are considering “narrow” or “weak” AIs: computer systems that can analyse data in order to take actions that maximise their chance of success at an identified goal.
Our example AI application for this article is a trading algorithm.