When AI systems cause harm: the application of civil and criminal liability

Written By

Russell Williamson

Senior Associate
UK

I'm a senior associate in our Dispute Resolution Group in London.

I specialise in advising clients on complex disputes stemming from commercial contracts and corporate relationships, particularly in the technology, media, entertainment and sport, retail and consumer, public procurement and automotive sectors.

“Good morning, Dave.” 

It’s fairly safe to say that, in the main, those of us who practise commercial law do not have sufficient expertise in computer science to assess whether any given computing system is based on artificial intelligence (AI) techniques or on more traditional system development techniques. Indeed, AI systems are often described in terms that, to us laypersons, seem better suited to science fiction – as exemplified by HAL, the sentient computer in Stanley Kubrick’s 1968 film 2001: A Space Odyssey – than to real life. Even in debates between computer scientists, AI has been light-heartedly defined as “whatever hasn’t been done yet”, suggesting that it is more akin to magic or wishful thinking than reality.

But, viewed from a lawyer’s more blame-focused perspective, systems based on AI techniques or methods are, possibly even more so than traditional systems, developed by combinations of separate designers, developers, software programmers, hardware manufacturers, system integrators and data or network service providers.

Full article available here.
