Embracing AI & Wellness – Where does the buck stop when AI goes wrong?

Written By

Lorraine Tay

Partner
Singapore

I am head of our Intellectual Property Group in Singapore. With more than 20 years' experience, I have honed a deep familiarity with international and cross-border issues involving IP commercialisation and brand management.

Pin-Ping Oh

Partner
Singapore

As a partner in our Intellectual Property Group in Singapore and part of the Media, Entertainment & Sports team, I focus on contentious IP matters including IP infringement litigation, patent revocation actions and trade mark oppositions, but also advise clients extensively on non-contentious matters including IP commercialisation, patent and trade mark freedom-to-operate issues and brand protection.

AI – The Next Big Thing for Wellness

AI is hyped to revolutionise many industries, and Wellness is no exception. AI is touted as being able to provide personalised recommendations and advice based on an individual’s biometric information, personal preferences and lifestyle – whether in terms of workouts, sleep, diet or even stress management. When wearables purport to monitor the individual’s heart rate, breathing, sleep and steps, and these metrics are added to the equation, the potential for hyper-personalisation is enormous. It would not be a stretch to expect your next fitness instructor / nutritionist / sleep coach to be an AI-driven bot!

But… it’s not all fun and games!

Whilst AI is set to be a game changer, there have been reports of AI serving up erroneous decisions and less-than-optimal recommendations.

For instance, in 2023, the US National Eating Disorders Association was forced to shut down its AI-powered wellness chatbot, named Tessa, when it started giving problematic advice. Tessa was meant to provide guidance about eating behaviours, but was found to be serving up standard weight loss tips which could be harmful to people with eating disorders. Tessa had reportedly deviated from a specific algorithm written by eating disorder experts.

Hypothetically, if the problem had not been detected and Tessa’s advice had resulted in harm to an individual suffering from an eating disorder – for instance, by aggravating instead of helping the condition – who should ultimately be responsible for the error?

As we have seen with disputes surrounding liability for accidents caused by autonomous vehicles, when it comes to determining liability for errors made by an AI system, there is no easy answer. There are two key reasons for this.

First, traditional legal principles were not developed with AI in mind. Take, for example, the tort of negligence. To succeed in this claim, the claimant must show, amongst other things, that the defendant breached his duty of care. However, given the autonomous nature of AI, it may not be clear which human actor or legal person is to blame for the AI’s decisions. Also, traditionally, a defendant would have breached his duty of care if he failed to exercise the standard of care that a reasonable person would have exercised in the circumstances. However, establishing the applicable standard of care is difficult when AI is oftentimes a “black box”, such that it may not be clear even to a human how the AI came to its decisions.

Further, given the myriad stakeholders within the AI ecosystem, each playing different roles in designing, developing and deploying an AI system, it becomes even more challenging to determine whom to pin the blame on. There could be a host of potential reasons for an AI going wrong, including an error in the software code or the deployer having failed to adequately monitor the AI’s output. It is also possible that the AI’s erroneous decision fell within the range of its expected outputs (given that such outputs are inherently probabilistic in nature); in such a case, the issue could lie with the design of the AI or its training data. As such, investigations will be necessary to ascertain how or why exactly the error arose.

What can a business do?

For businesses looking to tap on the potential of AI, here are three tips to mitigate the risks, even as regulators strive to develop frameworks to create a trusted environment for AI innovation:

  • Internal governance - Implement internal governance procedures to ensure that all AI systems are adequately supervised, so that if/when the AI goes rogue, this can be promptly detected and mitigated (a minimal sketch of such output monitoring appears after this list). Of course, this also entails ensuring that the staff responsible for supervising the systems are trained to identify the potential problems that could arise, and to respond appropriately in the various scenarios.
  • Contractual safeguards - Put in place contractual safeguards to carve up the parties’ roles, responsibilities, risks and liabilities as clearly as possible. This applies equally to vendor-facing and customer-facing contracts. For customer-facing contracts, consumer protection laws have to be taken into account, as these could restrict the enforceability of certain terms if they are deemed to be unreasonable.
  • Warnings and disclaimers - Finally, warn customers that they are interacting with an AI rather than a human, and of the limitations of the system. It is as yet unclear how effective this is in limiting liability to users where an AI was used in accordance with its intended use case. However, if the AI was used for an unintended purpose, disclaimers could help support an argument that the AI provider should not be liable for any harm suffered by the customer.
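To make the first tip concrete, here is a minimal sketch, in Python, of what automated supervision of an AI system’s output could look like in practice: a simple filter that screens a wellness chatbot’s draft replies for out-of-scope advice (such as the weight loss tips that tripped up Tessa) and flags incidents for human review. Everything here, from the function names to the keyword list, is hypothetical and for illustration only; a production system would rely on far more robust classification than keyword matching.

```python
# Illustrative sketch only: a hypothetical output guardrail for a wellness
# chatbot. All names (check_reply, BLOCKED_TOPICS, etc.) are invented for
# illustration and do not refer to any real product or library.

BLOCKED_TOPICS = {
    "calorie deficit": "weight-loss advice",
    "lose weight": "weight-loss advice",
    "skip meals": "restrictive eating advice",
}

FALLBACK_REPLY = (
    "I'm not able to help with that. "
    "Please speak to a qualified professional."
)

def check_reply(reply: str) -> tuple[str, list[str]]:
    """Screen a draft chatbot reply before it reaches the user.

    Returns the reply to send plus a list of flags for the audit log,
    so that a human supervisor can review incidents promptly.
    """
    flags = [
        label
        for phrase, label in BLOCKED_TOPICS.items()
        if phrase in reply.lower()
    ]
    if flags:
        # Out-of-scope content detected: substitute a safe fallback
        # and surface the incident for human review.
        return FALLBACK_REPLY, flags
    return reply, []

if __name__ == "__main__":
    draft = "Try to maintain a calorie deficit and skip meals when busy."
    sent, flags = check_reply(draft)
    print(sent)   # the safe fallback reply
    print(flags)  # ['weight-loss advice', 'restrictive eating advice']
```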

So now, who’s ready for AI? Watch this space!


This article is produced by our Singapore office, Bird & Bird ATMD LLP. It does not constitute legal advice and is intended to provide general information only. Information in this article is accurate as of 4 September 2024.
