AI in the Travel Industry - Legal Friend or Foe?

The travel industry has always been an early adopter of technology, giving rise to tremendous innovation in product and service offerings. What used to be done online is now being done through mobile technology and social media, and Artificial Intelligence (AI) is increasingly being used to predict travel choices, personalise services, complete bookings and manage in-trip and post-trip needs. EY’s recent Disruption Index™ highlights that “travel & leisure is the sector undergoing the fastest rate of change from disruptive technology, driven by artificial intelligence and virtual reality”. But what are the legal implications of this transformation? Here we look at some of the key developments and legal considerations and ask: how can you ensure your business remains commercially agile whilst staying legally compliant?

Evolution, rather than revolution…

AI has very much been heralded as a revolution but, in fact, it isn't a new technology at all: it was being discussed as early as the 1940s, although only more recently has its use become widespread. Many hotels and travel businesses already have offerings that incorporate elements of AI. Booking.com’s chatbot, for example, performs real-time translation, supports 43 languages and provides 24/7 customer service for travel-related queries, and it seems to have become a hit. Many airports are already using biometrics to ensure a smoother passenger experience from check-in to boarding, and visitors to the 2020 Tokyo Olympics can expect to be greeted on arrival at Haneda Airport by robots available to help with everything from luggage transportation to language assistance.

Of course, AI brings with it a number of commercial advantages: reduced operating costs (savings that can be passed on to the passenger), improved efficiency, greater personalisation and a better customer experience – all of which drive greater loyalty and profitability. In today's highly competitive environment, AI might be just enough to give a business an edge over its competitors.

Why we need explainability (for now)

AI applications and their algorithms tend to operate as black boxes – closed systems that give limited insight into how they reach their outcomes – and this can pose problems for key (human) decision-makers in a business, who are often unaware of how the systems work. For relatively harmless uses, such as predicting preferences and making recommendations, this may not be a problem and people are often happy to use AI. However, as complexity increases, or in higher-risk environments, trust tends to diminish and people demand explainability for fear of losing control.

AI – and the lack of explainability – does present some novel legal issues, such as the difficulty, or even inability, of allocating fault if something goes wrong. Ownership of newly generated intellectual property can also be an issue. But all of this can be worked through, and your contracts will be crucial in providing clarity where the law may not offer it. It's a case of thinking outside the box – pardon the pun – and considering where new risks might arise. Solutions are evolving steadily to respond to these new risks, and it is likely that businesses will become more accepting of these technological advances over time as people begin to use AI and see more evidence of its benefits.

Is a new legislative framework required?

There is a lot of legislative activity around autonomous vehicles and 'robots'; however, this shouldn't be seen as a necessity for the wider AI landscape. AI is, after all, still just software – clever software – but not a legal person, and standard approaches to risk and liability can still be applied to AI solutions. You may just have to think about them in a different way.

Will you warrant it?

Ironically, customers of AI products often expect higher standards precisely because the results come from a machine rather than a human. Yet typical software warranties (such as compliance with a specification or fitness for purpose) may actually be less appropriate for AI solutions, and developers may be less willing to give them because of the learning functionality built into AI. As the use of AI becomes more commonplace, it is possible that new, more service-based warranties will become the norm.

It's the machine's fault! Or is it?

Determining whether an AI product is faulty is not as straightforward as it might first appear; new considerations come into play, such as who, or what, is providing the training data, how much the customer is involved in the training and testing, and what 'tested' even means. These are issues particular to AI that businesses buying or developing it will need to grapple with.

And what if the AI product goes wrong? Whose fault is it? And what does 'wrong' even mean in the context of AI? An unexpected result might not mean the AI solution is flawed: perhaps the wrong data was fed into the chatbot, perhaps the data was biased, or perhaps the AI software did not have sufficient training. Just because you don't get the answer you expect – or want – does not mean the answer is wrong. The fault is not always easy to find, and businesses should be clear about what results they expect to see before being satisfied that a solution can be released to customers.

Cyber and all the other risks we cannot (currently) see…

In travel, every stage of the customer journey contains data points that can be stored and analysed by AI, including patterns of behaviour, booking dates, age and marital status. AI can capture these valuable insights quickly and efficiently, providing opportunities to target and reach consumers more effectively. In doing so, however, AI often requires access to multiple data sources across an organisation, and this potentially creates a security vulnerability. AI solutions may also be able to re-identify data that has been anonymised – for example, by cross-referencing other data sets – meaning you can’t ignore data protection compliance. Hackers are also using AI solutions themselves to identify security vulnerabilities that can be exploited.
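To make the re-identification risk concrete, the following is a minimal illustrative sketch in Python – the data sets, column names and values are entirely hypothetical – showing how a supposedly anonymised booking extract can be linked back to named individuals simply by joining it with another data set on shared quasi-identifiers such as age, postcode and travel date:

import pandas as pd

# Hypothetical "anonymised" booking extract: direct identifiers removed,
# but quasi-identifiers (age, postcode, travel date) retained.
bookings = pd.DataFrame({
    "age": [34, 52, 34],
    "postcode": ["EC2A", "SW1A", "EC2A"],
    "travel_date": ["2019-05-01", "2019-05-02", "2019-06-10"],
    "destination": ["Lisbon", "Rome", "Tokyo"],
})

# Hypothetical second data set (e.g. a loyalty-scheme export) that still
# carries names alongside the same quasi-identifiers.
loyalty = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "age": [34, 52],
    "postcode": ["EC2A", "SW1A"],
    "travel_date": ["2019-05-01", "2019-05-02"],
})

# A simple join on the shared quasi-identifiers re-identifies two of the
# "anonymised" records; AI-driven cross-referencing at scale only makes
# this kind of linkage easier.
reidentified = bookings.merge(loyalty, on=["age", "postcode", "travel_date"])
print(reidentified[["name", "destination"]])

Even this trivial join recovers who travelled where; the legal point is that data treated as anonymised can fall back within the scope of data protection law once re-identification becomes reasonably likely.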

Allocating risk and liability around AI is more complex but remains achievable. Some risks will be new and won't have been encountered in a commercial or legal environment before. It may take some time to adjust to this new reality, but it won't be the first time a new technology has made us think outside the box.

And, finally, a smiley will never truly replace a smile!

Feelings are mixed as to whether AI is a legal friend or foe, but it's here to stay and will find more and more applications within the travel industry in the coming years, bringing with it inevitable legal challenges and opportunities.

Feedback from within the industry indicates that, whilst AI brings with it a wealth of commercial opportunities, such as more effective personalisation and targeted marketing, businesses see it as complementing, rather than replacing, the human touch: cooks are still needed to prepare travellers' favourite meals, beds still need to be made, and everyone loves a welcoming smile and a tip from a local. Tellingly, Skyscanner reports that its customers tend to interact with its bots in a very human way, asking their names and sending emojis in response.
