AI as a judge? News from AI Decoding IP - Day 2

By David Egan

06-2019

Bird & Bird’s David Egan attended the second day of the UKIPO and WIPO joint conference on the interaction between AI technology and intellectual property law and practice.

AI as judge, examiner and member of the skilled team

The potential role of AI as a decision-making tool was explored by Dr. Ilanah Fhima, reader in IP law at University College London, who discussed whether machine learning models could be used to determine likelihood of confusion for trade marks. She explained that the EUIPO has recently launched an AI image-searching tool which identifies marks similar in appearance to a user-uploaded image. Although the tool still has significant room for improvement - for example, a sample image supplied by Dr. Fhima showing a Lego brick returned a mark for a box of biscuits - this is just the first stage of the EUIPO's roll-out, which promises greater accuracy in future.
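To illustrate the kind of mechanism such a tool might use (the EUIPO has not published its method, and the mark names and vector values below are entirely hypothetical), an image-similarity search typically embeds each image as a feature vector via a neural-network encoder and then ranks registered marks by cosine similarity to the query image - a minimal sketch:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical feature vectors for registered marks. In practice these would
# be produced by an image encoder; real embeddings have hundreds of dimensions.
registered_marks = {
    "toy-brick-mark": [0.9, 0.1, 0.3],
    "biscuit-box-mark": [0.7, 0.4, 0.2],
    "shoe-logo-mark": [0.1, 0.9, 0.8],
}

def most_similar(query_vector, marks, top_n=2):
    """Rank registered marks by visual similarity to the query image."""
    scored = [(name, cosine_similarity(query_vector, vec))
              for name, vec in marks.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_n]

query = [0.88, 0.15, 0.28]  # embedding of the user-uploaded image
print(most_similar(query, registered_marks))
```

Because similarity is ranked rather than judged, a query can surface visually close but conceptually unrelated marks - which is one way a Lego brick might plausibly return a box of biscuits.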

Gwilym Roberts, chairman of Kilburn & Strode, suggested that businesses may find it attractive to forego human judges entirely in favour of AI systems. Even a slightly flawed decision-making system may be acceptable to businesses, since it would deliver a fast, low-cost determination enabling nimble decision making. While in reality that future may be some time away, AI as a tool to augment (rather than replace) human decision making - for example, to provide a preliminary decision before a judge makes a final determination - may not be far away.

Access to justice risks

A challenge that several speakers identified was ensuring adequate access to justice. In a future in which AI is used for preliminary decisions, with formal decisions made by humans at a later stage, the risk, they warned, is a two-tier justice system: only businesses and individuals with adequate resources could guarantee bringing their case before a human judge, and this could harm the public perception of AI.
Although the basis of the two-tiered system was not discussed in detail, it is easy to envisage two possibilities: one in which lower-value claims are allocated to an AI-judged track, with human judges reserved for higher-value claims, and another in which “AI judges” become the default but a human judge can be requested for a higher application fee.

Patent sufficiency – getting the terminology right

In relation to patents, the increased investment in AI across multiple sectors was highlighted by Heli Pihlajamaa, Director of the Directorate of Patent Law at the European Patent Office (EPO), who explained there has been a significant increase in AI-related patent applications over the past 20 years, with the EPO now receiving such applications in relation to about 1,200 patent families each year.

Ms. Pihlajamaa explained that, in relation to disclosure in patent applications, there is a concern over "black box" patenting: that inventions in the AI space may not be sufficiently disclosed. In her view, however, this issue is common to most new technologies and typically resolves itself once general agreement on the terminology of the new technology is reached. She suggested that examiners and industry are approaching that point with AI technologies.

Obviousness – adding AI to the equation

Sir Robin Jacob, the Sir Hugh Laddie chair of IP law at University College London, followed with insightful comments on how patent law will survive the fourth industrial revolution that is AI. He explained that AI might make inventions more difficult to protect since, in addition to enabling better patentability searches, the question of obviousness will have to address whether the invention would have been obvious to the person skilled in the art, such person being equipped with an AI tool.

Sir Robin added that AI systems could simply become an additional part of the team of persons skilled in the art, a concept already well established in English law. The effect would be to make some things more obvious.

Social access - global challenges

Many speakers emphasised the importance of ensuring that the tools and benefits of AI are shared across borders and across all sectors of the economy. Professor Prabuddha Ganguli, CEO of Vision-IPR and Visiting Professor at the Indian Institute of Technology, argued that as AI is a technology of fundamental societal importance, governments have a duty to use IP laws to ensure it is humane and inclusive. This will involve promoting interoperable, multi-lingual, multi-cultural AI and discouraging AI expertise from accruing only to well-resourced nations and organisations.

This could be achieved by developing compulsory, FRAND-like licensing terms for "socially essential patents", Professor Ganguli suggested. The concern, however, is that - much like the concept of AI judges - this could lead to a two-tier patent system: proprietors of patents deemed "socially essential" would be obliged to license on potentially less favourable terms, while proprietors of other AI patents would retain wide discretion to refuse a licence or to license on commercially competitive terms. It is also unclear how and by what measure a patent would be deemed "socially essential", and to whom the responsibility of making that determination would fall.

Should AI attract IP rights at all?

Picking up on the themes explored by Professor Ganguli, Professor Lawrence Lau of Hong Kong University argued that, as intellectual property laws have evolved to protect creations resulting from the labour of the individual for the benefit of humankind, and as AI does not involve any human creativity, no IP rights should attach to AI-generated works.

Professor Lau endorsed John Locke's view that human labour has an intrinsic value, and that the policy reason for granting a monopoly right is to recognise this value in the creation process. The outcome of this approach in the AI context would be that data sets for training and learning models, as human-created works, would be protectable, while the output from AI systems (e.g. inventions, new musical works or new designs) would not be capable of protection.

IP regulators: industry input needed

The conference closed with an address from Mr. Tim Moss, CEO of the UK IPO, who highlighted some of the important themes discussed during the two-day conference. Many speakers had touched on the importance of data to the development of AI, given the need for high-quality training data to feed learning models. Mr. Moss noted that policy makers are alive to this debate and are keen to continue engaging with industry to find the right balance between open access to data and individual privacy.

Mr. Moss noted that the UK IPO, based on its discussions with the IP community, considers that the current IP frameworks are capable of accommodating the changes prompted by AI developments. However, he called on the wider IP community to continue to supply evidence of scenarios that challenge the existing system, in order to provide policy makers with the evidence they require to make changes that address industry concerns. Only by doing this can regulators hope to address the challenges in this rapidly changing area.

Our report of Day 1 of the conference can be found here.