One wet Sunday afternoon I was playing with an interface to OpenAI’s machine learning model, GPT-2, which was trained to predict the next word in a sentence and can now generate articles of synthetic text from a sentence provided to it. I typed, “Can AI own the copyright in the work that it has generated?” After a little pause (for thought, I would like to say), the AI produced some text which, while it did not make the writing of this article redundant, was nevertheless grammatically correct and very readable. It ended with a flourish: “and that is a philosophical, not a legal, question.”
Amusing, and sometimes disconcertingly “human”, AI is now part of our daily lives. It finishes our sentences (for example, Gmail’s Smart Compose), it finds the information we need on a myriad of topics (think of Alexa and other voice assistants) and it curates the ads and news we view online [Note 1]. It is also doing things which we think of as the exclusive preserve of humans: it is creating artworks and writing music.
Not only that, but it is also very “clever”. DeepMind’s neural network, AlphaGo Zero, taught itself the complex game of Go and after three days beat its predecessor, AlphaGo, which had itself beaten the 18-times world champion. Other AI models are helping scientists discover new drugs and develop innovations in clean energy.
This raises interesting questions for intellectual property (“IP”). When AI writes an article, paints a picture or creates some music, who owns the resulting copyright in the work? When AI develops a new idea which might be patentable, who owns the invention?
Who owns an AI-created work?
In order to answer this question, you have to ask first: who is the author of the work? The author is the person who creates the work and will be the first owner, unless that person is an employee in which case the copyright is owned by their employer. As is apparent from this summary of the law, creating a work is essentially a human activity. This is supported, if support is needed, by reference to the automatic transfer of copyright from employee to employer; AI cannot be said to be an employee.
But how do AI-created works fit into this scheme? At one end of the scale, where someone has used a computerised tool, such as text editing software, to help them compose an article, that person will still be the author of the literary work if the work is original. To be original, a work must be the author’s or artist’s own intellectual creation, reflecting their personality (see the decisions of the EU Court of Justice in Infopaq, C-5/08, and Painer, C-145/10).
At the other end of the scale, a human who simply provides training data to an AI system and presses “go” is unlikely to be considered the author of the resulting work. An example might be “The Next Rembrandt”, a unique 3D-printed painting created by an AI system which mimicked Rembrandt’s style, including his brushstroke technique. Those who worked on this fascinating project have pointed out that the AI produced the painting solely from being trained on a dataset of Rembrandt’s paintings.
In most countries, if no human author can be identified for the work, no copyright will subsist in it and therefore it will fall into the public domain. Although there may be copyright in the AI algorithm itself, as computer programs are protected by copyright, this is a separate work whose authorship (and ownership) is separate from the work it creates.
However, in the UK, lawmakers have been ahead of the game. Wanting to encourage investment in AI in the 1980s, Parliament created a category of “computer-generated works” in section 9(3) of the Copyright, Designs and Patents Act 1988 (CDPA). These are works which are generated by a computer in circumstances where there is no human author. The author is deemed to be the person “by whom the arrangements necessary for the creation of the work are undertaken.” To date, there has been no case law to answer the question of whether there is any requirement of originality for computer-generated works under section 9(3) [Note 2]. Some commentators say there is no such requirement. Some say that there is, but then disagree as to where it should lie, being split as to whether there should be originality in making the arrangements to generate the work or whether the work itself should be original when judged objectively.
Another issue that may well arise in the future is one of joint ownership. If there is a scale with human-generated works at one end and computer-generated works at the other, there will be an area between them where it could be said that the human and the AI jointly contributed to the work. Joint ownership is a difficult enough question to answer between human authors [Note 3] and can be a very hard-fought issue, since one co-owner can prevent another co-owner from exploiting the work without their permission.
Who owns an AI-generated invention?
The inventor is the first owner of any patent which is applied for and granted over that invention. As the law currently stands, AI cannot be the inventor (and therefore the owner of a patent) because “devising” an invention is a human activity which involves contributing to the inventive concept. The invention and any patent granted over it will, as a consequence, belong either to the human deviser or, if an employee, their employer.
As noted above in relation to copyright law, there is a scale with, at one end, AI being used as a tool, admittedly a very sophisticated tool, to help develop new inventions. Where AI has been part of the inventive process in this way, it is arguably no different to using any other tool, such as a microscope. The inventor will therefore be the person using the AI. Mere ownership of the AI system would not qualify someone to be an inventor.
At the other end of the scale where AI is devising the invention with no human intervention, then, under the current law, no one can claim to be the inventor and it will not be possible to protect the invention by applying for a patent, although this has not stopped a few people from trying. For example, the University of Surrey has announced that its creation machine, DABUS, is named as the inventor of a new food container and patent applications in its name have been made in the UK, the USA and at the European Patent Office. Possibly in response, the UK Intellectual Property Office (IPO) has, in its most recent update to its Formalities Manual, added a statement that “An AI Inventor is not acceptable as this does not identify ‘a person’ which is required by law. The consequence for failing to supply this information is that the application is taken to be withdrawn.”
What of the future?
Is the IPO’s response the correct one for the long term? It could be argued that this approach may lead to the arbitrary naming of an “inventor”, whether or not they actually devised the invention, in order to prevent the invention from falling into the public domain.
Although, at first glance, this might look as if it benefits society, will that actually be the case?
If the system were to be changed and AI considered to be an inventor, rules would then have to be laid down as to who should own the resulting patent. Questions would then need to be asked as to who, or what, society wants to reward. Should it be the owners of an AI system? Should investment alone ensure a stake in the resulting work? Changing patents into pure economic rights would be a very significant change to the system and would significantly affect the industries which rely upon such assets.
In relation to works protected by copyright, AI is getting so good at creating art and music that it is increasingly difficult to tell what has been generated by AI and what has been generated by a person. If an AI-generated work is free to use because no copyright protects it, what will happen to works generated by human authors? Will humans be able to compete in the marketplace with AI-generated material?
One solution could be to award “personhood” to AI, but I believe that it is far too radical a step to take. Another solution is to have a deeming provision for all IP rights generated by AI similar to that found in section 9(3) CDPA, which at present has only been adopted by the UK and one or two other countries.
These questions and more, challenging the settled notion of IP ownership, are currently the subject of much debate [Note 4]. Perhaps OpenAI’s text generator was correct: if AI is going to change our view of what it means to create and own a work of art, that is a philosophical rather than a legal question. But it is one that IP professionals and governments are going to have to tackle.
This article was originally published in The Internet Newsletter for Lawyers.
Notes and further reading
1. For a discussion of personalised news and the impact on editorial values, see “Using artificial intelligence in news intelligently: towards responsible algorithmic journalism” posted on LSE’s Media@LSE blog.
2. One of the few cases relating to section 9(3) CDPA is Nova Productions v Mazooma Games [2007] EWCA Civ 219, where it was held that a person playing a computer game was not the author of screenshots taken while playing the game and had not undertaken any of the arrangements necessary for the creation of the images. Instead, the court held that the persons who made the arrangements necessary for creation of the screenshots were the game’s developers.
3. The Court of Appeal has recently ordered a retrial in Kogan v Martin [2019] EWCA Civ 1645 on the question of whether there is joint ownership in the screenplay of the film “Florence Foster Jenkins”, about the eponymous American socialite who believed herself to be a talented operatic singer.
4. See, for example, the US Patent and Trademark Office’s consultation on AI and patents (PDF) and their consultation into other IP rights (PDF).