This article looks in more detail at the purpose and use of transparency in the context of the regulation of AI solutions. We examine the question of how transparency should operate in this context. The UK Royal Society has identified that the transparency of AI solutions comes in many forms.
In the regulatory context, we suggest that the scope and level of transparency should be sufficient to enable individuals and civil society bodies to review, analyse and critique AI solutions so that potential or actual harms can be identified. We consider that, except for the most invasive deployments of AI, publicity, debate and persuasion are likely to be more effective in “policing” responsible AI than a bureaucratic system of regulatory approvals.