Global superpowers sign agreement on AI safety at Bletchley Park

The world’s first international summit on artificial intelligence (AI) safety, held in the UK, saw the signing of the landmark ‘Bletchley Declaration’ on AI safety by 29 governments.

AI Safety Summit and the Bletchley Declaration

On 1-2 November 2023, the UK hosted the world’s first ‘AI Safety Summit’ at Bletchley Park. The summit was attended by representatives of international governments and leading multinational technology companies, as well as industry experts.

The key development from the summit was the signing of the ‘Bletchley Declaration’, an international agreement between 29 governments which affirmed their collective commitment to work together in developing AI in “a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.” Notable signatories included the UK, US, and China, as well as major European member states such as France, Germany, Italy, and Spain (and the EU itself).

The Bletchley Declaration underscores the need for governments to take proactive measures (both individually and collectively) to ensure the safe development of AI, acknowledging that AI systems are already deployed across many areas of daily life such as housing, employment, education, and healthcare. It also highlights the risks associated with AI, including issues related to transparency, fairness, accountability, safety, ethics, privacy, and data protection.

A significant aspect of the declaration is the focus on “frontier AI,” referring to highly capable general-purpose AI models that could pose substantial risks in fields such as cybersecurity and biotechnology. The agreement emphasises the increasing urgency of understanding and addressing these risks through international cooperation and collaboration, calling for the development of “risk-based policies” as well as “appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.”

AI Safety Testing

In addition to the Bletchley Declaration, a further agreement which emerged from the summit was a policy paper on AI ‘Safety Testing’. This was signed by 10 countries, including the UK, US, and major European member states, as well as by leading technology companies. 

The policy paper lays out a broad framework for the testing of next-generation AI models by government agencies. It promotes international cooperation while also providing for agencies to build their own public sector testing capacity and to develop their own approaches to AI safety regulation. 

What next?

Two further global AI Safety Summits have already been announced:

  • South Korea will host a smaller virtual summit in six months’ time; and
  • France will host the next full in-person summit in one year’s time. 

The countries represented at the Safety Summit also agreed to support an independent ‘State of the Science’ report on frontier AI, led by Professor Yoshua Bengio. It is due to be published ahead of the summit in France and it is hoped that its findings will help to inform policymaking.

At a domestic level, the UK government announced the formation of a new AI Safety Institute (AISI). Its mission is to “minimise surprise to the UK and humanity from rapid and unexpected advances in AI” and it will “work towards this by developing the sociotechnical infrastructure needed to understand the risks of advanced AI and enable its governance”. The three “core functions” of AISI will be to:

  1. develop and conduct testing and evaluation of advanced AI systems, for example developing techniques to analyse training data for bias;
  2. drive foundational AI safety research, including through exploratory research projects; and
  3. facilitate information exchange, in particular by establishing information-sharing channels between national and international actors.

The UK government’s primary ambition is that AISI will “build its evaluations process in time to assess the next generation of [AI] models, including those which will be deployed next year.” 

Key takeaways 

The Bletchley Declaration is a clear signal of intent from governments that they are starting to take the safe development of AI more seriously. What remains to be seen – and what global industry leaders and other experts working in AI will be particularly keen to understand – is how these words will translate into specific policy proposals, both nationally and internationally.

When it comes to the UK, it will be interesting to see exactly where AISI fits into an increasingly crowded regulatory landscape alongside the Office for Artificial Intelligence, the Centre for Data Ethics and Innovation, and the Central Digital and Data Office, and to what extent its future research output and information-sharing networks will be used to inform wider government policy on AI safety. Technology companies developing their own AI systems will also want to understand exactly how AISI intends to conduct ‘testing and evaluation’ of those systems, and to what extent they will be able to engage and collaborate with AISI on such processes.

From a legal and regulatory perspective, there are unlikely to be any changes overnight in the UK, but businesses and legal professionals across a range of sectors such as technology, privacy and data protection, and intellectual property, will be keeping a close eye on developments over the coming year.

Across the Pond

In the US we are already starting to see what legislation on AI safety looks like. Last month, President Biden issued an expansive Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The Executive Order issues hundreds of directives related to AI safety to more than twenty federal agencies, tasking them with implementing specific policies to address areas of concern such as national security, data protection, workplace bias, and public health. It also imposes requirements on private companies developing powerful AI systems that could pose risks to national security or public health, requiring them to share their safety test results, testing methods, and other critical information with the US government. 

Most of the directives issued under the Executive Order must be implemented within the coming year. We will be following their impact closely. 
