Background
On 1 and 2 November 2023, the world’s first AI Safety Summit was held in the UK. Representatives from 28 nations, including the USA, China, France and Japan, convened at Bletchley Park to discuss the implications of the rapid advancement of highly capable general-purpose AI models known as ‘Frontier AI’. The Summit also brought together leading AI companies, experts and representatives from institutions including the Alan Turing Institute, the Organisation for Economic Co-operation and Development, and the European Union.
The two-day summit aimed to open international discussions on the regulation of AI, acknowledging that, whilst its disruptive potential can deliver significant benefits across most sectors, it can also bear catastrophic consequences if left unmonitored. The extensive discussions held during the Summit culminated in the 28 attending nations unanimously endorsing an agreement now referred to as the Bletchley Declaration. A synopsis of its contents follows.
The Bletchley Declaration
The declaration commences with two key acknowledgements. Firstly, it emphasizes the significant global opportunities and challenges presented by AI. Secondly, it recognizes that AI is no longer a futuristic concept, as it is already deployed in many aspects of our daily lives, a reality that urgently calls for AI to be designed, developed, and used in a safe, human-centric, trustworthy, and responsible manner. The statement then elaborates on the dual nature of AI: it offers transformative opportunities, but also poses major risks regarding human rights, fairness, transparency, safety, accountability, ethics, and bias mitigation. Particular focus is devoted to the safety risks posed by highly capable models at the cutting edge of AI development, whose use could lead to unforeseen consequences, a prospect that is especially concerning in sensitive domains such as cybersecurity and biotechnology.
The declaration recognizes the paramount importance of international cooperation in addressing these risks effectively. To that end, it calls for collaboration across nations, international organisations, businesses, civil society, and academia, and highlights the need for an international network of scientific research on frontier AI safety. Moreover, such collaboration must be inclusive so as to bridge the digital divide, ensuring that developing countries can also reap the benefits of AI in a safe and informed manner.
Accordingly, the agenda outlined for addressing frontier AI risks is two-fold. First, a shared, scientific and evidence-based understanding of these safety risks must be identified, developed, and sustained as AI capabilities continue to increase. Second, countries are urged to collaborate in developing common policies, whilst recognizing that approaches may differ between nations in line with their respective legal frameworks. In formulating such policies, the declaration urges nations to incorporate transparency, evaluation metrics, safety testing tools, and public sector capability development.
The declaration concludes on an encouraging note, underscoring the positive potential of AI whilst also committing to ongoing global dialogue, research, and cooperation to harness AI responsibly and ensure its benefits for everyone. It outlines a commitment to meeting again in 2024 to assess progress in these efforts.
Overview / Way Forward
Notwithstanding its high-level nature, the Bletchley Declaration is a landmark in the history of AI regulation. It represents the first collective acknowledgement of the risks that Frontier AI poses and may well act as a catalyst for collaborations and regulatory initiatives aimed at ensuring that the development of AI happens within the parameters of the law. The AI Safety Summit is set to become a yearly event, as France has already committed to hosting it next year.
Author: Matthias Grech
This article was first published in the Times of Malta on 08/11/2023.