12 June 2023
By Annys Rogerson
The UK Government has announced it will be hosting a global summit on AI, focused on AI safety. The summit could be a catalyst for faster, more coordinated action from governments globally on the risks and harms of AI. However, there are a number of steps that must be taken to ensure that this opportunity to act does not go to waste. In this post, we put forward a set of suggestions to help ensure the success of the summit. Our suggestions cover three factors: (1) attendees, (2) agenda, and (3) power and accountability.
Background on the summit
Purpose of summit: to agree safety measures to evaluate and monitor the most significant risks from AI
Who is invited: key countries, leading tech companies, and researchers
What will be discussed: the risks of AI, including frontier systems, and how they can be mitigated through internationally coordinated action
When it is happening: Autumn 2023
Suggestions for the summit
The AI and AI policy community has long been calling for international coordination of government responses to the risks of AI. In recent months, these calls have accelerated, following the deployment and wide adoption of several large language models (LLMs).
The announcement of the global summit is a positive step. To avoid wasting this opportunity for genuine progress, we consider a number of factors that could determine the summit’s success, bearing in mind that few details about the event are available at present. We suggest that the summit’s attendees should be diverse, its agenda should be broad, and its goal should be to agree an ‘ethical AI’ charter.
(1) Be Diverse
Attendees should include senior political decision-makers from as many countries as possible, at all stages of AI adoption and development.
If we are to take seriously the ‘everyone should reap the benefits’ narrative present in discussions on AI policy, then we want to see as many countries represented as possible. Moreover, AI is not just offices in California: it is an entire supply chain, from tin miners in Indonesia to content moderators in Kenya. All countries count as key countries.
Attendees should include researchers and civil society representatives from the fields of AI ethics and technology and society.
Civil society representatives and researchers from a broad range of disciplines have been thinking about and advocating on AI risks for a long time. Their voices are needed alongside those of tech leaders and researchers, but so far they are not on the list of invitees.
(2) Think broadly
The agenda should not only include the existential or ‘humanity endangering’ risks of AI. There are, unfortunately, many other harms to respond to.
It is striking to have to write this: not long ago, the existential risks of AI were a fairly niche topic. However, given the messaging in the announcement and the framing of the summit as a response to recent letters from industry, it is important to stress that other risks and existing harms must be addressed at the summit as well. These include disinformation, bias and discrimination within AI systems, and their environmental impacts, amongst others.
(3) Create an ‘Ethical AI’ Charter
Like the G8 Open Data Charter, the summit should aim to form a voluntary accord on the decisions made at the summit, signed by all participating countries.
An international agreement may be an ambitious ask for a first summit, but it should be a goal going into the event. That means inviting attendees who have the authority to make such commitments and the willingness to do so.
Outcomes of the summit should include the creation of mechanisms or norms for holding countries accountable for violations of any agreement.
The success of the summit depends on creating agreements that carry weight and will effect change. Hence, any agreement made should be accompanied by mechanisms for other governments and/or members of the public to hold governments accountable for their pledges. This should be the first of many summits, so that progress can be reviewed and agreements updated.
Summary of Suggestions:
A successful Global AI Summit would result in an actionable agreement that responds to a comprehensive set of AI risks and sets up the mechanisms necessary for further coordination. To achieve this, the discussion should include a diverse and informed set of voices across a range of topics. Decision-makers from a wide and representative selection of countries should be invited, and asked to come with a willingness to listen and respond to the discussion.