12 June 2023

Response to the UK’s Global Summit on AI Safety

By Annys Rogerson

 

The UK Government has announced it will be hosting a global summit on AI, focused on AI safety. The summit could be a catalyst for faster, more coordinated action from governments globally on the risks and harms of AI. However, there are a number of steps that must be taken to ensure that this opportunity to act does not go to waste. In this post, we put forward a set of suggestions to help ensure the success of the summit. Our suggestions cover three factors: (1) attendees, (2) agenda, and (3) power and accountability.

Background on the summit

  • Purpose of summit: to agree safety measures to evaluate and monitor the most significant risks from AI

  • Who is invited: key countries, leading tech companies, and researchers

  • What will be discussed: the risks of AI, including frontier systems, and how they can be mitigated through internationally coordinated action

  • When it is happening: Autumn 2023

Suggestions for the summit

The AI and AI policy community has long been calling for international coordination of government responses to the risks of AI. In recent months, these calls have accelerated, in response to the deployment and wide adoption of several large language models (LLMs).

The announcement of the global summit is a positive step. To ensure this opportunity for genuine progress is not wasted, we consider a number of factors that could determine the summit’s success, bearing in mind that only limited details of the event are available at the current time. We suggest that the summit’s attendees should be diverse, that it should have a broad agenda, and that its goal should be to agree an ‘ethical AI’ charter.

(1) Be Diverse

Attendees should include senior political decision-makers from as many countries as possible, at all stages of AI adoption and development.

If we are to take seriously the ‘everyone should reap the benefits’ narrative present in discussions on AI policy, then we want to see as many countries represented as possible. Moreover, AI is not just offices in California; it is an entire supply chain, from tin miners in Indonesia to content moderators in Kenya. All countries count as key countries.

Attendees should include researchers and civil society representatives from the fields of AI ethics and technology and society.

Civil society representatives and researchers from a broad range of disciplines have been thinking about and advocating on AI risks for a long time. Their voices are needed alongside those of tech leaders and researchers, but so far they are not on the list of invitees.

(2) Think broadly

The agenda should not only include the existential or ‘humanity endangering’ risks of AI. There are, unfortunately, many other harms to respond to.

Firstly, it is striking to be writing this at all: the existential risks of AI were a fairly niche topic not long ago. However, given the messaging in the announcement and the framing of the summit as a response to recent letters from industry, it is important to stress that other risks and existing harms need to be addressed at the summit as well. Other vital risks to consider include disinformation, bias and discrimination within AI systems, and their environmental impacts, amongst others.

(3) Create an ‘Ethical AI’ Charter

Like the G8 Open Data Charter, the summit should aim to form a voluntary accord on the decisions made at the summit, signed by all participating countries.

It might be an ambitious ask for the first summit to result in an international agreement, but this should be a goal going into the event. This means having people in the room who can make this kind of decision and are willing to do so.

Outcomes of the summit should include the creation of mechanisms or norms for holding countries accountable for violations of any agreement.

The success of the summit depends on being able to create agreements that carry weight and will effect change. Hence, any agreement made should be accompanied by mechanisms for other governments and/or members of the public to hold governments accountable for their pledges. This should be the first of many summits, so that progress can be reviewed and agreements updated.

Summary of Suggestions:

A successful Global AI Summit would result in an actionable agreement that responds to a comprehensive set of AI risks and sets up the necessary mechanisms for further coordination. To ensure this, we should include a diverse and informed set of voices in the discussion, across a range of topics. We should invite decision-makers from a wide and representative selection of countries, and ask that they come with a willingness to listen and respond to the discussion.
