14 November 2023

Navigating the AI summit boom: Initial reflections

By Pablo Fuentes Nettel

In the past few weeks, the field of AI governance has witnessed major developments, including the AI Safety Summit, the G7's statement on the Hiroshima AI Process, and the White House's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. These events have put AI firmly in the spotlight, but they are just the tip of the iceberg. As someone who works on AI policy, I stumble upon AI governance events all the time: summits, conferences, keynotes, and seminars seem to be happening everywhere. As much as I enjoy reading and listening to my colleagues, I can't help but wonder whether this AI Summit Boom is making a difference in how AI is governed. So, to organise my thoughts, I wrote down some initial reflections on this phenomenon.

The discussion looks messy

First, we seem to be witnessing a scenario of normative fragmentation. Many organisations and thought leaders are telling other actors what they should do. We see events organised by governments, tech companies, international organisations, non-profits, think tanks, and the media, each with unique perspectives and interests. They therefore approach the discussion differently, starting from varied questions and using diverse definitions. Diversity is undoubtedly positive, but when discussions are siloed, reaching a consensus on cohesive policy frameworks becomes more complex.

On top of that, the global scope of AI governance adds another layer of complexity. When we talk about AI governance, there is no one-size-fits-all solution. Through conversations with government officials for the development of OI's Spotlight Series, we have had the opportunity to learn about multiple approaches to AI governance. Governments address AI in various ways, with notable differences in how they deal with it and in their perceptions of its risks and opportunities. The current global scenario is one of differing economies, legal systems, and political dynamics, as well as varying levels of technological readiness.

This comes with important risks. This diversity of perspectives may make discussions somewhat superficial, as if some of these events serve primarily as symbolic gestures; perhaps hosting AI gatherings has become a trend that doesn't really delve into the complexities of AI governance. Additionally, there is a concern that, amid this messy landscape, the more influential players may assume a disproportionate role in defining guidelines, adopting a predominantly top-down approach. It is therefore imperative to highlight that a robust governance framework demands the comprehensive and inclusive participation of all stakeholders.

Navigating the AI Summit Boom can therefore be overwhelming, often leaving us sceptical about the state of AI governance. The sheer chaos in discussions and the diversity of perspectives might make finding common ground seem like an impossible mission. Yet, in the midst of this diversity, we can't forget a critical element of any international governance framework: acknowledging diversity and setting standards aren't mutually exclusive. It is possible to embrace the mess brought by the AI Summit Boom, developing differentiated governance mechanisms while identifying common ground among states. While it's easy to get lost in the AI buzz, there are noteworthy developments that show promise. Notably, there is a global trend towards adopting the OECD's AI Principles, and governments are progressing into the implementation phase of UNESCO's Recommendation on the Ethics of Artificial Intelligence. Similarly, at Oxford Insights we just launched the Trustworthy AI Assessment Tool, which is designed to help policymakers understand how prepared their government is to use AI in a trustworthy way in the public sector. These initiatives acknowledge that the global AI landscape is diverse and involves different levels of AI readiness, while setting minimal standards that contribute to a more robust governance system.

AI governance is getting popular

Beyond the messy nature of the AI Summit Boom, there is one clear advantage: the governance of AI has become a hot topic. As the discussion gains traction, people are becoming more aware of the societal implications of AI developments. Increasing awareness brings a range of benefits. First, putting AI on the political agenda is a big plus. Increased public interest in AI prompts governments to allocate more attention to it, potentially leading to higher levels of commitment, accountability, and transparency.

Similarly, more awareness opens up additional spaces for dialogue and exchange of best practices. Each day, additional governments join the discussion, sharing their strategies for addressing AI challenges and offering insights into effective approaches. The global community is actively participating in this conversation, sharing experiences to illuminate both successful and unsuccessful endeavours. This collaborative element can serve as a catalyst for crafting comprehensive and inclusive policies and governance frameworks.

Additionally, more awareness can translate into increased funding for AI governance projects. As both public and private stakeholders recognise the relevance of AI, they channel more resources into research endeavours and the establishment of effective regulations. Securing funding is pivotal for ensuring the sustained viability of AI governance initiatives in the medium and long term.

Finally, another crucial benefit has to do with AI literacy. As individuals become more informed about the challenges and opportunities presented by this technology, societies are better equipped to make informed decisions. They can actively participate in shaping regulations for AI that are equitable and sensible for all. This is essential in addressing the risks associated with AI, ensuring that our regulations are comprehensive and protect everyone’s interests.

Looking ahead

With all its complexities, the AI Summit Boom has undoubtedly put the spotlight on the evolving landscape of artificial intelligence governance. Despite the challenges posed by normative fragmentation and diverse perspectives, positive strides are evident through initiatives led by organisations like the OECD and UNESCO.

Navigating the AI Summit Boom might be overwhelming, but embracing this complexity allows for the development of nuanced governance mechanisms while still identifying common ground among nations. Moreover, the increased awareness generated by the AI Summit Boom can be a driving force for public sector reform, prompting governments to redouble their efforts to harness this technology for the public good.

As we look ahead, the trajectory of AI governance remains uncertain. A valuable exercise would be to delve into the historical trajectories of other major global issues, such as climate change and nuclear technology. Comparative analyses might suggest that the path forward for AI governance will involve a transition from initial chaos and uncertainty to more structured, collaborative approaches. They could also shed light on the asymmetries that risk becoming institutionalised if we don't apply an inclusive approach. This is a complex topic that deserves further exploration in a subsequent article.

P.S. At Oxford Insights we just launched the Trustworthy AI Assessment Tool, which is designed to help policymakers understand how prepared their government is to use AI in a trustworthy way in the public sector. Check it out!
