09 December 2022

Behind the scenes of the Government AI Readiness Index

By Emma Hankins

On Monday, Oxford Insights published the 2022 Government AI Readiness Index. Now in its fifth edition, the Index has been cited by governments and NGOs around the world.

There are a growing number of AI-based tools that governments can use to provide better, more efficient public services — things like chatbots, language translation tools, or text recognition services on government websites. The Government AI Readiness Index seeks to measure how ready a government is to implement these AI tools in public service delivery.

But what goes into creating the Index each year? What does crafting an influential index look like behind the scenes? Hopefully, by the end of this blog post you will be able to answer these questions and come away with a few tips on how to create an index or data-driven report.

Your guide on this journey will be me, Emma Hankins — the newest member of the Oxford Insights team. When I started in September and learned that I would be working on the Government AI Readiness Index, I was honestly a bit intimidated. Collecting data on 39 indicators for almost every country in the world, for the company’s flagship publication, as my first project ever? No pressure!

Thankfully, I was joining a great project team (Pablo Fuentes Nettel and Annys Rogerson) who gave me detailed instructions on how to begin the first part of my contribution to the Index: data collection.

Diving into the data

The Government AI Readiness Index measures 39 indicators which make up 10 dimensions, organised into 3 pillars. These indicators come from a mix of desk research and secondary sources, like scores from other indices or datasets from large repositories such as the World Bank or the UN.

“So, you just copy and paste some data?” Well, it’s a little more complicated than that. Raw data needs to be cleaned and adjusted before being ready for analysis. Datasets come in varying formats and cover different countries, which makes combining them difficult. We also check whether the data for any of the indicators is seriously skewed; if this is the case, we replace the data for those indicators with their log transformations. The raw data are then normalised so that all indicators produce scores on a scale from 0 to 100, which is essential for comparison. We also replace any missing data with the average score on that indicator for a country’s peer group according to the World Bank’s income levels and regions.
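The steps above — log-transforming skewed indicators, normalising to a 0–100 scale, and imputing missing values from peer-group averages — can be sketched in a few lines of pandas. This is an illustrative sketch, not the actual code behind the Index: the function name, the skew threshold, and the min–max normalisation are my own assumptions about one reasonable way to implement the process described.

```python
import numpy as np
import pandas as pd

def prepare_indicator(scores: pd.Series, groups: pd.Series,
                      skew_threshold: float = 2.0) -> pd.Series:
    """Illustrative cleaning for one indicator (not the Index's actual code):
    log-transform if seriously skewed, normalise to 0-100, then fill gaps
    with the mean score of each country's peer group."""
    s = scores.astype(float)

    # Replace seriously skewed data with its log transformation
    # (shifted so all values are non-negative before taking the log).
    if abs(s.skew()) > skew_threshold:
        s = np.log1p(s - s.min())

    # Min-max normalise so every indicator runs from 0 to 100.
    s = 100 * (s - s.min()) / (s.max() - s.min())

    # Impute missing values with the average score of the country's
    # peer group (e.g. World Bank income level and region).
    return s.fillna(s.groupby(groups).transform("mean"))
```

For example, given scores for countries A, B, C, D where C is missing and C and D share a peer group, C would receive D's (normalised) score as its group average.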

One of the desk research indicators we use is whether countries have published a national AI strategy. While searching government websites for documents can be time-consuming, I find it to be one of the most interesting parts of creating the Index, because you get to see the different approaches governments around the world take to regulating, governing and fostering the development of AI. It was also exciting to see that a few countries — Malaysia, Thailand, Oman, and Uzbekistan — had cited the Index itself in their newly published AI strategies or related documents. I knew the Index was important, but this really drove the point home. Around the world, policymakers are using the Index as a benchmarking tool and, in some cases, as a metric of success. In other words, the data in the Index had better be right! This brings me to my first tip:

Tip #1: Accept that you will make mistakes.

You may spend weeks collecting data until you have a massive, colour-coded spreadsheet that you are incredibly proud of. And then the minute you send it to the rest of the team to check, they will immediately find typos and places where the Excel formula isn’t quite right. This is normal, and this is why we purposely build in a lot of time for quality checking our data. Remember that the vast majority of your beautiful spreadsheet is still correct, and you would rather your coworkers find your mistakes while you can fix them!

So, I had downloaded, checked, transformed, and normalised the data, and I finally had a complete index with overall country scores. Still, even as someone who had worked with every bit of data that makes up the index, I had a hard time interpreting what that data meant in practical terms. Numbers are great, but they often don’t tell the clearest story on their own. This is where the next step of the index — interviews with regional experts — comes into play.

Talking to experts

To develop a more in-depth view of government AI readiness than desk research alone can provide, the Index always includes contributions from experts on AI or tech policy in each region covered by the report. This year, those contributions took the form of interviews, which formed the basis for our regional reports. My coworker and project lead Pablo had already arranged the interviews. I started off taking notes, but I worked my way up to taking the lead in the interview with our North American expert, which I really enjoyed. That brings me to my second tip:

Tip #2: Experts — they’re just like us.

Yes, the experts you interview will often have PhDs and more titles than you can fit in the report. Yes, they are very smart — that’s why you’re interviewing them, after all. But try not to be too intimidated by them. Prepare, know your data, and have your questions ready, but at the end of the day, they are only humans who happen to be interested in a particular topic — just like you.

I found this to be the most interesting part of the project: moving from raw data to expert insight. Our experts are passionate and knowledgeable about their fields and can rattle off more fascinating AI use cases in five minutes than I can find in an hour of desk research. In interviews, we try to tease out emerging narratives so we can tell the story behind the numbers. To do so, we ask questions like, ‘Why do you think country A has a high score this year? What could other countries in the region learn from it? What are the remaining barriers to AI readiness?’ The answers to these questions, combined with further desk research on each region, send us to the final leg of our journey: writing the report.

Putting it all together

To make drafting a long report easier, the team decided to draft as we went, writing regional reports while each interview was still fresh in our minds. For me, this involved rereading the notes and transcripts from each interview and highlighting key concepts and quotes. This really helped me avoid staring at a blank page waiting for inspiration to strike. It also relates to tip 3:

Tip #3: Let AI help you.

For the Government AI Readiness Index, I didn’t just write about AI; I also used it to make the project easier. This was my first time using AI-enabled transcription tools to produce interview transcripts, and they saved tons of time. I also used the autosync function on YouTube to make sure the captions on our videos matched what experts were saying in each frame. These seemingly simple tools are powered by quite sophisticated AI software and can help you immensely if you let them.

Now that you have written up regional findings, you’ve got the bulk of the report! There are a few other obvious but important steps, like proofreading, more quality assurance, and double-checking quotations with regional experts. But at this point you’ve truly gone from nothing to a ton of numbers to a cohesive report with interesting stories to tell. In my time working on the Government AI Readiness Index, it was pretty cool to see it coming together firsthand, and I hope it is as fascinating to read as it was to create.

To find out more about our index methodology and see individual country rankings, you can find the 2022 Government AI Readiness Index here. For questions or to work with us, feel free to get in touch at info@oxfordinsights.com.
