09 December 2022
By Emma Hankins
On Monday, Oxford Insights published the 2022 Government AI Readiness Index. Now in its fifth edition, the Index has been cited by governments and NGOs around the world.
There are a growing number of AI-based tools that governments can use to provide better, more efficient public services — things like chatbots, language translation tools, or text recognition services on government websites. The Government AI Readiness Index seeks to measure how ready a government is to implement these AI tools in public service delivery.
But what goes into creating the Index each year? What does crafting an influential index look like behind the scenes? Hopefully, by the end of this blog post you will be able to answer these questions and come away with a few tips on how to create an index or data-driven report.
Your guide on this journey will be me, Emma Hankins — the newest member of the Oxford Insights team. When I started in September and learned that I would be working on the Government AI Readiness Index, I was honestly a bit intimidated. Collecting data on 39 indicators for almost every country in the world on the company’s flagship publication as my first project ever? No pressure!
Thankfully, I was joining a great project team (Pablo Fuentes Nettel and Annys Rogerson) who gave me detailed instructions on how to begin the first part of my contribution to the Index: data collection.
The Government AI Readiness Index measures 39 indicators which make up 10 dimensions, organised into 3 pillars. These indicators come from a mix of desk research and secondary sources, like scores from other indices or datasets from large repositories such as the World Bank or the UN.
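The Index's published methodology spells out the exact weighting of indicators, dimensions, and pillars; purely as an illustration of the hierarchical roll-up described above, here is a minimal sketch that assumes equal weights at every level (an assumption for clarity, not the published scheme — the structure and indicator names below are made up):

```python
def aggregate(indicator_scores, structure):
    """Average indicator scores into dimensions, then dimensions into pillars.

    `structure` maps pillar -> dimension -> list of indicator names.
    Equal weights at each level are an illustrative assumption.
    """
    pillar_scores = {}
    for pillar, dims in structure.items():
        dim_scores = []
        for dim, indicators in dims.items():
            vals = [indicator_scores[i] for i in indicators]
            dim_scores.append(sum(vals) / len(vals))  # dimension = mean of its indicators
        pillar_scores[pillar] = sum(dim_scores) / len(dim_scores)  # pillar = mean of its dimensions
    overall = sum(pillar_scores.values()) / len(pillar_scores)  # country score = mean of pillars
    return pillar_scores, overall
```

With a toy structure of one pillar containing two dimensions, a country's pillar score is the mean of its dimension means, and its overall score is the mean across pillars.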
“So, you just copy and paste some data?” Well, it’s a little more complicated than that. Raw data needs to be cleaned and adjusted before being ready for analysis. Datasets come in varying formats and cover different countries, which makes combining them difficult. We also check whether the data for any of the indicators is seriously skewed; if this is the case, we replace the data for those indicators with their log transformations. The raw data are then normalised so that all indicators produce scores on a scale from 0 to 100, which is essential for comparison. We also replace any missing data with the average score on that indicator for a country’s peer group according to the World Bank’s income levels and regions.
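To make those steps concrete, here is a minimal sketch of the three transformations described above — a log transform for skewed indicators, min-max normalisation onto a 0–100 scale, and peer-group mean imputation for missing values. This is an illustrative simplification of the general approach, not the Index's actual code; the function names and data are invented for the example:

```python
import math

def log_transform(values):
    """Replace a heavily skewed indicator with its log transformation."""
    return [math.log1p(v) for v in values]  # log1p handles zero values safely

def normalise(values):
    """Min-max normalise raw scores onto a 0-100 scale for comparability."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # no variation across countries
    return [100 * (v - lo) / (hi - lo) for v in values]

def impute_missing(scores, peer_groups):
    """Fill missing scores (None) with the mean score of the country's peer group."""
    by_group = {}
    for score, group in zip(scores, peer_groups):
        if score is not None:
            by_group.setdefault(group, []).append(score)
    group_means = {g: sum(v) / len(v) for g, v in by_group.items()}
    return [group_means[g] if s is None else s
            for s, g in zip(scores, peer_groups)]
```

For example, a GDP-like indicator of `[1, 10, 100, 1000]` would be log-transformed before normalisation so one outlier doesn't compress everyone else's scores, and a country missing a score would receive the average of its World Bank income-and-region peers.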
One of the desk research indicators we use is whether countries have published a national AI strategy. While searching government websites for documents can be time-consuming, I find it to be one of the most interesting parts of creating the Index because you get to see the different approaches governments around the world take to regulating, governing and fostering the development of AI. It was also exciting to see that a few countries — Malaysia, Thailand, Oman, and Uzbekistan — had cited our Index in their newly published AI strategies or related documents. I knew the Index was important, but this really drove the point home. Around the world, policymakers are using the Index as a benchmarking tool and, in some cases, as a metric of success. In other words, the data in the Index had better be right! This brings me to my first tip:
Tip #1: Accept that you will make mistakes.
You may spend weeks collecting data until you have a massive, colour-coded spreadsheet that you are incredibly proud of. And then the minute you send it to the rest of the team to check, they will immediately find typos and places where the Excel formula isn’t quite right. This is normal, and this is why we purposely build in a lot of time for quality checking our data. Remember that the vast majority of your beautiful spreadsheet is still correct, and you would rather your coworkers find your mistakes while you can fix them!
So, I had downloaded, checked, transformed, and normalised the data, and I finally had a complete index with overall country scores. Still, even as someone who had worked with every bit of data that makes up the index, I had a hard time interpreting what that data meant in practical terms. Numbers are great, but they often don’t tell the clearest story on their own. This is where the next step of the index — interviews with regional experts — comes into play.
To develop a more in-depth view of government AI readiness than desk research alone can provide, the Index always includes contributions from experts on AI or tech policy in each of the regions covered in the report. This year, my coworker and project lead Pablo arranged interviews with an AI or tech expert from each region, and these interviews formed the basis for our regional reports. I started off taking notes in interviews, but I worked my way up to leading the interview with our North American expert, which I really enjoyed. That brings me to my second tip:
Tip #2: Experts — they’re just like us.
Yes, the experts you interview will often have PhDs and more titles than you can fit in the report. Yes, they are very smart — that’s why you’re interviewing them, after all. But try not to be too intimidated by them. Prepare, know your data, and have your questions ready, but at the end of the day, they are only humans who happen to be interested in a particular topic — just like you.
I found this to be the most interesting part of the project — going from raw data to detail from experts. Our experts are passionate and knowledgeable about their field and can rattle off more fascinating AI use cases in five minutes than I can find via desk research in an hour. In interviews, we try to tease out the emerging narratives so we can tell the story behind the numbers. To do so, we ask questions like, ‘Why do you think country A has a high score this year? What could other countries in the region learn from them? What are the remaining barriers to AI readiness?’ The answers to questions like these plus more desk research on each region send us to the final leg of our journey: writing the report.
To make drafting a long report easier, the team decided to draft as we went, writing regional reports while each interview was still fresh in our minds. For me, this involved rereading the notes and transcripts from each interview and highlighting key concepts and quotes. This really helped me avoid staring at a blank page waiting for inspiration to strike. It also relates to tip 3:
Tip #3: Let AI help you.
For the Government AI Readiness Index, I didn’t just write about AI; I also used it to make the project easier. This was my first time using AI-enabled transcription tools to make interview transcripts, and they save tons of time. I also used the autosync function on YouTube to make sure the captions on our videos matched what experts were saying in each frame. These seemingly simple tools are powered by quite sophisticated AI software and can help you immensely if you let them.
Now that you have written up regional findings, you've got the bulk of the report! There are a few other obvious but important steps, like proofreading, more quality assurance, and double-checking quotations with regional experts. But at this point you've truly gone from nothing to a ton of numbers to a cohesive report with interesting stories to tell. In my time working on the Government AI Readiness Index, it was pretty cool to see it come together firsthand, and I hope it is as fascinating to read as it was to create.
To find out more about our index methodology and see individual country rankings, you can find the 2022 Government AI Readiness Index here. For questions or to work with us, feel free to get in touch at firstname.lastname@example.org