12 July 2017

Five levels of AI in public service

by Richard Stirling


I’ve noticed that one of the challenges people face when talking about artificial intelligence in government is deciding which language to use. AI is a very broad term, covering everything from machine learning to general intelligence, and people can get caught up designing the perfect system for problems we won’t face for years. I’ve been thinking about what a common framework might look like, to make it clearer what we mean when we talk about AI.

The big win for the public sector is moving beyond rote following of rules into exploiting the fact that machines can now use judgment. While we are still in the early days, we can see different levels of sophistication emerging.

For self-driving cars, the industry has agreed the following levels (the paraphrasing is mine):

  • Level 0 – no automation – You do all the work; this is most cars today.
  • Level 1 – driver augmentation – You do most of the work, but the car might, for example, moderate your speed.
  • Level 2 – close supervision – You can take your hands and feet off the controls but must remain ready to jump in.
  • Level 3 – semi-autonomous – The car takes over routine monitoring, so you can relax until alerted.
  • Level 4 – automation – The car drives itself unless it is in an ‘extreme’ situation, like a dirt road.
  • Level 5 – full automation – The car outperforms people even in ‘extreme’ environments.

For context, Tesla is currently at about Level 2.5 (although officially at Level 2), and companies like Ford are aiming for Level 4. Level 4 is what Oxbotica will be testing next year when it runs a fleet of cars between Oxford and London.

This type of segmentation can be useful for getting a handle on machine learning in the public sector. I’ve been experimenting with what a similar set of levels might mean for public services. My draft looks something like this:

[Figure: Five levels of AI in Government]


We can’t expect government to leap from Level 0 to Level 2 or 3 across the board. We are talking to governments about where they might start, how to test approaches, and how to assemble the right team to execute.

We’re working on refining these concepts as part of a major piece of work on what artificial intelligence means for government. If you have thoughts or feedback on these levels, I’d love to hear them! Are they useful? What are we missing?

