12 July 2017
I’ve noticed that one of the challenges people face when talking about artificial intelligence (AI) in government is which language to use. AI is a very broad term, covering everything from machine learning to general intelligence, and people can get caught up designing the perfect system for problems we won’t face for years. I’ve been thinking about what a common framework might look like, to make it clearer what we mean when we talk about AI.
The big win for the public sector is moving beyond rote rule-following to exploiting the fact that machines can now exercise judgment. While we are still in the early days, we can already see different levels of sophistication emerging.
For self-driving cars, the industry has agreed the following levels (the paraphrasing is mine, loosely following the SAE standard):

- Level 0: no automation — the human driver does everything.
- Level 1: driver assistance — the car can help with steering or speed, one at a time.
- Level 2: partial automation — the car can control steering and speed together, but the driver must monitor constantly.
- Level 3: conditional automation — the car drives itself in some conditions, but the driver must be ready to take over.
- Level 4: high automation — the car drives itself in defined conditions, with no human intervention needed.
- Level 5: full automation — the car drives itself anywhere, in all conditions.
For context, Tesla is currently at about Level 2.5 (though officially at Level 2), and companies like Ford are aiming for Level 4. Level 4 is what Oxbotica will be testing next year when it runs a fleet of cars between Oxford and London.
This type of segmentation can be useful for getting a handle on machine learning in the public sector. I’ve been experimenting with what a similar set of levels might mean for public services. My draft looks something like this:
We can’t expect government to leap from level 0 to level 2 or 3 across the board. We are talking to governments about where they might start, how to test approaches, and how to assemble the right team to execute them.
We’re working on refining these concepts as part of a major piece of work on what artificial intelligence means for government. If you have thoughts or feedback on these levels, I’d love to hear them! Are they useful? What are we missing?