28 November 2023

Service Design in Government 2023: conference reflections

by Kate Iida 

Reflections from our workshop attendees. Photo by Kate Iida.

Back in September, my colleagues Jasmine, Sully, and I took the train up to the beautiful city of Edinburgh for this year’s Service Design in Government conference. SDinGov is a three-day event that brings together people who work to design public services for governments around the world.

At the conference, we hosted a workshop on practical tools to help designers navigate the complex ethical issues related to using artificial intelligence technologies in public services. In our own work in government, we’ve found that people can often have very polarised opinions about AI: either that it’s a perfect solution to nearly every problem, or that it’s too dangerous to use at all.

We think it’s important to strike a balance between these perspectives. AI technologies can help address certain problems, but they can also pose significant ethical risks, some of which you can read about in our previous blog on this topic.

So how can we ensure that any service using AI is designed and implemented in a trustworthy and ethical way?

Our goal was for workshop participants to think deeply about design choices that could reduce or eliminate the potential negative consequences of AI. To do this, we gave each group one of four theoretical “scenarios” to think through and discuss together. The four scenarios were:

  1. an AI system to identify potential fraud being committed by garages completing checks to determine if cars are safe for the road;

  2. an AI chatbot for immigrants and refugees to help them to access relevant information about services and the status of their applications;

  3. an AI system which would provide judges with a briefing on relevant precedent cases; and

  4. an AI system that would alert people about whether they were at risk of long-term illnesses, to help them receive earlier screenings and treatment.

Our participants blew us away with their deep thinking and thoughtful analysis of each of the scenarios and the ethical risks to consider. Below, I’ve summarised some of the main insights that came out of the discussions.

1. You don’t always need to use AI.

One group shared that in their discussions about the AI-based chatbot for immigrants, refugees, and asylum seekers, they had a sudden realisation. The purpose of the chatbot in the scenario would be primarily to help immigrants and refugees access information about the services they were eligible for and the status of their applications.

But an AI chatbot, the group realised, wasn’t actually the best tool to meet users’ needs in this case. A simpler website, where users could read about the different services offered by the government and log in to check the status of their application, would be a better way to help people access the information they need.

We thought this was a particularly important insight. We’ve found that governments are often excited to use AI and want to apply it to a lot of different problems. It’s very important, however, to think deeply about the problem to be solved and whether AI is actually necessary to solve it. In many cases, a simpler, more transparent, and more explainable system is the better choice.

2. AI systems can become self-reinforcing.

Another group, when discussing the scenario about creating an AI system that would give judges a briefing on the most relevant precedent cases, raised concerns about the system becoming self-reinforcing. Once judges begin to base their decisions on the cases compiled for them through the model, what happens when those decisions begin to be fed back again into the AI model?

This is one of the main concerns around AI systems used in justice or policing. AI systems are trained on data from the real world, and can therefore encode systemic biases. Those biases can be amplified when an AI system is rolled out, particularly if the system has the opportunity to become self-reinforcing, which can exacerbate existing societal inequalities.
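
To make this feedback loop concrete, here is a minimal, purely hypothetical sketch (not from the workshop or any real system): a toy “model” recommends an outcome in proportion to how often it appears in its training data, decision-makers follow the recommendation slightly more often than the data alone would suggest (an assumed 1.1 nudge factor), and those decisions are then fed back in as new training data. Even a small initial imbalance grows with each cycle.

```python
# Hypothetical toy simulation of a self-reinforcing AI system.
# Assumptions: the "model" simply recommends whichever outcome is more
# common in its training data, and decision-makers follow that
# recommendation slightly more often than the data alone would suggest.
import random

random.seed(0)

# Historical decisions that slightly over-represent outcome "A".
training_data = ["A"] * 55 + ["B"] * 45

for generation in range(5):
    share_a = training_data.count("A") / len(training_data)
    print(f"generation {generation}: share of A = {share_a:.2f}")

    # New decisions are nudged towards the model's recommendation
    # (the 1.1 factor is an assumed "follow the briefing" effect).
    new_decisions = [
        "A" if random.random() < min(1.0, share_a * 1.1) else "B"
        for _ in range(100)
    ]

    # Those decisions are fed straight back in as training data.
    training_data.extend(new_decisions)
```

Run it and the share of “A” creeps upwards each generation, even though nothing about the underlying world has changed; that drift is the self-reinforcement the group was worried about.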

3. Break the cycle of punitive AI.

When discussing the scenario about using an AI system to identify garages potentially committing fraud, one group introduced a completely new approach to the discussion. Rather than using the AI system to penalise garages whose scores differed from what the system expected, they suggested, it could be used to encourage better behaviour. If the system’s scores were shared with garages and mechanics from the beginning, it could become a tool to help garages see whether their scoring matched up and to improve their practices. This in turn would build trust between the government and the garages carrying out road safety checks, rather than breaking that trust down.

AI systems can be incredibly useful tools for insight and analysis, but are too often used in punitive ways that undermine trust between communities and their governments. Instead, what would it look like to use AI tools to build trust? The same algorithms used by police departments to pinpoint crime hotspots and send more officers to monitor them could, for example, be used as evidence to support allocating greater public funding to those places and improving their infrastructure. In this way, AI systems could help governments build greater trust with their communities, rather than eroding it. As one workshop participant put it, “let’s not do the same thing we always do with AI.”

As we shared in the session, we think it’s important for service designers to have practical tools to help them make decisions about AI ethics and trust. One discussion definitely isn’t enough, and building trustworthy AI systems requires an ongoing process of dialogue with the community of people who are and will be affected by the system.

A heartfelt thank you to everyone who attended our workshop, and we’re looking forward to seeing you next year!

If you have any other thoughts about AI risks, ethics, and building trust, we’d love to hear from you. Please get in touch with us at info@oxfordinsights.com.
