28 November 2023
by Kate Iida
Back in September, my colleagues Jasmine, Sully, and I took the train up to the beautiful city of Edinburgh for this year’s Service Design in Government conference. SDinGov is a three-day event that brings together people who work to design public services for governments around the world.
At the conference, we hosted a workshop on practical tools to help designers navigate the complex ethical issues related to using artificial intelligence technologies in public services. In our own work in government, we’ve found that people can often have very polarised opinions about AI: either that it’s a perfect solution to nearly every problem, or that it’s too dangerous to use at all.
We think it’s important to strike a balance between these perspectives. AI technologies can be genuinely useful for addressing certain problems, but they can also pose significant ethical risks, some of which you can read about in our previous blog on this topic.
So how can we ensure that any service using AI is designed and implemented in a trustworthy and ethical way?
Our goal was for workshop participants to think deeply about design choices that could reduce or eliminate the potential negative consequences of AI. To do this, we gave each group one of four hypothetical scenarios to think through and discuss together. The four scenarios were:
an AI system to identify potential fraud being committed by garages completing checks to determine if cars are safe for the road;
an AI chatbot to help immigrants and refugees access relevant information about services and the status of their applications;
an AI system which would provide judges with a briefing on relevant precedent cases; and
an AI system that would alert people about whether they were at risk of long-term illnesses, to help them receive earlier screenings and treatment.
Our participants blew us away with their deep thinking and thoughtful analysis of each of the scenarios and the ethical risks to consider. Below, I’ve summarised some of the main insights that came out of the discussions.
1. You don’t always need to use AI.
One group shared that in their discussions about the AI-based chatbot for immigrants, refugees, and asylum seekers, they had a sudden realisation. The purpose of the chatbot in the scenario would be primarily to help immigrants and refugees access information about the services they were eligible for and the status of their applications.
But an AI chatbot, the group realised, wasn’t actually the best tool to meet those users’ needs. A simpler website, with clear information about the different services the government offers and a section where users could log in to check the status of their application, would be a better way to help people find the information they need.
We thought this was a particularly important insight. We’ve found that governments are often excited to use AI and want to apply it to a wide range of problems. It’s very important, however, to think deeply about the problem to be solved, and about whether AI is actually necessary in that case. In many instances, a simpler, more transparent, and more explainable system is the better choice.
2. AI systems can become self-reinforcing.
Another group, when discussing the scenario about creating an AI system that would give judges a briefing on the most relevant precedent cases, raised concerns about the system becoming self-reinforcing. Once judges begin to base their decisions on the cases compiled for them through the model, what happens when those decisions begin to be fed back again into the AI model?
This is one of the main concerns about AI systems used in justice or policing. AI systems are trained on data from the real world, and that data can embed systemic biases. Those biases can then be amplified once an AI system is rolled out, particularly if it has the opportunity to become self-reinforcing. This can exacerbate existing societal inequalities.
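The dynamic the group described can be sketched in a few lines of code. This is a toy simulation, not based on any real system: the function name, the rates, and the amplification factor are all illustrative assumptions. It assumes a model starts with a small bias against one group, and that the model’s own flagging decisions are fed back in as part of the next round’s training data.

```python
# Toy simulation of a self-reinforcing feedback loop (illustrative only).
# A model starts with a small bias against group A. Each round, its own
# flagging decisions become part of the training data, and those flags
# over-represent group A, so the biased estimate drifts further upward.

def run_feedback_loop(true_rate=0.10, initial_bias=0.02, rounds=5):
    est_a = true_rate + initial_bias  # slightly biased estimate for group A
    est_b = true_rate                 # unbiased estimate for group B
    history = []
    for _ in range(rounds):
        # Next round's "training data" is half real-world signal, half the
        # model's own flags, which over-represent group A by 50%.
        est_a = 0.5 * est_a + 0.5 * (1.5 * est_a)
        est_b = 0.5 * est_b + 0.5 * (1.0 * est_b)  # no amplification
        history.append((round(est_a, 3), round(est_b, 3)))
    return history

history = run_feedback_loop()
# The small initial gap for group A compounds each round,
# while the estimate for group B stays flat.
```

A 2% initial gap keeps growing because the model re-learns from its own decisions each round, which is exactly the loop the group worried about when judges’ rulings feed back into the precedent model.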
3. Break the cycle of punitive AI.
When discussing the scenario about using an AI system to identify garages potentially committing fraud, one group introduced a completely new approach to the discussion. Instead of using the AI system to penalise garages whose scores differed from what the system expected, they suggested, it could be used to encourage better behaviour. If the system’s scores were shared with garages and mechanics from the beginning, it could become a tool to help garages see whether their scoring matched the system’s and to improve their practices. This in turn would help to build trust between the government and the garages carrying out road safety checks, rather than breaking it down.
AI systems can be incredibly useful tools for insight and analysis, but are too often used in punitive ways that undermine trust between communities and their governments. Instead, what would it look like to use AI tools to build trust? The same algorithms used by police departments to pinpoint crime hotspots so that more officers are sent to monitor the location, for example, could instead be used as evidence to support allocating those places greater levels of public funding and improving their infrastructure. In this way, AI systems could help governments build greater trust with their communities, rather than eroding it. As one workshop participant put it, “let’s not do the same thing we always do with AI.”
As we shared in the session, we think it’s important for service designers to have practical tools to help them make decisions about AI ethics and trust. One discussion definitely isn’t enough, and building trustworthy AI systems requires an ongoing process of dialogue with the community of people who are and will be affected by the system.
A heartfelt thank you to everyone who attended our workshop, and we’re looking forward to seeing you next year!
If you have any other thoughts about AI risks, ethics, and building trust, we’d love to hear from you. Please get in touch with us at firstname.lastname@example.org.