21 September 2023

Practical tools for designers in government looking to avoid ethical AI nightmares

Image: fauxels, “Photo of people doing handshakes” (CC)

The transformative potential of AI and its ethical risks are both frequently discussed, and have received particular attention since the advent of powerful Large Language Models (LLMs) such as ChatGPT. But AI in government is already a reality. Our role is to ensure it is used responsibly, ethically, and in the right places. Many frameworks outline the principles of ethical and trustworthy AI, but they are often theoretical, leaving designers without practical lessons they can apply to their own work.

So, how can designers avoid ethical nightmares when integrating AI into services? Let’s first consider one that has already arisen from attempting to integrate AI into government services.

Dutch tax authorities employed an AI system based on a self-learning algorithm to try to weed out benefits fraud at an early stage. The system generated risk profiles from indicators drawn from vast arrays of data. Because of the types of data and proxy variables used, these risk profiles disproportionately flagged people from ethnic minorities or lower income brackets as suspected fraudsters. Authorities then penalised those suspected of fraud based solely on the system’s outputs.
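To make the proxy-variable problem concrete, here is a deliberately simplified, hypothetical sketch in Python. None of the features, weights or thresholds below come from the actual Dutch system (though dual nationality was reportedly one indicator used); the point is only to show how a model can discriminate without ever seeing ethnicity as a feature:

```python
# Illustrative only: a hypothetical risk model showing how apparently
# "neutral" proxy variables can encode protected attributes.

HIGH_RISK_POSTCODES = {"1102", "3072"}  # hypothetical examples

def fraud_risk_score(applicant: dict) -> float:
    score = 0.0
    if applicant.get("dual_nationality"):
        score += 0.4  # dual nationality was reportedly used as an indicator
    if applicant.get("income") is not None and applicant["income"] < 20_000:
        score += 0.3  # low income correlates with marginalised groups
    if applicant.get("postcode") in HIGH_RISK_POSTCODES:
        score += 0.3  # postcode is a proxy for neighbourhood, and often ethnicity
    return score

# A threshold applied to this score flags some groups far more often than
# others, even though "ethnicity" never appears anywhere in the data.
flagged = fraud_risk_score(
    {"dual_nationality": True, "income": 18_000, "postcode": "1102"}
) > 0.5
```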

The consequences were dire: tens of thousands of families were pushed into poverty by huge debts owed to the tax authorities, and more than a thousand children were taken into foster care. Given how the system built its risk profiles, this harm will have fallen disproportionately on minority ethnic communities.

Use of the system was halted when external, government-backed auditors concluded that it was insufficiently transparent and accountable: citizens could not trace how or why a particular decision had been made, and had insufficient recourse to challenge it.

This case illustrates several of the ethical concerns raised by integrating AI into public services: which groups does a system affect? Does it affect them differently? Can they understand how it affects them? Can they challenge the decisions it makes?

Appropriately and comprehensively assessing these risks is a complex task, but not an insurmountable one. In fact, organisations have developed a number of practical tools to help teams implement AI in a considered, human-centred and responsible way. These include the tools described below.

Stakeholder impact assessments (SIAs) are perhaps the most useful tool in a service designer’s arsenal for helping teams think through the ways in which an AI application could benefit or harm different groups of people. SIAs encourage teams to identify the user groups the system should be tested with, to anticipate its potential negative consequences, and to plan how to mitigate them.

In its publication “Understanding Artificial Intelligence Ethics and Safety,” the Alan Turing Institute outlines a stakeholder impact assessment for an AI project in its alpha phase. Inspired in part by this work and by our own trustworthy AI assessment, the Oxford Insights abridged algorithmic impact assessment gives teams questions for an initial conversation about the types of stakeholders an AI project could affect, and prompts them to think through how to mitigate any negative consequences of the system.
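As an illustration, a team might capture the output of such a conversation in a simple structured record. This is a minimal sketch; the field names are our own invention, not taken from the Turing or Oxford Insights templates:

```python
# Hypothetical sketch: recording the outcome of a stakeholder impact
# assessment conversation as structured data, one record per group.
from dataclasses import dataclass

@dataclass
class StakeholderImpact:
    group: str                       # who is affected, e.g. "benefit applicants"
    potential_benefits: list[str]
    potential_harms: list[str]
    mitigations: list[str]
    tested_with_group: bool = False  # has the system been tested with this group?

assessment = [
    StakeholderImpact(
        group="applicants with dual nationality",
        potential_benefits=["faster processing of legitimate claims"],
        potential_harms=["disproportionate flagging via proxy variables"],
        mitigations=["audit flag rates by group", "human review before penalties"],
    ),
]
```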

Data ethics frameworks help teams on projects that collect and use data to explore the ethical issues raised by the collection, storage, transfer and use of that data, both at the start of a project and throughout its lifecycle.

The Open Data Institute’s Data Ethics Canvas “provides a framework to develop ethical guidance that suits any context,” while the UK Government’s Central Digital and Data Office’s Data Ethics Framework gives “a set of principles to guide the design of appropriate data use in the public sector.” Both resources can help designers navigate the ethical issues around data collection and use, and think through these questions early in their work.

Designers can also benefit from following AI ethics guidelines tailored to their local context, ensuring that particular cultural or legislative concerns are addressed. AlgorithmWatch’s AI Ethics Guidelines Global Inventory maps the national governments that have started to adopt their own ethical guidelines around the use of algorithms.

Publishing examples of AI in government to open registries for algorithms and AI applications is another positive stride toward transparency, allowing residents to track how algorithms and AI are used in their cities or countries and to provide feedback. At the city level, Amsterdam and Helsinki are notable examples of places that have already implemented such registries. At the national level, the Public Law Project has created a repository of automation examples in UK government, ranked by transparency.
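To give a flavour of what such registries publish, here is a hypothetical sketch of a single registry entry. The fields are illustrative and do not reproduce the actual Amsterdam or Helsinki schema:

```python
# Hypothetical sketch of the kind of metadata an open algorithm registry
# might publish for each system in use.
registry_entry = {
    "name": "Parking permit allocation model",   # hypothetical system
    "purpose": "Prioritise permit applications",
    "data_used": ["application date", "address", "vehicle type"],
    "human_oversight": "Officials review all rejections",
    "contact": "algorithms@example.gov",         # channel for resident feedback
    "last_audited": "2023-06-01",
}
```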

At Oxford Insights, we have also developed a Trustworthy AI assessment, intended to help government officials and researchers evaluate how far a given country’s use of AI is trustworthy and ethical. The assessment comprises a framework and assessment questions under five pillars: public purpose; human-centred values; transparency and explainability; robustness, security and safety; and accountability.

Using the assessment questions under each pillar, a user can determine a given government’s score for that pillar, showing where the government is doing well and where it could improve its implementation of trustworthy AI systems in the public sector.
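As a minimal sketch of how such pillar scoring might work, assume each question is scored between 0 and 1 and each pillar’s score is simply the mean of its questions (the real methodology may weight or aggregate questions differently):

```python
# Minimal sketch under stated assumptions: a pillar's score is the mean
# of its 0-1 question scores. Not the actual Oxford Insights methodology.
from statistics import mean

PILLARS = [
    "public purpose",
    "human-centred values",
    "transparency and explainability",
    "robustness, security and safety",
    "accountability",
]

def pillar_scores(responses: dict[str, list[float]]) -> dict[str, float]:
    """Average the 0-1 question scores recorded under each pillar."""
    return {pillar: mean(responses[pillar]) for pillar in PILLARS if pillar in responses}

# Example: strong on public purpose, weak on accountability.
scores = pillar_scores({
    "public purpose": [0.9, 0.8],
    "accountability": [0.3, 0.4, 0.2],
})
print(scores)  # {'public purpose': 0.85, 'accountability': 0.3}
```

A profile like this makes it easy to see at a glance which pillars need attention before a system or policy moves forward.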

Popular discourse around AI ranges from unbounded optimism to total risk aversion. We believe it is important to strike a balance between these two perspectives, recognising that AI can bring real benefits while keeping a realistic view of its potential negative impacts. Designers have an important role to play in navigating this complexity. By co-designing AI applications alongside those who will be affected by them, we can advocate for a human-centred approach to artificial intelligence: one that is vigilant to the risk of discrimination and champions transparency and accountability.

Building trustworthy and ethical AI systems is not something to discuss once; it requires an ongoing dialogue with the community of people a system affects. We will explore these ideas in more detail in our next blog, where we’ll discuss what we learned about AI ethics at the Service Design in Government conference in Edinburgh.
