23 December 2020

The ‘Creepiness Test’: When should we worry that AI is making decisions for us?

by Richard Stirling

In November 2020, Iran’s most senior nuclear scientist was assassinated, with Iranian officials attributing the killing to Israel. Iran claimed that Mohsen Fakhrizadeh was shot by a remotely operated gun, and that facial recognition technology targeted him while leaving his wife untouched.

It’s difficult to know how much of this is true. Did an algorithm really open fire automatically once it had identified Fakhrizadeh? Was a human behind the controls, with facial recognition software merely assisting target identification? Or was the whole story exaggerated to make Israel seem malign and all-powerful? Whatever the answer, it didn’t stop The Times running a headline claiming that ‘the era of AI assassinations has arrived’.

However murky and extreme, the case neatly captures the inherent ‘creepiness’ of the idea of AI decision-making. Automated decision-making across sectors and at all levels of government is something we find instinctively worrying, and it’s not at all clear that we have a good ethical framework for understanding and justifying it.

A government seeking to tackle the ethical issues raised by AI and automated decisions needs to consider the problem at different levels.

[Figure: RS figure.PNG]

As with so many other problems, the first step is to ask yourself: what does the person expect – are you about to do something creepy? Here, government’s role is to protect the interests of the citizen and make sure that AI use respects societal expectations. The answer will vary from country to country, and will probably have exceptions.

The next question is whether the decision is one that ought to be taken by a computer at all. Are some decisions so important that they should only be taken by people? Many people argue that AI shouldn’t be able to make kill decisions. Factors to consider include the impact on people’s lives and liberty, and the system’s false positive and false negative rates.
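To see why those error rates matter, here is a minimal Python sketch with entirely hypothetical numbers (the function and figures are illustrative, not drawn from any real system):

```python
# Illustrative sketch: false positive / false negative rates for a
# hypothetical automated screening system. All numbers are made up.

def error_rates(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """Return (false positive rate, false negative rate) from a confusion matrix."""
    fpr = fp / (fp + tn)  # share of negative cases wrongly flagged
    fnr = fn / (fn + tp)  # share of genuine cases the system misses
    return fpr, fnr

# Hypothetical: 10,000 cases screened, 100 of them genuine matches.
fpr, fnr = error_rates(tp=90, fp=495, tn=9405, fn=10)
print(f"False positive rate: {fpr:.1%}")  # 5.0%
print(f"False negative rate: {fnr:.1%}")  # 10.0%
```

Even at a seemingly modest 5% false positive rate, the 495 people wrongly flagged outnumber the 90 correctly identified by more than five to one – which is why the stakes of the decision, not just headline accuracy, should determine whether a computer is allowed to take it.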

Next, consider how the analysis is done. Some decisions should be transparent – the way they are taken open to challenge, e.g. decisions on health interventions. For others, it is enough to know the inputs, e.g. prioritisation and triage of a caseload. While techniques like neural networks can be computationally advantageous, they are (currently) opaque. When rolling out AI, a government needs to make sure it uses a system that provides appropriate transparency and accountability to the public.
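One way to make the transparency point concrete: a simple, interpretable model can print the exact rules behind every decision it takes, which a neural network currently cannot. The sketch below uses scikit-learn’s decision tree on hypothetical caseload-triage data (all feature names and values are invented for illustration):

```python
# Minimal sketch: an interpretable model whose decision logic can be
# inspected and challenged. Data and feature names are hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical triage records: [urgency score, days waiting] -> prioritise (1) or not (0)
X = [[9, 2], [7, 30], [3, 5], [2, 40], [8, 10], [1, 3]]
y = [1, 1, 0, 1, 1, 0]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The full decision logic, printable and open to challenge:
print(export_text(clf, feature_names=["urgency", "days_waiting"]))
```

A government choosing between a model like this and a more accurate but opaque one is making an accountability trade-off, not just a technical one.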

The final level is the data: data can contain bias in how it is collected, or even in what is collected. We recommend that governments use tools like the Data Ethics Canvas to make sure they have taken these considerations into account.
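As one concrete example of what such a review might surface, the sketch below (hypothetical data and column names) checks whether historical outcomes in a training set differ sharply between groups – precisely the sort of collection bias a model will learn and reproduce:

```python
# Minimal sketch of a basic data-bias check: compare outcome rates across
# groups in historical data. All data and column names are hypothetical.
import pandas as pd

# Hypothetical past decisions that would be used to train a model
df = pd.DataFrame({
    "region":   ["north", "north", "south", "south", "south", "north"],
    "approved": [1, 1, 0, 0, 1, 1],
})

# Sharply different approval rates by group are a warning sign: a model
# trained on this data will tend to reproduce the disparity.
print(df.groupby("region")["approved"].mean())
# north    1.000000
# south    0.333333
```

A check like this is a starting point, not a verdict: the Data Ethics Canvas prompts the broader questions about why the data looks this way and whether it should be used at all.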

Ethics is a complex field, and this is just one way to think through the issues. Get in touch if you would like to discuss things further.
