04 June 2019

Ethics and AI: a crash course

by Sabrina Martin

Walk into any entry-level Ethics class and you’ll observe academic philosophers teaching their students three theories of what it means to be ethical:

  1. Utilitarianism: Does an action produce net good consequences?
  2. Deontology: Does an action follow a moral rule, e.g. the Golden Rule: ‘Treat others how you want to be treated’?
  3. Virtue Ethics: Does an action contribute to virtue (whatever that means – blame Aristotle)?

Practitioners of ethics, on the other hand, will be quick to tell you that the world doesn’t work so neatly.

Some Philosophy

Let’s apply these theories to AI. A civil servant in the Department for Education is trying to decide whether or not to let a computer predict which children might be at risk of falling behind at school. We might then ask, ‘Does automating this process produce a net good?’ The answer seems to be, probably, yes: the more children we can identify as being at risk, the more children we can help.

One potential problem is that there might be a few children who are falsely identified as being ‘at risk’ due to patchy data fed into the automated programme, but the overall net good produced still seems to be positive.

Let’s look at the flipside: what about children who are left off our ‘at risk’ lists because of patchy data? The net benefits of the automation might still be positive, but individuals can fall through the cracks (and this is a common concern with automated processes).
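To make that tension concrete, here is a minimal toy sketch in Python of the utilitarian arithmetic. Every count and weight in it is invented purely for illustration; none of the figures come from any real programme.

```python
# Toy illustration only: all counts and weights below are invented.
children_correctly_flagged = 900   # at-risk children identified and helped
children_falsely_flagged = 50      # not actually at risk, but labelled as such
children_missed = 100              # at risk, but left off the list

BENEFIT_PER_CHILD_HELPED = 10      # arbitrary units of 'good'
HARM_PER_FALSE_FLAG = 2            # harm of being wrongly labelled
HARM_PER_MISSED_CHILD = 10         # harm of falling through the cracks

net_good = (
    children_correctly_flagged * BENEFIT_PER_CHILD_HELPED
    - children_falsely_flagged * HARM_PER_FALSE_FLAG
    - children_missed * HARM_PER_MISSED_CHILD
)

print(f"Aggregate net good: {net_good}")                            # positive overall
print(f"Children who fell through the cracks: {children_missed}")   # yet 100 are still missed
```

On these invented numbers the aggregate comes out positive, yet a hundred children are still missed; the aggregate figure simply cannot see them.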

So it now seems that a theory of net good by itself won’t help us determine whether or not we should automate; this first theory doesn’t fully resolve our ethical conundrum.

Now let’s ask whether or not automating the prediction of at-risk children follows the Golden Rule. This ethical framework seems to produce a resounding ‘no’ (for most of us at least). Intuitively, we would not want our kids being categorised and labelled according to impersonal data points.

We now have two conflicting answers from different ethical theories. One says, ‘yes, automate’; the other says, ‘no, don’t’.

It’s no surprise to philosophers that utilitarianism and deontology often produce conflicting moral answers; some of philosophy’s greatest moral conundrums were invented to highlight this disparity. But when making a real-life policy decision, what do we do? If I’m a utilitarian and my boss is a deontologist, how do we decide whether or not it’s ethical to automate?

There are a few answers to this. First, we might say that ethics isn’t the only consideration at play, and that we have to think about cost, resources, and efficiency. So ethics gets relegated to a secondary consideration – or we pick the ethical theory that is most in line with our economic reasoning and justify our moral rationality post hoc. Secondly, however, we might want to make philosophical purists angry by devising a framework that lets us evaluate automation decisions in a more holistic way.

This brings us to the third moral theory: virtue ethics.

AI and Virtue: Strange bedfellows

Virtue is a concept that has largely gone out of our non-religious vocabularies. In philosophy, the idea roughly signifies moral excellence and human flourishing. So it certainly seems odd to ask how a machine can contribute to that. But perhaps if a computer can identify children who are at risk of not being able to flourish to their full potential, we might be able to make the case that automation can contribute to virtue. Virtue ethics, however, puts a lot of emphasis on learning through experience, and that’s something that a computer seems to take away.

Another characteristic that virtue ethics emphasises is an idea of ‘well-roundedness’: a person can only be virtuous once they are well developed all around. So, unlike utilitarianism and deontology, where doing one thing morally can be enough to make you a moral person, you can’t be moral according to virtue ethics by just doing one good thing. You have to repeatedly do things well and in good conscience, and this cultivates a virtuous moral character.

At first glance, this doesn’t appear to apply at all to our case study of flagging at-risk students.

So how could this apply to machines?

A new way of thinking about AI ethics 

For a long time, AI ethics tended to be utilitarian. Ethicists prescribed looking at the consequences that automation would bring about, and deciding whether or not automation was ethical based on those outcomes.

It’s now becoming more of a fad to consider AI ethics deontologically. Asimov’s Three Laws of Robotics are a great pop culture example of how we might start to think about the role of ethical duties when it comes to AI. The problem, as the political scientist Peter W. Singer points out, is that ‘no technology can yet replicate Asimov’s laws inside a machine’. We can’t program a computer never to harm anyone.

Instead, I suggest that we begin to look at the process more holistically. As one of my OI colleagues (who also has a penchant for philosophy) put it, ‘it’s not enough for a machine to perform its most immediate task well. It has to be designed in a virtuous spirit’.

Automation is a process. It involves inputs, the actual automation process, and outputs. Whereas utilitarianism and deontology look at decisions individually, I propose that we take a process-based approach to AI ethics, where we break the ultimate decision down into its constituent parts and evaluate each of them. So perhaps a virtue ethics of AI has to make us look at the processes that surround AI, rather than the consequences of the machine’s outputs or the rules that govern its behaviour. This gets us back to the human side of things, which is what virtue ethics fundamentally recommends.
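For a rough sense of what that might look like in practice, below is a short, purely illustrative Python sketch of a process-based review: it records questions for each stage of the automation (inputs, the prediction process, outputs and follow-up) rather than issuing a single verdict on the outputs. The stage names and questions are my own assumptions, not a prescribed framework.

```python
# Illustrative sketch only: the stages and questions are assumptions, not a standard.
from dataclasses import dataclass, field


@dataclass
class StageReview:
    stage: str                                          # part of the automation process under review
    questions: list[str]                                # what we ask of this stage
    concerns: list[str] = field(default_factory=list)   # issues raised during review


process_review = [
    StageReview(
        stage="inputs",
        questions=[
            "Is the data patchy or unrepresentative?",
            "Do families know how their data will be used?",
        ],
    ),
    StageReview(
        stage="prediction process",
        questions=[
            "Can teachers understand why a child was flagged?",
            "Who checks the model's errors, and how often?",
        ],
    ),
    StageReview(
        stage="outputs and follow-up",
        questions=[
            "What support follows when a child is flagged?",
            "How do we look for the children the system missed?",
        ],
    ),
]

for review in process_review:
    print(review.stage)
    for question in review.questions:
        print(" -", question)
```

The point of the structure is that the ethical evaluation attaches to the surrounding human process at every stage, not just to the model’s outputs.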
