16 October 2017

The UK Government’s AI review: what’s missing?

This weekend the UK Government published its independent review into how to make sure the UK remains at the forefront of artificial intelligence (AI) developments. At Oxford Insights we’ve been looking forward to reading the results of this review. AI is an enabling technology with enormous potential, and the UK is one of the world’s leading countries in the research and application of AI. It has been at the forefront of AI developments from early work on code by Ada Lovelace and the beginnings of AI with Alan Turing to the development of advanced neural networks and autonomous vehicles today. In the last year there have been strategic statements of intent (and substantial investment) by Russia, China, Canada and Singapore. The UK needs to act now to make sure it capitalises on its strong research position.

The report makes important recommendations about:

  • ensuring better access to underlying data – including through new Data Trusts;
  • improving the supply of skills – including from abroad;
  • investing in AI research; and
  • supporting industry uptake – including adoption of AI technologies by the UK government.

These recommendations target a number of the right areas and we look forward to the details contained in the government response. The UK AI Council, for example, could be a great new development if it gets real teeth, such as being able to influence the development of policy, shape the support offered by the Department for Business, Energy and Industrial Strategy (BEIS) and the UK Department for International Trade (DIT), or help decide where the government invests in AI research or technology applications. Otherwise, there’s a risk that the energy that such a body can generate will atrophy over time.

There are three areas where the report is weaker. We are keen to know how the UK Government will address:

  • the ethical challenges thrown up, for example, by the Stanford experiment to see whether AI can detect sexuality;
  • regulatory oversight of AI and AI-powered organisations; and
  • the impact of AI technologies on the UK workforce.

The review team had a tight scope for the review: make recommendations on how to grow the UK economy and create jobs. Ethics was explicitly out of scope. But ethics, regulation and jobs are dominating the public discourse, both in the UK and around the world. Unless the UK Government addresses them explicitly, these concerns will continue to undermine the positive potential of artificial intelligence.

While we will explore these topics in more detail in later blogs, I hoped to see recommendations in the following areas:

An ethical framework

To be considered legitimate and, in turn, to be useful, AI needs to be trusted by society. It can bring huge benefits but also can be used in ways that society would not agree with (and that might achieve counter-intuitive or counter-productive results). Government should work with the UK AI Council to create a clear set of ethical principles governing the use of AI, for example to prevent the use of AI to discriminate against particular segments of society.

I believe the first draft of these principles should be published in the next six months, because creating an ethical framework quickly is critical. Already, many are concerned that AI technology is ahead of our thinking about its ethics. This has led, for example, to DeepMind creating its own ethics research group.

Regulatory oversight

AI can increase power imbalances in society – it can help those with power exercise it more effectively. It can also lead to new monopoly situations, with data or capability getting concentrated in a few key platforms.

There are existing bodies and frameworks for dealing with these issues, but too often their powers are framed by the business structures and techniques of the past. Government should work to make sure that existing powers and bodies have the legal framework and skills they need to cope with data and AI. For example, access to data should be one of the aspects that the Competition and Markets Authority can consider. The government should publicly report on progress every 12 months so that it can respond quickly to changes as AI technology develops.

Workforce

AI will lead to the replacement of some work previously done by ‘white collar’ workers. Germany’s experience in the 1990s, as heavy industry was mechanised, shows that this transition can be positive, with people finding new jobs in other sectors of the economy. Government should put plans in place to manage that transition as soon as possible. This should include supporting workers in picking up skills and jobs which are complementary to work done by AI, e.g. supporting people into roles that require human empathy or connection and helping people find the most meaningful parts of their work. Progress should be reported as part of BEIS’s ongoing work on industrial strategy.

The UK has a strategic opportunity to be the best place to start an AI business – not because it has the least regulation, but because it has the best regulation. The UK should have a clear concept of AI ethics, a high level of trust, and legal certainty. By creating these structures, the UK can stay at the cutting edge.

Oxford Insights looks forward to reading the details of the Government’s response and contributing to this debate in the UK.
