08 May 2018

AI in the UK: are we ‘ready, willing and able’?

By Laura Caccia

April’s report from the House of Lords’ Select Committee on Artificial Intelligence asks whether the UK is ‘ready, willing and able’ for AI. Our Government AI Readiness Index ranked the UK government as the best positioned in the OECD to take advantage of AI. On many measures, at least, we are ready. Whether we are willing and able is another question.

Based on discussions with a range of organisations and experts, the Committee makes a number of practical recommendations for AI in the UK:

  • A Government Office for AI that helps coordinate and grow AI in the UK;
  • An AI Council, to create a system that informs people when decisions are being made by AI;
  • A cross-sector AI code to ensure that AI development and implementation remain ethical.

You can find the official conclusions and recommendations here. Below, we have picked out some key areas from the report that are not covered in the official conclusions and recommendations, but which we think deserve further thought.

[Image: Palace of Westminster, London]

1. Education: widespread education about AI is necessary, but we don’t know what to teach and to whom

As a society, our understanding of AI and the other technologies shaping our digital experience is patchy. Statements from US Senators quizzing Mark Zuckerberg in April revealed a lack of basic internet awareness, and were a source of much public amusement. But the average internet user’s own ignorance can be just as dangerous. It is no longer safe to avoid questions of agency and intent when we use technology: Who has built this? What are they selling? How am I paying?

The Select Committee’s report does not reach a clear conclusion on how far the UK Government should push the AI education agenda. Aside from the recommendation that ‘people should be provided with relevant information’, the Committee does not specify who should provide that information, or what counts as ‘relevant’. The Information Commissioner’s Office (ICO) suggested that it would be more helpful to focus on AI’s consequences rather than its internal workings, as ‘there is a need to be realistic about the public’s ability to understand in detail how the technology works.’

We agree that a focus on outcomes can make it easier for people to engage in a debate about AI in the short term. But understanding the basics of how a learning algorithm works is a vital skill for anyone who engages with a technology industry that currently capitalises on our ignorance. Without an understanding of how AI systems work, and how companies use them, how can we possibly know what we are signing up for when we click ‘agree’ on a set of incomprehensible terms and conditions?
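Those basics are more teachable than they might sound. As a purely illustrative sketch (in Python, with made-up numbers, and not drawn from the Committee’s report), here is the core loop behind most machine learning: make a guess, measure the error against data, adjust, repeat.

    # A minimal sketch of a learning algorithm: fitting y ~ w * x by gradient descent.
    # The data and learning rate are hypothetical, chosen only for illustration.

    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs; y is roughly 2 * x

    w = 0.0              # the model's single parameter, starting as a blind guess
    learning_rate = 0.05

    for step in range(200):
        # Average gradient of the squared error: how the error changes as w changes.
        gradient = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        # Nudge w in the direction that reduces the error.
        w -= learning_rate * gradient

    print(f"learned w = {w:.2f}")  # settles near 2.0, the pattern hidden in the data

A reader who grasps this loop of guessing and correcting against data is already equipped to ask the questions that matter: what data is being fed in, and what is the system being trained to predict about me?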

2. Accountability: we need legal clarity of responsibility for AI to encourage innovation as well as to protect internet users

In the report, Professor Sir David Spiegelhalter, President of the Royal Statistical Society, is quoted as saying that the ultimate responsibility for maintaining clarity about how AI systems work lies with the individual researchers and practitioners who build them. He asks why they are not ‘working with the media and ensuring that the right sorts of stories appear.’ Yet as a society we are shifting towards blaming the companies that buy AI systems from researchers. The US Senate grilled Facebook precisely because of public outrage that the company had not taken more responsibility for educating its users on the meaning of its privacy policy. It is still unclear where along the production chain responsibility lies.

The report’s discussion of the difficult legal issues surrounding AI technology is particularly interesting. It begins with the general point that the lack of a clear process for assigning accountability to an AI system is a significant gap in our current legal framework. It then offers a new take on one of the most common arguments against regulating technology: that regulation inhibits innovation. The report argues that ‘AI is different,’ since ‘without appropriately complex regulation that assigns responsibility, companies may not want to use AI tools.’ It is clear that we need to make AI safe to develop, as well as safe to use.

3. Investment: media sensationalism is making sensible investment in AI harder

A lack of clarity on AI’s problems and potential is not just an issue for the everyday technology user. The report notes the impact of Hollywood ‘sensationalism’, which has polarised attitudes towards AI investment into over-enthusiasm on one side and fearful reluctance on the other. Sarah O’Connor, employment correspondent for the Financial Times, notes that putting ‘robots’ or ‘artificial intelligence’ in a headline ensures that at least ‘twice as many people click on it.’ She suggests that some journalists sensationalise the subject in order to drive web traffic and advertising revenues.

On the one hand, this increase in AI enthusiasm has led some research scientists to inflate the potential of AI to ‘attract prestigious research grants.’ Professor Kathleen Richardson and Nika Mahnič noted that ‘huge EU funded projects are now promoting unfounded mythologies about the capabilities of AI.’ On the other hand, some AI researchers spoke of fears that developments and investment in AI might be ‘threatened with the kind of public hostility directed towards genetically modified (GM) crops in the 1990s and 2000s’ (Raymond Williams Foundation).

Again, this is an issue of responsibility. Companies of all shapes and sizes have not yet had to explain themselves properly, either to their investors or to the public. Greater clarity about what AI technologies can and cannot do is needed before sensible investment in the UK can follow.

_________________

Roy Amara, President of the Institute for the Future in Palo Alto, famously coined ‘Amara’s law’, which states that ‘we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.’ Hopefully, with appropriate discussion and a pursuit of clarity, we can strike the right balance and be truly ready, willing and able for artificial intelligence.
