13 July 2018

Is the UK ready, willing and able for AI? The Government responds to the Lords’ report

by Scarlet George

In May, we gave you our take on the report on AI in the UK presented by the House of Lords’ Select Committee on Artificial Intelligence, ‘AI in the UK: ready, willing and able?’. Now, the UK Government has responded.

Our previous blog outlined topics in the Lords’ report which we thought needed further examination: education, accountability and engagement. Here, we look at the Government’s take on these key topics, and what areas still need to be addressed.


1. Education: the Government is not doing enough to promote the value of human specialties (such as emotional intelligence) in the AI economy

The Lords’ Report recommended that the Government focus on a few key areas in terms of AI education. The first was that public sector, civil society and private organisations need to work together to ‘improve digital understanding and data literacy’. This led into their next recommendation, that school curriculums need to be adjusted to account for a lack of those skills. The Lords made the final point that a new curriculum must focus on the ‘wider social and ethical aspects of computer science and artificial intelligence’.

In its response, the Government explains how it aims to educate students at both primary and secondary level and to increase teachers' knowledge of computer science generally. As recommended in the House of Lords report, the Government aims to educate children and teachers not only on how to use these new technologies, but also on the ethical questions behind their applications and their potential risks. £84 million of new funding is set to be injected into the system, some of which will be used to upskill up to 8,000 computer science teachers.

However, as we gain a better sense of what an AI-driven world will look like, experts agree that school curriculums should incorporate a greater focus on emotional and communication skills alongside computer science and STEM subjects. Neither the House of Lords' report nor the Government's response acknowledges the need to prepare for a new world in which human specialties such as emotional intelligence are becoming increasingly important. This is an area already being explored by governments in other countries, such as the New South Wales Department of Education in Australia.

One option the Government could take would be to create new primary and secondary curriculums that actively teach human specialties. AI is already affecting the way we work, so the Government must rethink how, when and where we learn. Involving students in collaborative learning projects, and ensuring that they learn skills ranging from the scientific to the emotional, is necessary to help them grow up to become part of an active citizenry.

2. Accountability: the Government’s response is not clear about who will be responsible for AI and the decisions it makes

The key question here is: who will be responsible if AI tools go wrong? The Lords’ report directed the Government to ensure that there is a legal framework to protect citizens from the potential malfunction of AI systems, and from the decisions they make. The report questioned whether it is necessary to create new legal mechanisms, or whether existing ones could be used.

The Government’s response has not provided much more clarity. It confirms that the Office for Artificial Intelligence, the Centre for Data Ethics and Innovation, and the AI Council will focus on the question of who will be held accountable, but we still don’t know what direction they will take. We do know that regulating AI and creating legislation to govern new technology is no easy task, given the complexity of the technologies involved and the speed at which they are developing.

A proper conversation about the creation of a regulatory framework that also fosters innovation is necessary. The Lords’ report and Government response would have benefitted from a more specific approach to the issue by tackling concerns about privacy and consent and how these can be regulated. The international community is already discussing how this can be done. The UK Government needs to take a more concrete approach to this issue if the country is to be a global leader in the development of AI.

3. Engagement: the Government is taking positive steps towards ensuring the general public is informed and consulted on AI

The Lords’ report addressed the need to establish ‘public trust and confidence in how to use artificial intelligence, as well as explain the risks’. However, the report also notes that it is not the role of the Government to intervene and prevent the media from sensationalising the topic.

The Government addresses the issue of media sensationalism directly on page six of its response. It states that further communication needs to be led by experts in the field (including government bodies), to ensure that the benefits and risks of AI are effectively communicated to the public and businesses. An important aim mentioned in the Government’s response is to ‘ensure debate and policy-making are sufficiently evidence-based and informed by convening experts across sectors’. This gives a clear indication that the Government plans to form policy around AI and aims to work with all relevant stakeholders, including the public.

It is crucial that citizens do not fear these new technologies simply because they lack a real sense of how they work or what they do. The Government therefore needs to continue engaging with citizens and actively educating people on the pertinent issues: the risks, the uses of AI, and what the future may hold.


Less than three months after the Lords gave their recommendations on how the UK can become a global leader in developing AI, the Government has responded. Its response gives clear direction on education around AI, and demonstrates the Government’s intention to seek counsel from both the public and experts in the field. While some responses were explicit in their aims, others left us with more questions than answers. We are eager to see more concrete steps taken so that the public and private sectors understand what to expect and can more easily give their input as the UK’s AI industry develops. To ensure that all stakeholders are involved, the Government needs to map out an engagement strategy that encourages all voices to be heard and demonstrates how it will report its findings.

