19 May 2023

LLMs in Government: Brainstorming Applications

At Oxford Insights, we believe in creating a positive vision for how AI can be used by governments to improve public services for the people using them. The release of a number of widely accessible and adaptable large language models (LLMs), such as ChatGPT, Bard and Sydney, in recent months prompted an internal discussion on what this type of model could mean for the public sector.

It might seem premature, even rash, to be having this discussion. Jan Leike, Alignment Lead at OpenAI, for instance, called for caution in response to what looks like rapid adoption of these immature technologies across our economies. Because the technology governments use underpins the delivery of essential services, the need for caution is arguably even greater in the public sector than elsewhere in our economies.

However, governments are already finding and implementing use cases of LLMs. Within the UK government, they are being used to spot trends in healthcare reports, for example.

So, we find ourselves at a point in time where there is excitement, early movement towards adoption, and a need for caution. Therefore, we felt that it was a good time to bring together some of our own ideas and concerns about the technology. To do so, we held a group brainstorming session (a neatened version of which can be found here) to bring together our thoughts on the subject.

Framing the discussion

First of all, were we talking about letting ChatGPT loose on government data? Short answer: no.

The long answer involves explaining what LLMs are. LLMs are deep learning models that generate text by predicting which words are likely to come next in a sequence. The ‘large’ in LLM refers to the number of parameters the model has and the amount of data it has been trained on. For reference, 10 years ago, state-of-the-art algorithms were being trained on 150GB datasets; now, LLMs are trained on up to an estimated 10,000GB. Being trained on all this data means the text an LLM generates can be very useful. Models can, among other things, summarise information, extract information from large sets of text, translate, answer questions, and follow instructions.
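The core idea of next-word prediction can be illustrated with a deliberately tiny sketch. This is not how an LLM is built (real models are neural networks with billions of parameters trained on vast corpora); it is a toy bigram counter, with a made-up corpus, that shows the principle of predicting the most likely next word from what came before:

```python
from collections import Counter, defaultdict

# Toy training text (an illustrative assumption, not real data).
corpus = (
    "the citizen applied for the permit and the citizen received "
    "the permit after the council reviewed the application"
).split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))      # the most frequent follower of "the"
print(predict_next("council"))  # "reviewed" in this toy corpus
```

An LLM does the same job at a vastly larger scale, predicting over whole contexts rather than single preceding words, which is what makes the generated text useful for summarising, translating, and answering questions.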

This type of model is not limited to those built and run by large AI companies, such as OpenAI or Google. In fact, models run by large AI companies seem less promising for integrating into government. Given the legal and security issues—such as those around privacy and GDPR—it would be difficult for governments to make use of them as they stand at the moment. Instead, we saw the most promise in models that are open-source, procured domestically, or built by government, and that are run locally on government computers. This distinction helped scope our discussion of the risks involved and how the technology could be integrated into government operations.

Government Applications

The applications we discussed fell into five categories: user experience, policymaking, service design, data processing, and professional support. Each is expanded on below.

User experience

Government services can be hard to find, and the steps users must take to access a service can be unclear. LLMs could help users navigate government websites by offering tailored, real-time support.

Examples

Policymaking

Policy decisions need to be reached based on the best evidence available. LLMs could help increase the evidence base and support policymakers’ analysis of it.

Examples

Service design

Digital services benefit from being designed and built according to agile principles. This requires significant user engagement, testing and iteration. LLMs could support teams in each of these steps, and could be especially impactful for teams with limited design and software skills.

Examples

Data processing

Government teams have to process large amounts of unstructured data, and are often doing so manually. LLMs could help automate parts of this work by extracting and structuring data currently collected in unstructured formats.
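As a hypothetical sketch of that workflow: a locally run model could be prompted to pull fixed fields out of a free-text report and return JSON, which a script then validates before the record enters a structured dataset. The prompt wording, the field names, and the simulated model reply below are all illustrative assumptions, not a real system:

```python
import json

# Fields we (hypothetically) want extracted from each free-text report.
FIELDS = ["date", "location", "issue"]

def build_prompt(report: str) -> str:
    """Ask the model to extract the fixed fields as JSON only."""
    return (
        f"Extract the fields {FIELDS} from the report below and "
        f"reply with JSON only.\n\nReport: {report}"
    )

def parse_reply(reply: str) -> dict:
    """Validate a model reply before it enters a structured dataset."""
    record = json.loads(reply)
    missing = [field for field in FIELDS if field not in record]
    if missing:
        raise ValueError(f"model reply missing fields: {missing}")
    return record

# Simulated model reply, standing in for a real local model call.
reply = '{"date": "2023-05-19", "location": "Leeds", "issue": "pothole"}'
record = parse_reply(reply)
print(record["location"])
```

The validation step matters: because LLM output is probabilistic, anything feeding a government dataset would need checks like these (and likely human review) rather than being trusted directly.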

Examples

Professional support

Civil servants often face high caseloads or admin-heavy parts of their jobs, which can worsen their professional experience. LLMs could remove some of this burden.

Examples

Risks and Concerns for Government Applications

We tried to keep the discussion of risks and concerns to those we think specifically arise from using LLMs in government. Our thoughts covered how models are built and run, alongside how they are used in government.

How models are built and run

How models are used

These concerns demand that governments think deeply about how they manage LLM use in the public sector. Immediate government responses to recent LLM releases range from encouraging the developments to disabling some models for breaching existing regulations. Meanwhile, there is practical research underway into how we can mitigate potential harm, for example into how we should audit these models, which should feed into governments’ longer term responses.

As we collectively make sense of these technologies, we need a public discussion that takes seriously both the valid concerns about LLMs and how they could contribute to the public good. We invite everyone to carry on our discussion of how LLMs could be used by governments to that end.
