06 October 2022

Why You Should Know and Care About Algorithmic Transparency

I thought I would never understand, or care about, algorithms. I was wrong. Algorithms are now being used across government and the public sector, and they are making decisions that affect us directly. This is why we should know, and care, about algorithmic transparency.

Initially, the word “algorithm” sounded to me like something extremely abstract and complex, the sort of term only geeky people would understand. I didn’t know how to put into plain words what an algorithm did or how it could be used by government. When I first carried out the classic Wikipedia search, I read: “In mathematics and computer science, an algorithm is a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing.” This was my face afterwards:

[meme of a confused face]
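With hindsight, a small example would have spared me the confusion. An algorithm is just a precise recipe of steps. Here is one, written in a few lines of Python invented purely for illustration, that finds the largest number in a list:

```python
# A tiny algorithm: a finite sequence of precise steps.
# This one finds the largest number in a list.
def largest(numbers):
    biggest = numbers[0]        # start with the first number
    for n in numbers[1:]:       # step through the rest, one by one
        if n > biggest:         # is this one bigger than the best so far?
            biggest = n         # if so, remember it
    return biggest

print(largest([3, 41, 7, 19]))  # prints 41
```

Public-sector algorithms are vastly more complicated than this, but the principle is the same: data goes in, fixed steps are applied, and an answer comes out.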

Luckily, when I joined Oxford Insights, I had the opportunity to learn how Artificial Intelligence is used by governments across the world to improve their public services (see our AI Readiness Index 2021), and to learn about Trustworthy AI. This is when I finally started to understand, in practical terms, how algorithms are used in the public sector, and my interest in the topic of ‘algorithmic transparency’ grew.

How are algorithms used in the public sector? 

Over the last few years, algorithms have spread across the public sector. Governments are using them to make a wide range of decisions, from scheduling waste collection to allocating heart transplants. Algorithms have been used to support investigations into reports of possible illegal holiday rentals, but also to support child protection services, resource allocation planning (for example, planning homeless shelter capacity), and predictive policing. Algorithms have also been adopted to improve efficiency, for instance by helping to prioritise investment in road works through the analysis of traffic bottlenecks, or to expedite large-scale routine services (such as the processing of visa applications).

Algorithms are also supplementing or replacing decision-making previously undertaken by humans. An algorithmic model analyses data and makes predictions based on it; these predictions then inform decisions, and those decisions can have a very concrete impact on our lives. For instance, the UK government is trialling an algorithmic model to predict whether Universal Credit claimants should receive benefits, based on their perceived likelihood of committing fraud in the future. While in the past this would have been work done by civil servants, now an automated model analyses historic fraud data and decides who should receive financial support based on the results.
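The general pattern is simple, even when the models themselves are not. The sketch below is entirely hypothetical, with invented data and a deliberately simple model; it is not the UK government’s actual system, but it shows the shape of the pipeline: historic data trains a model, and the model then scores new cases.

```python
from sklearn.linear_model import LogisticRegression

# Invented historic data: each row describes a past claim
# (e.g. [number of previous claims, declared income in £1,000s]),
# and each label records whether fraud was later confirmed.
historic_claims = [[0, 12], [5, 3], [1, 25], [7, 2], [2, 18], [6, 4]]
fraud_confirmed = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(historic_claims, fraud_confirmed)

# Score a new claim: the model outputs a probability, and somebody
# (a caseworker, or the system itself) turns that into a decision.
new_claim = [[4, 5]]
risk = model.predict_proba(new_claim)[0][1]
print(f"Estimated fraud risk: {risk:.0%}")
```

Everything that matters for transparency lives inside that pipeline: which data was used, how the model weighs it, and who acts on the score.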

When I learned about this, I couldn’t help but ask myself: how do these algorithms actually work? Who builds them? Are there any ethical concerns about the way they are built? Can I challenge a decision made by an algorithm?

Algorithmic bias and discrimination  

Not knowing the answers to those questions can be dangerous, as automated systems can in fact be discriminatory or biased. Safiya Noble discusses this in her book “Algorithms of Oppression” (check out her TEDx talk on algorithmic bias too), where she shows how the algorithms behind search engines embed negative biases against women of colour and privilege whiteness. For instance, she recounts how, when she first searched for the term “black girls” on Google ten years ago, the top results all led to porn sites; this did not happen when searching for “white girls”. According to Noble, because search engines such as Google are driven by commercial interests, big companies with influence and money can skew search results to their own benefit. Algorithms, in other words, are not neutral.

Another element that contributes to algorithmic bias is the source and quality of the data fed into these systems. If you want to train an algorithm to recognise faces, you should train it with images of people of different ages, sexes, ethnic backgrounds, and so on. If you feed an algorithm only white faces, it will not learn to recognise faces of other ethnicities; if you feed it only older faces, it will not learn to recognise young ones. Without quality data that represents every segment of society, algorithms can end up biased.
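To make this concrete, here is a minimal sketch of the kind of balance check a team could run on training data before building such a model. The dataset, the age-group labels, and the 15% threshold are all invented for illustration:

```python
from collections import Counter

# Hypothetical training set: one demographic label per training image.
# A real dataset would carry much richer metadata than this.
training_labels = ["20-39", "20-39", "60+", "20-39", "0-19",
                   "20-39", "40-59", "20-39", "20-39", "40-59"]

counts = Counter(training_labels)
total = sum(counts.values())

# Flag any age group making up less than 15% of the data -- an
# arbitrary threshold chosen purely for this example.
for group, n in sorted(counts.items()):
    share = n / total
    warning = "  <-- under-represented" if share < 0.15 else ""
    print(f"{group}: {n} images ({share:.0%}){warning}")
```

A check this simple will not fix bias on its own, but it makes the skew visible, and visibility is exactly what transparency is about.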

Learning how algorithms can foster discrimination and bias alarmed me. It made me realise how important it is to open up algorithms to public scrutiny, so that their purpose, structure, actions and outcomes can be challenged or disputed. It also made me realise that it is imperative to make the use and characteristics of algorithms understandable to citizens, so that citizens can exercise oversight over them.

Why algorithmic transparency?

All these thoughts brought me to the importance of what we call ‘algorithmic transparency’. How do we promote it, and what does it look like? According to the Ada Lovelace Institute, meaningful transparency requires answering some key questions about an algorithmic system, including: what are the system’s data sources? What is its logic? Which groups are impacted by it? Governments have different mechanisms for finding out and publicising the answers: assessing and evaluating automated systems, disclosing, in a standardised format, the data used to create them, or publishing the code behind their functioning.
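What might such a disclosure look like in practice? The sketch below is a hypothetical transparency record; every field name is invented for illustration rather than drawn from any official schema, but it captures the kinds of answers the questions above call for:

```python
# Hypothetical transparency record for a public-sector algorithm.
# The field names are illustrative, not an official standard.
transparency_record = {
    "system_name": "Example benefit-fraud risk model",
    "owner": "Example welfare department",
    "purpose": "Prioritise claims for manual review",
    "decision_role": "advisory",  # advisory vs. fully automated
    "data_sources": ["historic claims data", "payment records"],
    "logic_summary": "Statistical classifier producing a risk score",
    "impacted_groups": ["benefit claimants"],
    "human_oversight": "A caseworker reviews every flagged claim",
    "appeal_route": "Standard benefits appeal process",
    "last_external_audit": "2022-06-01",
}

# Publishing records like this lets anyone see, at a glance, what a
# system does, what data it uses, and how to challenge its outcomes.
for field, value in transparency_record.items():
    print(f"{field}: {value}")
```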

And why is algorithmic transparency important? It is essential if we want to challenge automated decisions and ensure that our rights are protected, because even algorithms created with a good purpose can undermine our rights. For instance, a court in the Netherlands ruled that an algorithm set up to detect welfare fraud violated the right to privacy.

Transparency is also critical to ensure that algorithms established to protect vulnerable populations are used fairly. According to the Fundación Éticas, the algorithm VioGen, launched in Spain to protect victims of gender-based violence, raises concerns around a lack of independent oversight, accountability, and end-user engagement. Being able to carry out external audits of these automated systems, especially when they make far-reaching decisions on sensitive topics, is key to ensuring they function correctly and remain fair. With algorithmic transparency, citizens would be able to dispute algorithmic decisions and outcomes, which would in turn help reduce algorithmic bias.
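An external audit often boils down to simple, checkable questions, such as: does the system flag one group far more often than another? The sketch below shows one such check with invented numbers; the 0.8 benchmark is a common (if crude) rule of thumb for spotting disparate impact, used here purely for illustration:

```python
# Hypothetical audit check: compare how often an automated system
# flags people from two groups. All numbers are invented.
flags = {
    "group_a": {"flagged": 30, "total": 100},
    "group_b": {"flagged": 9,  "total": 100},
}

rates = {g: d["flagged"] / d["total"] for g, d in flags.items()}
ratio = min(rates.values()) / max(rates.values())

print(f"Flag rates: {rates}")
print(f"Disparity ratio: {ratio:.2f}")  # well below 1.0 signals a skew
if ratio < 0.8:
    print("Potential disparate impact -- investigate further.")
```

Checks like this are only possible when auditors can see a system’s data and outputs in the first place, which is precisely what transparency provides.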

How do we promote algorithmic transparency? 

Establishing legal frameworks and international standards that include algorithmic transparency would make governments accountable for promoting it. It would ensure that citizens have rights in place to oversee and dispute algorithmic decisions. The OECD AI Principles are a good example: they include a principle on “Transparency and explainability” stating that AI actors should “enable those affected by an AI system to understand the outcome; and, to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors and the logic that served as the basis for the prediction, recommendation or decision”.

Some national governments have also started to adopt their own ethical guidelines on the use of algorithms, as mapped by Algorithm Watch’s AI Ethics Guidelines Global Inventory. This is a good way to ensure an ethical use of AI systems and avoid bias. Public registers of algorithms are also a good step towards more transparency, as citizens can familiarise themselves with their city’s algorithmic systems and give feedback on their use. Localities such as Amsterdam and Helsinki have already established their own.

For citizens to exercise public scrutiny over these systems, they need to receive digestible, simple information about them. This is especially true for people who may not be comfortable using technology, or who may lack access to it. Making algorithms easier to understand, through educational videos or public engagement campaigns, is one way to foster public participation in this area. Strengthening civil servants’ knowledge of digital and data rights, as well as of the ethical concerns around these systems, is also important: it could help establish accountability checks within government itself.

As technology and automated systems are more widely adopted by our governments, it is critical for civil society to maintain scrutiny over how they are designed and used. Adopting algorithms for public-sector decision-making can be very beneficial: it can speed up processes, analyse very large amounts of data rapidly, and provide additional evidence for certain decisions. But it can also be harmful. To avoid the harm, it is key that these systems remain transparent.

In conclusion, no matter how confused you may initially be about what algorithms are (remember the meme?), I hope I have made my point: it is very important for you to know and care about algorithmic transparency.

At Oxford Insights, we care about the trustworthy use of AI. If you would like advice, please get in touch at sales@oxfordinsights.com.
