02 May 2019

Should we be scared of artificial intelligence?

By Scarlet George



So you’ve probably heard a bit about artificial intelligence in the media. You may have seen snatches of AI breakthroughs on the news, or watched that Simpsons episode where the house takes over and tries to kill Homer. There is a lot of sensationalism out there, and sensationalism pays. However, some of these fears are edging close(ish) to reality. These days it is not entirely off the wall to imagine our jobs being taken over by robots, self-driving cars running over pedestrians, or our personal data being stolen.

If that isn’t bad enough, AI has been shown to amplify bias against certain races and genders, because algorithms are built on data that reflects societal biases. In one example, an algorithm used in courtrooms across the United States of America was shown to be prejudiced against black defendants. The programme, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), was created to estimate how likely criminals were to reoffend. An investigation by ProPublica found that the programme wrongly flagged black defendants as future reoffenders at nearly twice the rate of white defendants: 45% of black defendants who did not go on to reoffend were labelled as higher risk, compared with only 24% of white defendants. Conversely, white defendants who did go on to reoffend were mislabelled as lower risk far more often: 48%, compared with 28% of black defendants.
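The disparity described above is, in statistical terms, a gap in false positive and false negative rates between groups. A minimal sketch of how such rates are computed (the counts below are illustrative round numbers chosen to mirror the percentages quoted, not the actual COMPAS dataset):

```python
# Illustrative counts only -- not the real COMPAS data.
# fp: labelled higher risk but did not reoffend; negatives: all who did not reoffend.
# fn: labelled lower risk but did reoffend; positives: all who did reoffend.
groups = {
    "black defendants": {"fp": 45, "negatives": 100, "fn": 28, "positives": 100},
    "white defendants": {"fp": 24, "negatives": 100, "fn": 48, "positives": 100},
}

def error_rates(g):
    """Return (false positive rate, false negative rate) for one group."""
    return g["fp"] / g["negatives"], g["fn"] / g["positives"]

for name, g in groups.items():
    fpr, fnr = error_rates(g)
    print(f"{name}: wrongly flagged higher risk {fpr:.0%}, "
          f"wrongly labelled lower risk {fnr:.0%}")
```

An algorithm can be "accurate" overall while its errors fall unevenly across groups, which is exactly why headline accuracy figures can hide this kind of bias.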

Clearly, we have good reason to be anxious about biased AI programmes. But the robots are coming, whether we want them to or not. So, instead of fearmongering or burying our heads in the sand, we have to work out how to build trust in AI as a society. Demonstrating how AI can be used to produce helpful outcomes is key to building that trust. It is also up to governments and international organisations to form policies that ease people’s fears.

To build more trust in AI, a number of areas need to be addressed:

  • Enabling more women and other underrepresented groups to take leading roles in the creation of AI.
  • Understanding and addressing human prejudices, which can often be written into data and programmes, skewing the actions and outcomes of AI tools.
  • Focusing on addressing gaps (such as equal representation across genders, races, and age groups) in big data.
  • Finding solutions for people who have lost their jobs, or are at risk of losing them, to automation.

With all that said, it’s not as if our governments and international organisations are doing nothing. In fact, a lot of time and money is going into solving these issues, and a number of organisations are taking important steps in the right direction.

The United Nations Global Pulse is a network of innovation labs that focuses on ‘harnessing big data for development and humanitarian action’ and helping public sector institutions create well-informed policy that uses big data and AI for public good. UN Global Pulse works on a number of projects to help close data gaps while also contributing research to help achieve the Sustainable Development Goals (SDGs) through the use of real-time data and AI.

Pulse Lab Kampala is one such UN Global Pulse initiative. It has developed a radio application that monitors what local radio discussions are focusing on. The aim of the project was to demonstrate that UN agencies can gain insight into local concerns through the use of AI and big data analytics. Findings from the initiative were then used to help design local programmes to enact the SDGs.

In 2016, the US Government’s National Science and Technology Council formed the Subcommittee on Machine Learning and Artificial Intelligence. It was created with the aim of deriving the greatest possible benefit from AI, as well as addressing the challenges this new technology could pose both now and in the future. The Subcommittee went on to publish the National Artificial Intelligence Research and Development Strategic Plan, with the specific aim of developing policy relating to the use of AI. National policy efforts around AI such as these helped contribute to the US’s high ranking of second place in Oxford Insights’ 2017 Government AI Readiness Index. It remains to be seen whether the work of the new administration will help or hinder the country’s score in the 2019 version of the Index, due to be published later this month.

Many countries have started to develop AI strategies that explicitly tackle the four areas crucial for building trust in AI mentioned above, helping citizens not only become more aware of the technology but also trust it more. France, Italy and Canada are great examples of countries that have invested time and resources in developing strategies and solutions to the problems posed by the use of AI and data.

So, should we be scared of AI at all? Our fears are not unfounded, but as we have seen, numerous measures are being put in place to protect us and our future. So long as governments and international organisations continue to enact policy that both protects us (whether from job loss or out-of-control self-driving cars) and encourages more diversity in the field, we may be on track for AI to be a constructive addition to society. But for now, I’ll remain a little cautious, just in case (as in the Simpsons) my house is taken over by an AI Pierce Brosnan.

