26 September 2018

Actions speak louder than words: the role of technology in combating terrorist content online

by Katie Passey

Internet platforms and emerging technologies are at the centre of the debate about how to beat online extremism. However, from narratives of AI successes to accusations of inactivity, it is unclear what work is being done and how successful it has been in the fight to prevent online radicalisation. UK Prime Minister Theresa May accused social media companies of providing platforms for the online radicalisation of violent extremists, stating: ‘We cannot allow this ideology the safe space it needs to breed – yet that is precisely what the Internet and the big companies that provide Internet-based services provide’. Meanwhile, Facebook, Twitter and YouTube report statistical successes in their efforts to tackle online extremism: throughout 2017, Twitter suspended 574,109 accounts for violations related to the promotion of terrorism; between January and June 2018, over 75 percent of content flagged by YouTube’s automated systems was removed before it had received any views; and Facebook acted on 1.9 million pieces of terrorist content between January and March 2018.

But there’s more to the story than these numbers suggest. Oxford Insights’ forthcoming report will illustrate the harsh reality that, despite the impressive statistics and the evidenced capabilities of emerging technologies, social media platforms could be doing more to remove violating and illegal terrorist content online.

I interviewed Dr Hany Farid and Joshua Fisher-Birch from the Counter Extremism Project (CEP) about their recent reports – Okay Google, Show me Extremism and the eGlyph Web Crawler – both of which confirmed my suspicions that social media platforms are not doing as much as they could to combat online terrorist content. From hash sharing and image matching to conduct analysis and cluster profiling, there is a plethora of ways in which emerging technologies are being used to combat sinister content online; yet, when applied to online terrorist content, these technologies are not being developed or used as aggressively as they could be. CEP found that online hashing systems are not working effectively to prevent re-uploads of known terrorist videos (the same extremist content is repeatedly uploaded and deleted), and that a user searching for ISIS material on YouTube was more than three times as likely to encounter extremist material as counter-narratives. Much known terrorist content from proscribed organisations is left on social media pages to inspire and recruit. Two recent examples found by CEP include: ISIS’ video ‘Graveyard of Enemies’, found on 29 August 2018 after two weeks online with over 1,000 views, 93 likes and 80 shares; and ISIS’ video ‘You are not held responsible except for yourself’, found on 5 September 2018 after five days with 806 views, 90 likes and 13 shares.
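
To make the hashing approach concrete, the sketch below shows, in simplified Python, how a hash-matching pipeline can flag re-uploads of known content before they are shown to users. It is illustrative only: the function names and data are hypothetical, and it uses an exact cryptographic hash for brevity, whereas production systems such as CEP’s eGlyph rely on robust (‘perceptual’) hashes precisely so that re-encoded or lightly edited copies still match.

```python
# Minimal sketch of hash-based re-upload detection (illustration only).
# Real systems use robust/perceptual hashes that survive re-encoding and
# editing; this example uses an exact cryptographic hash purely to show
# the matching workflow. All names and data here are hypothetical.

import hashlib

# Hypothetical database of hashes of known terrorist videos, e.g. shared
# between platforms through a hash-sharing consortium.
known_extremist_hashes = set()


def fingerprint(content: bytes) -> str:
    """Return a hash of the uploaded file's raw bytes."""
    return hashlib.sha256(content).hexdigest()


def register_known_content(content: bytes) -> None:
    """Add a confirmed piece of terrorist content to the shared database."""
    known_extremist_hashes.add(fingerprint(content))


def screen_upload(content: bytes) -> bool:
    """Return True if the upload matches known extremist content and
    should be blocked before it receives any views."""
    return fingerprint(content) in known_extremist_hashes


if __name__ == "__main__":
    original_video = b"...bytes of a known propaganda video..."
    register_known_content(original_video)

    # A byte-identical re-upload is caught...
    assert screen_upload(original_video)

    # ...but even a one-byte change defeats an exact hash, which is why
    # robust hashing, applied consistently, matters for re-uploads.
    re_encoded_copy = original_video + b"\x00"
    assert not screen_upload(re_encoded_copy)
```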

The hindrance in removing non-state terrorist content does not derive from the technology, but rather from the social media companies themselves. The action needed to effectively deny non-state terrorist organisations access to surface web platforms is not yet consistent with the business models and ideological principles of social media and communications providers. This is not to say that social media companies are deliberately encouraging terrorism, or deliberately making money from promoting it. Instead, platforms are struggling to find a balance between legitimate censorship and freedom of speech; they have tended to see themselves as communities that enable conversation and global connectivity, not as regulators of opinion. Twitter’s CEO Jack Dorsey argued this month that Twitter would ‘rather be judged by the impartiality of outcomes, and criticised when we fail’. A similar opinion was expressed by Facebook’s Chief Executive, Mark Zuckerberg, in March: ‘I feel fundamentally uncomfortable sitting here in California… making content policy decisions around the world… things like “where’s the line on hate speech?”, I mean, who chose me to be the person that did that?’.

In terms of business models, being over-censorious, even of illegal content, can be bad for social media businesses: it risks driving users away from a platform if they perceive that democratic principles are being compromised online. Social media platforms rely on advertising as their main source of revenue; fewer users clicking adverts means less revenue. As Cambridge Analytica’s use of psychographics revealed, advertisers can keep users clicking by extracting a user’s activity data from across the Internet, building a psychological profile from it, and using that profile to serve tailored adverts and content that get them clicking more and more. Cambridge Analytica’s former CEO Alexander Nix argued that: ‘if you know the personality of the people you are targeting, you can nuance your messaging to resonate more effectively with those key audience groups’. Typically, outrageous and controversial content gets users clicking the most because of its ability to evoke an emotional response that drives social sharing. Although this claim was disputed by Facebook’s Vice President of Public Policy, Lord Richard Allan, who protested that ‘shocking content does not make us money, that is just a misunderstanding of how the system works’, it is not difficult to see how Facebook and other platforms might want to protect their high-value content. This raises some uncomfortable questions about platforms’ ability to prevent online radicalisation and about governments’ future efforts to collaborate with them.

Emerging technologies have become useful in the fight against online radicalisation because of their speed, versatility and ability to process data more quickly than people can. However, until a strategy is in place that is mutually agreed between governments and social media companies on the ways in which non-state terrorist organisations are denied access to surface web platforms, the technology will only be as effective in the fight as social media companies dictate.

The next in this series of insights examines the final side of this relationship, asking: what role can the UK Government play in better combating terrorist content online?

We will be publishing a series of insights into the role of technology in non-state terrorist content online over the coming months, culminating in a full report. In the meantime, look out for part three in this three-part series. Missed part one? Click here.

If you have any queries or questions, or would like to be involved with the report, please get in touch at info@oxfordinsights.com.
