09 November 2023

Inclusive consensus building: Reflections from day 4 of AI Fringe

By Gonzalo Grau

Whilst intended as a ‘complementary’ event to the AI Safety Summit held at Bletchley Park, AI Fringe embodied the desire of those ‘left out’ of the Summit to deepen their involvement in discussions around AI deployment and regulation. I attended day 4 of the event series, which revolved around work, safety, law, and democracy. As is usually the case with these big topics, there was a palpable sense of excitement in the dimly lit lecture hall tucked away in London’s British Library.

In line with its participatory ethos, AI Fringe invited a broad range of participants from the public sector, civil society, academia, and tech. It was refreshing to see this diversity at the individual level as well: panellists came from a wide range of backgrounds and cultures. I felt this reflected a desire to broaden a discussion traditionally confined to the limits of technical expertise into one that recognised AI as a fundamentally socio-technical issue. That is, AI’s performance can only be understood and improved if we treat its social and technical aspects as inextricably linked and of equal importance.

After some captivating discussions touching on the future of work, the role of open source software in AI development, responsible engineering, and the democratic implications of these technologies, I came away with four recurring talking points.

1. Focusing on immediate harms is not short-termism

The AI Safety Summit was a step in the right direction. Holding a global discussion on the potential risks of powerful generative AI models, perhaps even Artificial General Intelligence (AGI), was necessary for value alignment and for setting common goals.

That being said, focusing on hypothetical threats of existential proportions risks overshadowing the actual harms of AI in the present. These harms are well documented. The black-box nature of some predictive models, together with their reliance on inherently biased data sources, often results in discriminatory decisions that are difficult to contest. These limitations can cause serious harm in sensitive use cases like predictive policing, HR, and financial risk management.

Diverting our attention to long-term scenarios risks amplifying these harms. As Dr. Abigail Gilbert, Head of Praxis at the Institute for the Future of Work, put it, ‘79% of firms in the UK are adopting AI, but not all of them are innovation-ready.’ As new systems are built on the foundations of current models, today’s mistakes and mismanagement will remain ingrained in future versions, potentially at a much larger scale. Ensuring that current AI uptake is safe and responsible is key to mitigating future risks. Our decisions matter now.

2. Big red lines

On this point, the opinions of panellists were more or less in line with the UK Government’s stance. Former Secretary of State for Science, Innovation and Technology Chloe Smith highlighted that the need for industry-specific regulation should not result in overly granular and stifling legislation.

Instead, AI regulation should be approached as a balancing act: legislation must be adaptable, yet robust enough to uphold core principles. Though Rishi Sunak has made it clear that the UK will not ‘rush to regulate’ AI, there is some consensus on the need for a few ‘big red lines’. Amanda Brock, CEO of OpenUK, echoed the need for a principles-based approach to legislating on AI.

Examples of such legislation exist, drafted well before AI development accelerated to its current pace. The Oxford Internet Institute’s Brent Mittelstadt points to the Data Protection Directive of 1995 as a good model for how to go about regulating frontier models. The European Union’s AI Act is another example of sober AI legislation. Sarah Chander, senior policy advisor at European Digital Rights, emphasised that because the Act had been in development long before the launch of generative models like ChatGPT, it was protected from the distortionary effects of generative AI hype.

3. The need to meet halfway

Much of the debate around AI regulation has focused on one side of the market. Whether the burden of adaptation and risk mitigation should fall on the demand side or the supply side, and who bears the transitional risk of AI uptake, are important questions. In the labour market, the discussion around ‘augmenting’ jobs rather than automating them has focused on equipping workers with the skills they need to interact with AI in a sustainable way.

Wilson Wong of the Chartered Institute of Personnel and Development pointed out that re-skilling and lifelong learning are ‘not a silver bullet’. He is right: many workers may struggle with re-skilling due to factors outside their control, such as disabilities or caring responsibilities. To ensure AI uptake in the workplace doesn’t disproportionately affect workers from vulnerable groups, implementation strategies should be designed to complement the skills workers already have. Similarly, regulators should make an effort to understand how software works before designing draconian compliance requirements.

Legislators should be careful not to overburden specific actors with adaptation and mitigation requirements; responsibilities should be spread evenly, reflecting each actor’s capabilities. In a general sense, shaping the development of these systems is as important as equipping consumers for AI deployment.

4. Openness and inclusivity for risk reduction

The idea that inclusivity and representation are important to responsible AI development is not news. This inclusivity should not be limited to training data, however. The AI Safety Summit was partly inclusive in extending invitations to AI superpowers like China and to actors from the global south, but it fell short of accurately representing private and civil society actors. Moreover, the presence of transnational tech corporations as, in the words of Dr. Abigail Gilbert, ‘pseudo-nation states’ highlighted the need to broaden the range of actors involved in shaping consensus around AI.

Glitch founder and activist Seyi Akiwowo was clear in denouncing the failure of these technologies to attract participation from underrepresented groups in society. If this technology does not serve their needs, why would they use it? On this point, Katerina Spranger (Oxford Heartbeat) stressed the importance of user involvement in system design: the surgeons using her company’s technology needed to be educated about it in a non-coercive way so as to build trust and ensure sustainable uptake.

Similarly, open source software was mentioned as a potential pathway to risk reduction. Though open source AI is not yet clearly defined, enabling diverse actors to participate in model design may reduce systemic risk and ensure different perspectives are embedded in these systems. Moving away from proprietary software also increases transparency in model design. GitHub’s Peter Cihon reminded us that open source development currently pioneers transparency in AI and is considered industry best practice for explainability and accountability, two serious issues in AI.

In what may have been the UK’s most AI-focused week in history, it was vital to hold as many discussions, with as broad a range of actors, as possible. AI Fringe did a wonderful job of complementing the AI Safety Summit. Though it may not reflect the UK Government’s official stance on the matter, AI Fringe felt like a solid first step in an inclusive conversation about society’s priorities for the future of AI.

The full sessions of the event are available on YouTube and the AI Fringe website.
