09 November 2023
By Gonzalo Grau
Whilst intended to be a ‘complementary’ event to the Global AI Safety Summit held at Bletchley Park, AI Fringe embodied the desire of those ‘left out’ of the summit to deepen their involvement in discussions around AI deployment and regulation. I attended day 4 of the event series, which revolved around work, safety, law, and democracy. As is usually the case with these big topics, there was a strong, palpable sense of excitement in the dimly lit lecture hall tucked away in London’s British Library.
In line with their participatory ethos, AI Fringe invited a broad range of participants from the public sector, civil society, academia, and tech. It was refreshing to see this diversity on an individual level as well — panellists came from diverse backgrounds and cultures. I felt this reflected a desire to broaden a discussion traditionally confined to the limits of technical expertise into one that recognises AI as a fundamentally socio-technical issue. That is, AI’s performance can only be understood and improved upon if we consider its social and technical aspects as inextricably linked and of equal importance.
After some captivating discussions touching on the future of work, the role of open source software in AI development, responsible engineering, and the democratic implications of these technologies, I came away with four common talking points.
The Global AI Safety Summit was a step in the right direction. Holding a global discussion on the potential risks of powerful generative AI models – maybe even Artificial General Intelligence (AGI) – was necessary for value alignment and setting common goals.
That being said, focusing on hypothetical threats of existential proportions risks overshadowing the actual harms of AI in the present. These harms are well documented. The black-box nature of some predictive models, as well as their reliance on inherently biased data sources, often results in discriminatory decisions that are difficult to contest. These limitations can cause serious harm in sensitive use cases like predictive policing, HR, and financial risk management.
Diverting our attention to long-term scenarios risks amplifying these harms. As Dr. Abigail Gilbert, Head of Praxis at the Institute for the Future of Work, put it, ‘79% of firms in the UK are adopting AI, but not all of them are innovation-ready.’ As systems develop on the foundations of current models, mistakes and mismanagement will remain ingrained in future versions, potentially at a much larger scale. Ensuring that today’s AI uptake is safe and responsible is key to mitigating future risks. Our decisions matter now.
On this point, the opinions of panellists were more or less in line with the UK Government’s stance. Former Secretary of State for Science, Innovation and Technology Chloe Smith highlighted that the need for industry-specific regulation should not result in overly granular and stifling legislation.
Instead, AI regulation should be approached as a balancing act: legislation should be adaptable, yet robust enough to adhere to core principles. Though Rishi Sunak has made it clear that the UK will not ‘rush to regulate’ AI, there is some consensus on the need for some ‘big red lines’. Amanda Brock, CEO of OpenUK, echoed the need for a principles-based approach to legislating on AI.
Examples of such legislation exist, and some were drafted well before AI development accelerated to its current pace. The Oxford Internet Institute’s Brent Mittelstadt points to the Data Protection Directive of 1995 as a good example of how to go about regulating frontier models. The European Union’s AI Act is another example of sober AI legislation. Sarah Chander, senior policy advisor at European Digital Rights, emphasised that because it had been in development long before the launch of generative models like ChatGPT, it was protected from the distortionary effects of generative AI hype.
Much of the debate around AI regulation has focused on one side of the market, yet the question of who bears the transitional risk of AI uptake — whether the burden of adaptation and risk mitigation falls on the demand side or the supply side — is an important one. In the labour market, the discussion around how to ‘augment’ jobs rather than automate them has focused on equipping workers with the skills they need to interact with AI in a sustainable way.
Wilson Wong of the Chartered Institute of Personnel and Development pointed out that re-skilling and lifelong learning are ‘not a silver bullet’. He is right — many workers may struggle with re-skilling due to factors outside their control, such as disabilities or caring responsibilities. To ensure AI uptake in the workplace doesn’t disproportionately affect workers from vulnerable groups, it’s important to consider the skills workers already have and design implementation strategies to complement these. Similarly, regulators should make an effort to understand how software works before designing overly draconian compliance requirements.
Legislators should be careful not to overburden specific actors with adaptation and mitigation requirements — responsibilities should be spread evenly to reflect individual capabilities. In a general sense, shaping the development of these systems is as important as equipping consumers for AI deployment.
The idea that inclusivity and representation are important to responsible AI development is not news. This inclusivity should not be limited to training data, however. The AI Safety Summit was partly inclusive in extending invitations to AI superpowers like China and to actors from the Global South, but it fell severely short of accurately representing private and civil society actors. Moreover, the presence of transnational tech corporations as, in the words of Dr. Abigail Gilbert, ‘pseudo-nation states’, highlighted the need to broaden the range of actors involved in shaping consensus around AI.
Glitch founder and activist Seyi Akiwowo was clear in her denunciation of these technologies’ failure to attract participation from underrepresented groups in society. If this technology does not serve their needs, why would they use it? On this point, Katerina Spranger (Oxford Heartbeat) stressed the importance of user involvement in system design — the surgeons using her company’s technology needed to be educated about it in a non-coercive way, so as to build trust and ensure sustainable uptake.
Similarly, open source software was mentioned as a potential pathway to risk reduction. Though the definition of open source AI is not yet settled, enabling diverse actors to participate in model design may reduce systemic risk and ensure different perspectives are embedded into these systems. Moving away from proprietary software also increases transparency in model design. GitHub’s Peter Cihon reminded us that open source currently pioneers transparency in AI development and is considered industry best practice for explainability and accountability, two serious issues in AI.
In what may have been the UK’s most AI-focused week in history, it was vital to hold as many discussions, with as broad a range of actors, as possible. AI Fringe did a wonderful job of complementing the AI Safety Summit. Though it may not reflect the UK Government’s official stance on the matter, AI Fringe felt like a solid first step in an inclusive conversation about society’s priorities for the future of AI.
The full sessions of the event are available on YouTube and the AI Fringe website.