24 November 2023
By Livia Martinescu
Existential AI risks are those that threaten the existence of humanity. Three commonly proposed scenarios are:
Loss of control: Rogue AIs take steps to avoid human oversight or control and pursue undesirable goals to our detriment, for example through power-seeking or deceptive activities.
Displacement: AI becomes more intelligent than humans and capable of updating and improving itself. Such a system competes with humans for resources and control, and faces evolutionary pressure to displace the human species.
Catastrophic misuse: AI is used to create chemical and biological weapons. A lack of specialist knowledge no longer prevents non-specialist actors from developing such weapons, and AI could significantly expedite the creation of drugs or pathogens, threatening human existence.
In the interest of time, let’s break down the last scenario. A common argument is that AI has the potential to lower the barriers to creating chemical or biological weapons, in particular by lowering the barriers to acquiring the knowledge required to design and produce them. Before generative AI systems, the knowledge and skills required to (1) identify harmful molecules and (2) create them in a lab were limited to a small group of experts. AI systems have proven useful for both (1) and (2). Therefore, AI opens the door to non-experts, i.e. many more actors, creating their own chemical or biological weapons.
However, while AI can accelerate access to the knowledge required to make such weapons, this is not a game changer for the threat we already face from biological and chemical weapons. Other barriers to an attack are still in place, including acquiring the resources and infrastructure needed to create and disseminate a weapon. These are arguably bigger barriers than access to knowledge was before AI. There are also ways to acquire the knowledge needed to make biological weapons that do not require AI and are possibly cheaper and faster. We need to ask: are there many actors currently prevented from making biological or chemical weapons only by a lack of knowledge? If we think the answer is no, then AI is unlikely to significantly change the risk these weapons pose.
The focus should instead be on controlling the other factors of production, with closer examination of non-proliferation and enforcement. While it’s understood that control regimes cannot be flawless, the key lies in refining regulatory frameworks, particularly in managing the licensing of technology and implementing dual licensing approaches.
It is important that governments focus on the right priorities and that their decisions reflect the more immediate risks posed by AI.
Photo credit: https://www.vpnsrus.com/