Welcome

Welcome to the official publication of the St Andrews Foreign Affairs Society. Feel free to reach out to the editors at fareview@st-andrews.ac.uk.

Artificial Intelligence’s Transformative Role in Statecraft and Warfare

In a world of tense strategic competition and geopolitical conflict, global powers are anxious to keep pace with the current technological revolution. As artificial intelligence rapidly spreads through every sector of international society, its role in transforming statecraft and warfare becomes both more pertinent and more perilous.

Espionage is a key example of long-established statecraft that will inevitably be transformed by artificial intelligence, which brings both advancements and potential complications to the profession. William J. Burns, Director of the Central Intelligence Agency, has described the vast influence that rapidly developing technologies, from “microchips to artificial intelligence to quantum computing,” will have on the profession of intelligence. Burns points out that in many ways such developments will “make the CIA’s job harder than ever, giving adversaries powerful new tools to confuse us, evade us, and spy on us.” This concern is on the agenda of intelligence agencies worldwide.

The adoption of AI systems will undoubtedly create important new opportunities for intelligence analysts, primarily by increasing the speed at which they collect large amounts of open-source data. As Burns discusses, the CIA is currently developing AI tools to digest “material faster and more efficiently” so that officers can focus on providing “reasoned judgments and insights on what matters most to policymakers” and to state interests. As information now travels at substantially greater speeds, intelligence decision-making needs to keep up. As international security expert Amy Zegart describes, it took President Kennedy thirteen days during the Cuban missile crisis to deliberate on the US’s next step after the discovery of Soviet missiles in Cuba; after the 9/11 attacks, it took thirteen hours to weigh the intelligence on the culprits and plan the US’s response. Zegart highlights that decision time today “could be thirteen minutes or less” with the help of AI.

Additionally, much of today’s human analytical work consists of mundane tasks that AI tools could automate at far greater speed. Locating Chinese surface-to-air missiles across China’s vast territory would take human analysts a great deal of time, while an AI algorithm “analyzing satellite images can reduce the number of suspect sites” and free up “bandwidth for humans to do higher-level analytical thinking.”

AI systems will make espionage more efficient and productive, but they will not overtake human analysts or render human intelligence obsolete. Humans will remain essential to the intelligence profession for their ability to weigh the wishes and intentions that lie behind intelligence decisions, and only humans can be trusted to collect certain information and conduct certain clandestine operations. The future success of intelligence services will thus rest on countries’ ability to blend the critical roles and skills of humans with emerging developments in AI.

What does present potential harm is the way in which integrating AI systems into modern militaries, through the deployment of autonomous weapons, will usher in a new era of technological warfare. Innovation in military technology has accelerated since the war in Ukraine began, with both Moscow and Kyiv making extensive use of remotely controlled drones, which has in turn spurred the faster development and fielding of autonomous drones. Saker, a Ukrainian company founded in 2021 that develops AI systems, last year began using the Saker Scout on a small scale: a fully autonomous weapon that uses AI to make decisions on its own, such as which targets to strike on the battlefield. The weapon’s deployment has not yet been officially verified, but the technology to create it does exist, raising a multitude of potential risks and ethical questions about the future of warfare.

As with their contributions to espionage, AI systems will accelerate information processing and shorten decision-making cycles by reducing the time it takes to “find, identify, and strike enemy targets.” Nothing, however, will stop competitors from seeking the same benefits for their own military operations, which will further escalate the use of automation and erode human control over the battlefield. The deployment of the Saker drone has thus highlighted for international society the urgent need for regulatory frameworks governing the use of autonomous weapons in order to limit their potential risks to humanity.

Leading AI scientists Stuart Russell and Yann LeCun have warned of the dangers of autonomous weapons, cautioning that “massive hordes of autonomous weapons” could be “deployed to target and kill thousands at a time.” Without restraint on the development of autonomous weapons, the future of warfare would lie beyond human control and be devoid of proper protection for both combatants and civilians. Over 250 nongovernmental organizations, including Amnesty International and Human Rights Watch, have joined campaigns calling for a legally binding international treaty banning the use of autonomous weapons. Although a complete ban does not seem feasible, as the military value of these weapons to states is too great, states must in the future commit to regulatory measures that securely control the development and deployment of AI-automated weapons. As the world is propelled into a new technological age, the integration of AI systems into statecraft and warfare must therefore be handled responsibly in order to mitigate their worst dangers.

Image courtesy of Ian Usher via Unsplash, ©2018. Some rights reserved.

The views and opinions expressed in this article are those of the author and do not necessarily reflect those of the wider St. Andrews Foreign Affairs Review team.

China’s Technological Odyssey: Exploring the Digital Silk Road Initiative

Pro-Choice, Always: France Enshrines Right to Abortion in Constitution