Regulating the Rapid Rise of AI
The rapid rise of artificial intelligence is transforming the political, economic and social landscape of the world. Fast-developing technologies such as ChatGPT and other AI systems are transforming the ways in which humans interact with information and with each other. These technologies will allow people around the world to reach new frontiers of knowledge and unprecedented levels of productivity, transforming labour markets and producing record levels of economic growth and social progress. Within the next decades, AI will become ever more integrated into the world order, with the capacity to perform surgeries, participate in courtrooms, fly airplanes, and deliver a steady stream of new products and medical innovations to global markets.
Such a phenomenon seemed unimaginable just a year ago; yet generative AI systems are already surpassing humans in certain skills. They can write more clearly and persuasively than most humans, and they can generate complex images and computer code from simple prompts. Within five years, tech firms plan to introduce “brain scale” AI models with more than 100 trillion parameters (a measure of an AI system’s scale and complexity), an amount roughly equal to the number of synapses in the human brain. Unlike other technologies, AI possesses a hyper-evolutionary nature that allows its computing power to double at ever faster rates. Moreover, if AI systems are programmed with self-improving capacities, as AI developers are currently working towards, they will not only continue to grow rapidly in scale and capability, but will also be able to improve their own systems on their own, surpassing human abilities without human control.
The rapid advancement of AI systems must therefore be met with great precaution. Countries around the world will be tasked with learning to cope with this once-unimaginable phenomenon and to ensure its continued growth, while paying close attention to AI’s potentially disastrous risks to humanity. The significant impact that the growing omnipresence of AI technology is beginning to have on the job market, as well as the threat of AI being used as a cyberweapon by terrorist groups, are further reasons for serious concern. Governments across the world must therefore respond to the blistering spread of AI systems and their many potential negative ramifications by creating a framework for AI regulation that does not stifle AI innovation, but ensures it develops in a manner that protects civil liberties and human rights.
Three already very different frameworks for governmental regulation have emerged in the United States, Europe, and China, each reflecting different values and political incentives. The United States has adopted a market-driven approach to digital regulation, with tech companies in the driver’s seat of progress, reflecting a great faith in markets and leaving limited room for government intervention. The US views the rapid digital revolution as a source of political freedom and economic growth; Washington is therefore reluctant to impose strict regulatory measures that would restrict AI growth. Additionally, AI serves as an opportunity to assert US tech dominance amid competition and political tensions with China.
On the other end of the spectrum, China has pursued a state-driven approach to AI regulation as a primary component of its efforts to assert itself, like the US, as the world’s leading tech superpower. Within China, AI systems have mostly been used as tools for state censorship, propaganda and surveillance, all in support of the greater aim of strengthening the power of the Chinese Communist Party. Differing from both the US and China, the EU has adopted a rights-driven approach, tailored not to the interests of tech companies but to the rights of users and citizens amid the transformation and potential risks that AI growth presents. The EU approach is more grounded in the rule of law and democratic governance than those of its two counterparts, and the EU has already passed binding legislation on digital tech regulation to ensure the protection of individuals’ fundamental rights.
Despite these vastly differing approaches to digital regulation, governments across the world must take one important measure to keep up with the growth of AI: include tech companies in the conversation. In a recent panel discussion with the Council on Foreign Relations, Ian Bremmer, president and founder of the Eurasia Group and GZERO Media, noted that everything ‘involving the AI space and the sovereignty of that decision-making process is, as of right now, overwhelmingly in the hands of a small number of technology companies. They’ve got the resources. They have the knowledge. And they know where it’s going.’ While this does not render governments powerless amid the accelerating development of AI, it highlights that if governments want to play any significant role in the AI revolution, and to implement an efficient and informed regulatory framework that captures the benefits of AI while mitigating its potentially disastrous effects, they cannot do it alone. Governments will need to learn to work with AI companies to shape the pertinent role AI technology will play in the future world order.
Image courtesy of Tara Winstead via Pexels, ©2021. Some rights reserved.
The views and opinions expressed in this article are those of the author and do not necessarily reflect those of the wider St. Andrews Foreign Affairs Review team.