Artificial intelligence is already turning geopolitics upside down

Angela Kane is Vice President of the International Institute for Peace, Vienna, and a former United Nations Under-Secretary-General.


Wendell Wallach is a Carnegie-Uehiro Fellow at the Carnegie Council for Ethics in International Affairs, where he is co-director of the Artificial Intelligence & Equality Initiative (AIEI).

The TechCrunch Global Affairs Project examines the increasingly intertwined relationship between the tech sector and global politics.

Geopolitical actors have always used technology to achieve their goals. Unlike other technologies, artificial intelligence (AI) is much more than just a tool. We don’t want to anthropomorphize AI or suggest it has its own intentions. It is not – yet – a moral agent. But it is fast becoming a primary determinant of our collective destiny. We believe that AI is already threatening the foundations of global peace and security because of its unique characteristics and its impact on other areas, from biotechnology to nanotechnology.

The rapid pace of AI development, coupled with the breadth of new applications (the global AI market is expected to grow more than tenfold between 2020 and 2028), means that AI systems are being deployed at scale without adequate legal oversight or full attention to their ethical effects. Often referred to as the pacing problem, this gap has left lawmakers and regulators simply unable to keep up.

After all, the effects of new technologies are often difficult to foresee. Smartphones and social media were embedded in everyday life long before we fully understood their potential for abuse. Likewise, it took time to grasp the implications of facial recognition technology for privacy and human rights.

Some countries will use AI to manipulate public opinion by controlling what information people see and by using surveillance to curtail freedom of expression.

Looking further ahead, we have little idea which lines of research currently under way will lead to innovations, or how those innovations will interact with one another and with the wider environment.

These issues are especially acute in AI, as the way learning algorithms arrive at their conclusions is often unfathomable. When unwanted effects come to light, it can be difficult or impossible to determine why. Systems that are constantly learning and changing their behavior cannot be continuously tested and certified for safety.

AI systems can act with little or no human intervention. You don’t have to read a science fiction novel to imagine dangerous scenarios. Autonomous systems threaten to undermine the principle that there must always be an agent, human or corporate, who can be held accountable for actions in the world, especially when it comes to issues of war and peace. We cannot hold the systems themselves accountable, and those who deploy them will argue that they are not responsible when the systems behave in unpredictable ways.

In short, we believe that our societies are not prepared for AI – politically, legally or ethically. Nor is the world prepared for how AI will transform geopolitics and the ethics of international relations. We distinguish three ways in which this can happen.

First, developments in AI will shift the balance of power between nations. Technology has always shaped geopolitical power. In the 19th and early 20th centuries, the international order was based on emerging industrial capabilities – steamships, airplanes and so on. Later, control of oil and natural gas supplies became more important.

All the great powers are well aware of AI’s potential to advance their national agendas. In September 2017, Vladimir Putin told a group of schoolchildren: “Whoever becomes the leader [in AI] will become the ruler of the world.” While the US is currently at the forefront of AI, Chinese tech companies are advancing rapidly and are demonstrably ahead in developing and applying specific lines of research, such as facial recognition software.

Domination of AI by superpowers will increase existing structural inequalities and contribute to new forms of inequality. Countries that already lack access to the internet and depend on the generosity of wealthier countries will be far behind. AI-powered automation will transform employment patterns in ways that benefit some national economies over others.

Second, AI will empower a new set of geopolitical players outside of nation-states. In some ways, leading digital technology companies are already more powerful than many countries. As French President Emmanuel Macron asked in March 2019, “Who can claim to be sovereign single-handedly, against the digital giants?”

The recent invasion of Ukraine is an example of this. National governments responded by imposing economic sanctions on the Russian Federation. But perhaps just as impactful were the decisions by companies such as IBM, Dell, Meta, Apple and Alphabet to shut down operations in the country.

Likewise, when Ukraine feared the invasion would disrupt its internet access, it appealed for help not to a friendly government but to tech entrepreneur Elon Musk. Musk responded by enabling his Starlink satellite internet service in Ukraine and providing receivers so the country could continue to communicate.

The digital oligopoly, with access to large and growing databases that fuel machine learning algorithms, is fast becoming an AI oligopoly. Given their vast wealth, leading companies in the US and China can develop new applications or acquire smaller companies that invent promising tools. Machine learning systems can also be helpful to the AI oligopoly in circumventing national regulations.

Third, AI opens up possibilities for new forms of conflict. These range from influencing public opinion and election results in other countries through fake media and manipulated social media posts, to disrupting the operation of other countries’ critical infrastructure such as electricity, transportation or communications.

Such conflicts will be difficult to manage and will force a complete rethinking of arms control instruments, which are ill-suited to dealing with such weapons. Current arms control negotiations require adversaries to have a clear view of each other’s capabilities and of their military necessity, but while atomic bombs, for example, are limited in how they can be developed and used, almost anything is possible with AI, whose capabilities can evolve both quickly and opaquely.

Without enforceable treaties limiting their deployment, autonomous weapon systems composed of turnkey components will eventually become available to terrorists and other non-state actors. There is also a high probability that poorly understood autonomous weapon systems will inadvertently trigger conflicts or escalate existing hostilities.

The only way to mitigate the geopolitical risks of AI and provide the flexible and comprehensive oversight needed is through an open dialogue about its benefits, limitations and complexity. The G20 is a possible meeting place, or a new international governance mechanism could be created to involve the private sector and other key stakeholders.

It is widely recognized that international security, economic prosperity, the public interest and human well-being depend on controlling the proliferation of deadly weapon systems and on limiting climate change. We believe they will increasingly depend at least as much on our collective ability to shape the development and trajectory of AI and other emerging technologies.


This post was originally published at https://techcrunch.com/2022/04/06/artificial-intelligence-is-already-upending-geopolitics/
