
Navigating the Complex World of AI

Jan 09, 2024
  • Li Zheng

    Assistant Research Professor, China Institutes of Contemporary International Relations


At the China-U.S. summit in San Francisco, the two leaders agreed on a number of proposals for potential collaboration, including the establishment of an intergovernmental dialogue mechanism on artificial intelligence, a matter of common concern for all mankind. On this new technology topic, China and the U.S. have some past experience to guide them, while also facing new challenges. They need to find a point of balance in their complex ties, which mix competition and cooperation, and identify a direction for global AI governance.

The two countries have both conceptual agreement and practical differences on AI. On one hand, they share a desire to advance AI governance and to control strategic risks in the field. In November, 28 countries, including China and the United States, together with the European Union, signed the Bletchley Declaration, articulating a shared view that AI poses a common challenge to all mankind and vowing to follow up with an AI governance agenda. On the other hand, the U.S. has imposed unilateral sanctions to prevent China from acquiring the most advanced AI technologies and devices from American sources. These suppressive measures will inevitably affect the atmosphere of dialogue and cooperation, contribute to misunderstandings and lead to misjudgments.

The U.S. sees the complex relationship as having elements of both competition and cooperation, and it argues that the competitive elements will not affect the collaborative part, which should be protected for the greater good of all mankind. In the U.S. view, AI cooperation should be treated the same as cooperation on climate change and cybersecurity: the two major powers had heated debates on both topics but in the end arrived at practical cooperation. Yet AI differs significantly from those two areas, and it will be no easy job for the two countries to copy their past experience and proven pathways. This is clear across the near, medium and long term.

In the near term, AI remains at an early stage of development, so it is hard to evaluate all of its risks and benefits with reliable accuracy. At present, a new wave of AI is advancing at high speed. It has the potential to be applied in more fields and to accelerate scientific research and development in various countries. No country can afford to ignore the huge potential benefits of this technology, and none can afford to fall behind in international competition in the sector.

In this context, risks and challenges are considered secondary to potential benefits. Countries are therefore more reluctant to accept international rules that are as binding as those that address climate change and cybercrime. As a result, there will not be strong third-party pressure for China-U.S. cooperation on AI governance.

In the medium term, AI will develop in diverse ways that are dispersed and prone to proliferation. Compared with other cutting-edge technologies, AI relies more on the open source ecosystem, with many technological advances stemming from collaboration and brainstorming among scientists from various countries. This trend will not fundamentally change in the future, meaning that transnational tech elites, rather than countries or companies, will have the most direct impact on the direction of AI development. These non-state, non-corporate actors will make governance more complicated and the tracing of risks more difficult. Sole reliance on intergovernmental cooperation and corporate governance will not be sufficient to protect the public interest from the negative impacts of AI development.

In the long run, AI poses a grave and potentially sudden strategic risk. In December, the United Nations High-Level Advisory Body on Artificial Intelligence issued an interim report concluding that it is impossible to draw up a comprehensive list of AI risks, and that human society will have to make all-around adjustments.

According to some experts, the biggest risk of AI lies in its getting out of control, proliferating and destroying human societies. In that scenario, the digital infrastructure and information ecosystems of most countries could suffer devastating damage and immeasurable losses. Moreover, the evolution of AI can be abrupt: some technological breakthroughs involving strategic risks may arrive suddenly, beyond the expectations of any regulatory authority. Once such a situation emerges, AI may be turned against human society, and all of its benefits will be instantly erased.

The proposed China-U.S. dialogue and cooperation on AI needs to take these factors into account and adjust accordingly. The two countries need to discuss the various risks across different time horizons and at different levels, focus on the most important strategic risks and lead a global governance agenda to address them. The U.S. must squarely face the fact that the most important risks of AI do not come from international competition; export controls simply cannot eliminate strategic risks.

The United States should not deprive any country of its legitimate right to develop AI. Action should be taken to prevent the division and decoupling of the global technological ecosystem, which could leave some development pipelines outside regulatory supervision, or even beyond the attention of technical elites.

While advancing their intergovernmental dialogue, China and the U.S. also need to continue to expand AI discussions in tech communities, academia and think tanks. They must promote scientific research and development cooperation to steer AI onto constructive paths, thereby setting an example for other countries.
