The rapid development of Artificial Intelligence (AI) has given rise to a plethora of challenges for humanity. From generative AI producing highly advanced text, imagery, and audio, to AI-powered systems effectively substituting for human workers across a wide range of skilled professional jobs, it is evident that AI possesses the potential to alter not just the frontier of technological possibility, but also the very social fabric and political structures of the countries and communities humanity inhabits.
As the world's two largest national economies, the U.S. and China have both adopted, with varying degrees of success, regulations aimed at moderating and setting guardrails on the pace and objectives of AI development.
The White House published its landmark Blueprint for an AI Bill of Rights in October 2022, with the intention of “protect[ing] Americans from the risks posed by AI, including through preventing algorithmic bias” across a spectrum of domains, including home valuation, law enforcement, and digital information dissemination. OpenAI CEO Sam Altman's testimony in the Senate in May 2023 reinforced this momentum, as he called for more centrally coordinated regulation of AI research and deployment, plausibly via the establishment of a new U.S. agency. Notably, OpenAI is also amongst the very companies that pushed back against the extensive controls proposed by the EU, advocating instead softer measures, e.g. guidance and nudging, in lieu of punitive laws and concrete sanctions for inappropriate or detrimental uses of AI.
On the other side of the Pacific, Beijing has rolled out some of the most sweeping regulations concerning generative AI. In July this year, the Cyberspace Administration of China (CAC) and several ministry-level departments published the final version of the ‘Interim Measures for Administration of Generative Artificial Intelligence Services,’ which took effect in mid-August. Whilst the measures were calibrated to reflect the business interests and commercial rights of the AI industry, itself a growing pillar of Chinese economic growth, it is clear that the central administration is concerned by, and seeks to pre-emptively respond to, the risks that AI poses in services and goods accessed by the general public.
What is palpably missing from both policy regimes, however, is an acknowledgment that the harms of unbridled AI development extend across borders, with very real potential consequences for the whole of humanity. Neither “American citizens” nor “Chinese citizens” captures the full range of stakeholders who could be harmed by misalignment between AI and humans, by the militarisation or mobilisation of AI in conflict scenarios, or, indeed, by the deliberately malicious deployment of AI by hostile non-state actors.
With Washington stepping up its efforts to curb China's progress in ‘sensitive technologies,’ and with China growing increasingly defensive and guarded about prospective espionage and leaks of sensitive information (see the newly revised Counter-Espionage Law, which came into effect on July 1, 2023), the prospects for Sino-American collaboration on AI do not look particularly hopeful. Indeed, a formal mechanism for coordination, dialogue, and engagement on such critical issues remains elusive.
As I have argued elsewhere, the intersection of geopolitical tensions and the rise of artificial intelligence could give rise to uniquely idiosyncratic risks, especially in light of the escalating great-power rivalry between China and the U.S. Consider the danger of AI systems trained on vastly different data sets, a product of both the firewalling-off of data within China and the exclusion of Mainland China and Hong Kong by leading generative AI companies, developing ‘hostile’ judgments and preferences towards the ‘other’; the potential incorporation of dual-use AI into military or para-military operations; and the susceptibility of AI-native systems to sophisticated cyber-mercenaries seeking to stoke conflict. For all these reasons, mistrust between China and the U.S. compounds pre-existing risks associated with AI.
As noted by former Secretary of State Henry Kissinger, any future military confrontation between the U.S. and China, as unlikely and unthinkable as it should be, would be the first ‘AI war’ in the history of humanity. There can be no winners in such a conflict, only losers. It is no hyperbole to say that those fighting for better Sino-American relations are also fighting to prevent humanity from sleepwalking into Armageddon.
More specifically, on the subject of AI regulation, there are several pieces of ‘low-hanging fruit’ that Beijing and Washington should consider urgently and seriously.
First, China and the U.S. should promptly establish a Sino-American AI safety coordination committee, featuring a mixture of politicians from the executive branches, leading technocrats and bureaucrats, leading academics in AI regulation and safety, and representatives of key private-sector players. This committee should meet frequently, both online and offline, with the latter meetings taking place on neutral territory acceptable to both countries. The objectives here are threefold: (a) to share, wherever possible, findings and insights concerning AI behaviours and mechanics; (b) to identify mutually agreeable rules of AI engagement and regulation; and (c) to develop ‘limits’ on the data and methods with which AI in China and the U.S. is trained, in order to prevent U.S.- or China-trained AI from acquiring actively antagonistic preferences towards the other country and its citizens.
Second, both countries must recognise that certain areas of AI research should remain open to international funding from all sources - including from the ‘other.’ It is understandable, albeit regrettable, that the governments of China and the U.S. alike are imposing additional restrictions and directing greater scrutiny towards international academic collaborations in select advanced technological sectors. Whether it be the ignominious China Initiative or the limited number of American students enrolling in Chinese universities, the chilling effects are palpable.
Yet the least both countries should commit to is a full renewal of the key symbolic pact affirming Sino-American scientific and technological cooperation: a 44-year-old agreement that has been extended for six months but has yet to be fully renewed. Additionally, Chinese and American universities should create clearly stipulated, and ideally relatively lenient, guidelines concerning jointly funded research projects at the intersection of AI with law, safety, regulation, and public policymaking. These fields involve comparatively little ‘sensitive information’: they are unlikely to touch upon the cutting-edge technological designs and data that both countries treat as highly sensitive.
By differentiating ‘technical research’ (e.g. the raw constitution and design of semiconductors, quantum computing, and advanced AI) from ‘management research’ (e.g. the governance and regulation of AI), China and the U.S. can carve out a unique space that enables them to weather the storm of the ‘national securitisation’ of everything. Indeed, the U.S. may find that it has something to learn from the expansive and outcome-oriented nature of China's AI regulations, whilst Chinese policymakers would benefit from observing the organic, interactive consultative processes playing out in Washington between leading tech companies, policymakers and parliamentarians, and technical advisors. Both sides can learn from one another, in more ways than one.
Third, China and the U.S. alike should be open to working with other leading geopolitical players and economies to draw upon AI in collectively combating global challenges. The deleterious effects of climate change cannot be reversed, but they can at least be mitigated through AI-informed disaster planning and rezoning. Food shortages cannot be alleviated through AI alone, but AI simulations could prove pivotal in improving crop yields and agricultural productivity. It is not the case that AI brings only doom and harm; that would be an unduly techno-pessimistic view. What is needed, in the face of the plethora of challenges outlined above, is techno-pragmatism: an outlook that enables us to deploy AI for the good of all, as opposed to the few, whilst remaining fully cognizant of and responsive to potential risks and challenges.
If the U.S. and China cannot put their heads together and set aside some of their differences on AI regulation and safety, the whole world will suffer.