Security

AI Safety Is Where U.S.-China Cooperation Still Matters

May 11, 2026
  • Xiao Qian

    Deputy Director, Center for International Security and Strategy at Tsinghua University


As discussions grow around the upcoming visit by U.S. President Donald Trump, much attention has focused on tariffs, trade, and semiconductors. Many expect that artificial intelligence will also feature prominently on the agenda. 

But if AI is discussed only through the lens of technological rivalry, both countries may miss one of the few remaining areas where meaningful cooperation is still possible – and urgently needed. 

In recent years, Washington’s approach to AI has increasingly shifted from innovation governance towards strategic competition. The language of “winning the AI race” now dominates many policy discussions in the United States. Export controls, investment restrictions and technology alliances are increasingly framed through national security considerations. 

It is true that AI is rapidly becoming a foundational technology with major economic, military and geopolitical implications. No major power wants to fall behind. Yet the problem is that AI is not only a competitive technology. It is also a technology that may cause systemic risks. 

Some of the most serious risks associated with advanced AI do not stop at national borders and cannot be managed by one country alone. Whether in AI-enabled cyber operations, biological research, autonomous military systems or large-scale disinformation, instability created in one country can quickly spill across the international system. 

This is why AI safety should become an important topic during any future high-level engagement between China and the United States. 

China has repeatedly demonstrated that it is open to international dialogue on AI governance and safety. In recent years, Beijing has issued multiple policy documents and governance initiatives emphasizing the importance of balancing innovation and safety, while Chinese institutions, scientists and policy experts have actively participated in international AI safety discussions. 

This point is often overlooked in the current geopolitical atmosphere. Much of the international discussion today focuses on strategic competition in AI capabilities, chips and infrastructure. But behind the headlines, there is also another reality: Chinese and American experts are still talking to each other seriously about AI risks. 

Despite growing tensions between the two countries, Track II dialogues involving scientists, researchers and policy experts from both sides have continued quietly over the past few years. These discussions cover issues ranging from loss-of-control risks to AI-enabled cyber threats and AI-bio risks, and from terminology clarification to confidence-building measures. 

I have personally observed that many experts on both sides, regardless of political differences, share similar concerns about the future trajectory of AI development. There is growing recognition that some risks are simply too large for any country to handle independently. 

Both China and the United States have a responsibility to mitigate AI-related risks. As the world’s two leading AI powers, decisions made in Beijing and Washington will shape not only the future of technological development, but also the global risk environment surrounding advanced AI systems. 

The latest International AI Safety Report, led by Turing Award winner Yoshua Bengio and authored by over 100 AI experts, highlights two disconcerting areas that deserve particular attention. The first is the intersection between AI and cyber attacks. Advanced AI systems are rapidly changing the cyber domain by lowering the barriers to sophisticated attacks, accelerating automated vulnerability discovery and increasing the scale of information manipulation. In an environment already characterised by deep mistrust between major powers, AI-enabled cyber incidents could easily be misinterpreted as intentional escalation. 

The second area is AI and biological risks. AI is already accelerating scientific discovery in medicine and biology in remarkable ways. But many scientists also worry that increasingly capable AI systems could lower the technical barriers for harmful biological misuse. This concern is no longer limited to governments. It increasingly involves the possibility that small groups or even individuals could gain access to capabilities once restricted to advanced state laboratories. 

Neither China nor the United States benefits from such a future. This is precisely why maintaining channels of communication on AI safety matters, even during periods of broader strategic competition. One of the underappreciated dangers of the AI era is that system failures may increasingly be seen as hostile intent. As AI systems become more autonomous and opaque, it may become harder for governments to distinguish between technical malfunction, unintended escalation and deliberate attack. In a crisis situation, this ambiguity itself could become destabilising. 

History has repeatedly shown that excessive securitisation of technology can deepen mistrust and fragment the global innovation ecosystem. Today, there is a growing risk that AI governance itself becomes absorbed entirely into geopolitical confrontation. 

If every AI advance is interpreted primarily through a zero-sum framework, then safety cooperation becomes politically difficult precisely when it is most necessary. 

A more pragmatic approach would recognise that AI safety cooperation serves the interests of both countries. China and the U.S. could expand expert dialogues on frontier AI risks, strengthen communication channels regarding major AI incidents, support joint discussions on AI evaluation and safety testing, and explore confidence-building measures in areas related to military AI and cyber stability. 

Some foundations already exist. Over the past few years, Chinese and American experts have worked together through a number of Track II dialogues on AI and international security. Scientists and researchers from both countries continue to exchange views on risk governance and emerging technological threats. These exchanges may not attract headlines, but they remain valuable because they help reduce misunderstanding in an increasingly uncertain technological environment. 

At a time when many traditional channels of bilateral trust have weakened, AI safety remains one of the few areas where constructive engagement is still possible. 

For Washington, however, this also requires a shift in mindset. Treating China exclusively as a strategic rival in AI may generate domestic political consensus, but it risks narrowing the space for practical cooperation on shared risks. Not every aspect of AI governance should be viewed through the framework of technological containment or ideological competition. 

The real question facing both countries is not simply who leads in AI capability development. It is whether the world’s two largest powers can act responsibly enough to prevent AI-related risks from escalating into crises that affect everyone. 

That is why AI safety is not a peripheral issue in U.S.-China relations. It may become one of the most important tests of whether cooperation between the two countries is still possible in this age of uncertainty. 
