Where Is the China-U.S. AI Relationship Headed?
The rise of artificial intelligence has brought China and the United States to a historic inflection point. They may find it difficult to become partners in the AI field, yet they share a responsibility to ensure that they do not become joint contributors to systemic, or even existential, risk.
Looking back at China-U.S. technology relations in 2025, artificial intelligence undoubtedly stands out as one of the most emblematic keywords. It is both the central driving force of a new technological revolution and the most sensitive and volatile variable in contemporary great-power competition. The evolving dynamics surrounding AI are reshaping the structural trajectory of China-U.S. relations, while casting a long shadow over global technology governance and international security.
Yet when viewed over a longer time horizon, the past decade tells a more complex story. For much of this period, China and the United States maintained a highly interactive relationship in the field of AI. During the early surge of deep learning, researchers from both countries were consistently present at the same leading international conferences, engaging in professional exchanges on algorithms, safety and ethics. At the corporate level, companies competed and collaborated within a relatively open technological ecosystem. Early consensus around principles such as “human-centered AI,” “mitigating algorithmic bias” and “maintaining meaningful human control” gradually emerged from precisely this kind of international epistemic community.
However, this trajectory—once not inherently zero-sum—has undergone a marked shift in recent years.
The changing status of AI is first reflected in the evolution of U.S. policy narratives. As technological capabilities have advanced and application scenarios have expanded, the U.S. conceptual framework for AI has gradually moved from viewing it as a “next-generation general-purpose technology” to defining it as a “core technology of national competition” and ultimately elevating it to a national security priority.
During the Trump administration, official U.S. documents explicitly called for “winning the AI race,” framing AI as a core technology that would determine future national security, military superiority and economic competitiveness. This formulation itself signaled that AI had been formally incorporated into the main arena of geopolitical rivalry. Technological issues were no longer confined to questions of efficiency and innovation; they were re-coded as security concerns bearing directly on national power.
This shift was not a short-term tactical adjustment, but a structural turn that has continued to shape policy to this day. The “securitization” of artificial intelligence is profoundly transforming the underlying logic of China-U.S. relations.
Against this backdrop, the erosion of trust has become the most immediate constraint on cooperation. In discussions about AI governance, the term "trust" is frequently invoked: trust in machines, trust in human-machine collaboration, and trust in the people who design and deploy AI systems. Trust is both the most fundamental element and the most valuable resource. Yet in international relations, trust is never a purely emotional variable. It is institutional and structural in nature. What China and the United States currently face in the field of artificial intelligence is a classic condition of "trust deficit."
On one hand, the U.S. has increasingly interpreted China’s AI development through the lens of institutional rivalry and strategic confrontation, viewing it as a potential systemic challenge. On the other, China widely perceives the U.S.-led AI framework as an exclusive system of rules and technological alliances, and it believes that behind the security-oriented narrative lies a strategic intent to contain China’s development.
The intrinsic characteristics of AI further amplify this mistrust. As a highly complex, opaque and rapidly evolving technological system, AI makes intentions difficult to verify and risks difficult to assess. What this generates is not merely concern over technical accidents but fear of strategic miscalculation.
Within such a structure of mutual suspicion, even where both sides share substantial common ground on certain risks, such as the danger of loss of control in the military applications of AI, the misuse of generative models or vulnerabilities in critical infrastructure, these shared concerns have rarely translated into practical cooperation. Any cooperative initiative is first weighed on a political scale: Does it advantage the other side? In other words, the issue is not whether China and the United States understand the risks posed by AI but whether they trust each other to act with relative restraint and responsibility.
It is important to note that there are significant cognitive differences between China and the United States regarding the very notion of trust. In Western traditions of security governance, trust is often understood as something that can be gradually built through rules, mechanisms and transparency. In China’s diplomatic and security culture, by contrast, trust is more commonly viewed as a precondition for cooperation rather than its natural byproduct. From the Chinese perspective, advancing institutionalized cooperation in technologically sensitive domains under conditions of profound distrust is often seen as carrying risks that outweigh potential benefits.
Yet the distinctive nature of AI is quietly reshaping this logic. Unlike traditional technologies, AI is characterized by high uncertainty, low predictability and rapid diffusion. Once systems move beyond control, their spillover risks frequently transcend borders, alliances and institutional divides. In this sense, limited cooperation in the AI domain is less about building broad strategic trust and more about seeking a shared understanding of risk.
Under current conditions, expecting China and the United States to achieve “strategic mutual trust” in AI is unrealistic. A more feasible path would be to promote limited cooperation that is risk-oriented and technically focused. Such cooperation would not presuppose value convergence or institutional integration; rather, it would concentrate on issue areas that are relatively low in political sensitivity yet high in global public relevance. These might include exchanges of information about AI system failures and accidents, baseline safety principles for high-risk AI applications, technical discussions on model misuse and abuse, communication channels in times of crisis and risk notification within international multilateral platforms. The purpose of such cooperation would not be to eliminate competition, but to prevent competition from spiraling out of control; not to establish trust, but to reduce the probability of miscalculation.
At present, global AI governance is exhibiting pronounced trends of fragmentation. Some countries are accelerating rule consolidation through alliance mechanisms, excluding many others—including China—from the rule-making process. This structural exclusivity is undermining the overall effectiveness of global risk governance.
As political theorist John Herz argued in his theory of the “security dilemma,” when countries stop cooperating out of fear, they often end up making everyone less secure. In today’s environment, if China and the United States were to abandon all cooperation on AI, it would not make either side safer—instead, it could increase the risk of broader instability. Historical experience repeatedly demonstrates that within highly competitive international systems, even minimal communication and cooperation can function as an important mechanism of strategic restraint and confidence building.
Early in 2026, we are compelled to raise what appears to be a simple yet increasingly urgent question: In an era marked by profound distrust, does meaningful China-U.S. cooperation on AI remain possible? This is not an abstract exercise in idealism. It is a practical challenge with direct implications for global technological risk management, international security and our shared future.
The answer may depend less on grand political declarations than on a more sober recognition that what ultimately determines the prospects for cooperation is not technological capability alone but whether both sides are willing, even in an environment of deep distrust, to preserve a minimal space for rationality, so as to avoid unintended escalation or loss of control.
Standing at this historical inflection point, China and the United States may find it difficult to become partners in the AI field. Yet they share a responsibility to ensure that they do not become joint contributors to systemic—or even existential—risk. Such a choice may not resolve all differences, but it could serve as one of the few stabilizers still worth attempting in an uncertain age.
