By equating artificial intelligence data flows with national security risks, the United States has effectively designated China as a presumptive problem. This has not only soured the atmosphere for bilateral AI cooperation but also promises to cast a long shadow over global AI collaboration.
Artificial intelligence is redrawing the contours of international competition at unprecedented speed. Amid the escalating global tech rivalry and mounting geopolitical pressure, cross-border flows of data have emerged as a highly sensitive regulatory flashpoint for nations. This is especially pronounced in China-U.S. relations, where data security and digital sovereignty are becoming increasingly politicized and scrutinized through the lens of national security. However, if security-driven restrictions are misapplied or overused, they could readily morph into instruments of technological decoupling, thereby undermining the long-term interests of AI development for both China and the United States, as well as the global community at large.
As AI technology advances, data has become a vital resource for training models, refining algorithms and informing business decisions. In recent years, major developed countries, including the U.S. and those in Europe, have been actively devising and updating AI-related data security rules in the name of data sovereignty and security reviews. Countries worldwide have been tightening control over data through legislation mandating that data be stored and used locally. As of 2023, 40 economies had rolled out 96 data localization measures, and more than two-thirds of these not only mandate local data storage but also ban cross-border data flows. This has given rise to a fragmented regulatory environment, which has intensified the challenges facing developing countries. Multinational tech companies, by disseminating their technological frameworks through cloud services and algorithm platforms, also risk undermining the digital sovereignty of developing countries. In addition, disparities in personal data protection standards across countries have fostered a fragmented data governance landscape, impeding the cross-border data flows that are essential for AI development.
In recent years, the U.S. has employed a range of measures, including laws and regulations, technology export bans, investment restrictions and supply chain decoupling, to intensify the "de-sinicization" of data and AI systems and to sever industrial linkages.
First are legal and policy restrictions. Since 2023, the U.S. has progressively tightened its export controls, specifically bringing key capability elements such as "core models" and "training computing power" within the scope of regulation. The primary goal is to prevent these crucial technologies from being transferred abroad, and China and other nations deemed strategic competitors have been the main targets of the restrictions.
In January, the U.S. Department of Commerce’s Bureau of Industry and Security, or BIS, revised the country’s export administration regulations and unveiled the “Framework for Artificial Intelligence Diffusion.” The framework aims to strengthen export controls on advanced chips and AI models, build a reliable ecosystem for the U.S. and its allies, protect national security and foreign policy interests and promote the responsible diffusion of AI technologies. The Committee on Foreign Investment in the United States has also expanded its review scope, stipulating that Chinese-funded mergers and acquisitions, technology licensing deals and data cooperation projects involving AI data-related enterprises or key AI technologies may be blocked or vetoed.
Second are technological constraints. U.S. companies such as OpenAI and Nvidia are currently withholding access to high-quality training data and application programming interfaces (APIs) from China, while fortifying their strategic monopoly over the core resources of large models. For Chinese large models to gain access, they must demonstrate their own transparency and pass local security reviews.
Third are stipulations on data localization and partitioned storage. For instance, the U.S. is rallying its allies to establish a data circle of trustworthy countries, which effectively blocks data interconnectivity with China. Concurrently, the U.S. is working with the European Union and Japan to create data trust zones, thereby strengthening the data barriers of the technological alliance.
The restrictive measures outlined above mean that Chinese AI technologies and enterprises will encounter not only the hard barriers of a technology blockade but also the soft barriers of discriminatory rules.
By equating AI data flows with national security risks, the U.S. has effectively designated China as a presumptive risk. This has not only soured the atmosphere for bilateral AI cooperation but also promises to cast a long shadow over global AI collaboration.
To begin with, it is increasingly difficult to acquire high-quality training data. Amid tightening controls on global data flows, AI model training is shifting toward localization. Should Chinese AI models be denied access to medical images, academic literature and English-language semantic data, their capabilities in scientific research and multilingual tasks would be severely curtailed.
In addition, the global collaborative capabilities of the AI industry chain will be impaired. The global operation of AI enterprises relies on the free flow of data, shared algorithms and the interconnectivity of computing infrastructure. At present, global data barriers not only affect model training but also hinder the global delivery capabilities of AI cloud services.
Meanwhile, disparities in AI safety standards and certification requirements will also make exporting AI products more complex. The EU's AI Act, for example, requires algorithm explainability and designates a list of "high-risk" AI systems, while the U.S. relies on corporate self-regulation and transparency ratings. The same product therefore needs to be engineered to satisfy multiple compliance regimes.
Moreover, data alliances among trusted allies will further exacerbate the fragmentation of the global AI landscape. In recent years, in the name of "trusted data circulation," the U.S. has collaborated with the EU, Japan and Australia to form the Democratic Data Alliance. As a result, global cross-border data flows have shifted from being "free" to being "trusted." Looking ahead, cross-border data flows are likely to achieve only conditional interconnectivity predicated on national security, ethical review and algorithm controllability, or they may need to be routed through trusted, neutral third-party data hubs to facilitate compliant transfers.
The development of AI is fundamentally predicated on openness, collaboration and data sharing. Over-securitizing and politicizing cross-border data flows will not only curtail the accuracy and generalization capabilities of AI models but also impede cooperation between China and the U.S. in global public interest domains such as healthcare, education and climate modeling.
Despite their institutional and value differences, China and the U.S. share extensive common interests in AI standards-setting, algorithm ethics, model safety and the prevention of misuse. Establishing a trustworthy, limited and transparent data exchange mechanism, for instance in areas such as labeling generative AI content, governing deepfakes and sharing data for autonomous driving, would be mutually beneficial.
Therefore, restrictions on cross-border data flows should not become an impediment to China-U.S. AI cooperation. Data should not become a set of handcuffs that fragments global AI development; rather, it should serve as a foundation that connects future cooperation. Only in this way can we avert the balkanization of AI technology, which would only lead to closed loops. And only through data openness can a global consensus be achieved on technological ethics and the sharing of AI benefits.