
Munich in the Age of AI: When AI Security Meets European Strategic Autonomy

Mar 06, 2026
  • Xiao Qian

    Deputy Director, Center for International Security and Strategy at Tsinghua University

The Munich Security Conference reveals a shift in transatlantic ties as AI security becomes central to global debate. Europe is increasingly linking AI policy to technological autonomy, seeking to balance cooperation with the United States while strengthening its own strategic capabilities.

One of Europe's largest AI factories strengthens Europe's digital sovereignty. © Deutsche Telekom AG / Cindy Albrecht

One of Europe's largest AI factories strengthens Europe's digital sovereignty. (Photo: Deutsche Telekom AG / Cindy Albrecht)

The 62nd Munich Security Conference (MSC) was held from February 13 to 15, 2026. If past conferences focused on tanks, missiles, and alliance commitments, the newly added keywords at Munich in 2026 were models, computing power, and chips. 

The prominence of artificial intelligence (AI) and other critical technologies on this year’s agenda was not merely a reflection of technological hype; it signaled a deeper structural transformation in the global security architecture. Particularly noteworthy is that as “AI security” has become a central concern within Western policy circles, Europe’s long-standing debate over “strategic autonomy” is increasingly shifting toward technological autonomy and digital sovereignty. AI is emerging as a pivotal lever through which Europe seeks to reposition itself in a changing geopolitical order. 

2024: A Transatlantic Moment of Alignment 

Looking back at the MSC 2024, it unfolded in the immediate aftermath of the AI Safety Summit in Bletchley Park, United Kingdom. At that time, the Biden administration actively promoted a transatlantic framework for AI safety cooperation, elevating AI safety into a key institutional bond within the Atlantic alliance. The underlying logic was straightforward: amid intensifying technological competition with China, the United States and Europe needed to forge a common position on AI governance. 

Through summit diplomacy, policymakers advanced a “safety-first” consensus. Frontier AI systems were framed as potential sources of systemic risk requiring governance at the state level. A shared risk narrative began to take shape around issues such as frontier model risks, national security spillovers, and the prevention of misuse and malicious applications. 

Efforts were also made to establish a network of national AI safety institutes. The United States created its own AI safety institute and worked with the United Kingdom and the European Union to coordinate cooperation among their respective institutions. Institutionalized channels of dialogue were developed in areas such as model evaluation, red-teaming, and risk classification. 

In addition, the United States and Europe increasingly coordinated on export controls and technology governance. Restrictions on advanced chips, management of computing thresholds, and investment screening policies were rolled out on both sides of the Atlantic. AI was also incorporated into NATO’s framework on “emerging and disruptive technologies,” extending discussions into military applications, strategic stability, and crisis management. 

The year 2025 witnessed a quiet transition from AI safety to AI security. Following the Paris AI Summit, the United Kingdom’s Artificial Intelligence Safety Institute (AISI) quietly changed its name to the AI Security Institute. A few months later, the United States also renamed its counterpart as the Center for AI Standards and Innovation (CAISI), redirecting its emphasis from broad AI safety toward addressing national security risks and preventing what it described as “burdensome and unnecessary” overseas regulation. The shift was not merely a matter of rebranding; it reflected a transformation in risk perception—from concerns about technical hazards to a focus on national security threats. 

At this stage, security governance functioned not only as a tool for risk management but also as a platform for alliance consolidation. AI security became a form of institutional glue, helping to keep the United States and Europe aligned in both values and regulatory approaches. 

2026: From Alignment to Autonomy 

Yet the MSC 2026 sent a different signal: Europe is no longer focused solely on coordination with Washington; it is increasingly prioritizing technological autonomy. 

Discussions on European competitiveness, digital sovereignty, and supply chain resilience gained prominence. The central question is no longer simply “how to cooperate with the United States,” but increasingly “how to reduce dependence and strengthen Europe’s own capabilities.” Debates over how to narrow the technological gap with both the United States and China, how to enhance Europe’s digital competitiveness, and how to avoid excessive reliance on external computing power and platforms all reflect a shift in Europe’s internal strategic thinking. 

Several European leaders articulated similar positions at the conference. European Commission President Ursula von der Leyen stated that Europe must reduce its dependence on external powers and enhance its own autonomy—not only in defense and energy, but also in critical technological domains, including digital technologies, in order to safeguard Europe’s security and prosperity. UK Prime Minister Keir Starmer called on Europe to build “hard power,” achieving greater self-reliance in defense, industry, and technology, and to develop a more distinctly European framework for defense cooperation. French President Emmanuel Macron similarly stressed that, in light of a shifting geopolitical environment, Europe must strengthen both defense and technological autonomy, reduce external dependencies, and advance a “Europe First” approach across key industrial chains, including AI and other digital technologies. These remarks reflect a clear policy ambition among European leaders to extend the concept of strategic autonomy from traditional security domains into high-end technological fields such as AI. 

This shift is not merely rhetorical; it has structural roots. 

First, dependence on external compute has become increasingly evident. Europe remains highly reliant on U.S. firms for advanced chips, cloud platforms, and large-scale model development. The intensifying global AI race has magnified this asymmetry. 

Second, industry and security are becoming deeply intertwined. Data centers, semiconductor supply chains, and critical minerals have all been incorporated into national security frameworks, with technological dependence increasingly framed as strategic vulnerability. 

Third, uncertainty within the transatlantic alliance has grown. Since the remarks delivered by U.S. Vice President J.D. Vance at the MSC 2025 sparked significant debate, transatlantic relations have experienced renewed political volatility and policy uncertainty. Europe has consequently become more sensitive to the long-term risks of relying heavily on the U.S. technological ecosystem. 

Fourth, divergences in development philosophy persist across the Atlantic. The Trump administration reinforces a state–technology champion model, emphasizing the consolidation of competitive advantage through close alignment with leading U.S. tech firms. The European Union, by contrast, has long maintained a structurally cautious stance toward major technology companies. From the General Data Protection Regulation (GDPR) to the EU AI Act, Europe has preferred to shape order through regulatory frameworks rather than platform alliances. The U.S. government’s reliance on large technology corporations thus stands in visible tension with the EU’s structurally skeptical posture toward Big Tech. 

The MSC 2026 revealed a subtle but meaningful recalibration: for Europe, AI security is no longer solely about cooperation with the United States; it is equally about strengthening Europe’s own capacity. On the one hand, Europe continues to value collaboration with Washington on AI safety standards. On the other hand, it is placing growing emphasis on capability-building, including, but not limited to, compute infrastructure, data ecosystems, model development, cybersecurity, and system robustness. This rebalancing does not signal confrontation, but rather structural adjustment. AI security is both a domain of cooperation and a field of competition. 

From AI Security to AI Sovereignty 

European strategic autonomy was once primarily articulated in the domains of defense and energy. Today, technological strategic autonomy has emerged as a new dimension. At the MSC 2026, this trend was evident in discussions on digital sovereignty and standard setting, the localization of critical technology supply chains, the EU’s leadership in shaping AI security rules, and the development of European computing and model capabilities. 

On AI security, the divergence between Europe and the United States is becoming increasingly clear. 

In the United States, AI security debates are closely tied to frontier model capabilities, extreme risks, and strategic competition. A high degree of coordination has formed between the government and major technology companies. Security is often framed as the challenge of preventing loss of control or adversarial exploitation while maintaining technological leadership. 

For the European Union, however, AI security is first and foremost a question of fundamental rights, privacy and social stability, rather than merely technological capability. Through legislative frameworks such as the EU AI Act, the EU embeds safety and security within institutional boundaries. With risk classification, compliance obligations, and transparency requirements at its core, security is translated into rules governing market access. In this sense, AI security is not merely a tool of risk management; it has become an instrument for reshaping technological sovereignty. 

Europe’s Positioning in the Age of AI 

The rapid advancement of AI has ushered global security competition into an age of AI. In this context, Europe faces a fundamental question: should it continue to act primarily as a rule-maker complementing the U.S. technological ecosystem, or should it pursue a higher degree of independence in computing power, model development, and supply chains? 

In late January 2026, the European Parliament adopted, by an overwhelming majority, a comprehensive report calling on the European Commission to assess and reduce Europe’s dependence on non-EU suppliers in critical technological domains, including semiconductors, cloud infrastructure, software, and AI systems. The report also proposed the development of a “European tech stack” and open-standard infrastructure. Although the resolution does not carry binding legal force, it signals that political consensus has moved beyond regulatory governance toward a broader strategic vision centered on industrial autonomy and digital sovereignty. 

The changes emerging from MSC 2026 suggest that Europe is attempting to strengthen capabilities beyond rule-making and to reinforce autonomy alongside cooperation. This does not imply decoupling from the United States. Rather, it reflects an effort to reduce singular dependencies and secure greater agenda-setting leverage. 

As security debates shift from nuclear deterrence to algorithmic risk, and as alliance cooperation expands from joint military exercises to shared computing resources and model evaluation, Munich has entered a new historical phase. Future security forums will no longer focus solely on troop deployments; they will also debate model training. They will not only negotiate treaties; they will also shape technical standards. 

In this AI era, whether Europe can successfully transform AI security into a fulcrum of strategic autonomy will determine its position in the next reconfiguration of global order. 

The MSC 2026 may be just one moment in this transition, but it has already made one thing clear: the center of gravity in security is shifting from steel to code—and Europe is trying to redefine its role in that transformation.  
