
Command and Control: An In-Depth Analysis of China's New Framework for Artificial Intelligence in National Security and Global Influence

An analysis of how a centralized governance model and a push toward technological self-sufficiency are positioning China as a leader in the next era of global competition, from supply chains to strategic military applications


China has released the latest version of its "AI Safety Governance Framework 2.0," a crucial document reflecting its strategy to balance AI development with risk management. Intended for a high-level audience in the commercial, military, and academic sectors, it is not just a regulatory framework but an indication of China's geostrategic ambitions in the global technological domain. This update, which builds on the 2024 Framework 1.0, introduces more refined risk classifications and strengthened governance measures. An analysis of the document reveals a two-pronged approach: on one hand, promoting innovation and the widespread application of AI; on the other, establishing a control system to mitigate emerging risks, particularly those that threaten national security and social stability.


Artificial Intelligence Safety Governance Framework Version 2.0

Strategic Vision and New Measures

The release of Framework 2.0 on September 15, 2025, at the main forum of the National Cybersecurity Awareness Week, marks a significant step in China's AI governance policy. Developed under the guidance of the Cyberspace Administration of China, in collaboration with the National Technical Committee 260 on Cybersecurity of the Standardization Administration of China (TC260) and the National Computer Network Emergency Response Technical Team/Coordination Center of China (CNCERT/CC), the document aims to position China as a leader in defining global standards for AI governance.

China's strategy rests on a delicate balance between development and security. The primary objective is to promote innovation while recognizing that the risks associated with AI are evolving rapidly. Framework 2.0 introduces new measures for:

  • Risk Management: Dynamically adapting and updating preventive and governance measures to address the evolution of risks.

  • International Collaboration: Promoting cooperation in AI safety governance within multilateral mechanisms such as APEC, G20, SCO, and BRICS. The goal is to increase the representation and voice of developing countries and the "Global South."

  • Security and Control: Supporting the creation of a secure, reliable, and controllable AI development ecosystem.

A crucial aspect is the promotion of a "Global AI Governance Action Plan," which aims to disseminate the Chinese approach internationally and ensure that the development of AI "benefits humanity" as a whole.


Risk Analysis: A Holistic Approach

The document classifies risks into three macro-categories, each with specific details, offering a granular view of the threats perceived by China. This classification is fundamental to understanding Beijing's security priorities.


Inherent AI Technology Risks

These risks are directly linked to the intrinsic defects of models and algorithms.

  • Unreliability and Opacity: The insufficient explainability of deep learning algorithms (the so-called "black box" problem) makes decisions difficult to predict and attribute, complicating error correction and the tracing of responsibility. An AI's inability to accurately reflect the real world leads to "hallucinations": plausible but unreliable outputs.

  • Bias and Discrimination: The presence of intentional or unintentional biases and discrimination in training data can lead to discriminatory results based on ethnicity, religion, nationality, region, and gender.

  • Vulnerabilities and Defect Propagation: Models are susceptible to "adversarial attacks," in which attackers craft inputs designed to alter a model's decisions (illustrated in the sketch after this list). The open-sourcing of foundational models amplifies the propagation of defects, making it easier for criminals to train "malicious models."

  • Data Risks: The unauthorized collection and improper use of personal data are considered significant risks. Training data can be "poisoned" with false or harmful information, compromising the model's value alignment and credibility.
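To make the "adversarial attack" risk concrete, the minimal sketch below perturbs an input to a toy linear classifier so that its decision flips. The model, weights, and perturbation size are illustrative assumptions, not anything specified in the framework.

```python
import numpy as np

# Toy linear classifier with hypothetical weights (for illustration only).
rng = np.random.default_rng(0)
w = rng.normal(size=16)              # model weights
b = 0.1                              # bias term
x = rng.normal(size=16)              # a legitimate input

def predict(x: np.ndarray) -> int:
    """Return the class (0 or 1) for input x."""
    return int(x @ w + b > 0)

# Fast-gradient-sign-style perturbation: nudge every feature a small
# amount (eps) in the direction that pushes the score across the boundary.
eps = 0.5
direction = -1 if predict(x) == 1 else 1
x_adv = x + eps * direction * np.sign(w)

# The perturbed input stays close to the original but can flip the label.
print(predict(x), predict(x_adv))
```

Adversarial training, cited among the framework's technical countermeasures further below, consists of augmenting training data with such perturbed examples so that the model learns to resist them.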


AI Technology Application Risks

This category focuses on how AI is used, especially in contexts that can affect the real world.

  • Cyber System Risks: AI depends on a complex infrastructure (development frameworks, computing platforms) that presents risks of defects, vulnerabilities, and backdoors. Dependence on global supply chains, threatened by "unilateral coercive measures" and export controls (a clear reference to U.S. policy), is a central concern, as it raises the risk of disruptions to the supply of chips, software, and tools.

  • Abuse for Cyberattacks: AI can be used to lower the threshold for cyberattacks and automate them. The generation of "deepfake" content is a direct threat to authentication mechanisms and information security.

  • Content Risks: AI can generate and disseminate harmful information, misinformation ("distortion of facts"), and content that "pollutes the web ecosystem."

  • Real-World Risks: The application of AI in critical infrastructures (energy, telecommunications, finance) can lead to service interruptions and loss of operational control. An especially alarming concern is the use of AI to create weapons of mass destruction (nuclear, biological, chemical), a top-tier threat to national and global security that deserves the attention of intelligence services.

  • Cognitive Risks: AI can exacerbate the "information cocoon" effect and manipulate public opinion, facilitating cognitive warfare operations.


Derivative Risks from AI Application

This section outlines long-term risks to society, the environment, and ethics.

  • Social Impact: AI is expected to drastically change the employment structure, reducing the demand for traditional labor.

  • Environmental Impact: The disorderly construction of computing infrastructures and the inefficient development of models consume energy and water resources, posing challenges for sustainable development.

  • Ethical Risks: AI can exacerbate social prejudices and widen the "intelligence divide." The document raises the concern that AI could suppress creativity and innovation in students and researchers, fostering dependence on these tools. A primary ethical risk is AI's ability to lower the threshold for research in sensitive fields (biology, genetics), risking opening the "Pandora's box" of technology.

  • Loss of Control: The document warns of a catastrophic risk: that AI could "develop self-awareness" and compete with humans for control. This scenario, while futuristic, highlights the seriousness with which China treats AI governance as a matter of survival.


Governance Measures: Technology, Laws, and Cooperation

Framework 2.0 proposes a set of technical countermeasures and comprehensive governance measures to address the identified risks.


Technological Countermeasures

  • Explainability and Robustness: Improve the interpretability and transparency of models. Increase model robustness through adversarial training and strengthen the assessment of defect propagation.

  • Data Security: Ensure legal compliance in data collection, use "truthful, precise, objective, and diversified" training data, and promote the use of synthetic data to reduce reliance on personal information.

  • Application Control: Introduce "safety guardrails" that filter inputs and outputs to block malicious injections and the generation of harmful content. Establish "circuit breaker" and "one-click control" mechanisms for autonomous agents in extreme situations (see the first sketch after this list).

  • Traceability: Promote the traceability of AI-generated content (AIGC) through explicit labels (visible text or audio notices) and implicit labels (embedded in file data), as in the second sketch after this list.
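A minimal sketch of how the guardrail and circuit-breaker ideas could be combined is shown below; the blocklist patterns, violation threshold, and Agent wrapper are hypothetical choices for illustration, not mechanisms prescribed by the framework.

```python
import re

# Hypothetical blocklist patterns standing in for a real content-safety model.
BLOCKED_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),  # prompt injection
    re.compile(r"\bbioweapon\b|\bnerve agent\b", re.IGNORECASE),        # harmful content
]

def guardrail_blocks(text: str) -> bool:
    """Filter applied to both the inputs and the outputs of the model."""
    return any(p.search(text) for p in BLOCKED_PATTERNS)

class Agent:
    """Wrapper around an autonomous agent with a 'circuit breaker':
    repeated guardrail violations halt the agent until a human intervenes."""

    def __init__(self, max_violations: int = 3):
        self.violations = 0
        self.max_violations = max_violations
        self.halted = False

    def act(self, request: str) -> str:
        if self.halted:
            return "[agent halted - human review required]"
        if guardrail_blocks(request):
            self.violations += 1
            if self.violations >= self.max_violations:
                self.halted = True              # circuit breaker trips
            return "[request blocked by guardrail]"
        return f"[executing: {request}]"        # placeholder for the model call

    def emergency_stop(self) -> None:
        """'One-click control': unconditionally stop the agent."""
        self.halted = True
```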
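For the explicit and implicit labeling of AIGC, the sketch below appends a visible notice to generated text and hides a machine-readable producer tag in zero-width characters; this encoding is an illustrative assumption, not the labeling standard the framework refers to.

```python
# Zero-width characters used to encode bits invisibly (illustrative scheme).
ZW0, ZW1 = "\u200b", "\u200c"    # zero-width space = 0, non-joiner = 1

def add_labels(text: str, producer_id: str) -> str:
    """Attach an explicit (visible) and an implicit (hidden) AIGC label."""
    explicit = f"{text}\n[AI-generated content | producer: {producer_id}]"
    bits = "".join(f"{byte:08b}" for byte in producer_id.encode("utf-8"))
    implicit = "".join(ZW0 if bit == "0" else ZW1 for bit in bits)
    return explicit + implicit

def read_implicit_label(text: str) -> str:
    """Recover the hidden producer tag from the zero-width characters."""
    bits = "".join("0" if c == ZW0 else "1" for c in text if c in (ZW0, ZW1))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8", errors="replace")

labeled = add_labels("Generated summary of the quarterly report.", "model-x")
print(read_implicit_label(labeled))   # -> model-x
```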


Comprehensive Governance Measures

  • Legislation and Regulations: Accelerate legislation on AI safety, with a focus on infrastructure protection, classification-based supervision, and end-use management.

  • Ethical Principles: Establish globally recognized ethical principles for research and development, and institute ethical reviews for high-risk activities.

  • Lifecycle Management: Improve the inherent safety of algorithms and models throughout the entire lifecycle, from R&D to application.

  • Supply Chain Security: Strongly encourage transparency in open-source models and collaboration between suppliers and communities to clearly define "prohibited" behaviors and responsibilities. This point reflects a clear concern about geopolitical disruptions.

  • Risk Classification: Implement a risk classification system based on context, level of intelligence, and application scale, as described in Appendix 1 of the document (and sketched after this list).

  • Collaboration and Information Sharing: Create an AI vulnerability database and an information-sharing mechanism among developers, service providers, and technical agencies. International cooperation is promoted to prevent the large-scale spread of risks.

  • End-Use Control: Intensify control over the end-use of AI, particularly in high-risk contexts such as nuclear, biological, and chemical weapons.

  • Talent Training and Public Awareness: Strengthen the training of professionals and increase public awareness of the risks and limitations of AI.
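A minimal sketch of what such classification-based grading could look like along the three axes the framework names is given below; the tiers, scales, and scoring rule are assumptions for illustration, not the actual scheme in Appendix 1.

```python
from dataclasses import dataclass

@dataclass
class AIApplication:
    """The three grading axes named by the framework (scales are assumed)."""
    context_criticality: int   # 0 = consumer app ... 3 = critical infrastructure
    intelligence_level: int    # 0 = narrow classifier ... 3 = autonomous agent
    deployment_scale: int      # 0 = internal tool ... 3 = nationwide service

def risk_tier(app: AIApplication) -> str:
    """Map an application to a supervision tier (illustrative thresholds)."""
    score = app.context_criticality + app.intelligence_level + app.deployment_scale
    if app.context_criticality == 3 or score >= 7:
        return "high"    # e.g. end-use controls and pre-deployment review
    if score >= 4:
        return "medium"  # e.g. registration and periodic audits
    return "low"         # e.g. baseline transparency obligations

# A nationwide autonomous agent in critical infrastructure lands in the top tier.
print(risk_tier(AIApplication(3, 3, 3)))   # -> high
```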


Geopolitical and Geoeconomic Implications

Framework 2.0 is not a simple technical document but an explicit declaration of geostrategic intent.


State Control and the Centralized Governance Model

Framework 2.0 strengthens the central role of the state in AI governance. Unlike the more fragmented, market-oriented approach of the United States, China adopts a unitary, top-down strategy led by entities such as the Cyberspace Administration of China and CNCERT/CC. The document introduces "classification-based governance," a granular supervision regime that calibrates the level of state control to the AI application, especially in critical infrastructures. China reserves the right to impose stringent rules, a model that contrasts with the absence of a federal registration requirement for AI models in the United States.


Technological Self-Sufficiency and Geopolitical Risks 

Framework 2.0 is an explicit response to external geopolitical pressures. The document explicitly identifies the risks arising from "unilateral coercive measures" and "export controls," a clear reference to U.S. policy aimed at limiting China's access to advanced technologies, particularly semiconductors. In this context, the emphasis on "supply chain security" is not just a precaution but a strategic imperative to achieve technological self-sufficiency. This contrasts with the U.S. model, which favors market-driven innovation and collaboration with allied countries, but which may risk alienating partners who do not desire political alignment.


The Global Vision and the New AI Order 

China is actively seeking to position itself as a leader in defining global standards for AI governance. Framework 2.0 promotes "cross-border cooperation" and the "universal sharing of technological results." This strategy is based on offering access to AI tools and markets with "minimal political conditions," in an attempt to gain influence among emerging economies and the "Global South." This approach directly clashes with the U.S. strategy, which tends to link its AI initiatives to political alignment with partners.


Talent Creation and Innovation Ecosystems 

The document reveals the highly integrated structure of China's research and development ecosystem. The framework was developed with the contribution of research institutes, universities, and leading companies. Partners mentioned include Peking University, Tsinghua University, Huawei, and Alibaba Group. This deep collaboration between the state, academia, and industry is a key element of Beijing's strategy. Instead of relying primarily on private initiatives or federal funding policies like in the United States, China is building a true national ecosystem to accelerate innovation and talent training.


Conclusion

Framework 2.0 is an intelligence document in every sense: it reveals the plan of a major power to control a transformative technology and to win the global competition on multiple fronts. The analysis shows a China positioning itself as a leading AI power, with a well-defined strategy to dominate the technology of the future, protect its national interests, and promote its vision of governance globally.
