The Belt and Road News Network

AI Guideline for Global Collaboration, Risk Management

By LIN Yuchen       13:34, September 30, 2025

Over the past year, the rapid evolution of AI technologies and applications has created both unprecedented opportunities and growing risks. To address them, the Chinese government has drafted the AI Safety Governance Framework 2.0 in collaboration with professional institutions, research organizations, and leading enterprises.

The new framework was released on September 15 during the 2025 National Cybersecurity Awareness Week. It is a significant upgrade of the first version launched last year, which was closely tied to the Global AI Governance Initiative and drew widespread international attention.

Framework 2.0 refines risk classifications, introduces graded approaches to different levels of risk, and establishes dynamic mechanisms for updating countermeasures.

It reflects China's efforts to balance innovation with governance, while promoting global consensus on AI security, ethics, and regulation. It is designed to shape a safe, trustworthy, and controllable AI ecosystem, while encouraging collaboration across borders, industries, and sectors. It also signals China's commitment to multilateral cooperation, ensuring inclusive sharing of AI's technological benefits, and promoting equitable distribution of development opportunities worldwide.

AI is reshaping economies, societies, and ways of life, representing both a new frontier for human progress and a source of risks that cannot be ignored.

Guided by the principle of human-centered and beneficial AI, Framework 2.0 stresses inclusive but prudent innovation, risk-based agile governance, a balance of technical and managerial safeguards, and open international cooperation. It calls for a secure environment for innovation, guarding against threats to national security, public interest, and individual rights.

It highlights the importance of monitoring technological trends, classifying risks dynamically, and assigning clear security responsibilities to developers, service providers, and users. At the same time, it underscores the need for trustworthy AI applications that avoid uncontrollable risks to human survival and development.

The framework details various risk categories. They span technical issues (such as bias, robustness failures, and adversarial attacks), data risks (including poisoning and privacy leakage), application security risks (vulnerabilities, misuse of critical infrastructure, or malicious exploitation), and broader societal risks (such as disinformation, crime facilitation, and potential weaponization).

For each category, it provides technical measures, governance tools, and operational guidelines for research and deployment.

With this update, China seeks to establish an AI governance model that advances security and innovation in equal measure, while promoting dialogue and cooperation worldwide to safeguard the shared future of AI.

Source: Science and Technology Daily