Author: Zhang Feng
This article discusses and compares the characteristics of AI standardization in China and the United States, exploring how the advancement of standardized infrastructure can reshape industrial development and fundamentally change the valuation logic of AI companies.
In recent years, the rapid development of artificial intelligence (AI) technology has propelled it from cutting-edge laboratory research to commercial applications across various industries. However, behind this technological frenzy, the valuation logic of AI companies has long been controversial, with market judgments often mixed with boundless optimism about the future. As the application of the technology enters deeper waters, risks and uncertainties are becoming increasingly prominent, prompting policymakers, regulators, and investors to seek more robust and sustainable development paths.
Against this backdrop, regulators and industry professionals in both China and the United States have turned their attention to the standardization and risk management of AI. It is clear that standardization is becoming a key driver for the AI industry to move from "storytelling" to "practical implementation."

I. Standardization Characteristics of AI Dictionaries and Risk Management in the United States

The U.S. Treasury Department recently released two new resources to guide the application of artificial intelligence in the financial sector: the Shared AI Dictionary and the Financial Services AI Risk Management Framework (FS AI RMF). This move supports the President's Artificial Intelligence Action Plan, which calls for clear standards, shared understanding, and risk-based governance to ensure the safe and responsible deployment of artificial intelligence.

"Implementing the President's AI Action Plan requires more than just idealistic rhetoric; it requires tangible resources that agencies can utilize," said Derek Thurler, Under Secretary of the Treasury. "By establishing a common language for AI and a risk management framework tailored to financial services, these deliverables help protect consumers while supporting responsible innovation."

The United States demonstrates a distinctly pragmatic, collaborative-governance approach to AI standardization, particularly in key areas such as finance. Its core lies in translating macro-level national strategy into actionable guidelines for individual institutions by building a common language and operational framework, thereby encouraging innovation while safeguarding security and stability.

First, the release of the Shared AI Dictionary marks a crucial step in addressing a fundamental challenge of AI governance: for a long time, AI terminology has varied significantly with academic background, application scenario, and stakeholder.
The "model interpretability" discussed by technology developers, the "algorithm transparency" emphasized by legal and compliance departments, and the "decision logic" understood by business units often point to issues at different levels. This inconsistency in terminology makes cross-departmental and cross-agency communication inefficient and poses significant challenges for regulation. The AI dictionary launched by the U.S. Treasury Department aims to break this "Tower of Babel" dilemma. By establishing a set of officially recognized, unified definitions for key AI concepts, capabilities, and risk categories, it puts regulators, technology experts, legal advisors, and business leaders on the same page. This not only helps financial institutions develop a consistent understanding of AI risks but also provides a clear benchmark for external regulation, supporting more consistent and predictable enforcement. This standardization of "language" itself reflects the high importance the United States places on the foundations of AI governance and is the cornerstone of any complex risk prevention and control system.

Secondly, the Financial Services AI Risk Management Framework is an "operating manual" built upon this unified language. The framework does not start from scratch; rather, it adapts and refines the macro-level AI Risk Management Framework published by the National Institute of Standards and Technology (NIST), aligning it closely with the specific context of financial services. This tailor-made approach reflects the flexibility and precision of U.S. regulation. The core features of the FS AI RMF are its full-lifecycle coverage and its scalability.
It covers the complete AI lifecycle from design, development, and validation through deployment, monitoring, and updating, guiding institutions on how to identify AI application scenarios, assess potential risks, and embed accountability, transparency, and operational resilience into every stage of AI deployment. Crucially, the framework is designed to be scalable and flexible, adaptable to the specific needs of institutions of varying sizes and complexities, from startups to large multinational financial institutions. For example, a small fintech company can use the framework's simplified tools for an initial risk assessment, while a systemically important bank may require a more elaborate governance structure. This tailored design significantly increases the likelihood of widespread industry adoption.

Finally, AI standardization in the United States exhibits a distinct pattern of public-private partnership and multi-party governance. Neither the dictionary nor the risk management framework was developed solely by regulatory agencies; both were collaborative efforts involving public-private bodies such as the Financial and Banking Information Infrastructure Committee and the Artificial Intelligence Executive Oversight Group under the Financial Services Sector Coordinating Council. Positive feedback from industry organizations such as the Cyber Risk Institute further underscores the framework's industry acceptance. This multi-party model ensures that standardized outcomes reflect both regulators' concerns about security and stability and the industry's considerations of innovation efficiency and cost. The ultimate goal is to "support faster and wider adoption of AI in the financial sector," empowering the industry by enhancing cybersecurity and operational resilience rather than simply erecting obstacles.

II. Characteristics of China's AI Terminology and Risk Management Framework

China has official terminology standards corresponding to the U.S. Treasury's AI dictionary and risk management framework, as well as a national-level AI security governance and risk management system, together forming a multi-level, full-process governance framework. Its core characteristics can be summarized as "promoting development through standards and ensuring security through regulations," as China strives to establish rule-making leadership in fierce global AI competition while safeguarding the healthy, orderly development of its domestic industry.

The system is anchored by two core documents: the national standard "Information Technology—Artificial Intelligence—Terminology" (GB/T 41867-2022) and the "Artificial Intelligence Security Governance Framework" (Version 2.0, September 2025). It is supplemented by GB/T 46347-2025, "Artificial Intelligence Risk Management Capability Assessment," which provides organization-level classifications of AI risk management capability, assessment procedures, and compliance guidelines. Meanwhile, the "Interim Measures for the Administration of Generative Artificial Intelligence Services" (2023) sets out mandatory requirements for security assessment, filing, content review, and data compliance for generative AI services. In addition, there are best-practice standards such as the AI application risk management guidelines issued in key industries like finance, healthcare, and education.

Compared with the United States' pragmatic, industry-segmented, incremental approach, China's construction of AI terminology and risk management frameworks demonstrates a stronger emphasis on top-level design, faster progress, and closer integration with national strategy.

Firstly, in terms of terminology standardization, China has adopted a systematic, forward-looking construction strategy.
Led by the Standardization Administration of China, the country is accelerating construction of an AI standards system covering multiple levels, including common foundations, supporting technologies, products and services, industry applications, and security and governance. For example, the published national standard on AI terminology aims to provide a basic "common language" for the entire AI field. Unlike the US Shared AI Dictionary, which focuses on the specific domain of financial services, China's terminology standardization is more holistic, attempting to clarify AI's basic concepts, technological classifications, and development stages at the root. The advantage of this approach is that it provides a unified foundation for the subsequent development of sub-standards in individual industries, effectively preventing contradictions and conflicts between industry standards and reflecting China's institutional capacity to concentrate resources on major undertakings. At the same time, the development of these terminology standards closely tracks the international frontier, striving to integrate China's practices and understanding of AI into the international standards system and to enhance China's voice in global AI governance.

Secondly, in terms of risk management frameworks, China exhibits a prominent characteristic of "ethics first, safety as the foundation." China's AI governance framework is profoundly shaped by its legal regimes for cybersecurity, data security, and personal information protection. Regulators such as the Cyberspace Administration of China, the Ministry of Industry and Information Technology, and the Ministry of Public Security have issued a series of normative documents targeting specific technologies and applications, such as algorithmic recommendation, deep synthesis, and generative AI, forming a multi-layered regulatory matrix.
For example, for generative AI services China pioneered an algorithm filing and security assessment system, requiring service providers to take responsibility for the legality of training data, the fairness of algorithms, and the authenticity of generated content. Compared with the US FS AI RMF model, which emphasizes internal governance and risk self-assessment, this regulatory approach is more mandatory and reflects bottom-line thinking. It clearly defines "red lines" for AI development, with especially stringent requirements in areas such as data security, ideological security, and the protection of citizens' rights. China's risk management framework is thus more of an external compliance constraint, driving companies to establish internal risk control systems in order to meet regulatory requirements.

Finally, the advancement of AI standardization in China is highly synergistic with industrial development and national strategic goals. Standardization is regarded as key infrastructure for AI to empower the real economy and achieve high-quality development. In the financial sector, for example, the Financial Technology Development Plan issued by the People's Bank of China explicitly requires strengthening the supply of standards for AI financial applications, covering areas such as intelligent risk control, intelligent marketing, and intelligent customer service. These standards focus not only on risk prevention but also on improving the efficiency and inclusiveness of financial services. The underlying logic is that standardizing technical interfaces, data formats, and evaluation methods lowers collaboration costs up and down the industrial chain and promotes large-scale application of AI in finance.
At the same time, the implementation of standards provides a "touchstone" for leading technology companies, prompting them to turn mature technical solutions into industry norms and thereby consolidate their market position. This "standards-driven industry" approach makes China's AI standardization process not only a regulatory tool but also a crucial engine for industrial upgrading and the cultivation of new productive forces.

III. A Comparison of AI Standardization Infrastructure Between China and the US

Although both China and the US have recognized the importance of AI standardization and taken active steps, fundamental differences in their political systems, market environments, innovation cultures, and regulatory philosophies mean that their paths to building AI standardization infrastructure, its core characteristics, and its implementation effects differ significantly.

In terms of top-level design and underlying drivers, China's AI standardization follows a typical "government-led, top-down" model. At the national level there is a clear strategic plan for AI development, and standardization work, as a key support for realizing that plan, is coordinated by the Standardization Administration of China, with ministries collaborating within their respective areas of responsibility. The priorities of standard setting are highly consistent with national industrial policy and key research directions, giving standards strong guiding and mandatory force. The advantage of this model lies in its efficiency and strong execution, enabling the rapid establishment of a comprehensive standards system.

In contrast, AI standardization in the United States is "market-driven, bottom-up." The government acts more as a convener and promoter, guiding the industry toward consensus by issuing guidelines, frameworks, and best practices.
Its standardization process emphasizes multi-party participation and consensus-building, fully respecting the innovative vitality and professional judgment of market players. The development of the FS AI RMF is a typical example, and its output leans toward "recommended guidelines" rather than "mandatory regulations." The advantage of this model is greater flexibility and adaptability, making it less likely to stifle innovation, though it may lag somewhat in the consistency of standards and the speed of their adoption.

There are also notable differences in the core focus of the two standards systems. China's AI standards, especially in risk management, concentrate on "safety and controllability" and "ethical compliance," reflecting China's strong emphasis on cybersecurity, data sovereignty, and social stability. Standards therefore often impose strict requirements on the legality of data, the fairness of algorithms, the authenticity of content, and the accountability of systems, and are closely aligned with higher-level laws such as the Cybersecurity Law, the Data Security Law, and the Personal Information Protection Law. Regulators tend to supervise AI applications before or during deployment through clear rules and procedures such as filing and assessment.

While the US AI risk management framework also addresses safety and fairness, its core logic leans toward risk-based institutional self-governance. Its starting point is to help institutions identify, assess, and manage operational, reputational, and compliance risks in support of their business objectives. It emphasizes that institutions should establish dynamic, continuous risk management processes based on their own risk appetite and application scenarios rather than mechanically adhering to a fixed set of rules.
This reflects a fundamental difference in regulatory philosophy: China tends to use unified rules to regulate market behavior and prevent systemic risk, while the US places more trust in the self-management capabilities of market entities.

From the perspective of the interaction between standards and industry, the Chinese model aims to "drive" industrial development through standards. Leading AI companies, especially top-tier technology firms, are often deeply involved in formulating national and industry standards. This reflects their technological strength and is also an important means of building industrial ecosystems and establishing competitive advantage; standards have become an important catalyst for technology diffusion and large-scale application. In the United States, standards are more a "summary" and "refinement" of industry best practice: the FS AI RMF largely incorporates risk management experience accumulated by financial institutions and technology companies. This ensures that standards keep pace with the industry frontier and avoids standards lagging behind the technology, but it may also fragment the standards system, requiring integration and coordination at the government level.

Regarding international influence and compatibility, both countries are committed to promoting their standards internationally. Leveraging its vast market and industrial strength, China actively exports its standardization concepts and practices through international platforms such as ISO/IEC JTC 1/SC 42. The United States, with its traditional dominance in the global technology sector, wields significant de facto influence worldwide through the NIST framework and other "soft law" instruments.
In the future, global AI governance is likely to evolve into a complex landscape of competition and limited cooperation between the two major standardization systems of China and the United States.

IV. The Impact of AI Infrastructure Advancement on Industrial Development and Valuation Logic

Whether through China's top-down systematic construction or the US's bottom-up consensus building, one fact is undeniable: increasingly mature AI standardization infrastructure is profoundly reshaping the development trajectory of the AI industry and fundamentally overturning the irrational exuberance that once relied on "storytelling" to support valuations.

First, standardization greatly reduces transaction costs and entry barriers in the AI industry, promoting the ubiquitous application of the technology throughout the economy. Unified terminology and interface standards enable AI components developed by different companies to be flexibly assembled and deployed. This "plug-and-play" standardized model greatly accelerates AI's journey from the laboratory to factory floors, farm fields, and bank counters. The focus of industrial development will shift from "how to create AI" to "how to use AI effectively." Companies that possess only algorithmic technology, without a deep understanding of vertical industries or the ability to implement application scenarios, will face revaluation. Conversely, "AI + industry" solution providers that understand industry pain points and combine standardized AI technology with specific business processes to create significant business value will win the market's favor.

Secondly, the establishment of risk management frameworks gives the market a common benchmark for assessing the "health" of AI companies. In the past, risk assessments of AI companies were often vague and subjective.
Now, both the US FS AI RMF and China's regulatory requirements in finance and cybersecurity provide concrete dimensions for evaluating an AI company's capacity for sustainable operation. Investors are beginning to ask: Does the company's AI model carry bias risks? Are its training data sources legal and compliant? Is the model's decision-making process interpretable? Has the company established risk management processes covering the entire AI lifecycle? These previously overlooked "soft power" factors are becoming key determinants of a company's success or failure. A company that can deliver efficient AI services while ensuring data privacy, algorithmic fairness, and system security undoubtedly has a more resilient and sustainable business model, and deserves a valuation premium.

Furthermore, standardization and compliance requirements are becoming a screening mechanism for survival of the fittest in the AI industry. Meeting increasingly complex compliance requirements demands significant human and financial investment; for startups, this constitutes a considerable compliance threshold, which objectively benefits larger, better-resourced, and better-managed leading companies. At the same time, standardization gives customers a basis for choosing AI products and services: a product certified against relevant national standards, or one that follows internationally recognized risk management frameworks, is more likely to earn customer trust. Such standards-based trust will become an important part of a brand, further consolidating the position of market leaders. Future AI competition will therefore no longer be merely a contest of technology and algorithms, but a comprehensive contest of governance capability, compliance capability, and brand reputation.
Ultimately, all of this leads to a fundamental shift: the core of AI company valuation is moving from "possibility" to "certainty." In the early stages of AI development, the market was keen to chase stories that painted a picture of the "future world." This "storytelling" logic supported a large amount of early investment and high valuations, but it also created enormous bubble risk. The improvement of AI standardization infrastructure is precisely the process of squeezing out that bubble. It requires companies to break their grand visions down into measurable, manageable, verifiable metrics. A company's value no longer rests solely on its founders' vision or the number of papers published at top academic conferences, but on healthy revenue growth, customer success stories, core technological barriers, effective risk management, and a track record of compliant operation.

In summary, while China and the US have taken different paths to AI standardization, both point to the same future: AI is evolving from a technology gold rush into a mature industry with clear rules, infrastructure, and risk management. The release of the AI dictionary eliminates communication noise; the implementation of risk management frameworks defines the boundaries of action; and the improvement of standardized infrastructure builds a sustainable development ecosystem. Against this backdrop, the valuation logic of AI companies will inevitably undergo profound change. Companies that can cut through the fog of concepts and build safe, reliable, efficient, and commercially valuable AI applications on a solid foundation of standardization will be the winners of the new era. The once-prevalent logic of pure "storytelling" will ultimately be abandoned by the market.