China is moving to tighten restrictions on generative artificial intelligence (AI) services within the country.
These measures are part of the authorities' effort to balance the benefits of the technology against its associated risks.
China's Stricter Control
China has recently unveiled draft security regulations for companies providing generative AI services, including significant restrictions on the data sources used to train AI models.
The proposed regulations were released by the National Information Security Standardization Technical Committee, whose members include the Cyberspace Administration of China (CAC), the Ministry of Industry and Information Technology, and law enforcement agencies.
Generative Artificial Intelligence
Generative AI, exemplified by OpenAI's ChatGPT, refers to models that learn from historical data and, based on that training, generate new content such as text and images.
The draft regulations focus on controlling the content used to train publicly accessible generative AI models.
Training data containing more than "5% in the form of unlawful and detrimental information" will be earmarked for blacklisting; this covers content that promotes terrorism or violence, subverts the socialist system, damages the country's reputation, or undermines national cohesion and societal stability.
Banning Censored Data
The regulations also prohibit the use of data that is subject to censorship on the Chinese internet as training material for these models.
This development follows permission granted to various Chinese tech companies, including the prominent search engine Baidu, to introduce generative AI-powered chatbots to the general public.
Security Evaluations and Protecting Personal Data
Since April, the Cyberspace Administration of China has consistently required companies to submit security evaluations to regulators before rolling out generative AI-powered services to the public.
In July, the cyberspace regulator released a set of guidelines governing these services, which industry analysts noted were notably less stringent than the initial April draft.
The newly proposed security regulations stipulate that organisations training these AI models must obtain explicit consent from individuals whose personal data, including biometric information, is used for training purposes.
The guidelines also set out detailed instructions on preventing intellectual-property infringement.
China's 2030 AI Ambitions
China has outlined a comprehensive three-step plan to attain a leading position in the field of artificial intelligence (AI).
The first phase, slated for completion by 2020, involves ensuring that China remains on par with cutting-edge AI technology and its broad applications.
The second phase, set for 2025, aims to achieve significant breakthroughs in AI development.
This progress is expected to culminate in the third and final phase of the plan, positioning China as the global leader in AI by 2030.
As the technology advances, countries around the world face the challenge of formulating comprehensive regulatory frameworks to govern AI.