China to Strengthen Generative AI Regulation

Written By

James Gong

Legal Director
China

I am a Legal Director based in Hong Kong and lead the China data protection and cybersecurity team.

Regulation and enforcement of laws around AI are developing rapidly. On 20 June 2023, the Cyberspace Administration of China (CAC) released the Announcement on the Record of Deep Synthesis Service Algorithms, which sets out the official channel and practical guidance for filing deep synthesis algorithms. This development follows the publication of the Administrative Measures for Generative Artificial Intelligence Services (Draft for Comment) (《生成式人工智能服务管理办法(征求意见稿)》) (“Draft Measures”) on 11 April 2023 and specifies the procedures for the algorithm filing referred to in the Draft Measures.

In this article, we highlight the key provisions of the Draft Measures and set out our observations on this rapidly evolving area of regulation.

Background

Generative AI promises to drive a profound productivity revolution in the coming months and years as new technologies and products come to market. Within a few months of generative AI products becoming widely available, regulatory authorities across the world have been racing to keep pace with adoption, and a wave of draft guidance seeking to regulate generative AI is now emerging across many jurisdictions.

China has also been an active jurisdiction in AI and algorithm governance. In September 2021, a policy statement from several ministries included a pledge to establish a regulatory framework for algorithms used in internet information services within three years. Later that year, ministries led by the CAC jointly published a regulation on algorithm-based online recommendation technologies.

In September 2022, China announced regulations on internet information services using deep-synthesis technology. In addition to the previous AI and algorithm specific regulations, the overarching trio of the Personal Information Protection Law, the Data Security Law and the Cybersecurity Law also form an important backdrop to the regulatory framework for generative AI. 

Key provisions and observations

  1. Scope of Draft Measures

    The Draft Measures apply to the development and/or use of generative artificial intelligence (“Generative AI”) products that provide services to the “public” within the territory of China. Generative AI is defined as technologies that use algorithms, models and rules to generate content such as text, images, audio, video and code.

    The interpretation of “public” in the Draft Measures is open to question: it may refer to individual users, or it may mean that the services are offered publicly. In the former case, companies that provide Generative AI services only to corporate entities may be exempted from the Draft Measures.

    Organisations and individuals (“Service Providers”) that use Generative AI to provide content generation services will be responsible for the content generated using the product. Notably, companies that enable the public to generate content via application programming interfaces (APIs) will also be considered Service Providers. This may mean that the Draft Measures will also apply to Service Providers which do not operate Generative AI services but provide access to such services via an API.

    Further, the Draft Measures may potentially apply to Service Providers outside China if they provide Generative AI services to the public in China.

  2. Content Management

    The Draft Measures require Service Providers to be responsible for the content generated by the Generative AI, including ensuring that the content:

    1. is in line with the laws, regulations, political structure, moral values and social and economic order of the PRC;
    2. is not discriminatory; and
    3. is accurate and truthful.

      Where non-compliant content is generated, Service Providers must take content filtering measures and, within three months, optimise the training model to prevent such content from being generated again.

      Service Providers are also required to mark, in a conspicuous manner, content generated by Generative AI so that the public can identify it as such.

  3. Disclosure and Transparency

    The Draft Measures mandate that Service Providers should, upon the authorities' request, furnish necessary information that may affect end users' trust and choices, such as a description of the source, scale, types and quality of the dataset, the rules and types of manual labelling, and the basic algorithm and technology stack.

    In addition, Service Providers must make public the targeted group of users, scenarios and purposes of the Generative AI services.

  4. AI Training

    The Draft Measures have set out specific compliance requirements for training the Generative AI model. Key requirements include:

    Lawful Datasets: Service Providers will be responsible for the legality of the sources of data used to train the Generative AI, including ensuring that the data does not infringe intellectual property rights, complies with cybersecurity and personal information protection laws and is accurate, truthful, objective and diverse. In particular, if the data contains personal information, that personal information may not be used to train the Generative AI model unless PRC laws or administrative regulations expressly allow it or the personal information subject has consented to such processing.

    Manual Labelling: Service Providers must formulate clear, specific and practicable labelling rules, train the labellers and verify the accuracy of the labelling through random checks.

    Non-discrimination: When designing algorithms, selecting training datasets, generating and optimising models and providing services, Service Providers must take measures to prevent discriminatory content from being generated on the basis of race, ethnicity, religious belief, nationality, region, gender, age or occupation.

  5. Government Assessment, Filing, Transparency and Ethics Review

    The Draft Measures require that, before providing Generative AI services to the public, Service Providers apply for a government security assessment and an algorithm filing, both of which are supervised and coordinated by the CAC.

    Although ethics are not expressly discussed in the Draft Measures, the government also intends to strengthen ethical review of technologies. In an official opinion issued by the central government in 2022 (Opinions on Strengthening the Governance of Scientific and Technological Ethics 《关于加强科技伦理治理的意见》), AI is identified as one of the three key areas in which the government will focus on formulating laws, regulations, standards and guidance. Additionally, we note the recent draft Measures on Ethical Review of Science and Technology (Trial) (Draft for Comment) (《科技伦理审查办法(试行)(征求意见稿)》), under which technologies involving data and algorithms should be reviewed for their security, fairness, transparency, reliability and trustworthiness.

  6. Managing Use of Generative AI Services

    Service Providers are required to manage the use of the services by:

    1. Requiring users to provide their real identity information;
    2. Taking measures to prevent excessive dependence on or addiction to the services;
    3. Guiding the users to use the content produced by Generative AI in a sensible manner and avoid infringing others’ rights to image, reputation and other interests; and
    4. Suspending or ceasing services to users, if they are found to have violated laws or regulations, commercial ethics or social morality.

  7. Protection of Rights

    Service Providers are required to compete fairly and to respect the rights of others, including not infringing others' portrait rights, reputation, privacy, personal information and trade secrets. Service Providers are also required to establish complaint-handling mechanisms so as to respond promptly to individuals' requests to correct, delete or mask their personal information.

    Where the rights of others are infringed, the Draft Measures also require Service Providers to take measures to stop generating the infringing content and to prevent further harm.

  8. Liabilities for Non-Compliance

Violations of the Draft Measures will be penalised in accordance with the Personal Information Protection Law, the Data Security Law and the Cybersecurity Law, as well as public security and criminal laws. Where such laws do not apply, the Draft Measures impose a fine of up to RMB 100,000, in addition to an order to suspend or terminate the Generative AI services, if the Service Provider in question refuses to rectify within a prescribed time limit or if its violation is severe.

In this regard, the penalties under the Draft Measures do not go much beyond those provided for under existing laws and administrative regulations. Even so, Service Providers should note that the Draft Measures provide for penalties even where no existing law has been violated.

Conclusion

The Draft Measures have laid down comprehensive obligations relevant to content management, data privacy and security, and transparency of Generative AI, building upon the previous regulations on deep synthesis and algorithm management. Service Providers are required to perform these obligations throughout the development, training and operation of their Generative AI products and services and apply for relevant filings, assessment and reviews where necessary.

As the first set of draft measures on Generative AI in China, and one that remains subject to further development, the Draft Measures set a high standard for Service Providers, who may hope that the final version strikes a better balance between regulation and the development of the industry.
