China TMT: Bi-monthly Update – May and June 2025 Issue
This newsletter summarises the latest developments in Technology, Media, and Telecommunications in China with a focus on the legislative, enforcement and industry developments in this area.
If you would like to subscribe for our newsletters and be notified of our events on China Technology, Media, and Telecommunications, please contact James Gong at [email protected].
In May and June 2025, Chinese regulators intensified efforts in artificial intelligence (AI), online platform governance, and the protection of minors, advancing industry standardisation through legislation, enforcement, and guidance. Notably, enforcement actions targeting platform misconduct increased significantly, reflecting heightened regulatory scrutiny:
Follow the links below to view the official policy documents or public announcements.
1. NPC Standing Committee passed the amendment to the Anti-Unfair Competition Law (27 June)
The Standing Committee of the National People’s Congress passed the amendment of the Anti-Unfair Competition Law to regulate market competition and ensure a healthy market economy. Tailored to the internet sector’s evolution, the law strengthens fair competition rules for the digital economy, prohibiting operators from using data, algorithms, technology, or platform rules to disrupt the lawful operation of others’ online products or services. Platform operators are barred from directly or indirectly forcing in-platform merchants to sell below cost under platform pricing rules, which disrupts market order. Platforms must establish clear competition rules and mechanisms for reporting and resolving unfair competition complaints. Violators face orders to cease unlawful conduct, fines, and other administrative penalties.
2. CAC and other authorities planned to issue draft rules classifying online information that may affect minors (20 June)
The CAC, alongside other ministries, proposed the Measures on the Classification of Online Information That May Affect Minors to enhance the national child protection framework and foster a healthy online environment. As a companion to the Regulation on the Protection of Minors in Cyberspace, the measures define categories, scope, and criteria for content harmful to minors’ physical or mental health, identifying five types: content inducing improper behaviour, negatively affecting values, misusing minors’ images, disclosing their personal data, or encouraging internet addiction. Content producers and platforms must label such material prominently, avoid displaying it in prime homepage positions, and exclude it from minor-dedicated products. The measures also mandate safety protocols for algorithmic recommendations and generative AI to ensure graded protection for minors.
3. TC260 planned six guidelines on implicit metadata labelling and AIGC detection techniques (24 June)
TC260 planned to issue six cybersecurity practice guidelines that respectively address: detection techniques for AI-generated synthetic content; implicit metadata labelling methods for image files, text files, video files and audio files; and security protection measures for such metadata labels. These documents will implement the Measures for the Identification of AI-Generated Synthetic Content and the mandatory national standard Cybersecurity Technology—Labelling Method for Content Generated by Artificial Intelligence. Four of the guidelines set out technical solutions for writing implicit AIGC labels into the metadata fields of mainstream file formats—OOXML for text, EXIF for images, WAV for audio and BMFF for video—and supply sample code, field structures and compatibility requirements to show how information on the source, generator and distributor can be preserved throughout the dissemination chain. Building on that foundation, the guideline on securing metadata-based implicit labels adds anti-tamper protections such as digital signatures, message authentication codes, digital certificates and binding mechanisms, and recommends using reserved metadata fields to store verification data, thereby safeguarding the authenticity and integrity of the labels. The guideline on detection technology standardises the workflow—from system framework to algorithms and service encapsulation—for recognising traces of AIGC in text, images, audio and video, enabling platforms to flag “suspected synthetic content.” Together, the six guidelines aim to help generation-service providers and distribution platforms apply implicit labelling consistently, strengthen label security and enhance automated detection, thus supplying a technical bedrock for future regulation and content traceability.
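To illustrate the anti-tamper mechanism described above, the following is a minimal Python sketch of an implicit AIGC label protected by a message authentication code. The field names (`source`, `generator`, `distributor`), key handling, and JSON serialisation are illustrative assumptions for this example, not the schema defined in the TC260 guidelines or the national standard.

```python
import hashlib
import hmac
import json

# Placeholder key; in practice key management would follow the
# labelling party's own security policy (an assumption here).
SECRET_KEY = b"demo-key-held-by-the-labelling-party"

def make_label(source: str, generator: str, distributor: str) -> dict:
    """Build an implicit AIGC provenance label and attach an HMAC-SHA256 tag."""
    payload = {"source": source, "generator": generator, "distributor": distributor}
    blob = json.dumps(payload, sort_keys=True).encode("utf-8")
    tag = hmac.new(SECRET_KEY, blob, hashlib.sha256).hexdigest()
    return {"payload": payload, "mac": tag}

def verify_label(label: dict) -> bool:
    """Recompute the MAC over the payload and compare in constant time."""
    blob = json.dumps(label["payload"], sort_keys=True).encode("utf-8")
    expected = hmac.new(SECRET_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, label["mac"])

label = make_label("ai-generated", "example-model-v1", "example-platform")
print(verify_label(label))            # intact label verifies
label["payload"]["source"] = "human"  # simulate tampering in transit
print(verify_label(label))            # altered payload fails verification
```

In a real deployment the serialised label and tag would be written into a reserved metadata field of the carrier file (e.g., an EXIF tag for images or a BMFF box for video), as the guidelines contemplate; a digital signature or certificate could replace the shared-key MAC where third-party verification is needed.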
4. SAMR planned to draft measures focusing on supervision of online transaction platform rules (4 June)
SAMR’s draft Measures on the Supervision and Administration of Online Transaction Platform Rules aim to maintain e-commerce order and promote sustainable platform growth. Platforms must display new or amended rules prominently on their homepage for at least seven days, provide technical means for stakeholder feedback, retain rule versions for three years, and establish consultation and transition mechanisms for major changes. Platforms are prohibited from forcing or inducing acceptance by traders or consumers. Dedicated chapters address trader rights, consumer protection, and information, network, and data security, requiring clear security clauses and prohibiting harm through unreasonable fees or differential pricing.
5. SAMR planned to draft measures strengthening live‑commerce supervision (10 June)
SAMR’s draft Measures on the Supervision of Live E‑commerce aim to regulate the live-commerce market. The measures cover four key actors—platform operators, studio operators, service agencies, and marketers—under a unified supervisory framework. Platforms must verify real-name identities, implement tiered management, share blacklists, report closed accounts to regulators within seven days, retain livestream videos for at least three years, and keep order-replay data for 30 days. Platforms must establish technical checkpoints for product libraries, pricing, advertising compliance, data retention, and AI virtual host labelling, alongside advance compensation and rapid complaint resolution mechanisms. Studio operators bear primary responsibility for product quality, lawful sourcing, and truthful promotion, and must display business licenses, return policies, and after-sales channels while cooperating on data retention.
6. SAMR planned to issue notice to reinforce oversight of online consumer‑goods recalls (13 June)
SAMR’s Notice on Further Strengthening the Recall of Consumer Goods Sold Online ensures that e-commerce operators fulfil recall obligations. Marketplace merchants, self-hosted sites, and other online vendors must execute prompt, comprehensive recalls of defective goods. Platforms receive recall information via the national online trading supervision system’s “Consumer Goods Recall Information Sharing” module, must halt sales immediately and provide relevant data to regulators. Vendors must establish defect information systems, stop sales upon recall notices, and, for defective goods, display recall details prominently and inform consumers of their rights. Producers must include barcodes in recall plans for consumer verification, and cross-border e-commerce domestic service providers are treated as importers with equivalent recall duties. SAMR will monitor platform co-governance commitments and penalize non-compliance to enhance online product safety and consumer protection.
7. CAC released the sixth batch of online rumour cases, focusing on AI misuse, splicing and other methods (12 May)
The CAC published its sixth batch of typical online rumour cases involving public policy, emergencies, and social livelihood issues. Some accounts used AI generation or splicing to fabricate incidents, policies, or social rumours, misleading users and disrupting public order. The CAC has addressed 2,210 offending accounts. The eight cases include false claims like “AI predicts lottery numbers with 100% accuracy,” “a serious crime in Hangzhou’s Binjiang District,” and “scan for social insurance subsidies.” Offending accounts were closed, and operators faced administrative penalties.
8. CAC announced eleventh batch of deep synthesis algorithm filings, adding 211 registered algorithms (19 May)
Under the Administrative Provisions on Deep Synthesis Internet Information Services, the CAC announced the eleventh batch of registered deep synthesis algorithms, with 211 additional domestic algorithms filed as of May 2025.
9. CAC concluded “Qinglang” campaign rectifying typical algorithmic problems on online platforms
The CAC’s “Qinglang–Rectifying Typical Algorithmic Problems on Online Platforms” campaign prompted major platforms to sign an “algorithms for good” pledge and implement systematic reforms. Platforms providing short-video, graphic-social, social-media, and video-community services published algorithm rules, launched transparency columns, and introduced tools like “cocoon assessment” and “one-click breakout” for users to visualise and adjust content preferences. They increased positive content weighting, enhanced vulgar content filtering, verified hot topics, and offered finer interest-tag controls and negative feedback options. The CAC will conduct routine inspections to ensure ongoing algorithm and content quality improvements.
10. CAC stepped up crackdown on doxxing, balancing tough enforcement with robust protection (27 May)
The CAC issued a notice and held a special meeting, directing local cyberspace bureaus and platforms (e.g., a social media platform, a content platform, a short-video platform) to adopt “zero tolerance” for doxxing. Platforms must block distribution channels, remove posts exposing personal data or selling doxxing tutorials, and close related accounts and groups. They must also implement risk alerts and fast-track reporting tools. Platforms are required to publish governance bulletins, report typical cases, and forward criminal leads to police.
11. CAC published typical cases debunking AI-generated rumours, offering lessons for governance (12 June)
The CAC released eight AI-generated rumour cases, including false claims like “Guangzhou bans food delivery,” “Chongqing Metro Line 18 collapse,” and “street vendor fined CNY 1.45 million.” These cases demonstrate multi-agency coordination in tracing rumours, issuing corrections, and pursuing accountability, offering guidance for AI rumour governance. The CAC emphasized severe penalties to ensure AI serves the public good.
12. CAC completed first phase of “Qinglang–Cracking Down on AI Technology Misuse” campaign
The CAC completed the first phase of its “Qinglang–Cracking Down on AI Technology Misuse” campaign, removing over 3,500 non-compliant AI products (mini-programs, apps, agents), clearing 960,000 illegal content items, and addressing 3,700 accounts. Beijing, Shanghai, Zhejiang, Jiangsu, and Tianjin CACs implemented reporting channels, enforced content-labelling, blocked illicit queries, screened risk domains, and tested large-model security. Platforms strengthened training data management, security controls, and content labelling. The next phase will target seven issues, including AI-driven rumours and vulgar content.
13. CAC released enterprise-focused administrative inspection checklist
The CAC released an enterprise-focused administrative inspection checklist outlining authorities, legal bases, frequency limits, and evaluation criteria. “Qinglang” follow-up inspections, platform content management checks, internet news service oversight, and foreign financial information service checks are limited to once annually; new technology and application security assessments may occur twice yearly; and data security, personal information protection, and cross-border data transfer compliance evaluations are capped at once per year.
14. SAMR released five online unfair‑competition cases, involving data scraping and mass returns by technical means (27 June)
SAMR publicized five online unfair competition cases: a software firm was fined CNY 530,000 for cross-platform data scraping; a brand agency was fined CNY 200,000 for abusing return rules to force competitor price cuts; a tech firm running a fake order platform was fined CNY 390,000, with its principal referred for prosecution; a game merchandise operator was fined CNY 500,000 for using an unauthorized “5E” logo; and a jewellery live-streamer was fined CNY 200,000 for staging a false “first-hand supply” scene. SAMR urged lawful competition to maintain a fair market.
15. Supreme Court published Civil Code fifth-anniversary typical cases, including a live-commerce fraud ruling
The Supreme Court published typical cases marking the Civil Code’s fifth anniversary. In one case, a streamer, Jiao, on a leading live-commerce platform fabricated a “trapped mother and daughter” story to sell jade, inducing viewer Xie to buy 33 pieces for CNY 10,300. The court ruled that Jiao’s actions constituted fraud under the Consumer Protection Law, ordering a refund and triple damages (CNY 31,000). The platform, which promptly suspended Jiao’s sales and disclosed his identity, fulfilled its obligations and avoided liability.
16. Beijing SAMR prosecuted China’s first case against “professional bullet screen posters” in live-commerce
Beijing SAMR prosecuted China’s first case against “professional bullet screen posters” in live-commerce. A biotechnology firm used “water army” accounts to flood its “XX Jelly” livestream with false weight-loss claims (e.g., “lose 10 pounds in 7 days”). Using IP analysis and payment records, SAMR deemed this comment manipulation as fabricated user reviews, fining the firm CNY 100,000 under the Anti-Unfair Competition Law and its Interim Provisions. This case sets a precedent for regulating “live-stream review brushing” and introduces a “platform data penetration + fund-flow tracing” evidence model.
17. Beijing Tongzhou District People’s Court recognised criminal copyright protection for illustrations infringed through AI (12 June)
Beijing’s Tongzhou District People’s Court ruled in China’s first criminal AI copyright case that unauthorized AI replication of illustrations constitutes infringement. Defendants used AI to copy and alter protected illustrations for sale as jigsaw puzzles. The court emphasized that AI-generated work originality depends on human creative control. Defendants received up to 18-month sentences (some suspended) and fines, with the company fined CNY 100,000.
18. Shanghai CAC summoned AI chat app operator, demanding stronger review of generated content and stricter protection for minors (19 June)
After media exposed an AI chat app generating salacious content harmful to minors, Shanghai’s CAC summoned its operator, ordering immediate rectification, robust content review mechanisms, enhanced technical controls, and removal of harmful content. The operator pledged comprehensive reforms. Shanghai CAC will continue targeting AI-driven rumours, pornography, impersonation, and “water army” activity, with penalties and public exposure for repeat offenders.
19. Shanghai cyberspace administration opened cases against generative AI websites that relaunched without rectification (24 June)
Shanghai CAC filed cases against generative AI websites for producing content violating personal information rights, defaming others, or containing illegal pornography/violence. These sites, ordered offline during the “Bright Sword on the Huangpu 2025” campaign, relaunched without passing security assessments or implementing safeguards. Shanghai CAC criticized the operators, demanded rectification, and initiated investigations.
20. Shanghai CAC published filing information for generative AI services, adding 8 new services (30 June)
Shanghai CAC announced that, as of 30 June, eight additional generative AI services were registered, bringing Shanghai’s total to 95 registered services.
21. Guangzhou Internet Court ruled that an app’s hidden, misleading auto‑renew interface is illegal (4 June)
The Guangzhou Internet Court ruled against a cloud-storage app that charged CNY 29.9 monthly for seven months after a CNY 0.3 one-day trial. The app’s auto-renewal notice used small, grey text, defaulted to consent, and failed to provide conspicuous reminders, violating consumer choice and notification obligations. The court ordered a full refund.
22. Guangzhou Internet Court issued six rulings clarifying liability in cross-border e-commerce disputes
The Guangzhou Internet Court issued six rulings on cross-border e-commerce: malicious “notice-and-takedown” complaints constitute unfair competition; unqualified authentication platforms’ opinions lack evidentiary weight; website-building service providers are not liable for store operations; third-party payment firms’ duties are limited to verifying merchant credentials and transaction instructions; absent contrary proof, platform-disclosed information identifies account operators; and platforms can mediate disputes between domestic consumers and overseas merchants.
23. CAC and SAMR issued guideline to standardise intelligent society development and governance (10 June)
The CAC and SAMR released the Guideline on the Standardization of Intelligent Society Development and Governance (2025 Edition) to establish a standards system spanning technological innovation, industrial application, and social governance. It outlines principles, common AI application scenarios, social impact indicators, and AI social experiment procedures, structuring a five-part framework (fundamentals, governance principles, scenario applications, technologies, and evaluation) to guide research and practice for a unified national market and modernized governance.
24. MIIT announced plans to advance large models and high-quality industrial datasets
MIIT announced plans to focus on general and industry-specific large models and high-quality industrial datasets. It will deploy large models in electronics, raw materials, and consumer goods for R&D, prototyping, production, and operations, while nurturing leading firms, specialized SMEs, open-source communities, and standards to advance AI-driven industrialization.
25. MIIT convened meeting to advance the AI industry and empower new‑type industrialisation (4 June)
MIIT’s special meeting emphasized systematic AI development for industrialization, prioritizing: strengthening computing power and industry datasets; deploying large models in key manufacturing sectors; establishing tiered AI standards; supporting enterprises, open-source communities, and global cooperation; and balancing development with security through risk governance, deep synthesis detection, and ethical norms.
26. MSS issued guide on safeguarding confidentiality when using AI applications (24 May)
The MSS issued the Security Guide for Using AI Applications to prevent data leaks and protect sensitive information. Users must choose reputable domestic platforms, limit permissions to essentials, adhere to “no classified data online,” process sensitive data on isolated devices, disable cross-platform sync, clear caches, and use professional tools to wipe residual data.
27. CSAC issued safety guideline for generative‑AI services aimed at minors (10 June)
The CSAC, with 60 partners, released the Safety Guideline for Providing Generative AI Services to Minors, emphasizing the “best interests of minors.” It establishes a lifecycle framework for content safety, data protection, and information distribution, promoting age-appropriate products, AI literacy education, and industry self-regulation, with operational guides for content, data security, and minor modes.
28. CSAC issued industry initiative to promote safe, reliable and controllable AI (11 June)
The CSAC, with 65 partners, launched the Industry Initiative on Promoting Safe, Reliable and Controllable AI to align with regulations like the Interim Measures for Generative AI Services. It calls for lawful compliance, robust security, reliable technology, optimized algorithms, data protection, talent development, ethical values, and shared governance for inclusive AI development.