Australia’s online safety regulatory landscape continues to undergo significant transformation.
Registered under s 137 of the Online Safety Act 2021 (Cth), industry codes form part of the eSafety Commissioner’s suite of regulatory tools for managing potentially harmful online content.
With the registration of three industry codes on 27 June 2025, and the remaining six on 9 September 2025, the eSafety Commissioner’s Phase 2 industry codes (covering Class 1C and Class 2 material) are now fully registered. Following the registration of those final six codes this week, online service providers across several sectors – ranging from social media platforms to search engines, online games and app distributors – face new compliance obligations that must be met by the time the codes come into effect on 27 December 2025 (for the first three codes registered) and 9 March 2026 (for the remaining six).
This article examines the background, key features and compliance requirements for the Phase 2 industry codes.
While members of the online industry are responsible for developing the industry codes, the eSafety Commissioner has the final say on whether a code will be registered or must first be amended. Unlike other industry-developed standards or codes, these industry codes are legally enforceable under the Online Safety Act 2021 (Cth), with civil penalties of up to AU$49.5 million. It is also important to note that the industry codes do not permit entities to breach other Australian laws or obligations (such as those under the Privacy Act 1988 (Cth)) when complying with the codes.
Development of the industry codes was split into two phases. Phase 1 addressed the regulation of ‘Class 1A’ and ‘Class 1B’ material – broadly, illegal and restricted online content such as child sexual abuse material, terrorism and other extreme violence. Phase 2 captures ‘Class 1C’ and ‘Class 2’ material – harmful and age-inappropriate content such as material that is sexually explicit, or that depicts high-impact violence or simulated gambling.
The Phase 2 industry codes apply broadly across the online ecosystem. They capture a wide range of service providers and technology organisations that make services available to Australian end-users, including:
Obligations under each code vary depending on the type of service, the level of user interaction involved, and the assessed risk profile of the service. However, any organisation operating within one of the above categories should assume that the Phase 2 codes will apply to it.
Although each Phase 2 industry code is tailored to a different section of the online industry, a few key themes are common across all of the codes:
As mentioned above, organisations captured by the codes are now required to undertake a broad risk assessment of the likelihood of harmful content (Class 1C and Class 2 material) being accessed through their services. The risk assessment requirements are set out in each of the Phase 2 codes, and are an opportunity for providers of social media, online messaging and other related services to engage with the real risk of harm that their services, and any accompanying AI chatbots, pose.
For services that offer an AI companion chatbot feature, the codes also mandate a separate risk assessment procedure that evaluates the risk that the chatbot itself will generate harmful content (e.g. high-impact violence, self-harm or sexually explicit material) for, or by, Australian children. For example, the code for social media services contemplates the risk that harmful material will be accessed, distributed or generated via an AI chatbot.
At a general level, when carrying out risk assessments organisations are required to:
1. be able to reasonably demonstrate that the risk assessment methodology is based on reasonable criteria, which must – at a minimum – include criteria relating to the functionality, purpose and scale of the relevant service;
2. formulate in writing a plan and methodology for carrying out the risk assessment that ensures that each risk factor is accurately assessed;
3. carry out the risk assessment in accordance with the plan and methodology, and by persons with the relevant skills, experience and expertise; and
4. as soon as practicable after determining the risk profile of a relevant electronic service or AI companion chatbot feature (as applicable), record in writing:
a. details of the determination; and
b. details of the conduct of any related risk assessment.
The level of risk assessment required varies based on the services an organisation provides and the likelihood of harmful content being propagated through those services.
These obligations are ongoing for as long as the relevant code remains in effect. If a provider changes its service such that it is no longer exempt from carrying out a risk assessment, or it has previously carried out a risk assessment but changes its service in a way that would place it in a higher risk tier, it must re-engage with the risk assessment process.
Following the risk assessment process, organisations governed by the codes must comply with a stringent compliance framework. The compliance measures differ depending on the type of service provided, the content that propagates through the service, and the results of the risk assessment undertaken. For example, the code applicable to social media services sets out the steps that must be taken, along with other compliance measures, in circumstances where:
Where online pornography, self-harm material or high-impact violence material is allowed, organisations will now be required to put in place appropriate age assurance and access control measures before providing access to such material. They must also take appropriate steps to test and monitor the effectiveness of those measures. This brings the Australian regulatory landscape into line with the United Kingdom, which recently introduced requirements for age assurance measures on websites hosting potentially harmful material.
A service provider of this kind must now also put in place appropriate safety tools, which must be defaulted to an appropriate setting for Australian child end-users (but can be opt-in for everyone else), and publish clear and accessible information for Australian end-users about the tools and settings available to limit exposure to such content. Companies in this category are also required to proactively report to the eSafety Commissioner on an annual basis regarding compliance with the code.
In circumstances where online pornography, self-harm material or high-impact violence material is not allowed, but the risk of such content appearing is high or moderate, organisations are required to implement systems or technologies to flag and remove harmful content, and to take appropriate steps to continuously improve those systems. Companies in this category must also be prepared to report to the eSafety Commissioner upon written request.
If a service has an AI companion chatbot feature, the compliance burden is slightly higher at each risk level. Reasonable age assurance measures must be implemented for any service where there is a high risk of inappropriate content, while services with a moderate risk must at a minimum adopt safety-by-design measures (through systems, reviews and revisions). All services with a high or moderate risk are required to have terms and conditions and reporting mechanisms in place, and proactive reporting to the eSafety Commissioner is mandatory where significant changes are made to the service.
Other compliance measures contained within the codes include having, and enforcing, clear actions, policies or terms and conditions relating to harmful material, outlining what is and is not allowed on the service. There are also requirements for sufficient personnel to oversee the safety of the service, and for tools that enable users to report material they believe to be contrary to the service’s terms and conditions, or to make complaints. Organisations are also required to engage with the eSafety Commissioner in numerous instances – for example, where the functionality of their service changes significantly to the point that it materially affects access or exposure to harmful material.
This article is a high-level overview of what is a relatively comprehensive mandatory risk assessment and compliance framework. The eSafety Commissioner has published separate codes for different sections of the online industry, each directed at preventing access to this kind of material across the relevant services and products.
Each of these codes has its own specific rules and intricacies, which need to be considered and implemented quickly, but with due care and documentation to promote compliance.
As a result, organisations impacted by the codes should seek to assess and implement compliance requirements proactively. For advice on engaging with this process, mitigating these risks, and general compliance with the new eSafety regime, please contact one of our experts listed in this article.