Digital Duty of Care: What Phase 2 eSafety Codes Demand from Providers

Australia’s online safety regulatory landscape continues to undergo significant transformation.

Published under s 137 of the Online Safety Act 2021 (Cth), industry codes form part of the eSafety Commissioner’s suite of regulatory tools for managing potentially harmful online content.

With the registration of three industry codes on 27 June 2025 and the remaining six on 9 September 2025, the eSafety Commissioner’s Phase 2 industry codes (covering Class 1C and Class 2 material) are now fully published. Following the registration of those final six codes this week, online service providers across several sectors (ranging from social media platforms to search engines, online games and app distributors) will face new compliance obligations that must be met before the codes come into effect on 27 December 2025 (for the three codes registered first) and 9 March 2026 (for the remaining six).

This article examines the background, key features and compliance requirements for the Phase 2 industry codes.

Scope of Application of the Phase 2 Codes

While members of the online industry are responsible for developing the industry codes, the eSafety Commissioner has the final say on whether they are registered or require further amendment. Unlike other industry-developed standards or codes, these industry codes are legally enforceable under the Online Safety Act 2021 (Cth), with civil penalties of up to AU$49.5 million. It is also important to note that compliance with the industry codes does not permit entities to breach other Australian laws or obligations (such as the Privacy Act 1988 (Cth)).

Development of the industry codes was split into two phases. Phase 1 addressed the regulation of ‘Class 1A’ and ‘Class 1B’ material: broadly, illegal and restricted online content such as child sexual abuse material, terrorism and other extreme violence. Phase 2 captures ‘Class 1C’ and ‘Class 2’ material: harmful and age-inappropriate content such as material that is sexually explicit, depicts high-impact violence or features simulated gambling.

The Phase 2 industry codes apply broadly across the online ecosystem. They capture a wide range of service providers and technology organisations that make services available to Australian end-users, including:

  • Core social media services – platforms that enable the sharing of, and interaction with, user-generated content.
  • Social media messaging services – private or semi-private messaging functions, particularly where they are linked to social media platforms.
  • Relevant electronic services – interactive online services that allow communication or sharing of content (for example, gaming platforms with chat functions).
  • Designated internet services – broader internet services such as search engines and web-hosting providers.
  • App distribution services – app stores and marketplaces that distribute online services to Australian users.
  • Equipment providers – manufacturers and distributors of devices (such as smartphones, tablets, and gaming consoles) that can access or display online content.

Obligations under each code vary depending on the type of service, the level of user interaction involved, and the assessed risk profile of the service. However, any organisation operating within one of the above categories should assume that the Phase 2 codes will apply to it.

Key Features of the Phase 2 Industry Codes

Although each Phase 2 industry code is tailored to a different section of the online industry, several key themes are common across the codes:

  • Self-Assessment of Risk Profiles: broadly, risk profiles range from Tier 1 to Tier 3, representing high, moderate and low risk respectively, and each risk profile carries different compliance obligations. Entities are expected to undertake a risk assessment to determine the risk profile of their relevant services, or to voluntarily assign themselves a Tier 1 risk profile (a simplified illustration of this choice follows this list).
  • Age Assurance Mechanisms: ensuring appropriate age assurance mechanisms are in place to prevent children from accessing age-inappropriate content is a common theme throughout the Phase 2 industry codes. These mechanisms include AI-based age estimation, ID/Digital ID verification and credit card checks. Notably, the codes also reference the recent findings of the Age Assurance Technology Trial, which has been discussed in a previous article here.
  • End-User Reporting Mechanisms and Controls: where end-users are involved (such as on social media, messaging, online games and search engines), the Phase 2 industry codes require the implementation of reporting mechanisms that allow Australian end-users to report breaches of the codes’ prohibitions. Platforms must also educate Australian end-users on the role and functions of the eSafety Commissioner and how to make a complaint. Where possible, the Phase 2 industry codes also encourage parental control mechanisms that filter inappropriate material so it cannot be accessed by child users or accounts.
  • AI Chatbots: the codes consider the impact of AI chatbots on end-users. Services with AI companion chatbots require additional consideration in their risk assessments and are likely to face more comprehensive compliance obligations as a result, especially where the chatbot may itself generate the potentially harmful content.
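
For illustration only, the sketch below (in Python) captures the self-assessment choice described in the first bullet above: a provider either determines its tier through a risk assessment or voluntarily assumes a Tier 1 (high-risk) profile. The RiskTier enum and resolve_risk_profile function are hypothetical names for the purposes of this sketch, not terms used in the codes.

```python
from __future__ import annotations

from enum import IntEnum


class RiskTier(IntEnum):
    """Illustrative risk tiers: Tier 1 is high risk, Tier 2 moderate, Tier 3 low."""
    TIER_1_HIGH = 1
    TIER_2_MODERATE = 2
    TIER_3_LOW = 3


def resolve_risk_profile(assessed_tier: RiskTier | None) -> RiskTier:
    """Resolve the risk profile a provider takes on under the codes.

    A provider may carry out a risk assessment (yielding `assessed_tier`),
    or skip the assessment and voluntarily assign itself a Tier 1 profile,
    thereby accepting the fullest set of compliance obligations.
    """
    if assessed_tier is None:
        # No assessment carried out: voluntarily assume the highest-risk profile.
        return RiskTier.TIER_1_HIGH
    return assessed_tier
```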

Risk Assessment and Documentation

As mentioned above, organisations impacted by the codes are now required to undertake a broad risk assessment of the likelihood of harmful content (Class 1C and Class 2 material) being accessed through their services. The risk assessment requirements are contained within each of the Phase 2 codes and are an opportunity for providers of social media, online messaging and other related services to engage with the real risk of harm that their services, and any accompanying AI chatbots, present.

For services that provide an AI companion chatbot feature, the eSafety Commissioner now also mandates a separate risk assessment procedure that evaluates the risk of the chatbot itself generating harmful content (e.g. high-impact violence, self-harm or sexually explicit material) for and by Australian children. For example, the code relating to social media services contemplates the risk that harmful material will be accessed, distributed or generated by an AI chatbot.

With regard to the risk assessments at a general level, organisations are required to do the following (an illustrative sketch of the resulting written record appears after this list):

1. be able to reasonably demonstrate that the risk assessment methodology is based on reasonable criteria which must, at a minimum, include criteria relating to the functionality, purpose and scale of the relevant service;

2. formulate in writing a plan and methodology for carrying out the risk assessment that ensures that each risk factor is accurately assessed;

3. carry out the risk assessment in accordance with the plan and methodology, and by persons with the relevant skills, experience and expertise; and

4. as soon as practicable after determining the risk profile of a relevant electronic service or AI companion chatbot feature (as applicable), record in writing:

a. details of the determination; and

b. details of the conduct of any related risk assessment.
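
As an illustration only, the written record contemplated by the requirements above might be structured along the following lines. The field names are hypothetical: the codes prescribe the substance to be documented (plan, methodology, assessors, determination and conduct of the assessment), not any particular format.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date


@dataclass
class RiskAssessmentRecord:
    """Illustrative written record of a risk assessment and its outcome."""
    service_name: str
    assessment_plan: str        # written plan for carrying out the assessment
    methodology: str            # reasonable criteria covering functionality, purpose and scale
    assessors: list[str]        # persons with the relevant skills, experience and expertise
    determined_tier: int        # 1 (high), 2 (moderate) or 3 (low)
    determination_details: str  # details of the determination
    conduct_details: str        # details of how the assessment was conducted
    recorded_on: date = field(default_factory=date.today)
```

In practice the equivalent record would likely live in an organisation’s governance documentation rather than in code; the point is simply that each element required by the codes maps to a discrete, auditable entry.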

The level of risk assessment required varies based on the services an organisation provides and the likelihood of harmful content being propagated through those services.

These obligations continue for as long as the relevant code remains in effect. If a provider changes its service such that it is no longer exempt from carrying out a risk assessment, or a previously assessed service changes in a way that would place it in a higher risk tier, the provider is required to re-engage with the risk assessment process.

Compliance Measures

Following the risk assessment process, organisations governed by the codes must comply with a stringent compliance framework. The compliance measures differ depending on the type of service provided, the content that propagates through it, and the results of the risk assessment undertaken. For example, the code applicable to social media services sets out the steps that should be taken, along with other compliance measures, in circumstances where:

  • Online pornography, self-harm material or high-impact violence material is allowed;
  • Online pornography, self-harm material or high-impact violence material is not allowed but the risk of such content is high or moderate; and
  • An AI companion chatbot is available.

Where online pornography, self-harm material or high-impact violence material is allowed, organisations will now be required to take appropriate age assurance and access control measures before providing access to such material, and to take appropriate steps to test and monitor the effectiveness of those measures. This brings the Australian regulatory landscape in line with the United Kingdom, which recently introduced age assurance requirements for websites hosting potentially harmful material.

A service provider of this kind must now also put in place appropriate safety tools, which must default to an appropriate setting for Australian child end-users (but can be opt-in for everyone else), and publish clear and accessible information for Australian end-users on the tools and settings available to limit exposure to such content. Companies in this category are now also required to proactively report to the eSafety Commissioner on a yearly basis regarding compliance with the code.

In circumstances where online pornography, self-harm material or high-impact violence material is not allowed, but the risk of such content appearing is high or moderate, organisations are required to implement systems or technologies to flag and remove harmful content, and to take appropriate steps to continuously improve those systems. Companies in this category must also be prepared to report to the eSafety Commissioner upon written request.

If a service has an AI companion chatbot feature, the compliance burden is higher at each risk level. Reasonable age assurance measures must be implemented for services where there is a high risk of inappropriate content, and at a minimum safety-by-design measures (through systems, reviews and revisions) for services with a moderate risk. All services with a high or moderate risk are required to have terms and conditions and reporting mechanisms, and proactive reporting to the eSafety Commissioner is mandatory where there are significant changes.
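
Bringing the preceding paragraphs together, the way these compliance pathways branch can be sketched roughly as follows. This is a simplified illustration of the structure described above rather than a restatement of any code’s drafting; the function name and parameters are hypothetical.

```python
def indicative_measures(
    allows_restricted_material: bool,
    risk_level: str,                  # "high", "moderate" or "low"
    has_ai_companion_chatbot: bool,
) -> list[str]:
    """Return an indicative list of compliance measure categories (illustrative only)."""
    measures: list[str] = []

    if allows_restricted_material:
        # Online pornography, self-harm or high-impact violence material is allowed.
        measures += [
            "age assurance and access controls before access is provided",
            "testing and monitoring of the effectiveness of those measures",
            "safety tools defaulted on for Australian child end-users",
            "annual compliance reporting to the eSafety Commissioner",
        ]
    elif risk_level in ("high", "moderate"):
        # Such material is not allowed, but the risk of it appearing is high or moderate.
        measures += [
            "systems or technologies to flag and remove harmful content",
            "continuous improvement of those systems",
            "reporting to the eSafety Commissioner on written request",
        ]

    if has_ai_companion_chatbot and risk_level in ("high", "moderate"):
        # AI companion chatbot features attract additional obligations.
        measures += [
            "age assurance (high risk) or safety-by-design measures (moderate risk)",
            "terms and conditions and end-user reporting mechanisms",
            "proactive reporting of significant changes to the eSafety Commissioner",
        ]

    return measures
```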

Other compliance measures contained within the codes include having, and enforcing, clear actions, policies or terms and conditions relating to harmful material that outline what is and is not allowed on the service. There are also requirements for sufficient personnel to oversee the safety of the service, and for tools that enable users to report material they believe to be contrary to the service’s terms and conditions, or to make complaints. Organisations are also required to engage with the eSafety Commissioner in numerous instances, for example where the functionality of their service changes significantly enough to materially impact access or exposure to harmful material.

Next Steps

This article is a high-level overview of what is a relatively comprehensive mandatory risk assessment and compliance framework. The eSafety Commissioner has published separate codes for different categories of service and product providers, each directed at preventing access and exposure to this kind of material.

Each of these codes has its own specific rules and intricacies, which need to be considered and implemented quickly, but with the due care and documentation required to promote compliance.

As a result, organisations impacted by the codes should seek to assess and implement compliance requirements proactively. For advice on engaging with this process, mitigating these risks, and general compliance with the new eSafety regime, please contact one of our experts listed in this article.
