“Reasonable Steps” under Part 4A of the Online Safety Act 2021 (Cth): eSafety Commissioner Guidance and Implications

Introduction

From 10 December 2025, platforms classified as age-restricted social media platforms (“ARSMPs”) under Part 4A of the Online Safety Act 2021 (Cth) (“OSA”), as amended by the Online Safety Amendment (Social Media Minimum Age) Act 2024 (Cth), must be able to demonstrate that they are taking “reasonable steps” to prevent children under 16 in Australia from holding accounts on their services. Failure to do so exposes companies to investigation and enforcement by the eSafety Commissioner.

On 16 September 2025, the Commissioner issued the Social Media Minimum Age Regulatory Guidance (“Guidance”) on what constitutes “reasonable steps” to prevent those aged under 16 from holding accounts. The Guidance makes it clear that businesses will be judged on whether they have adopted a proportionate, layered and accountable strategy, not on whether they rely on a single technical fix. For in-scope companies, this means that time is running out to detect existing under-age users, review sign-up processes, deploy workable age-assurance measures and put governance frameworks in place before regulators begin to scrutinise compliance. Waiting until enforcement begins risks reputational damage, regulatory intervention and costly penalties under the Commissioner’s expanded powers.

While some platforms are exempt, such as those whose sole or primary purpose is messaging, professional networking, sharing information about products or services, education, health or online gaming, many other social media services remain squarely in scope. The restriction applies only to holding accounts — under-16s can still view publicly available content without logging in — but the compliance burden sits with platforms to evidence their approach.

For background on how age assurance technologies were tested and evaluated in Australia, see our earlier article on the Age Assurance Technology Trial.

Background to the Guidance

Part 4A of the OSA builds on Australia’s existing online safety framework, requiring ARSMPs to prevent children under 16 from holding accounts on captured platforms. Unlike earlier reliance on self-declared age, this new regime requires platforms to use robust, verifiable, and rights-respecting measures.

The Commissioner’s Guidance outlines:

  • the principles that will inform compliance assessment;
  • the baseline expectations for different types of platforms; and
  • the factors that may raise or lower the standard of “reasonable steps” depending on context (for example, platform size, risk profile, or user base).

The Guidance also confirms that platforms will be expected to detect and deactivate or remove existing underage accounts, with care and clear communication to affected users. While platforms must notify account holders, the Guidance makes clear that they are not required to delete content already posted by under-16 users. Instead, platforms should consider offering options to preserve or manage that content. Taking this approach may help demonstrate the level of “care” regulators expect under the regime, and avoid unnecessary user disruption or reputational risk.

Crucially, this Guidance also sets out what reasonable steps are not intended to require.

What Constitutes “Reasonable Steps”?

The Guidance confirms that “reasonable steps” will be assessed holistically, considering the measures adopted, the way they are implemented and the risk of harm presented by the platform. Five key themes emerge:

a. Proportionality and Risk-Based Approach

Platforms are expected to calibrate their measures to the nature of their service and associated risks. For higher-risk features (e.g. posting functionality or livestreaming), regulators will expect stronger forms of age assurance. Lower-risk functions may be supported by lighter-touch measures, provided they are part of a coherent overall strategy. 

b. Layered and Flexible Solutions

No single tool will suffice. Platforms must adopt layered approaches — combining methods such as biometric estimation, identity document verification, and contextual signals — to increase reliability and reduce circumvention. Flexibility to adapt to evolving threats and technologies is also part of the assessment.
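
By way of illustration only (the Guidance does not prescribe any particular design, and the signal names, thresholds and escalation logic below are hypothetical), a layered approach might apply a low-friction estimation method first and escalate to stronger verification where confidence is low:

```python
from dataclasses import dataclass
from enum import Enum


class Outcome(Enum):
    LIKELY_16_PLUS = "likely_16_plus"
    LIKELY_UNDER_16 = "likely_under_16"
    NEEDS_VERIFICATION = "needs_verification"


@dataclass
class AgeSignals:
    estimated_age: float   # e.g. output of a facial age-estimation model
    confidence: float      # model confidence between 0.0 and 1.0
    account_age_days: int  # contextual signal; often combined with the above


def assess(signals: AgeSignals, min_confidence: float = 0.9, buffer: float = 2.0) -> Outcome:
    """First layer: accept only high-confidence estimates well clear of 16;
    anything borderline escalates to a stronger method (e.g. ID verification)."""
    if signals.confidence >= min_confidence:
        if signals.estimated_age >= 16 + buffer:
            return Outcome.LIKELY_16_PLUS
        if signals.estimated_age < 16 - buffer:
            return Outcome.LIKELY_UNDER_16
    # Low confidence or a borderline estimate falls through to the next layer.
    return Outcome.NEEDS_VERIFICATION
```

The design choice here, a confidence threshold plus an age buffer either side of 16, reflects the Guidance’s theme that lighter-touch tools can carry clear-cut cases while stronger assurance is reserved for uncertain ones.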

c. Privacy and Data Protection

“Reasonable steps” must be consistent with Australian privacy law. Data minimisation, on-device processing, and strict retention limits are central expectations. Solutions that unnecessarily retain biometric or identity information will weigh against a platform in compliance assessments.
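
In practice, a data-minimising design would retain only the outcome and method of a check, never the raw biometric or document inputs. The record below is a hypothetical illustration, not a format required by the Guidance or by Australian privacy law:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class AgeAssuranceRecord:
    """Minimal record retained after an age check.

    Raw inputs (face images, ID scans) are processed transiently, ideally
    on-device, and discarded once an outcome is produced. Deliberately
    absent: date of birth, document numbers, biometric templates.
    """
    user_id: str
    method: str       # e.g. "facial_estimation" or "document_verification"
    passed: bool      # whether the 16+ threshold was met
    checked_at: datetime


# Usage: record the result of a check, nothing more.
record = AgeAssuranceRecord("u-123", "facial_estimation", True,
                            datetime.now(timezone.utc))
```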

d. Governance and Accountability

The Guidance stresses that platforms must demonstrate robust governance frameworks. This includes:

  • contractual clarity where third-party vendors are used;
  • internal oversight and documentation of decision-making;
  • active monitoring of system effectiveness; and
  • readiness to provide evidence to the Commissioner.

e. Removal, Prevention and Review

Platforms will need to implement measures to prevent re-registration or circumvention by underage users whose accounts are removed. This may require steps to counter circumvention techniques such as VPN use, as well as errors by age-assurance tools. At the same time, platforms must ensure that users who are incorrectly restricted (for example, those aged over 16 misidentified by age-assurance tools) are provided with accessible review mechanisms to restore their accounts.
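
A minimal sketch of how those two obligations might coexist, assuming a hypothetical sign-up flow in which identifiers of removed underage accounts are stored only as salted hashes rather than in the clear:

```python
import hashlib


def signup_gate(email: str, removed_hashes: set[str],
                salt: bytes = b"rotate-periodically") -> str:
    """Block re-registration by previously removed underage users, while
    always routing blocked users to a review pathway so that anyone over
    16 caught by mistake can restore access."""
    digest = hashlib.sha256(salt + email.strip().lower().encode()).hexdigest()
    if digest in removed_hashes:
        return "blocked_pending_review"  # surface the appeals/review flow
    return "proceed_to_age_assurance"
```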

Importantly, the eSafety Commissioner has signalled that expectations in this area will continue to evolve. ARSMPs cannot take a one-off compliance approach: they will be expected to continually monitor and update their measures as tools and technologies develop, and as industry standards mature. This means building internal governance processes that keep pace with change, ensuring that systems are regularly reviewed and that compliance strategies remain defensible over time.

What the Guidance Does Not Require

The Guidance is clear that “reasonable steps” do not mean platforms must:

  • Guarantee absolute accuracy of every age check — the expectation is for robust, proportionate measures, not perfection.
  • Adopt one prescribed technology — platforms have flexibility to select and layer tools suited to their service.
  • Collect or store large volumes of identity data — in fact, data minimisation and privacy-by-design are central.
  • Prevent every single child under 16 from accessing services in all circumstances — the standard is about taking reasonable measures, not achieving zero-risk outcomes.
  • Re-engineer entire services from the ground up — compliance can be achieved through proportionate integration of assurance measures rather than wholesale redesign.
  • Verify the age of all users — a blanket approach may be considered unreasonable where existing data can reliably infer age.
  • Retain user-level data from age checks — platforms are expected to keep records of systems and processes, not detailed personal information.

This balance is intentional: regulators are seeking accountable, evidence-based approaches that respect children’s rights and user privacy, rather than imposing a rigid compliance model.

Practical Implications for Platforms

The Guidance provides clear signals about what regulators will look for in practice:

  • Risk-based deployment: allocating stronger checks to high-risk features.
  • Layered assurance: starting with estimation and escalating to verification where confidence is low.
  • Inclusive design: ensuring fallback pathways for users without standard ID, and accounting for diverse demographics.
  • Privacy-by-design: adopting measures that minimise data use while maintaining effectiveness.
  • Continuous monitoring: testing resilience against evolving cyber threats, including deepfakes and spoofing.
  • Engagement and transparency: consulting with children, parents, and civil society, and publishing clear explanations of how age assurance works.
  • Account management: putting processes in place to detect, deactivate and, where appropriate, preserve the content of underage accounts, while preventing re-registration or circumvention.

What do ARSMPs need to do?

The eSafety Commissioner’s Guidance makes clear that “reasonable steps” are not a check-box exercise. Platforms must take a holistic, proportionate, and evidence-based approach that balances safety with privacy and inclusivity.

The clear call to action is that ARSMPs must now implement a proportionate, layered and evidence-based age-assurance program. This means moving quickly to embed age-assurance tools that respect privacy, building governance structures that withstand regulatory scrutiny, and establishing processes for ongoing monitoring as technologies evolve.

With the 10 December 2025 deadline fast approaching, the question is not whether compliance solutions exist, but whether ARSMPs can deploy them responsibly and transparently in time to meet expectations and deliver safer online spaces for young people in Australia.

