The highly anticipated Age Assurance Technology Trial Final Report (“Report”) has now been published. It is the most comprehensive evaluation of age assurance technologies ever undertaken in Australia. Commissioned by the Department of Infrastructure, Transport, Regional Development, Communications, Sport and the Arts, the trial was conducted independently and assessed a wide range of age assurance solutions — from document checks to biometric estimation and layered validation models — to determine their accuracy, usability, privacy impacts and readiness for deployment. Its findings now form the evidence base that will guide how platforms meet their statutory obligations under the new online safety regime.
Australia’s online safety regime is entering a decisive new phase. The Online Safety Act 2021 (Cth) had already established a broad regulatory framework for tackling online harms. Building on this, the Online Safety Amendment (Social Media Minimum Age) Act 2024 (Cth) imposes significant new obligations on platforms. By 10 December 2025, platforms captured by the amendment — known as Age Restricted Social Media Platforms (“ARSMPs”) — must take “reasonable steps” to prevent individuals under 16 from holding accounts.
The eSafety Commissioner’s February 2025 report confirmed that self-declaration of age will no longer be acceptable on its own; stronger, technically reliable methods must be used. The Department’s Fact Sheet (released in July 2025) reinforced that this reform is not about penalising families but about ensuring platforms themselves take responsibility for protecting children.
The Report provides the clearest evidence to date of what will be expected of platforms under Australia’s evolving online safety framework. More than 60 solutions from 48 providers were tested, spanning document-based age verification, biometric estimation tools, behavioural and contextual inference models, layered “successive validation” systems, and parental control and consent mechanisms.
The evaluation went beyond lab testing. It included school-based field trials in five jurisdictions, mystery shopper exercises, and provider practice reviews to assess performance under real-world conditions. Crucially, systems were benchmarked not only for technical accuracy but also against criteria central to regulators and users alike: privacy, inclusivity, usability, security, and alignment with international standards such as ISO/IEC 27566 and IEEE 2089.1. Most solutions assessed were already at Technology Readiness Level 7 or above, meaning they were ready for integration into live user journeys. For large platforms, this underscores that implementation is no longer theoretical but expected within existing compliance timelines.
The Report’s findings carry a clear message: age assurance is both technically feasible and commercially viable in Australia. Solutions exist that can be deployed at scale without requiring wholesale redesigns of user experience. But the Report also makes plain that compliance will not be judged by the presence of a single technical “fix”. Regulators will expect platforms to tackle a series of critical risks: avoiding unnecessary data retention, ensuring reliability across diverse demographics (including users without formal identity documents), maintaining usability in everyday conditions (such as low-quality devices or connectivity), and responding to an evolving cyber-threat environment.
For global technology platforms, the key takeaway is that Australia is positioning itself to require layered, risk-based and rights-respecting approaches to age assurance. That means adopting solutions proportionate to the service, configurable across features, and demonstrably aligned with privacy and child-safety obligations. Compliance under the new regime will rest not just on technology adoption but on a platform’s ability to evidence governance, inclusivity, and resilience in the way those tools are deployed.
While the Report confirmed that robust solutions already exist and can be deployed in Australia, it also underscored that implementation will not be straightforward. The evaluation revealed a number of legal, technical and ethical hurdles that platforms must navigate if they are to meet their obligations under the Online Safety Amendment (Social Media Minimum Age) Act 2024 (Cth). These challenges go beyond simply adopting a tool; they speak to how age assurance is designed, governed and integrated into user experiences.
a. Absence of a One-Size-Fits-All Solution
The Report found that age verification using official records provides the strongest assurance, but can exclude those without identity documents. Biometric age estimation delivers speed and convenience, but its probabilistic nature makes it unsuitable for legally sensitive contexts. Behavioural or contextual inference offers low-friction checks but risks embedding bias if signals are poorly chosen. For social media, this means no single technology can cover all use cases: layered and risk-based approaches are essential.
b. Privacy and Data Protection
A concerning trend was the unnecessary retention of biometric or identity information, often in anticipation of potential regulatory requests. Such practices sit uneasily with the Privacy Act 1988 (Cth) and undermine trust. Regulators will expect data minimisation, anonymisation and strict retention limits as part of compliance.
c. Inclusivity and Fairness
Systems generally performed consistently across demographics, but gaps remain for remote users, those without formal ID and some Indigenous communities. Without targeted measures to address these gaps, platforms risk excluding vulnerable groups and facing claims of discrimination.
d. Usability and Accessibility
The Report found that performance can degrade in everyday conditions — poor lighting, low-quality cameras or complex parental configurations all created friction. For high-volume services like social media, poor usability not only damages user experience but risks driving circumvention.
e. Cybersecurity Threats
Providers are actively responding to risks such as spoofing, injection attacks and deepfakes, but the threat environment is evolving rapidly. Resilience requires continuous testing and improvement, not one-off compliance.
f. Legal and Governance Uncertainty
Where platforms rely on vendors or stack-level solutions (for example, app stores or device settings), accountability becomes blurred. Without contractual clarity, companies could be exposed to liability for failures outside their direct control. Regulators will expect clear governance structures and evidence of oversight.
g. Children’s Rights and Parental Involvement
Parental controls and consent mechanisms were found to be technically effective but limited. They are often static, assume traditional family structures and risk undermining children’s autonomy as they mature. Platforms cannot rely on them as standalone compliance measures, though they remain valuable as part of layered assurance.
For ARSMPs already preparing for compliance with the Online Safety Amendment (Social Media Minimum Age) Act 2024 (Cth), the Report offers practical insight into how strategies should be refined. Risk-based deployment is already shaping implementation: higher-risk functions such as livestreaming and private messaging are increasingly paired with robust verification checks, while lower-risk features are supported by lighter-touch estimation or inference.
Privacy-by-design is emerging as the baseline. Many providers are focusing on minimising identity data collection, using anonymised tokens, session-based checks and on-device processing. These measures are consistent with both the Privacy Act 1988 (Cth) and international standards such as ISO/IEC 27566.
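By way of illustration only, a provider-side token flow consistent with these principles might look like the sketch below: the platform learns only that an age check was passed, and nothing about the document or biometric used. The signing scheme, claim names and 15-minute lifetime are hypothetical assumptions, not details drawn from the Report.

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

SIGNING_KEY = secrets.token_bytes(32)   # held by the age assurance provider only
TOKEN_TTL_SECONDS = 15 * 60             # hypothetical short lifetime; nothing retained long term


def issue_age_token(check_passed: bool) -> str | None:
    """Issue a signed, anonymised token asserting only that an age check succeeded.

    No name, date of birth, document or biometric data is carried in the token.
    """
    if not check_passed:
        return None
    claim = {
        "claim": "age_check_passed",              # the only assertion made
        "nonce": secrets.token_hex(8),            # limits replay and cross-service correlation
        "exp": int(time.time()) + TOKEN_TTL_SECONDS,
    }
    payload = base64.urlsafe_b64encode(json.dumps(claim).encode()).decode()
    signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{signature}"


def verify_age_token(token: str) -> bool:
    """Check the signature and expiry; the relying platform never sees identity data."""
    payload, signature = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False
    claim = json.loads(base64.urlsafe_b64decode(payload))
    return claim.get("claim") == "age_check_passed" and claim["exp"] > time.time()
```

In practice a provider would more likely use asymmetric signatures so that a platform can verify tokens without being able to mint them, but the point of the sketch is the data flow: only a pass/fail claim with a short expiry ever leaves the check.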
Inclusivity is also central to platform design choices. Platforms are testing across diverse demographics, ensuring fallback mechanisms for users without standard identification, and developing parental features that can be adapted to non-traditional family and caregiving arrangements.
Layered assurance models are gaining traction. Successive validation — beginning with estimation or inference and escalating to verification when confidence is low — enables platforms to balance user experience with regulatory assurance. This proportional approach aligns with international standards and reflects what regulators are likely to expect in practice.
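As a purely illustrative sketch (the threshold, labels and function names below are assumptions, not figures from the Report), the escalation logic can be expressed in a few lines: accept a low-friction estimate when its confidence clears a calibrated threshold, and route the user to a stronger verification step when it does not.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative threshold only; a real deployment would calibrate this per
# feature risk tier (e.g. stricter for livestreaming or private messaging).
CONFIDENCE_THRESHOLD = 0.90


@dataclass
class EstimationResult:
    over_16: bool        # estimator's best guess at the age gate
    confidence: float    # 0.0-1.0 score from a biometric or behavioural estimator


def successive_validation(estimate: EstimationResult,
                          document_check: Callable[[], bool]) -> bool:
    """Accept a low-friction estimate when confidence is high; otherwise
    escalate to a stronger, document-based verification step."""
    if estimate.confidence >= CONFIDENCE_THRESHOLD:
        return estimate.over_16
    # Low confidence: fall back to verification against official records.
    return document_check()


# Example usage with a stubbed verification step.
allowed = successive_validation(
    EstimationResult(over_16=True, confidence=0.72),
    document_check=lambda: True,   # placeholder for an ID or document workflow
)
```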
Finally, governance is being tightened. Platforms are clarifying contractual responsibility with vendors and documenting internal decision-making to demonstrate accountability. Engagement with the eSafety Commissioner’s consultations has become standard practice, with platforms preparing to show that their solutions are not only effective but also respectful of privacy and children’s rights.
The pressing challenge for ARSMPs is to implement age assurance technologies in ways that are proportionate, inclusive and resilient. There is no single solution, but there is now a clear expectation that platforms will combine methods intelligently to deliver both accuracy and usability.
Privacy, fairness and governance will be the decisive factors. Regulators will not simply look for the presence of a technical fix, but for evidence that platforms have embedded a layered, rights-respecting approach to age assurance. A key message from the Report is that the tools exist and are maturing quickly. It is now up to ARSMPs to show that they can deploy them responsibly, transparently and in a way that delivers on the promise of safer digital environments for young Australians.