A spotlight on Australia's privacy reform - What Lies Ahead for AI and Automated Decision-Making?

Artificial intelligence (AI) and its partner in crime, automated decision-making (ADM), are creeping into everyday life in ways we do not necessarily expect or understand. For example, the Australian Border Force uses ADM systems to perform facial recognition at airport passport controls, and Transport for NSW uses them to detect illegal mobile phone use by drivers.

As AI and ADM become more advanced and ubiquitous, regulators and lawmakers around the world are trying to strike the right balance between regulation and innovation for these cutting-edge technologies.

In Australia, a range of laws, law reform proposals and guidelines are already in place or underway. Some shine a light on the direction Australia is likely to take; others could lead to significant changes in how we develop and interact with these technologies in our daily lives.

To better understand AI and ADM and their future in Australia, this article looks at some of the existing and proposed changes.

Opportunities and risks arising from AI and ADM

AI is a system that generates predictive outputs for a given set of human-defined objectives or parameters and can be designed to operate with varying levels of automation. Among the various applications of AI, an area of recent development and rising concern is the use of automated AI systems in the decision-making process, also known as ADM.

It is now not uncommon for various sectors of the economy to use AI and ADM in their daily operations. As noted above, the Australian Border Force and Transport for NSW already use ADM systems to perform their public duties. AI tools have also been used in hospitals to consolidate large amounts of patient data and analyse medical images, as well as by engineers to evaluate and optimise designs to improve building safety. However, as with many advances, the benefits come with risks.

One serious risk is algorithmic bias: systematic or repeated decisions that privilege one group over another, often arising from the small or unrepresentative datasets on which an ADM system relies. Another risk is AI hallucination, where misleading or erroneous outputs result from faulty algorithms or outdated datasets.

As these risks become more prevalent, regulators and individuals have taken legal action against companies that use AI and ADM in their businesses, with some successful actions under existing consumer protection laws. In April 2022, Trivago was ordered to pay a penalty of AUD 44.7 million for misleading consumers that its website identified the cheapest rate for a given hotel room when, in reality, its algorithm ranked hotel rooms based on which booking site paid Trivago the highest fee. More recently, in February 2024, Air Canada was held liable for information provided by its chatbot, which negligently misrepresented to a consumer that he could obtain a discount on his flight ticket.

While the above cases suggest that existing laws may address some of the risks arising from AI and ADM, the lack of AI-specific legislation has left many other serious risks unattended. For instance, existing laws arguably do not cover AI-generated deepfakes used to spread false information online, such as the widely circulated deepfaked image of Taylor Swift endorsing Donald Trump at the Grammy Awards. The absence of AI-specific legislation has also caused much public concern over AI-generated election misinformation, as the world is about to witness a record number of national elections in 2024.

The EU Approach

As with privacy and data protection, the EU has once again taken the lead by introducing one of the world's first comprehensive pieces of AI-specific legislation, the EU AI Act, which is expected to enter into force in the second or third quarter of 2024.

Significantly, the EU AI Act seeks to govern AI via a risk-based approach, that is, imposing different responsibilities on AI developers and users based on the level of risk associated with certain types of AI technology. The EU AI Act also establishes a European AI Office to oversee the enforcement and implementation of the Act.

Upcoming AI and ADM regulation in Australia

Australia currently has neither a single AI regulator nor any AI-specific legislation. Instead, AI is governed by a diverse group of regulators and a wide range of legislation, including consumer, privacy, competition and copyright laws. While Australia is unlikely to overcome this fragmented approach to AI regulation anytime soon, recently introduced law reforms could address some of the risks arising from AI and ADM.

Privacy and ADM

In September 2023, the Australian Government confirmed in its response to the Privacy Act Review Report that it “agrees” to implement the following proposals:

  • privacy policies should set out the types of personal information that will be used in substantially automated decisions which have a legal or similarly significant effect on an individual’s rights (Proposal 19.1);
  • high level indicators of the types of decisions with a legal or similarly significant effect on an individual’s rights should be included in the Privacy Act and this should be supplemented by OAIC guidance (Proposal 19.2); and
  • a right for individuals to request meaningful information about how substantially automated decisions with legal or similarly significant effect are made should be introduced and entities should be required to include information in privacy policies about the use of personal information to make substantially automated decisions with legal or similarly significant effect (Proposal 19.3).

These proposals are similar to the existing ADM laws under the EU’s General Data Protection Regulation and California’s 2023 draft automated decision-making technology regulations, particularly in regards to the right to access information about ADM.

AI-Generated Misinformation and Disinformation 

In January 2023, the Department of Infrastructure, Transport, Regional Development, Communications and the Arts released the draft Communications Legislation Amendment (Combating Misinformation and Disinformation) Bill 2023 for public consultation (Draft Bill). 

The Draft Bill provides the Australian Communications and Media Authority (ACMA) with new powers to combat online misinformation and disinformation, including misinformation and disinformation on digital platforms generated by AI technology. Among other provisions, a registered industry code or ACMA standard may be established to require digital platforms to self-regulate bots that disseminate such information.

While public consultation closed in August 2023, the Draft Bill has not yet been tabled in Parliament.

Safe and Responsible AI

In January 2024, the Government published its interim response to the Safe and Responsible AI in Australia consultation and articulated the following next steps, although it is currently unclear how these commitments will be implemented. The Government will:

  • consider new mandatory guardrails for organisations developing and deploying AI systems in high-risk settings;
  • collaborate with the National AI Centre and industry players to develop an AI Safety Standard and consider the merits of voluntary labelling and watermarking of AI-generated material in high-risk settings;
  • strengthen existing laws (e.g., privacy and online safety laws) and build on proposed reforms (e.g., misinformation and disinformation) to address AI risks;
  • support the development of a State of the Science report in accordance with Australia’s commitments under the Bletchley Declaration;
  • continue to engage internationally to help shape global AI governance; and
  • build on the Government’s existing investment to grow Australia’s national capabilities to develop and adopt automation technologies.

The above developments build upon Australia’s AI Ethics Principles, the eSafety Commissioner’s Safety by Design initiative and the Digital Platform Regulators Forum’s recently released working papers on algorithms and large language models.

How businesses can respond to these changes

To stay ahead of the curve, businesses are encouraged to: 

  • start considering what updates will need to be made to their privacy policies and collection notices so that individuals are adequately informed about any use of AI and ADM in relation to their personal information; 
  • maintain a record of any AI technology used in the business, detailing how it is used, what it is used for and, if used to make certain decisions, an outline of the ADM process; 
  • be prepared to engage with individuals requesting information on how automated decisions are made; and
  • be alert to the evolving AI and ADM landscape by being up to date with any publication issued by key regulators and the Australian Government.

The Bird & Bird Privacy & Data Protection team are supporting clients in navigating key changes in privacy reform around AI and automated decision-making. Please do not hesitate to contact the contributors if you would like to discuss AI and ADM regulation in Australia and its likely impact on your business.
