What Changed in Australia's Privacy Act for AI?
Australia's 2026 Privacy Act amendments introduce mandatory transparency, notification, and contestability requirements for automated decision-making systems that materially affect individuals' rights or interests. These amendments represent Australia's most significant step toward regulating AI, bringing it closer to the protections already established under GDPR and the EU AI Act.
The amendments build on the Attorney-General's comprehensive Privacy Act Review (completed in 2023), which recommended over 100 reforms including specific provisions for automated decision-making. After extensive consultation with industry and civil society, the government enacted a phased package of reforms that includes AI-specific obligations.
The key change is the introduction of a new right for individuals to be informed when a "substantially automated" decision materially affects them, and to request human review of that decision. This applies to any organization covered by the Privacy Act that uses AI or algorithmic systems to make, or substantially contribute to, decisions about individuals.
For enterprises operating in Australia or processing data of Australian residents, these amendments require auditing automated decision-making processes, implementing notification mechanisms, and establishing human review pathways. Organizations already compliant with GDPR Article 22 will find significant overlap, but the Australian requirements include distinct provisions that require specific attention.
Scope: What Counts as Automated Decision-Making?
The amendments define "substantially automated decision-making" broadly as any decision made by an automated system where there is no meaningful human involvement in the decision process, and the decision materially affects an individual's rights, interests, or legitimate expectations.
This definition captures a wide range of enterprise AI applications:
- Credit and lending: Automated credit scoring, loan approval algorithms, creditworthiness assessments
- Employment: AI-powered resume screening, candidate ranking, automated performance assessments
- Insurance: Algorithmic underwriting, claims assessment, premium pricing models
- Healthcare: Clinical decision support systems, treatment recommendation engines, diagnostic AI
- Government services: Automated eligibility determinations, benefit calculations, compliance assessments
- Customer management: Automated account closures, service denials, risk profiling
The threshold is "material effect" - routine AI-assisted tasks like spam filtering or content recommendations are generally excluded unless they have a significant impact on an individual's access to services or opportunities. The Office of the Australian Information Commissioner (OAIC) is expected to publish detailed guidance on the materiality threshold.
Importantly, a decision is "substantially automated" even when a human nominally approves it, if that review is perfunctory or merely rubber-stamps the AI output without genuine independent assessment. Organizations using AI to generate recommendations that are routinely accepted without scrutiny should treat those decisions as substantially automated.
Notification and Transparency Requirements
Organizations must notify individuals before or at the time a substantially automated decision is made about them, disclosing the use of automated processing, the type of personal information used, and how to request human review.
The notification must include:
- A clear statement that automated decision-making technology is being used
- A description of the type of decision being made in plain language
- The categories of personal information used as inputs to the automated system
- Information about the individual's right to request human review
- How to exercise that right (contact details and process)
Notifications must be provided in a form that is accessible, easy to understand, and available in the languages commonly spoken by the affected population. For digital services, this typically means in-app or on-screen notifications at the point of decision, supplemented by privacy policy disclosures.
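The required disclosure elements above lend themselves to a reusable template. Below is a minimal sketch of how such a disclosure might be assembled programmatically; the `DecisionNotice` structure, field names, and contact address are hypothetical, not prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class DecisionNotice:
    """Hypothetical structure covering the required disclosure elements."""
    decision_type: str        # plain-language description of the decision
    data_categories: list     # categories of personal information used as inputs
    review_contact: str       # how to exercise the right to human review

    def render(self) -> str:
        # Assemble a plain-language notice covering each required element.
        lines = [
            "This decision was made using automated decision-making technology.",
            f"Decision: {self.decision_type}.",
            "Personal information used: " + ", ".join(self.data_categories) + ".",
            "You have the right to request human review of this decision.",
            f"To request a review, contact: {self.review_contact}.",
        ]
        return "\n".join(lines)

notice = DecisionNotice(
    decision_type="assessment of your personal loan application",
    data_categories=["credit history", "income details", "repayment records"],
    review_contact="privacy@example.com",
)
print(notice.render())
```

In practice the rendered text would feed an in-app or on-screen notification at the point of decision, with translations generated per supported language.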
Organizations processing high volumes of automated decisions need scalable notification infrastructure. Areebi's platform can be configured to generate automated transparency disclosures that satisfy both Australian and international requirements.
Right to Human Review and Explanation
Individuals affected by substantially automated decisions gain the right to request human review by a qualified person, and to receive a meaningful explanation of how the decision was reached.
When an individual requests human review, the organization must:
- Assign a qualified human reviewer who has the authority and competence to genuinely assess the decision, not merely validate the AI output
- Conduct a substantive review that considers the individual's specific circumstances, not just re-run the same automated process
- Provide a meaningful explanation of the factors that contributed to the decision, the data used, and the logic applied
- Communicate the outcome within a reasonable timeframe, including any changes to the original decision
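The review obligations above can be tracked as a simple workflow record. The sketch below is illustrative only: the `ReviewRequest` structure, outcome labels, and 30-day window are assumptions for demonstration, since the Act's actual response timeframes will depend on the final provisions and OAIC guidance.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class ReviewRequest:
    """Hypothetical record for tracking a human review request."""
    decision_id: str
    requested_on: date
    reviewer: Optional[str] = None     # qualified person with authority to change the decision
    outcome: Optional[str] = None      # e.g. "upheld", "varied", or "overturned"
    explanation: Optional[str] = None  # plain-language explanation of the factors

    def due_date(self, days: int = 30) -> date:
        # Illustrative response window, not a statutory deadline.
        return self.requested_on + timedelta(days=days)

req = ReviewRequest("loan-20331", date(2026, 3, 2))
req.reviewer = "Senior credit officer"  # must be able to genuinely reassess, not just validate
req.outcome = "varied"
req.explanation = ("Your recent repayment history was reweighted after "
                   "considering circumstances the model did not capture.")
print(req.due_date())  # 2026-04-01
```

The point of the structure is that every request carries a named qualified reviewer, a substantive outcome, and an explanation, so the organization can evidence each obligation during an audit.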
The "meaningful explanation" requirement is significant because it implies a level of AI explainability that many current systems do not support. Organizations deploying complex machine learning models need to implement explainability techniques - such as SHAP values, LIME, or attention visualization - that can translate model outputs into human-understandable explanations.
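To make the idea concrete, the sketch below computes per-feature contributions for a simple linear scoring model and translates them into plain language. For a linear model, each feature's contribution is just its weight times its deviation from a baseline (which coincides with the feature's SHAP value); all weights, baselines, and applicant values here are invented for illustration. Complex models would need a dedicated library such as shap or lime instead.

```python
# Assumed linear model: score contribution_i = weight_i * (x_i - baseline_i).
weights   = {"income": 0.4, "credit_history": 0.5, "existing_debt": -0.6}
baseline  = {"income": 0.5, "credit_history": 0.5, "existing_debt": 0.5}  # assumed population average
applicant = {"income": 0.3, "credit_history": 0.8, "existing_debt": 0.7}

contributions = {f: weights[f] * (applicant[f] - baseline[f]) for f in weights}

# Translate into a plain-language explanation, largest factors first.
for feature, c in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    direction = "raised" if c > 0 else "lowered"
    print(f"Your {feature.replace('_', ' ')} {direction} the score by {abs(c):.2f} points.")
```

Even this toy version shows the target output format: ranked, directional, feature-level statements that a non-technical recipient can follow.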
Building explainability into AI systems is both a compliance requirement and a governance best practice. Areebi's AI governance framework includes explainability as a core component, and the policy engine can enforce explainability documentation requirements across all AI deployments.
Enforcement and Penalties
The OAIC enforces the amended Privacy Act with maximum penalties of AUD 50 million, three times the benefit obtained from the contravention, or 30% of adjusted turnover - whichever is greatest.
These penalties are among the most severe privacy penalties globally, comparable to GDPR's 4% of global turnover maximum. They reflect Australia's intent to make Privacy Act violations genuinely deterrent for large organizations.
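The "whichever is greatest" formula is simple to model. The figures below are hypothetical and for illustration only.

```python
def max_penalty(benefit_aud: float, adjusted_turnover_aud: float) -> float:
    """Greatest of: AUD 50 million, 3x the benefit obtained, or 30% of adjusted turnover."""
    return max(50_000_000, 3 * benefit_aud, 0.30 * adjusted_turnover_aud)

# Hypothetical: a company that gained AUD 10M from a contravention, on AUD 500M turnover.
print(max_penalty(10_000_000, 500_000_000))  # 150000000.0 (the turnover limb dominates)
```

Note that for large organizations the turnover limb will usually dominate, which is what makes the regime bite regardless of how profitable the contravention was.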
Enforcement mechanisms include:
- Commissioner-initiated investigations: The OAIC can initiate investigations based on complaints, media reports, or its own monitoring activities
- Compliance notices: Formal directions requiring organizations to take specific remedial actions
- Enforceable undertakings: Binding commitments from organizations to improve their practices
- Civil penalties: Court-imposed fines for serious or repeated breaches
- Infringement notices: Administrative penalties for less serious violations
The OAIC has indicated it will take a risk-based approach to enforcement, prioritizing cases involving sensitive information, vulnerable populations, large-scale impacts, and systemic non-compliance. Organizations that demonstrate good faith compliance efforts - including implementation of recognized AI governance frameworks - are less likely to face maximum penalties.
Compliance Steps for Australian Enterprises
Organizations should begin compliance preparation now by auditing their automated decision-making processes, implementing notification mechanisms, and building human review capabilities.
- Audit automated decisions: Identify all AI and algorithmic systems that make or contribute to decisions materially affecting individuals. Document the decision type, data inputs, output usage, and volume of affected individuals.
- Assess materiality: For each automated system, determine whether decisions meet the "material effect" threshold. Seek legal advice for borderline cases.
- Implement notifications: Design and deploy notification mechanisms for all in-scope automated decisions. Integrate notifications into existing customer touchpoints.
- Build human review processes: Establish workflows for handling human review requests, including reviewer qualification, review methodology, explanation generation, and response timelines. Staff and train the review function appropriately.
- Deploy explainability: Implement technical explainability mechanisms for AI systems subject to explanation requirements. Test that explanations are genuinely meaningful to non-technical recipients.
- Update privacy policies: Revise privacy policies and collection notices to include automated decision-making disclosures.
- Train staff: Ensure all staff involved in AI deployment, customer interaction, and human review understand the new requirements.
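The audit step above produces an inventory, and one useful convention is a scoping flag derived from the Act's two-part test (material effect plus lack of genuine human involvement). The record structure, field names, and example systems below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecisionRecord:
    """One row of a hypothetical automated-decision inventory."""
    system: str
    decision_type: str
    data_inputs: list
    annual_volume: int
    material_effect: bool    # meets the "material effect" threshold?
    human_involvement: str   # "none", "perfunctory", or "substantive"

    @property
    def in_scope(self) -> bool:
        # Substantially automated: the decision materially affects individuals
        # and lacks genuine (substantive) human involvement.
        return self.material_effect and self.human_involvement != "substantive"

inventory = [
    AutomatedDecisionRecord("CreditScore v3", "loan approval",
                            ["credit history", "income"], 120_000,
                            material_effect=True, human_involvement="perfunctory"),
    AutomatedDecisionRecord("SpamGuard", "email filtering",
                            ["message content"], 5_000_000,
                            material_effect=False, human_involvement="none"),
]
for rec in inventory:
    print(rec.system, "in scope:", rec.in_scope)
```

Note that the loan system is in scope despite having a human in the loop, because perfunctory approval does not count as genuine involvement, while high-volume spam filtering stays out of scope for lack of material effect.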
Areebi's free AI governance assessment evaluates your automated decision-making compliance posture and provides a prioritized remediation plan. The platform then operationalizes compliance with automated notifications, review workflows, and audit documentation.
Frequently Asked Questions
When do Australia's AI privacy amendments take effect?
The Privacy Act amendments are being enacted in phases throughout 2026. The automated decision-making provisions are among the priority reforms. Specific commencement dates vary by provision - organizations should monitor OAIC announcements for confirmed dates and begin preparing now.
Does the Australian Privacy Act apply to foreign companies?
Yes. The Privacy Act applies to organizations with an Australian link that handle personal information of Australian residents, including foreign companies that carry on business in Australia or collect personal information from Australian sources. The automated decision-making provisions apply to any covered organization using AI to make decisions affecting Australians.
What is the penalty for non-compliance with Australian AI privacy rules?
Maximum penalties are AUD 50 million, three times the benefit obtained from the contravention, or 30% of adjusted turnover - whichever is greatest. These are among the most severe privacy penalties globally, comparable to GDPR maximums.
How do Australia's rules compare to GDPR Article 22?
Both require notification and provide rights related to automated decision-making. Australia's amendments are similar in intent to GDPR Article 22 but differ in specifics: Australia's threshold is material effect on rights or interests (broader than GDPR's legal or similarly significant effects in some interpretations), and the human review process requirements are more explicitly defined.
Do I need to explain how my AI model works to individuals?
You must provide a meaningful explanation of the factors that contributed to the decision, the data used, and the logic applied. This does not require disclosing proprietary algorithms, but it does require explaining the decision in terms the individual can understand. Organizations need explainability mechanisms that translate model outputs into plain-language explanations.
About the Author
VP of Compliance & Trust, Areebi
Former compliance director at a Big Four consulting firm, with deep expertise in HIPAA, SOC 2, GDPR, and the EU AI Act.