Australia has entered a new phase of AI maturity, and the tone has changed. What was once a space for experimentation is now a regulated capability with deadlines, penalties and expectations that are no longer optional. The Government’s Policy for the Responsible Use of AI in Government (December 2025) has moved from guidance to obligation. Agencies must complete AI Impact Assessments before deploying any system, maintain internal registers of all AI deployments, assign accountable executives and ensure every APS employee completes foundational AI training by the end of 2026. These obligations begin enforcement in June 2026 and become fully mandatory by December 2026.
They land just ahead of the Privacy Act reforms on 10 December 2026, which introduce mandatory transparency for automated decision making, expanded definitions of personal information, stronger consent requirements and penalties up to 50 million dollars for serious or repeated breaches. AI is no longer a shiny toy. It is regulated infrastructure.
The shift is structural. Agencies must now articulate the purpose and intended outcomes of every AI system, document data sources, data lineage and data quality, and identify model risks, limitations and potential harms. They must demonstrate human oversight mechanisms including review points, escalation pathways, override capabilities and evidence that humans understand the model’s limitations. They must also provide mitigation strategies and assurance evidence that regulators can audit.
Every agency must maintain a central register of all AI systems, capturing the accountable owner, risk classification, training data sources, deployment context and assurance status. The policy also mandates that every AI system must have a single accountable executive responsible for its performance, compliance, risk posture and lifecycle governance. AI is no longer owned by a project team. It is owned by a leader who can be held to account.
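The policy describes what an impact assessment and a register entry must capture without prescribing a format. As a minimal sketch, assuming an agency keeps its register as structured data, the fields might look like the following; every name here is illustrative, not drawn from the policy itself.

```python
from dataclasses import dataclass
from enum import Enum

class RiskClassification(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AIRegisterEntry:
    # Field names are illustrative; the policy prescribes the content, not a schema.
    system_name: str
    purpose: str                       # articulated purpose and intended outcomes
    accountable_executive: str         # the single named owner
    risk_classification: RiskClassification
    data_sources: list[str]            # training data sources and lineage references
    deployment_context: str
    human_oversight: list[str]         # review points, escalation paths, override capability
    known_limitations: list[str]       # documented model risks and potential harms
    impact_assessment_completed: bool  # assessments are required before deployment
    assurance_status: str              # e.g. "under review", "assured", "remediation required"

def may_deploy(entry: AIRegisterEntry) -> bool:
    """Deployment gate: no completed impact assessment or named owner, no deployment."""
    return entry.impact_assessment_completed and bool(entry.accountable_executive)
```

Even a skeleton like this makes the accountability shift concrete: the record cannot be complete without a named executive, and the system cannot ship without an assessment.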
Transparency is now a legal expectation. If an AI system materially affects a citizen, agencies must publish a transparency statement explaining that AI is being used, what decisions it influences, how the system works at a high level, how risks are mitigated and how individuals can challenge or appeal decisions. This anticipates the Privacy Act’s automated decision-making transparency obligations coming into force in December 2026.
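Building on the hypothetical register entry above, a transparency statement could be generated straight from the same record, covering the five elements the policy lists. The function and its parameters are illustrative assumptions, not a mandated template.

```python
def transparency_statement(entry: AIRegisterEntry, how_it_works: str,
                           appeal_channel: str) -> str:
    """Render the five elements the policy lists for citizen-facing systems."""
    return "\n".join([
        f"Artificial intelligence is used in: {entry.system_name}.",
        f"Decisions it influences: {entry.purpose}.",
        f"How it works, at a high level: {how_it_works}.",
        f"How risks are mitigated: {'; '.join(entry.human_oversight)}.",
        f"How to challenge or appeal a decision: {appeal_channel}.",
    ])
```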
The APS wide training mandate is equally significant. Every public servant must complete foundational AI training by the end of 2026, making it one of the most ambitious capability uplift programs in the world.
Australian industry leaders are reinforcing the urgency. CSIRO Chief Scientist Bronwyn Fox has warned that AI capability without governance is a liability, emphasising that Australia must build systems that are transparent and defensible. Data61 Director Jon Whittle has stressed that organisations need explainability, auditability and strong data foundations before they scale AI. Telstra’s CTO Kim Krogh Andersen has said that AI adoption must be matched with responsible engineering and clear accountability, especially in critical infrastructure. NAB’s Chief Data Officer Glenda Crisp has been explicit that AI governance is now a board level conversation, not a technology conversation. These voices reflect a national shift. Australia expects AI to be safe, explainable and accountable.
Regulation is tightening fastest in sectors where failure has real consequences.
In healthcare, AI used for diagnostics and triage is treated as high risk technology requiring clinical validation, documented oversight, traceable data provenance and safety evidence.
In financial services, ASIC’s REP 798 revealed that half of reviewed licensees had no policies addressing fairness or bias in AI enabled decision making. APRA’s CPS 230, which came into effect in July 2025, now requires boards to understand the operational risks created by AI systems and treat AI vendors as material service providers subject to heightened oversight.
In energy and utilities, operators must prove that AI enabled optimisation and predictive maintenance do not compromise operational resilience or safety.
Even government procurement has changed. Agencies must justify AI use cases, document risks and publish transparency statements for systems that materially affect citizens, turning procurement into an AI assurance checkpoint.
AI governance has now reached the boardroom. Boards are expected to demonstrate oversight of AI systems, understand their limitations, ensure alignment with organisational values and make risk based deployment decisions. They must be able to explain the governance structures and reporting lines that support AI.
This mirrors the evolution of cybersecurity governance. What began as a technical concern is now a strategic responsibility. Boards that cannot articulate how AI is governed, or who is accountable, will face scrutiny from regulators, auditors and investors.
The most mature organisations in Australia are now investing in capability mapping, AI‑ready role architecture, skills uplift across engineering and governance, and cross‑functional operating models that integrate data, cyber, cloud, and AI. This reflects a broader realisation: AI outcomes depend on organisational capability, not tools.
The capability gap is now a compliance risk. ADAPT’s State of the Nation: Data and AI 2025 shows that 68 percent of Australian organisations report less-than-partial data integration, making it difficult to trace data lineage or produce the documentation regulators now expect. Fewer than 6 percent mandate enterprise wide AI training, despite the APS making it compulsory. More than 70 percent report that AI initiatives have not delivered measurable value, with healthcare and energy reporting the highest underperformance.
Organisations are being asked to govern systems they cannot fully explain, validate or secure. The shortfall is most visible in governance maturity, model risk frameworks, AI assurance processes, cross functional structures, cyber capability for AI specific threats and workforce readiness.
Australia has the infrastructure, the policy momentum, and the appetite. What it lacks – and what will determine the next decade – is capability. The organisations that build AI‑ready capability now will shape Australia’s economic trajectory through the 2030s. Those that delay will find themselves dependent on external vendors, constrained by governance gaps, and unable to operationalise AI at scale.
The NIST AI Risk Management Framework (AI RMF 1.0) has become the reference point for what Australian regulators implicitly expect. Its Govern function establishes accountability, policies and governance structures. Its Map function requires system registers, impact assessments and risk identification — directly aligned with Australia’s mandatory AI registers and AI Impact Assessments. Its Measure function focuses on evaluations, robustness, fairness and explainability — the exact areas ASIC, APRA and the Privacy Act reforms now scrutinise. Its Manage function emphasises risk mitigation, monitoring and human oversight — the core of Australia’s high risk AI requirements. NIST’s Generative AI Profile and upcoming critical infrastructure profiles further support the sectors under the most scrutiny in Australia.
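That alignment can be restated as a simple lookup. The groupings below paraphrase this article’s mapping of Australian obligations to RMF functions; they are an illustrative summary, not an official crosswalk, and the gap-check helper is a hypothetical convenience.

```python
# Restates the article's alignment between NIST AI RMF 1.0 functions and the
# Australian obligations discussed above; illustrative, not an official crosswalk.
NIST_RMF_TO_AU_OBLIGATIONS = {
    "Govern":  ["accountable executives", "board level governance", "APS wide training"],
    "Map":     ["central AI registers", "AI Impact Assessments", "documented risks and harms"],
    "Measure": ["fairness and bias evaluation", "explainability", "robustness evidence"],
    "Manage":  ["human oversight and override", "ongoing monitoring", "mitigation and assurance"],
}

def outstanding_obligations(evidenced: set[str]) -> dict[str, list[str]]:
    """List obligations not yet evidenced, grouped by RMF function."""
    return {fn: [ob for ob in obs if ob not in evidenced]
            for fn, obs in NIST_RMF_TO_AU_OBLIGATIONS.items()}
```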
Australia’s regulatory posture is no longer exploratory. It is enforceable. The Government now requires documented AI risks, transparent AI use, accountable owners, human oversight, APS wide training, privacy aligned data practices and board level governance. Organisations that build capability now will deploy faster, reduce risk and avoid costly remediation once the Privacy Act reforms take effect. Those that delay will find themselves governed by forces they can no longer influence.