Australia: Designing a digital duty of care

Though a duty of care aligns well with the approaches in the EU and UK, the Australian Government is signalling it will go further than its peers in regulating online safety

Progress on a promise from 2024 

On 15 November 2025, the Australian Government, via the Department of Infrastructure, Transport, Regional Development, Communications, Sport and the Arts, opened a consultation on legislating a new digital duty of care under the existing online safety framework. The consultation follows a November 2024 commitment to advance legislation on the duty of care, which the Government restated in February 2025 after an independent review of the Online Safety Act recommended introducing the measure. Anika Wells (Minister for Communications, Australia) described the Government’s proposals as needed updates to the “reactive system” currently in place, noting more broadly that “Australia leads the world in online safety”. Introducing a duty of care and obligations on services to take proactive steps to protect users would bring Australia’s framework into closer alignment with its international peers, namely the EU and UK. The consultation runs only until 7 December 2025, closing days before the country’s Social Media Minimum Age Act, which bans under-16s from holding social media accounts, takes effect on 10 December 2025.

The proposed measures respond to criticism that existing regulation is reactive and seeks to keep minors away from harms rather than addressing those harms directly

Summarising the findings of the review of the Online Safety Act, the consultation states that the existing framework is primarily geared towards allowing consumers to report online harms after they occur, rather than preventing harm from arising in the first place. By introducing a duty of care, the Government foresees the onus of managing online harms shifting from users making reports onto platforms implementing preventative measures. While the Online Safety Act, passed in 2021, further developed platforms’ responsibilities for mitigating harms related to illegal content, the framework was designed around the premise of “notice and takedown”, which has limited the scope of action required of platforms. Linking the Government’s delay in introducing the digital duty of care to its work on the Social Media Minimum Age Act, and to a perceived failure to address the harms behind these newly mandated age gates, Sarah Hanson-Young (Senator, Australian Greens) argues that “rather than policing the sharks, the government is just hoping kids don’t jump in the water”. At a minimum, the duty of care would require platforms to take proactive steps to limit users’ exposure to several types of illegal content, including:

  • Online sexual exploitation and abuse, including child sexual abuse material (CSAM);

  • Sexual extortion and the nonconsensual sharing of intimate images, including those generated by AI; 

  • Content that incites violence or supports terrorism; and 

  • Technologically enabled abuse including stalking, coercion and doxxing (the sharing of an individual’s personal information for the purposes of harassment). 

The Government appears poised to outpace the EU and UK in regulating more types of content, service features and service providers 

Beyond addressing the spread of illegal content, the consultation also considers responses to legal but harmful content, often referred to as “lawful but awful”, as well as structural features of future legislation, including the types of platforms to be captured by regulation and the design of mandated dispute resolution mechanisms. The Government seeks feedback on the types of content, service features and types of services that may pose harm to children as well as adults.

In many instances, the Australian Government appears well aligned with the approaches of the EU and the UK to online safety, such as in highlighting the dangers of content that promotes self-harm and of services that use addictive design features. However, the inclusion of common service features, such as “ephemeral” or disappearing content like the core messaging function of Snapchat, as well as a wider range of service types, including gaming providers and generative AI services, signals the Government’s intent to again go further than its peers in regulating online safety.