
Comparing approaches to regulating online safety

Online services have brought great benefits, but they have also come with risks of harm and illegality. As regulators around the world work to establish a safer online environment, we compare how these efforts stack up and the challenges they face

  • Numerous approaches to regulating online safety have emerged in recent years. These legislative attempts have come out of similar political and social contexts surrounding child safety and online extremism. 

  • At the heart of the differences between regimes is how each defines harm. The EU’s DSA takes the most expansive view of harms, including commercial conduct, while laws like the Online Safety Act in the UK focus on harms to individual users.

  • Legislation can also be compared based on what technologies it regulates, how it treats users and what impact it is said to have on fundamental rights. Australia’s laws require the broadest group of services to comply while Singapore’s Online Safety Act provides the Infocomm Media Development Authority with the most discretion as a regulator. 

  • Many laws also focus specifically on the safety of children online, including in relation to the proliferation of illegal sexual content. The UK Online Safety Act includes provisions on content that is especially harmful to minors, and Singapore’s law requires additional tools for parental control. 

  • The success of any regime will ultimately depend on the capacity and expertise of the regulator implementing it. In the UK, Ofcom is expected to hire the largest staff, while the large per-capita spend of Ireland’s Coimisiún na Meán reflects the extensive resources needed to enforce the ambitious DSA.

  • Those countries yet to regulate, such as the US and France, are picking up many of the shared elements of existing regimes when drafting their legislation. However, a global consensus on online safety has yet to emerge, even in light of the commonly expected ‘Brussels effect’.

Online safety regulation has developed quickly in the past five years

Over the past decade, broad enthusiasm for the beneficial impacts of digital technology on society, like democratisation and social cohesion, has given way to profound concern about the harms occurring in online spaces. Over the past five years, policymakers around the world, often responding to public pressure, have moved to address the dangers of digital spaces by bringing platforms and providers into the regulatory fold. Our Platforms and Big Tech Tracker identifies a number of these adopted and pending regulatory regimes.

The UK is the most recent country to address online harms with the Royal Assent of the Online Safety Act, which follows an early campaign and a prolonged legislative process to make the UK “the safest place in the world to be online”. Prior to the UK’s efforts, other jurisdictions took up the mantle of protecting their citizens and consumers online. From the early implementation of the Network Enforcement Act (NetzDG) in Germany and the Enhancing Online Safety Act in Australia to now, approaches to mitigating harms online have evolved and endured to create a global patchwork of regulation. In the five jurisdictions we studied, we found differences in regulators’ approaches and in the scope of harms that legislation addresses (see Table 1).

Reforms have emerged out of similar social and political contexts across jurisdictions

In most jurisdictions that have adopted legislation, some form of online harm became a public talking point that acted as the narrative backdrop for legislative efforts. Commonly, tragic stories like that of Molly Russell in the UK highlighted the dangers that unregulated online spaces can pose to children. Campaigners have also pointed out that digital technologies enable persistent harms to children, like the production and dissemination of child sexual abuse material (CSAM), to proliferate more rapidly and widely. Reforms aimed specifically at protecting children online are therefore common, both as the mobilising premise of legislation and as a distinct feature of a larger regulatory framework.

The proliferation of political vitriol online, as well as violence inspired by digital content, has also been a primary context in which regulations were written. The 2019 murder in Germany of local politician Walter Lübcke informed legislators’ work to expand and amend NetzDG. Hate crimes and other acts of politically motivated violence have also been linked to digital radicalisation elsewhere in the world. As a result, many regimes take particular issue with the online dissemination of terrorist or hate content, frequently linking existing criminal codes to conduct or content that originates online.

Each regime adopts a different definition of harm which then impacts its goals for safety 

Despite common rhetoric surrounding online safety around the world, jurisdictions place different definitions of harm at the heart of their regimes, which means they have different goals when defining what makes online spaces safer. Most concretely, laws can be divided into those that regulate only illegal harm and those that also regulate content that is legal but considered harmful. NetzDG, as an early iteration of a notice-and-takedown scheme, solely sought to remove illegal content from the internet. In contrast, the Online Safety Act in the UK also seeks to regulate legal but harmful content, in addition to adding a number of new, digitally based offences to the criminal code. While the UK scaled back its more robust original proposal to minimise legal but harmful content, the law still extends well beyond the realm of illegal offences to regulate pornographic content and content that could impair the physical, mental or moral development of children.

Different jurisdictions have also focused on different victims of harm. Australia, like the United Kingdom and Singapore, focuses almost exclusively on online conduct that harms the individual users who encounter it. These more personal harms do the most damage to the users themselves, although they cumulatively injure the trust and safety of shared online spaces. The EU broadens the scope of harms further than any other regime by regulating conduct that harms communal institutions as well as individual users. The Digital Services Act (DSA) also targets the violation of intellectual property rights and other commercial conduct online, like dark patterns. These communal harms are less likely to fall within the constraints of ‘user-to-user’ content that most other regimes focus on, making the DSA a standout.

Liability is often based on platform size but varies with the type of services and the discretion of the regulator 

Each legal framework also brings different categories of services under regulatory scrutiny. Most regimes limit the reach of regulation to the content or application layer of the network stack, meaning social media services like Meta’s or search services like Google’s are primarily responsible for compliance. This scope is also normally specified to mean user-to-user services, or platforms where content is not generated centrally. However, regimes like Australia’s are unique in assigning liability across the entirety of the network stack, even down to the hardware level. Under the Australian Online Safety Act, content platforms, app distribution services, internet service providers, hosting services and manufacturers of relevant hardware all have sector-specific obligations related to online safety.

Many regimes also condition liability on thresholds related to providers’ size and impact. NetzDG and the DSA both scale obligations with the size of a service’s user base, suggesting that size alone can increase the threat of harm to users. The UK, however, assigns some duties to all platforms that offer a user-to-user service, regardless of size, with additional obligations for platforms of a certain size or risk level. Singapore offers the most discretion to its regulator, the Infocomm Media Development Authority, which designates services with sufficient reach and impact among the domestic user base on a case-by-case basis.
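
To illustrate how such size thresholds translate into tiered obligations, the sketch below (in Python) encodes the published figures: NetzDG applies to social networks with at least two million registered users in Germany, and the DSA designates services with at least 45 million average monthly active users in the EU as very large online platforms. The data structure and function names are hypothetical, chosen only for this illustration.

    from dataclasses import dataclass

    # Published thresholds; everything else in this sketch is hypothetical.
    NETZDG_THRESHOLD = 2_000_000       # registered users in Germany
    DSA_VLOP_THRESHOLD = 45_000_000    # average monthly active users in the EU

    @dataclass
    class Service:
        name: str
        registered_users_de: int
        monthly_active_users_eu: int

    def applicable_tiers(service: Service) -> list[str]:
        """Return the size-based obligation tiers a service falls under."""
        tiers = ["DSA baseline duties"]  # the DSA applies to all intermediaries
        if service.registered_users_de >= NETZDG_THRESHOLD:
            tiers.append("NetzDG complaint-handling duties")
        if service.monthly_active_users_eu >= DSA_VLOP_THRESHOLD:
            tiers.append("DSA very-large-platform obligations")
        return tiers

    print(applicable_tiers(Service("ExamplePlatform", 3_000_000, 50_000_000)))

Note that the UK’s approach would not fit this pattern: its baseline duties attach to all user-to-user services rather than kicking in at a user-count threshold.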

Many jurisdictions take a more active approach to protecting younger users

On the other side of the regulatory relationship, users are treated differently across online safety regimes based on their age and on how much responsibility they bear for managing their own experience. Germany’s NetzDG did not include substantially different obligations related to the experience of minor users. Australia’s Enhancing Online Safety Act, in contrast, set out an early model for protecting children online by including specific provisions for offences against minors, like cyberbullying. This focus on protecting those under 18 has developed further under political pressure around child safety and has come to frequently encompass the policing of the online spread of CSAM. Some regimes, like the newer Online Safety Act in Australia, have brought provisions for adults in line with the more stringent requirements already set out to protect children. The UK Online Safety Act was set to provide a similar degree of proactive protection for adults from harmful but legal content, but the government shifted its approach to one based on empowering adult users during the course of the legislative process.

The inclusion of user empowerment therefore frequently correlates with a jurisdiction’s interest in protecting adults online. Singapore’s Online Safety Act functions primarily by obliging designated services to give adult users control over the content they are fed, along with the transparency needed to select platforms that meet their expectations. This approach also empowers parents to manage some elements of their children’s online experiences, as opposed to imposing horizontal requirements on all accounts held by minors.

Regimes present a trade-off by challenging some protections for fundamental rights

Given the differences detailed between each jurisdiction’s approach to online safety, it is not surprising that each framework has drawn different criticisms for its impact on the fundamental rights of users. The most common complaints relate to incentivising faster and broader content takedown, undermining encryption protocols and giving regulators too much discretion to block or shut down noncompliant sites. Taken together, these online safety provisions allegedly threaten the privacy of users as well as their ability to speak freely in the digital public square.

Some regulators, like those in the UK and reportedly Australia, are conditioning controversial policies on technological advances. In the UK, Ofcom has stated it would not seek to enforce mandatory monitoring of messaging services for CSAM until a technology exists that can do so without undermining encryption protocols. Other regimes, like the EU’s DSA, create alternative incentives to prevent overzealous content moderation: platforms operating in good faith under the DSA’s notice-and-takedown system are still afforded protection from intermediary liability. Specific deadlines for responding to takedown notices, like those imposed by online safety laws in Germany and Australia, are instead seen to encourage hasty and overly cautious moderation decisions made to avoid fines. In the most extreme cases, like Singapore’s Online Safety Act, laws afford regulators the unilateral power to block local access to entire sites, including for political purposes. Concerns about placing content decisions in government hands are nonetheless balanced against the alternative, in which profit-motivated private companies have sole discretion over moderation, which was the cause for action in many of these jurisdictions to begin with.
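
To make the deadline mechanism concrete, the following sketch (in Python) models NetzDG-style response windows using the periods the law sets: 24 hours for manifestly unlawful content and seven days for other unlawful content. The function and its inputs are hypothetical constructs for illustration, not part of any statute or official tooling.

    from datetime import datetime, timedelta

    # NetzDG sets a 24-hour window for manifestly unlawful content and a
    # 7-day window otherwise; the rest of this sketch is hypothetical.

    def takedown_deadline(received_at: datetime, manifestly_unlawful: bool) -> datetime:
        """Deadline by which a complained-of item must be handled."""
        window = timedelta(hours=24) if manifestly_unlawful else timedelta(days=7)
        return received_at + window

    complaint = datetime(2024, 1, 15, 9, 30)
    print(takedown_deadline(complaint, manifestly_unlawful=True))   # 2024-01-16 09:30:00
    print(takedown_deadline(complaint, manifestly_unlawful=False))  # 2024-01-22 09:30:00

The tighter the window, the stronger the incentive to remove borderline content rather than risk a fine, which is the over-removal dynamic critics point to.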

The success of online safety laws depends on the regulators’ capacity and expertise to enforce them 

Given the speed of technological development, big tech commonly suggests that regulators do not have the capacity or the expertise to keep up with emerging trends. These rapid shifts in the digital landscape have been met with major bureaucratic change as jurisdictions take on regulating the internet. As detailed in Table 2, new and existing regulators have been tasked with online safety portfolios and provided extensive funding and staffing to develop the expertise and capacity needed to implement the different legal frameworks.

Operating from within the Australian Communications and Media Authority, Australia’s eSafety Commissioner has a dedicated budget and staff as well as statutory duties to coordinate with a variety of other agencies. While implementation of the Online Safety Act has only just begun in the UK, Ofcom had already spent £56m by the end of the 2023 fiscal year on preparations for its new online safety work. Since DSA enforcement is delegated to Digital Services Coordinators in each Member State, the profiles of regulators across the EU vary greatly. However, key regulators like the Coimisiún na Meán in Ireland provide a useful example for comparison given their importance in enforcing the EU’s ambitious technology laws. And since many aspects of the online environment do not respect international borders, global collaboration through bodies like the International Association of Gaming Regulators and the International Telecommunication Union can extend the expertise and influence of participating regulators.

Both the celebrated and the controversial aspects of online safety regimes serve as models for future regulation

While some of the regulatory frameworks discussed here have been operating for years, other jurisdictions around the world are still writing their ex-ante rules for online safety, and many of the forthcoming laws resemble the approaches sampled here. The Law to Secure and Regulate the Digital Space (SREN) in France and the Kids Online Safety Act in the US both embrace elements of existing online safety frameworks, such as taking further steps to protect children and setting limited timeframes for content takedown. Some of the elements that these regimes pick up, however, such as age-verification requirements and additional discretion for regulators, engage the controversial trade-off between improving safety and protecting fundamental rights online.

Despite a definite first-mover advantage in passing the DSA and Digital Markets Act in tandem, we have yet to witness a particularly strong ‘Brussels effect’ spreading European approaches to online safety globally. Given the EU’s ambitious agenda, emulating the DSA may not be politically or financially viable in many jurisdictions. However, certain DSA provisions, like transparency reporting, are already benefiting consumers beyond Europe. Regardless, as emerging technologies such as generative AI pose new challenges to keeping the digital world safe, jurisdictions with established frameworks and ready regulators will be better equipped to meet them.