Ofcom’s guidance on protecting women and girls online

Though the regulator expanded its recommendations on mitigating gendered abuse online, calls continue for a binding code of practice under the Online Safety Act

Ofcom has published its guidance on gendered online harms, promising further industry engagement

On 25 November 2025, Ofcom published its industry guidance on protecting women and girls under the Online Safety Act (OSA). The voluntary guidance was accompanied by an open letter from Dame Melanie Dawes (CEO, Ofcom) to tech firms operating in the UK, setting out the regulator’s expectations for action in response to the recommendations. In the letter, Dawes references a report Ofcom will publish in May 2027 outlining industry action to mitigate gendered harms online, which has been reported as a tool to “name and shame” platforms that fail to act. The regulator also plans to convene an industry roundtable in 2026 to explain its expectations. Ofcom notes that, should measures implemented by platforms fall short of these expectations, it may make formal recommendations to the UK Government on whether the OSA needs to be amended to include stronger legal protections for women and girls online. In her statement on the guidance, Dawes described the realities of gendered abuse online as “deeply shocking” and emphasised that now is the time for platforms to “step up and act in line with [Ofcom’s] practical industry guidance”.

More than a dozen recommended actions to address four core types of abuse

In setting out its guidance, Ofcom detailed a series of statistics on the distinct harms faced by women and girls in digital environments. According to the regulator, 98% of abusive intimate images reported in the UK are of women, and 99% of deepfake intimate images target women. Among younger users, one in five teenagers has encountered content that demeans women in the past month, and almost 70% of boys aged 11-14 have encountered misogynistic content. In light of this research, Ofcom identified four primary categories of harm it wants platforms to proactively address: online misogyny, pile-ons and online harassment, online domestic abuse, and image-based sexual abuse. For each, it provides a series of recommended actions that firms can take to mitigate the harm.

In addition to specific modifications to existing features, Ofcom states that platforms are expected to conduct “abusability testing” on new features, meaning teams should assess and test, before launch, how perpetrators could misuse them. The regulator also recommends that firms consult experts on gendered abuse and create channels for incorporating victims’ insights, such as user surveys.
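To illustrate what such testing might look like in practice, the Python sketch below frames one misuse scenario as an automated check run before launch. It is purely illustrative: the DirectMessages feature, its follower-based safeguard, and the test cases are assumptions made for the example, not details drawn from Ofcom’s guidance.

```python
# A minimal sketch of an "abusability test": simulate a perpetrator's likely
# misuse path against a new feature and assert the safeguard blocks it.
# The feature, safeguard, and scenario below are hypothetical examples.


class DirectMessages:
    """Toy model of a new messaging feature with one safeguard:
    users can only be messaged by accounts they follow."""

    def __init__(self):
        self.follows = {}  # user -> set of accounts that user follows

    def follow(self, user: str, followee: str) -> None:
        self.follows.setdefault(user, set()).add(followee)

    def send(self, sender: str, recipient: str, text: str) -> bool:
        # Safeguard under test: unsolicited contact from strangers is refused.
        if sender not in self.follows.get(recipient, set()):
            return False
        return True  # message would be delivered


def test_stranger_cannot_mass_message():
    """Misuse scenario: one account tries to contact 50 strangers."""
    dms = DirectMessages()
    delivered = sum(
        dms.send("perpetrator", f"user{i}", "unsolicited") for i in range(50)
    )
    assert delivered == 0


def test_mutuals_can_still_talk():
    """Safeguards should not break legitimate use."""
    dms = DirectMessages()
    dms.follow("alice", "bob")
    assert dms.send("bob", "alice", "hi") is True
```

Framing misuse scenarios as repeatable tests of this kind means a feature change that weakens a safeguard fails the check before it reaches users, rather than after abuse is reported.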

Advocates argue that without legal enforceability, harms will continue unabated

Ofcom went further in its final guidance than in its original consultation of February 2025, adding new measures such as “rate limiting” posts to prevent abusive pile-ons and designing recommender systems to promote content from diverse perspectives. However, civil society advocates have consistently criticised the regulator’s effort as insufficient, calling for binding legal obligations in the form of a code of practice under the OSA. Organisations including Internet Matters and the End Violence Against Women Coalition have suggested that platforms are likely to ignore the guidance because it lacks statutory backing. This criticism feeds a wider concern that the regulator is not sufficiently empowered to deliver safer online environments under the OSA, despite having issued its largest fine to date (£1m) under the law on 4 December 2025. Advocates have already seen some success in amending legal frameworks to criminalise certain types of gendered abuse, including the criminalisation of the non-consensual creation of deepfake intimate images under the Data (Use and Access) Act and of cyberflashing under the OSA. However, calls continue for wider, legally binding interventions aimed at repairing online environments where women and girls are discouraged from actively participating and can be targeted with technologically enabled abuse.
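As a concrete illustration of the rate-limiting idea, the sketch below caps how many replies a single post can receive within a sliding time window, the kind of mechanism a platform might use to damp a pile-on. The PileOnLimiter class, threshold, and window length are illustrative assumptions: Ofcom names the technique but does not prescribe an implementation.

```python
# A minimal sketch of rate limiting replies to a post: once a surge exceeds
# the cap within the window, further replies are held back (a real system
# might queue, downrank, or send them for review). All parameters are
# illustrative assumptions, not values from Ofcom's guidance.

import time
from collections import defaultdict, deque


class PileOnLimiter:
    def __init__(self, max_replies: int = 100, window_seconds: float = 600.0):
        self.max_replies = max_replies
        self.window = window_seconds
        self._events: dict[str, deque] = defaultdict(deque)

    def allow_reply(self, target_post_id: str, now: float | None = None) -> bool:
        """Return True if another reply to this post may be delivered now."""
        now = time.monotonic() if now is None else now
        events = self._events[target_post_id]
        # Drop reply timestamps that have aged out of the sliding window.
        while events and now - events[0] > self.window:
            events.popleft()
        if len(events) >= self.max_replies:
            return False  # surge detected: hold this reply back
        events.append(now)
        return True


limiter = PileOnLimiter(max_replies=3, window_seconds=60)
print([limiter.allow_reply("post-1", now=t) for t in (0, 1, 2, 3)])
# -> [True, True, True, False]: the fourth reply inside the window is held
```

Keying the limit on the target post rather than the sender is what distinguishes a pile-on control from an ordinary spam limit: it throttles the aggregate volume directed at one victim even when each individual sender stays under any per-account cap.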