Regulating disinformation under the EU’s DSA

The new guidelines complement a series of policies introduced by big tech firms to address disinformation ahead of 2024 elections around the world

EC debuts its first guidelines under its DSA authority targeting online risks to elections

On 26 March 2024, the European Commission published a set of guidelines under the Digital Services Act (DSA) on mitigating election-related risks online. Under Article 35 of the DSA, the EC is empowered to issue guidelines addressing systemic risks that affect multiple very large online platforms (VLOPs) and very large online search engines (VLOSEs). After releasing draft guidelines for public consultation in February 2024, the EC reported that it received 89 responses from interested stakeholders and consulted the newly formed European Board for Digital Services, which consists of the Member State regulators designated as Digital Services Coordinators. With the adoption of the guidelines, VLOPs and VLOSEs are now required to implement the best practices detailed by the EC or demonstrate that their alternative mitigation measures are equally effective. These standards can now be enforced in the same manner as other provisions of the DSA, including through the launch of formal investigations and the issuing of fines.

The guidelines introduce both proactive and responsive measures now required of designated platforms

Under the guidelines, VLOPs and VLOSEs are required to allocate additional resources to risk mitigation and reporting during election periods. The EC requires platforms to create plans tailored to each electoral period and suggests mitigation measures including promoting official information on electoral processes, implementing media literacy initiatives, demonetising disinformation and adapting recommender systems to demote content that threatens election integrity. The guidelines also anticipate forthcoming EU rules on political advertising by requiring that all political ads be clearly labelled as such. Given the interim period between the passage of the AI Act and its application, the EC also requires platforms to mitigate risks posed by generative artificial intelligence (AI), including by labelling artificially generated content such as deepfakes and adjusting the relevant terms and conditions. Though designated firms will not be required to proactively mitigate risks posed by foreign information manipulation and interference, they are required to cooperate with public authorities and third-party experts before, during and after elections on foreign interference matters, including cybersecurity and disinformation.
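To make the recommender-system measure concrete, the sketch below shows in minimal Python one common way a ranking pipeline can demote flagged content rather than remove it. It is purely illustrative: the names (RankedItem, DEMOTION_FACTOR, flagged_disinformation) and the multiplier are hypothetical and come neither from the guidelines nor from any platform's actual systems.

```python
from dataclasses import dataclass

@dataclass
class RankedItem:
    item_id: str
    relevance: float              # base score from the recommender model
    flagged_disinformation: bool  # set upstream, e.g. by fact-checkers

DEMOTION_FACTOR = 0.1  # hypothetical multiplier; real systems would tune this

def adjusted_score(item: RankedItem) -> float:
    """Demote (rather than delete) items flagged as election-integrity risks."""
    if item.flagged_disinformation:
        return item.relevance * DEMOTION_FACTOR
    return item.relevance

def rank(items: list[RankedItem]) -> list[RankedItem]:
    # Sorting by the adjusted score makes flagged items sink in the feed
    # while leaving them accessible to users who seek them out.
    return sorted(items, key=adjusted_score, reverse=True)
```

The design point is that demotion reduces the reach of risky content without removing it outright, a less intrusive intervention than deletion.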

Platforms have launched a variety of initiatives targeting disinformation ahead of the election cycle, especially in the context of AI

While anxiety persists globally about the preparedness of numerous large democracies for national elections this year, the EU is one of very few jurisdictions to have adopted binding regulation for mitigating and moderating disinformation online. However, a number of big tech firms have announced new or updated policies on the creation and dissemination of disinformation, prompted both by the coming elections and by the growing consumer popularity of generative AI tools. A number of platforms have adopted the Content Credentials programme, created by Adobe in partnership with Microsoft, Google and the BBC among others, as a method for attaching verifiable provenance metadata directly to content. Other firms operating AI-powered image generators and content platforms, including Amazon, Meta and TikTok, have adopted similar policies on labelling substantially edited or generated content. Given the extremely limited legislative window for other jurisdictions to adopt regulation before elections, enforcement of these voluntary policies will likely be central to the fight against disinformation online.
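For readers unfamiliar with Content Credentials, the sketch below illustrates in simplified Python the general shape of the provenance metadata involved: a manifest records which tool produced an asset and whether it was AI-generated. This is a loose illustration only; real Content Credentials are cryptographically signed manifests embedded in the asset under the C2PA standard, and the tool name and field layout here are simplified approximations rather than the exact specification.

```python
import json

# Simplified, illustrative stand-in for a C2PA-style provenance manifest.
# Real Content Credentials are signed and embedded in the asset itself;
# the labels below approximate the standard rather than reproduce it.
manifest = {
    "claim_generator": "ExampleGenAITool/1.0",  # hypothetical producing tool
    "assertions": [
        # An "actions" assertion records how the asset came to exist.
        {"label": "c2pa.actions",
         "data": {"actions": [{"action": "c2pa.created"}]}},
        # IPTC's digital source type vocabulary is used to flag AI output.
        {"label": "stds.iptc",
         "data": {"digitalSourceType": "trainedAlgorithmicMedia"}},
    ],
}

print(json.dumps(manifest, indent=2))  # what a verifying app would display
```

Because the metadata travels with the file and is signed at creation time, a platform or end user can check it after the fact, which is what makes the approach attractive for labelling AI-generated election content at scale.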