
The EC’s code of practice for GPAI

Despite calls for a delayed implementation of the AI Act, the EC’s much-debated code aims to outline a compliance blueprint for GPAI models

The EC has published its voluntary code of practice for GPAI models to support the implementation of the AI Act

On 10 July 2025, the EC issued its General Purpose AI (GPAI) Code of Practice. The voluntary code – debate over which has led to multiple drafts and a delay in publication – aims to help firms comply with the AI Act’s requirements on GPAI with respect to transparency, copyright, and safety and security, which are poised to come into force on 2 August. The code was developed by a group of independent experts and will be assessed by EU Member States in the coming weeks as the EC prepares its own complementary guidelines on key concepts relating to GPAI, which are expected to be published later this month.

Notably, both OpenAI and Mistral AI have already announced their intention to sign the voluntary code, while firms such as Google and Meta have not yet decided their positions. The Computer and Communications Industry Association (CCIA) Europe – which represents many large tech companies – has criticised the code, arguing that it places too heavy a burden on AI firms. Taken together, the code and the forthcoming EC guidelines mark a step forward in the AI Act’s implementation, despite recent calls, including from some Member States and from a group of 40 European companies, for a delay to its enforcement.

Signatories to the code will be required to submit information relating to how their AI models have been trained

The first of the code’s three chapters focuses on the transparency requirements of the AI Act and sets out three key measures for signatories to follow. The first of these measures relates to keeping up-to-date documentation about AI models. Before placing GPAI models on the market, firms must document at least all information referred to in the EC’s model documentation form, which includes information on the model’s inputs, outputs, training data and energy consumption. The second measure governs the information that firms providing AI models must supply, both to the EU AI Office and to downstream providers: when requested by the AI Office, firms will be required to provide up-to-date information from the model documentation, and this will also need to be given to downstream providers. The final measure requires firms to ensure that all relevant documented information is controlled for quality and integrity. Firms are further encouraged to follow existing protocols and technical standards set by the EU.
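In practice, the documentation obligation amounts to maintaining a structured, up-to-date record per model. The sketch below is purely illustrative: the field names are assumptions based on the categories the EC’s model documentation form is described as covering (inputs, outputs, training data, energy consumption), not the form’s actual schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelDocumentation:
    """Illustrative record of the kinds of information the model
    documentation form covers. Field names are hypothetical."""
    model_name: str
    accepted_inputs: list        # modalities the model accepts, e.g. text
    produced_outputs: list       # modalities the model produces
    training_data_summary: str   # description of data sources used
    energy_consumption_kwh: float  # estimated training energy use
    last_updated: str            # documentation must be kept up to date

doc = ModelDocumentation(
    model_name="example-gpai-model",
    accepted_inputs=["text"],
    produced_outputs=["text"],
    training_data_summary="Publicly available web text (illustrative).",
    energy_consumption_kwh=1.2e6,
    last_updated="2025-08-02",
)

# Serialise the record, e.g. for provision to the AI Office
# or to a downstream provider on request.
print(json.dumps(asdict(doc), indent=2))
```

The same record can then serve both disclosure routes the chapter describes: responses to AI Office requests and information passed to downstream providers.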

Improved communications and transparency will be required from AI firms in relation to rightsholders and copyright practices

The second chapter, on copyright, sets out five measures for signatories to the code to abide by, namely that:

  1. Firms will have to draw up, keep up-to-date and implement a copyright policy in line with EU copyright law and related rights for all GPAI models and are encouraged to make an up-to-date summary of their copyright policy publicly available;

  2. Firms will agree to only reproduce and extract lawfully accessible copyright-protected content when crawling for data – a list of hyperlinks to websites that should not be used will be published on an EU website;

  3. Any web crawlers used by firms must follow existing EU protocols regarding rights reservations while providing transparent information to rightsholders about the data that web crawlers are extracting when requested;

  4. Firms will ensure that the necessary technical safeguards are put in place to mitigate the risk of GPAI models producing copyright-infringing outputs; and

  5. Firms will also commit to appointing a point of contact for electronic communication with affected rightsholders and must provide easily accessible information about that point of contact.
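The crawling measures above are commonly operationalised via the Robots Exclusion Protocol (robots.txt), the standard machine-readable way for a website to reserve rights against particular crawlers. The sketch below is a minimal illustration of a crawler honouring such a reservation before extracting content; the crawler name and directives are hypothetical, and this assumes robots.txt as the reservation mechanism rather than reflecting the code’s specific requirements.

```python
import urllib.robotparser

# Hypothetical robots.txt a site might publish to reserve rights
# against an AI training crawler (directives are illustrative).
robots_txt = """\
User-agent: example-ai-crawler
Disallow: /

User-agent: *
Allow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler checks the reservation before fetching a page.
print(parser.can_fetch("example-ai-crawler", "https://example.com/article"))
print(parser.can_fetch("some-other-agent", "https://example.com/article"))
```

Here the named AI crawler is refused while other agents are allowed, which is how a site can opt its content out of training-data collection specifically.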

Firms will work with the AI Office to complete comprehensive safety and security assessments of AI models at all stages of their product life cycles

The code’s third and final chapter focuses on safety and security, committing signatories to adopting a state-of-the-art framework that outlines the risk management processes for GPAI. This framework also makes clear the measures that firms will need to implement to ensure that any risks stemming from their AI models are deemed acceptable. The chapter sets out 10 commitments for firms, including the identification and analysis of systemic risks, appropriate safety and security mitigations, model safety and security reports, incident reporting and transparency requirements. All of these commitments require firms to work with the AI Office to mitigate the systemic safety and security risks that arise from their AI models, with a significant proportion of these measures being required before models are even placed on the market.