
On 10th July 2025, the European Commission received the final version of the long-awaited Code of Practice for General-Purpose AI. While implementation may appear burdensome to organisations without structured oversight, a watchful observer will notice that many of the Code’s measures can be met through the implementation of an ISO/IEC 42001-compliant Artificial Intelligence Management System (AIMS).


A Tight Deadline

Initially expected in early May, the Code was developed over nine months by a group of 13 experts and shaped by input and feedback from more than 1,000 stakeholders, including businesses, member states, and members of the public.

Although the Code has not yet been formally endorsed, a step that rests with the individual member states, developers and providers of general-purpose AI models can already choose to adhere to it voluntarily. Those who do will be able to demonstrate compliance with the EU AI Act, whose general-purpose AI obligations apply from 2nd August 2025, and benefit from a reduced administrative burden.

Providers of general-purpose AI models intended for the EU market will have until 2nd August 2026, when the AI Office is officially granted enforcement powers, to bring their AI governance into line. Models already on the market as of 2nd August 2025 benefit from a slightly longer transition period, with compliance required by 2nd August 2027.

The Code of Practice

The Code of Practice is divided into three main sections: Transparency, Copyright, and Safety and Security. The first two apply to all providers of general-purpose AI models, while the third is addressed specifically to providers of the most advanced models, those posing systemic risk.

As noted above, these requirements may appear burdensome to organisations without structured oversight, yet many of the Code’s measures map directly onto the controls of an ISO/IEC 42001-compliant AIMS.

Transparency

The Transparency chapter outlines the information that must be shared regarding the AI model. Providers are required to document model capabilities and limitations, explain how training data was selected and processed, support third-party scrutiny, and provide relevant information tailored to the intended audience, particularly downstream deployers.

To support compliance with Articles 53 and 55 of the EU AI Act, a three-page Model Documentation Form is also included.
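To give a feel for what such documentation involves, here is a minimal sketch in Python of the kind of structured record the form implies. The field names and values are illustrative assumptions on our part, not the official schema of the Model Documentation Form.

from dataclasses import dataclass

@dataclass
class ModelDocumentation:
    model_name: str
    provider: str
    capabilities: list[str]        # what the model can reliably do
    limitations: list[str]         # known failure modes and constraints
    training_data_summary: str     # how training data was selected and processed
    intended_audience: str         # e.g. downstream deployers vs. the AI Office
    contact: str                   # point of contact for further information

record = ModelDocumentation(
    model_name="example-gpai-1",
    provider="ExampleCorp",
    capabilities=["multilingual text generation"],
    limitations=["may produce factually incorrect output"],
    training_data_summary="Publicly available web text, filtered for rights reservations.",
    intended_audience="downstream deployers",
    contact="compliance@example.com",
)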

Copyright

The Copyright chapter outlines the measures needed to comply with EU copyright law. Central to this is the development of an internal copyright policy. Other obligations include identifying and excluding from training datasets protected material whose rightsholders have reserved their rights, mitigating the risk of infringement, and designating a point of contact through which stakeholders can report suspected copyright violations.
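One concrete technical expectation in this area is that crawlers used to gather training data honour machine-readable rights reservations, such as those expressed in robots.txt. Below is a minimal sketch of such a check using Python’s standard library; the URLs and user agent are placeholders, and the Code’s actual technical requirements go further than this.

from urllib.robotparser import RobotFileParser

def may_crawl(page_url: str, robots_url: str, user_agent: str = "ExampleBot") -> bool:
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()                  # fetch and parse the site's robots.txt
    return parser.can_fetch(user_agent, page_url)

if may_crawl("https://example.com/article", "https://example.com/robots.txt"):
    print("Allowed to fetch this source for training data")
else:
    print("Rights reservation detected; exclude this source")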

Safety and Security

The final chapter is the longest and most complex. As noted above, it applies only to providers of general-purpose AI models with systemic risk, not to typical AI systems, and systemic risk is accordingly its central concern.

The Code requires the establishment of a Safety and Security Framework to support the identification, analysis, and either mitigation or acceptance of systemic risks. It also instructs providers to produce a Model Report containing risk-related information and model updates. Other obligations include defining roles and responsibilities, reporting incidents, and implementing additional transparency measures.
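As a rough illustration of this identify, analyse, and mitigate-or-accept workflow, the sketch below implements a toy risk register in Python. The scoring scale and threshold are our own assumptions, not values defined by the Code.

from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    MITIGATE = "mitigate"
    ACCEPT = "accept"

@dataclass
class SystemicRisk:
    description: str
    likelihood: int    # 1 (rare) to 5 (almost certain)
    impact: int        # 1 (negligible) to 5 (severe)

    def score(self) -> int:
        return self.likelihood * self.impact

def triage(risk: SystemicRisk, threshold: int = 9) -> Decision:
    # Risks scoring at or above the threshold must be mitigated;
    # lower-scoring risks may be formally accepted and documented.
    return Decision.MITIGATE if risk.score() >= threshold else Decision.ACCEPT

risk = SystemicRisk("Model aids large-scale disinformation", likelihood=3, impact=4)
print(triage(risk))    # Decision.MITIGATE (score 12 >= 9)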

Help or Hurdle

The release of the Code of Practice marks a key step toward the implementation of the EU AI Act. However, the Code has already faced criticism from parts of the industry, particularly over the short implementation window and concerns that it could hinder AI development and deployment in Europe.

While some large companies, such as Microsoft and OpenAI, appear to have taken a more compliant stance, others, including Meta, have opted not to sign on. Given the requirements imposed by the EU AI Act (and formalised through the Code), and the broader reluctance among some providers to disclose detailed technical information, a key question emerges: How will downstream adopters comply with the AI Act if the models they rely on come from non-transparent providers?

Companies that deploy general-purpose models in the EU may find themselves in a bind: if the provider resists transparency, the adopter may struggle to meet legal obligations. For large model developers, this raises a difficult strategic trade-off:

Open up and risk giving away the competitive edge or stay secretive and risk losing access to the EU market altogether?

Whether the Code ultimately helps businesses meet their obligations while supporting innovation across Europe, or whether, as critics fear, it adds yet another layer of regulatory burden, remains to be seen. For now, those who have already invested in robust AI governance, for example by achieving ISO/IEC 42001 certification, find themselves at a distinct advantage.


At Deviceology, we have mapped the requirements of the General-Purpose AI Code of Practice against ISO/IEC 42001 controls, so you can configure your AI management system to comply with both ISO 42001 and the EU AI Act. To obtain a copy of the mapping, just click here, and if you need some help, don’t hesitate to get in touch!