The General-Purpose AI Code of Practice

The General-Purpose AI Code of Practice was published by the European Commission on 10 July 2025, shortly before the EU AI Act's rules on providers of General-Purpose AI models became applicable in August of the same year.

The Code itself is envisioned as a voluntary set of guidelines aimed at General-Purpose AI providers, intended to ease the transition into compliance with the EU AI Act during the interim period until the harmonized standards are adopted, which is expected in August 2027 at the earliest. The Code was drafted in cooperation with a significant number of experts in the area, including academic and independent experts, providers of General-Purpose AI models, downstream deployers, and members of civil society.

Acceptance of the Code is done by signing a dedicated Signatory Form and submitting it to a designated e-mail address of the Commission. By accepting the Code and adhering to its requirements, General-Purpose AI providers can demonstrate their compliance with the EU AI Act, limit their administrative burden, and avoid legal scrutiny regarding their compliance with the EU AI Act.

The Code of Practice is divided into three chapters:

  • Transparency – applying to all General-Purpose AI model providers among the signatories of the Code, it requires them to maintain detailed and up-to-date documentation for every General-Purpose AI model they place on the market in the EU, with the exception of models that are free, open-source, and without systemic risk.
  • Copyright – also applying to all General-Purpose AI model providers, it requires signatories to draft and regularly update a copyright policy ensuring that data collected from the internet is lawfully accessible and is not collected from websites flagged for copyright infringement, and to designate a contact person for those wishing to submit complaints regarding copyright infringement.
  • Safety and Security – applying only to providers of General-Purpose AI models with systemic risk, it requires them to develop a comprehensive Safety and Security Framework covering risk identification, analysis and assessment, risk testing, mitigation measures, as well as incident reporting and record keeping.

 

Prepared by,

Daniel Vujacic LL.M. (UW)