EU AI Act: Draft guidelines for general purpose AIs show the first steps for Big AI to comply

A first draft of a code of practice that will apply to providers of general purpose AI models under the European Union's AI law has been published, along with an invitation for feedback – open until November 28 – as the drafting process continues into next year, ahead of the formal compliance deadlines that kick in over the coming years.

The pan-EU legislation, which came into effect this summer, regulates applications of artificial intelligence within a risk-based framework. But it also aims a set of measures at the more powerful foundational – or general-purpose – AI models (GPAIs). This is where the code of practice comes in.

Among those likely to be in the frame are OpenAI, maker of the GPT models that underpin the AI chatbot ChatGPT; Google, with its Gemini GPAIs; Meta, with Llama; Anthropic, with Claude; and others, such as France's Mistral. They will be expected to adhere to the General-Purpose AI Code of Practice if they want to ensure compliance with the AI Act and thus avoid the risk of enforcement for non-compliance.

For the avoidance of doubt, the Code is intended as guidance for complying with the obligations of the EU AI Act. GPAI providers may choose to deviate from the best practice suggestions if they believe they can demonstrate compliance through other measures.

This first version of the Code runs to 36 pages, but it is likely to get longer – perhaps considerably so – as the authors caution that it is light on detail because it is “a high-level draft plan that outlines our guiding principles and objectives for the Code.”

The draft is peppered with box-outs posing “open questions” that the working groups tasked with drafting the Code have yet to resolve. The feedback being sought – from industry and civil society – will clearly play a key role in shaping the content of specific sub-measures and key performance indicators (KPIs) that are yet to be included.

But the document gives a sense of what will be expected of GPAI makers once the relevant compliance deadlines apply.

Transparency requirements for GPAI makers will come into effect on August 1, 2025.

But for the most powerful GPAIs – which pose “systemic risk” under the law – the expectation is that they will have to comply with risk assessment and mitigation requirements 36 months after entry into force (or August 1, 2027).

There is a further caveat: the draft Code has been prepared on the assumption that there will be only “a small number” of GPAI makers and GPAIs with systemic risk. “Should this assumption prove incorrect, future drafts may need to be significantly modified, for example by introducing a more detailed layered system of measures, primarily targeting those models that pose the greatest systemic risks,” the authors warn.

On transparency, the Code will set out how GPAI makers must comply with the law's information provisions, including in the area of copyrighted material.

One example is ‘Sub-measure 5.2’, which, as currently drafted, commits signatories to providing details of the names of all web crawlers used to develop the GPAI and their relevant robots.txt features, ‘including at the time of crawling’.
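
For illustration, the sketch below shows how a provider might check and log robots.txt permissions for a named crawler at crawl time, using Python's standard urllib.robotparser. This is a minimal, hypothetical example: the crawler name “ExampleGPAIBot” and the URLs are placeholder assumptions, not details taken from the draft Code.

```python
# Hypothetical sketch only: the crawler name "ExampleGPAIBot" and the URLs
# are placeholders, not names or requirements from the draft Code.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()  # fetch and parse the site's robots.txt at crawl time

# Sub-measure 5.2 asks for the crawler's name and the relevant robots.txt
# behaviour "including at the time of crawling" -- a provider could record
# a check like this alongside each fetch it makes.
url = "https://example.com/articles/some-page"
allowed = robots.can_fetch("ExampleGPAIBot", url)
print(f"ExampleGPAIBot may fetch {url}: {allowed}")
```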

GPAI model makers continue to face questions over how they acquired the data used to train their models, with multiple lawsuits filed by rights holders alleging that AI companies have unlawfully processed copyrighted information.

Another obligation set out in the draft Code requires GPAI providers to have a single point of contact and a complaints-handling process, making it easier for rights holders to communicate grievances ‘directly and quickly’.

Other proposed measures address the copyright documentation expected of GPAI makers on the data sources used for “training, testing and validation and on authorizations for access to and use of protected content for the development of a general purpose AI.”

Systemic risk

The most powerful GPAIs are also subject to rules in the EU AI Act intended to limit so-called ‘systemic risk’. These AI systems are currently defined as models trained using a total computing power of more than 10^25 FLOPs.
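
As a rough illustration of where that threshold sits, the sketch below estimates training compute using the commonly cited approximation of about 6 FLOPs per parameter per training token. The approximation and the example model sizes are assumptions for illustration only, not figures from the AI Act or the draft Code.

```python
# Back-of-the-envelope sketch: the ~6 * parameters * tokens approximation
# and the example model sizes below are illustrative assumptions, not
# figures from the AI Act or the draft Code.
THRESHOLD_FLOPS = 1e25  # the Act's current cut-off for "systemic risk" GPAIs

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Estimate total training compute as ~6 FLOPs per parameter per token."""
    return 6.0 * parameters * tokens

examples = {
    "70B params on 2T tokens": estimated_training_flops(70e9, 2e12),
    "400B params on 15T tokens": estimated_training_flops(400e9, 15e12),
}

for label, flops in examples.items():
    side = "above" if flops > THRESHOLD_FLOPS else "below"
    print(f"{label}: ~{flops:.1e} FLOPs ({side} the 1e25 threshold)")
```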

The Code contains a list of types of risks that signatories are expected to treat as systemic risks. They include:

  • Offensive cybersecurity risks (such as discovering vulnerabilities).
  • Chemical, biological, radiological and nuclear risk.
  • “Loss of control” (here meaning the inability to control a “powerful autonomous general-purpose AI”) and automated use of models for AI R&D.
  • Persuasion and manipulation, including widespread disinformation/misinformation that could pose risks to democratic processes or lead to a loss of trust in the media.
  • Large-scale discrimination.

This version of the Code also suggests that GPAI makers could identify other types of systemic risks that are not explicitly mentioned – such as ‘large-scale’ breaches of privacy and surveillance, or applications that could pose public health risks. And one of the open questions the paper asks here is which risks should be prioritized to complement the main taxonomy. Another is how the systemic risk taxonomy should address deepfake risks (related to AI-generated child sexual abuse material and non-consensual intimate images).

The Code also aims to provide guidance on identifying key attributes that could cause models to pose systemic risks, such as “dangerous model capabilities” (e.g. cyber offense or “weapons acquisition or proliferation capabilities”) and “dangerous model tendencies” (e.g. being misaligned with human intentions and/or values, or a tendency to deceive).

Although many of the details remain to be filled in as the drafting process progresses, the authors of the Code write that its measures, sub-measures and KPIs should be ‘proportionate’, with a particular emphasis on “tailoring to the size and capacity of a specific provider, especially SMEs and start-ups with less financial resources than those at the frontier of AI development.” Attention should also be paid to “different distribution strategies (e.g. open source), where appropriate, that reflect the principle of proportionality and take into account both benefits and risks,” they add.

Many of the open questions posed by the draft concern how specific measures should be applied to open source models.

Safety and security in the frame

Another measure in the code concerns a ‘Safety and Security Framework’ (SSF). GPAI makers are expected to detail their risk management policies and “continuously and thoroughly” identify systemic risks that may arise from their GPAI.

Here there is an interesting sub-measure on ‘Predicting Risks’. This would require signatories to include in their SSF “best effort estimates” of timelines for when they expect to develop a model that triggers systemic risk indicators – such as the dangerous model capabilities and tendencies mentioned above. It could mean that, from 2027 onwards, we will see cutting-edge AI developers setting out timeframes for when they expect model development to cross certain risk thresholds.

Elsewhere, the draft Code places an emphasis on GPAIs with systemic risk carrying out ‘best-in-class’ evaluations of their models’ capabilities and limitations, and applying ‘a range of appropriate methodologies’ to do so. Examples include: question and answer sets, benchmarks, red-teaming and other adversarial testing methods, human uplift studies, model organisms, simulations, and proxy evaluations for classified materials.

Another sub-measure, on “notification of substantial systemic risks,” would oblige signatories to inform the AI Office – a supervisory and steering body established under the law – “if they have strong reasons to believe that substantial systemic risks could materialize.”

The Code also contains measures regarding “reporting of serious incidents.”

“The signatories commit to identify and monitor serious incidents, to the extent that they arise from their general-purpose AI models with systemic risk, and to document and report to the AI Office, without undue delay, all relevant information and possible corrective actions and, as appropriate, to national competent authorities,” it reads – although an accompanying open question asks for input on “what constitutes a serious incident.” So it seems more work is needed here to nail down definitions.

The draft Code contains further questions about “possible corrective measures” that could be taken in response to serious incidents. It also asks “what serious incident response processes are appropriate for open-source providers?”, among other formulations seeking feedback.

“This first draft of the Code is the result of a preliminary assessment of existing best practices by the four specialist working groups, stakeholder consultation input from almost 430 submissions, provider workshop responses, international approaches (including the G7 Code of Conduct, the Frontier AI Safety Commitments, the Bletchley Declaration and the results of relevant government and standard-setting bodies), and, most importantly, the AI Act itself,” the authors conclude.

“We emphasize that this is only a first draft and therefore the suggestions in the draft code are preliminary and subject to change,” they add. “We therefore invite your constructive input as we continue to develop and update the content of the Code and move towards a more detailed final form by May 1, 2025.”