EU AI Act’s GPAI rules kick in — and Europe’s TEFs are the practical bridge to safer, more transparent models

18 August 2025

 

From FLOP thresholds to testing grounds: how the Commission’s GPAI guidance, the new Code of Practice and Europe’s Testing & Experimentation Facilities (TEFs) together turn legal obligations into testable reality

The EU’s landmark rules for general-purpose AI (GPAI) models — the provisions of the AI Act that specifically target large, widely reusable models such as modern large language and multimodal foundation models — start to apply across the Union on 2 August 2025. That means providers placing new GPAI models on the EU market must meet the Act’s transparency, copyright and safety obligations from that date; models already on the market get a phased window to comply. The Commission has published practical guidance, a training-data summary template, and a voluntary GPAI Code of Practice to help providers meet those requirements.

What counts as a GPAI model — and when a model becomes “systemic”

The Commission’s guidance gives a concrete (compute-based) test: models trained with more than 10^23 floating-point operations (FLOP) that can generate language, images from text, or video from text are presumptively GPAI. Models that exceed 10^25 FLOP are presumed to present systemic risk and therefore trigger additional obligations — for example, notification duties to the Commission and extra safety/security measures. Those thresholds are designed to capture models with broad capability sets and outsized societal reach.
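The compute test above amounts to a simple classification rule. The sketch below is purely illustrative — the two thresholds come from the Commission’s guidance, but the function and its labels are hypothetical, and in practice the presumption is rebuttable and further capability criteria apply:

```python
# Illustrative sketch only: thresholds from the Commission's GPAI guidance;
# the function name and category labels are hypothetical, not an official tool.

GPAI_THRESHOLD_FLOP = 1e23      # above this: presumptively a GPAI model
SYSTEMIC_THRESHOLD_FLOP = 1e25  # above this: presumed systemic risk

def classify_by_compute(training_flop: float) -> str:
    """Return the presumptive category for a generative model,
    based solely on cumulative training compute in FLOP."""
    if training_flop > SYSTEMIC_THRESHOLD_FLOP:
        return "GPAI with systemic risk"
    if training_flop > GPAI_THRESHOLD_FLOP:
        return "GPAI"
    return "below GPAI presumption"

print(classify_by_compute(5e24))  # → GPAI
print(classify_by_compute(3e25))  # → GPAI with systemic risk
```

Note that compute is only the trigger for the presumption: a provider above 10^25 FLOP must notify the Commission, but can present evidence that the model does not in fact pose systemic risk.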

Quick summary of the new obligations (what providers need to know)

 

  • Transparency & documentation: providers must give clearer information on how models were trained (the Commission provides a standard template to summarise training data).
     
  • Copyright & IP protections: measures to reduce unlawful reproduction of copyrighted content are emphasised in both the AI Act and the Code of Practice. 
     
  • Systemic-risk safeguards: models above the 10^25 FLOP threshold face notification duties and more demanding safety/security controls.
     
  • Phased enforcement: obligations apply from 2 August 2025; the Commission’s enforcement powers kick in later (with a staged timeline and a two-year grace window for older models).

What are TEFs — and why they matter now more than ever

Testing and Experimentation Facilities (TEFs) are pan-European, large-scale testing hubs — both physical and virtual — where AI innovators can integrate, validate and stress-test their systems in real-world settings before market deployment. These specialised facilities are co-funded by the European Commission and Member States under the Digital Europe Programme, with five-year funding of €40–60 million per sector, totalling over €220 million across the initial four TEFs: smart cities (CitCom.ai), healthcare (TEF-Health), agri-food (agrifoodTEF), and manufacturing (AI-MATTERS).

TEFs represent a core implementation instrument of the EU AI Act, providing technical and scientific support to GPAI model providers and notified conformity-assessment bodies. They enable supervised experimentation and validation in cooperation with national authorities, helping translate abstract regulatory obligations — such as safety-by-design, data transparency, and systemic-risk mitigation — into tangible, measurable performance results.

 

Stay ahead of the curve – follow us for fresh AI insights, news, and updates straight from Europe’s TEFs.

Read more about Europe's digital strategy:

