EU AI Act rules for GPAI models take effect on August 2

From August 2, 2025, the EU AI Act imposes obligations on GPAI providers to maintain training data summaries, technical docs, and more. Fines for violations could reach up to €35M or 7% of global turnover.

By Aytun Çelebi
1 August 2025
in AI, Tech

From August 2, 2025, providers of general-purpose artificial intelligence (GPAI) models operating within the European Union will be subject to specific sections of the EU AI Act, mandating the maintenance of up-to-date technical documentation and summaries of training data.

The EU AI Act, published in the EU’s Official Journal on July 12, 2024, and effective as of August 1, 2024, establishes a risk-based regulatory framework to ensure the safe and ethical use of AI across the EU. This framework categorizes AI systems based on their potential risks and impact on individuals.

While the regulatory obligations for GPAI model providers are set to commence on August 2, 2025, a one-year grace period is provided for compliance, deferring the risk of penalties until August 2, 2026.

There are five core sets of rules that GPAI model providers must be aware of and adhere to from August 2, 2025, encompassing:

  • Notified bodies (Chapter III, Section 4)
  • GPAI models (Chapter V)
  • Governance (Chapter VII)
  • Confidentiality (Article 78)
  • Penalties (Articles 99 and 100)

Notified Bodies: Providers of high-risk GPAI models are required to engage with notified bodies for conformity assessments, aligning with the regulatory structure supporting these evaluations. High-risk AI systems are defined as those posing significant threats to health, safety, or fundamental rights. These systems are either used as safety components of products governed by EU product safety laws or deployed in sensitive use cases, including biometric identification, critical infrastructure management, education, employment and HR, and law enforcement.

GPAI Models: GPAI models, which serve multiple purposes, are considered to pose “systemic risk” if the cumulative compute used to train them exceeds 10^25 floating-point operations (FLOPs) and they are designated as such by the EU AI Office. Examples of models fitting these criteria include OpenAI’s ChatGPT, Meta’s Llama, and Google’s Gemini.
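
For context, the 10^25 figure refers to total training compute, not a per-second rate. A rough way to gauge whether a model approaches the threshold is the widely used estimate of about 6 FLOPs per parameter per training token; the sketch below applies it to a hypothetical model, and neither the approximation nor the example figures come from the Act itself.

# Rough check against the EU AI Act's 10^25 FLOP systemic-risk threshold.
# The 6 * N * D approximation and the model figures below are illustrative
# assumptions, not values taken from the Act.

SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training FLOPs named in the Act

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Common rule of thumb: roughly 6 FLOPs per parameter per training token."""
    return 6 * parameters * training_tokens

# Hypothetical model: 400 billion parameters trained on 10 trillion tokens.
flops = estimated_training_flops(400e9, 10e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Above systemic-risk threshold:", flops > SYSTEMIC_RISK_THRESHOLD)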

All GPAI model providers must maintain technical documentation, training data summaries, copyright compliance policies, guidance for downstream deployers, and transparency measures regarding capabilities, limitations, and intended use. Providers of GPAI models that pose systemic risk must also conduct model evaluations, report incidents, implement risk mitigation strategies and cybersecurity safeguards, disclose energy usage, and carry out post-market monitoring.

Governance: This set of rules defines the governance and enforcement architecture at both the EU and national levels. GPAI model providers will need to cooperate with the EU AI Office, European AI Board, Scientific Panel, and National Authorities in fulfilling their compliance obligations, responding to oversight requests, and participating in risk monitoring and incident reporting processes.

Confidentiality: All data requests made to GPAI model providers by authorities will be legally justified, securely handled, and subject to confidentiality protections, especially for intellectual property (IP), trade secrets, and source code.

Penalties: Non-compliance with the prohibitions on AI practices under Article 5, such as manipulating human behavior, social scoring, facial recognition data scraping, or real-time biometric identification in public spaces, can result in penalties of up to €35,000,000 or 7% of the provider’s total worldwide annual turnover, whichever is higher. Other breaches of regulatory obligations, such as those related to transparency, risk management, or deployment responsibilities, may result in fines of up to €15,000,000 or 3% of turnover. Supplying misleading or incomplete information to authorities can lead to fines of up to €7,500,000 or 1% of turnover.

For Small and Medium Enterprises (SMEs) and startups, the lower of the fixed amount or percentage applies. Penalties will take into account the severity of the breach, its impact, the provider’s cooperation level, and whether the violation was intentional or negligent.
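
To make the tiered ceilings and the SME rule concrete, here is a minimal calculation sketch; the tier amounts simply restate the figures summarized above, and the turnover value is a hypothetical example.

# Illustrative calculation of the fine ceilings described above.
# The turnover figure is hypothetical; actual fines also weigh severity,
# impact, cooperation, and intent.

TIERS = {
    "prohibited_practices":   (35_000_000, 0.07),  # Article 5 violations
    "other_obligations":      (15_000_000, 0.03),  # e.g. transparency, risk management
    "misleading_information": (7_500_000, 0.01),   # incorrect info supplied to authorities
}

def fine_ceiling(tier: str, worldwide_turnover: float, is_sme: bool = False) -> float:
    fixed, pct = TIERS[tier]
    percentage_based = pct * worldwide_turnover
    # Standard rule: whichever is higher; for SMEs and startups: whichever is lower.
    return min(fixed, percentage_based) if is_sme else max(fixed, percentage_based)

turnover = 2_000_000_000  # hypothetical €2B global annual turnover
print(f"Prohibited practices: €{fine_ceiling('prohibited_practices', turnover):,.0f}")
print(f"SME, same breach:     €{fine_ceiling('prohibited_practices', turnover, is_sme=True):,.0f}")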

To facilitate compliance, the European Commission has published the AI Code of Practice, a voluntary framework that tech companies can adopt to align with the AI Act. Google, OpenAI, and Anthropic have committed to this framework, while Meta has publicly refused to do so.

The Commission intends to publish supplementary guidelines to the AI Code of Practice before August 2, 2025, clarifying the criteria for companies qualifying as providers of general-purpose models and general-purpose AI models with systemic risk.

The EU AI Act is being implemented in phases:

  • February 2, 2025: The ban on certain AI systems deemed to pose unacceptable risk, such as those used for social scoring or real-time biometric surveillance in public, came into effect. Companies must also ensure their staff have a sufficient level of AI literacy.
  • August 2, 2026: GPAI models placed on the market after August 2, 2025, must be compliant by this date. Rules for certain listed high-risk AI systems also apply to those placed on the market after this date, and to those placed on the market before this date that have undergone substantial modification since.
  • August 2, 2027: Full compliance is required for GPAI models placed on the market before August 2, 2025. High-risk systems used as safety components of products governed by EU product safety laws must also comply with stricter obligations.
  • August 2, 2030: All AI systems used by public sector organizations that fall under the high-risk category must be fully compliant.
  • December 31, 2030: AI systems that are components of specific large-scale EU IT systems and were placed on the market before August 2, 2027, must be brought into compliance by this final deadline.

A group representing Apple, Google, Meta, and other companies had requested that regulators postpone the Act’s implementation by at least two years; however, this request was rejected by the EU.

Tags: EU AI Act
Aytun Çelebi

Starting with coding on a Commodore 64 in elementary school and moving to web programming in his teenage years, Aytun has been around technology for over 30 years and has been a tech journalist for more than 20. He has worked in many major Turkish outlets (newspapers, magazines, TV channels, and websites) and managed some of them. Besides journalism, he has worked in agencies as a copywriter and PR manager for Lenovo, HP, and many other international brands. He founded his own agency, Linkmedya, in 2019 to pursue his own way of producing content. He is currently interested in AI, automation, and MarTech.
