EU AI Act Compliance: 10 Key Changes from the New Provisional Agreement
The European Union's Artificial Intelligence Act is undergoing significant revisions following a provisional agreement struck between EU member states and the European Parliament. This deal, announced early Thursday, aims to soften the original regulatory framework by extending deadlines and clarifying high-risk classifications. Businesses across Europe now have more time to adapt, with the new rules promising reduced administrative burdens while maintaining safety standards. Here are the 10 most important changes you need to know about from this landmark agreement.
1. Extended Deadlines for High-Risk AI Systems
The most impactful change is the postponement of compliance deadlines for high-risk AI systems. Under the original Act, these systems were required to meet strict requirements by August 2, 2026. The provisional deal now gives businesses additional breathing room: stand-alone high-risk AI systems must comply by December 2, 2027, while AI used in products covered by EU sectoral safety rules (like medical devices or machinery) has until August 2, 2028. This extension of up to two years allows companies to integrate safety measures without rushing.

2. Formal Adoption Still Pending
Despite the provisional agreement, the changes are not yet law. Both the European Parliament and the Council must formally adopt the text before it can take effect. The co-legislators have committed to completing this step before August 2, 2026—the original deadline. Until then, the original rules remain in force. This means businesses should not pause their compliance efforts entirely, but they can plan for a smoother transition once the deal is ratified.
3. Reduced Administrative Costs for Companies
Cyprus’s Deputy Minister for European Affairs, Marilena Raouna, highlighted that the agreement "significantly supports our companies by reducing recurring administrative costs." The revisions eliminate overlapping requirements, especially for AI integrated into machinery, which will now follow only sectoral safety rules instead of dual regulations. This simplification is expected to lower compliance expenses and streamline operations for businesses dealing with AI in industrial contexts.
4. Narrower Definition of 'Safety Component'
The provisional deal refines what qualifies as a safety component under the AI Act. Previously, many AI features were automatically classified as high-risk if they were part of a product’s safety system. Now, AI functions that only assist users or enhance performance will not automatically be considered high-risk, provided their failure does not create health or safety risks. This change gives developers more flexibility when designing AI features for consumer products.
5. Mechanism for Resolving Overlaps with Sectoral Laws
For industries like medical devices, toys, lifts, machinery, and watercraft, the co-legislators agreed on a new mechanism to resolve conflicts between the AI Act and existing sectoral legislation. This ensures that companies are not caught in a web of contradictory requirements. The mechanism will prioritize sector-specific rules where they already provide equivalent health and safety protections, reducing redundancies and legal uncertainty.
6. One-Year Delay for AI Regulatory Sandboxes
Member states now have until August 2, 2027, to establish AI regulatory sandboxes—controlled environments where innovative AI systems can be tested under regulator supervision. The original deadline was August 2, 2026. This extra year gives national authorities more time to design effective sandbox frameworks that encourage innovation while ensuring safety. It also allows companies to prepare more thoroughly for participation.

7. Earlier Watermarking Obligations for AI-Generated Content
In a move to enhance transparency, watermarking obligations for AI-generated content will apply earlier than the European Commission originally proposed. Instead of February 2, 2027, these rules now kick in on December 2, 2026. This means synthetic content—such as deepfakes or AI-generated text—must be clearly labeled sooner, helping consumers distinguish between human and machine-created material.
8. Expanded Exemptions for Mid-Size Companies
Exemptions that were previously reserved for small and medium-sized enterprises (SMEs) have been extended to small mid-cap companies. This category typically includes firms with fewer than 500 employees that are not classified as SMEs. The change gives these mid-size businesses more breathing room from certain high-risk compliance requirements, recognizing their limited resources compared to larger corporations.
9. Centralized Supervision by the EU AI Office
The agreement clarifies that the EU AI Office will oversee general-purpose AI systems centrally. This includes foundation models such as large language models. Meanwhile, national authorities retain responsibilities in specific areas, including law enforcement, border management, judicial authorities, and financial institutions. This dual structure aims to ensure consistent enforcement across Europe while allowing localized control where needed.
10. Political Consensus Ahead of Technology
As Parliament’s co-rapporteur Arba Kokalari stated, "With this agreement, we show that politics can move just as quickly as technology." The deal was reached just nine days after previous negotiations collapsed, demonstrating the urgency to adapt the AI Act to real-world feedback. By softening deadlines and clarifying rules, EU lawmakers hope to foster innovation while upholding the highest safety standards.
In conclusion, these revisions to the EU AI Act represent a pragmatic adjustment to an ambitious regulatory framework. Businesses now have extra time to comply, clearer definitions to navigate, and reduced administrative burdens—all without sacrificing safety. As the formal adoption process continues, companies should monitor developments closely and update their compliance strategies accordingly. The final law is expected to be ratified before August 2026, marking a new chapter in European AI governance.