EU: Implementing Regulation on AI Regulatory Sandboxes

03/12/2025

The EU Commission has published a draft Implementing Regulation on AI Regulatory Sandboxes. The draft lays down rules for the application of Regulation (EU) 2024/1689 of the European Parliament and of the Council with regard to the establishment, development, implementation, operation and supervision of AI regulatory sandboxes.

Under Regulation (EU) 2024/1689, AI regulatory sandboxes provide a controlled environment where competent authorities can supervise the development, training, validation and testing—including real-world trials—of innovative AI systems before they reach the market.

To prevent fragmentation across the EU and ensure consistent application of the Regulation, common rules are needed for how these sandboxes are created, operated and supervised. These rules must align with the objectives of the sandboxes, complement other EU data-driven innovation initiatives and remain coherent with sector-specific product frameworks.

AI regulatory sandboxes are designed to increase legal certainty for innovators, support compliance with Regulation (EU) 2024/1689 and other relevant EU laws (such as the GDPR and product-safety legislation), promote responsible innovation and competitiveness, and facilitate market access—particularly for SMEs and start-ups. They also help competent authorities better understand emerging risks, opportunities and impacts associated with AI. Participation should focus on areas where legal uncertainty may hinder innovation.

This Regulation sets out detailed rules for establishing and managing AI regulatory sandboxes under Article 57 of Regulation (EU) 2024/1689. For uniform interpretation and mutual recognition of outcomes, these rules apply to all sandboxes operating under the Regulation.

AI regulatory sandboxes should be available throughout the EU, with each Member State hosting at least one national sandbox by 2 August 2026. Any provider within the scope of Regulation (EU) 2024/1689 may apply, provided they meet the conditions set out here. Additional sector-specific, regional, local, joint or EU-level sandboxes may also be established, including in areas with strategic industrial importance or significant regulatory challenges. Pilot projects are encouraged ahead of the formal designation of competent authorities. The European Data Protection Supervisor may also create a sandbox for AI systems developed by EU institutions and agencies.

Member States are encouraged to set up joint cross-border sandboxes to streamline coordination, pool resources and enhance consistent implementation. These should be created through appropriate cooperation frameworks, such as memoranda of understanding.

Clear terms of participation must be communicated and agreed upon in advance, potentially through electronic signature. Participation is free for SMEs and start-ups unless exceptional costs arise. Other participants may be charged fair, transparent and proportionate fees, communicated early in the process.

When personal data is processed within a sandbox, a competent data protection authority must be involved in supervision. Competent authorities must also respect confidentiality obligations under Article 78 of Regulation (EU) 2024/1689 while still enabling regulatory learning.

Procedures and requirements for participation must be clear, accessible and based on uniform criteria. In selecting participants, authorities should assess eligibility, regulatory and practical challenges raised, system maturity and competitiveness, and the potential value of regulatory learning. SMEs with limited resources should receive priority access when they meet the criteria and have a registered office or branch in the EU. Authorities may focus each year on specific sectors, especially those with strategic importance or significant regulatory challenges.

To maximise synergies, competent authorities should cooperate with relevant AI ecosystem actors, such as Testing and Experimentation Facilities, European Digital Innovation Hubs, AI Factories and standardisation bodies. Some projects may be supervised within existing national or EU infrastructures, and Member States are encouraged to use these to support joint sandboxes.

Each sandbox should have a plan detailing objectives, needed resources, safeguards, support to be provided and relevant risk-management measures. To promote consistency and mutual recognition, competent authorities should take into account templates and guidance issued by the European Artificial Intelligence Board.

Participation in a sandbox supports progress towards regulatory compliance, but it does not constitute approval or create a presumption of conformity. Providers are encouraged to engage in parallel dialogue with notified bodies.

Real-world testing may be conducted within a sandbox, without affecting other EU or national rules on real-world trials. Where high-risk AI systems are embedded in regulated products, relevant sectoral authorities should be involved.


