EU: Draft standardization request on standards for the AI Act

15/03/2025

The European Commission has asked the European standardization bodies CEN and CENELEC to develop further harmonized standards for the AI Act.

Regulation (EU) 2024/1689, known as the Artificial Intelligence Act, establishes a uniform legal framework for the development, market placement, deployment, and use of AI systems within the European Union. It aligns with EU values to foster human-centric and trustworthy AI while ensuring strong protections for health, safety, and fundamental rights—including democracy, the rule of law, and environmental protection. The Act aims to mitigate the potential risks of AI while encouraging innovation.

Articles 8 to 15 outline the requirements for high-risk AI systems, while Articles 16 to 27 set out the responsibilities of providers, deployers, and other relevant parties. These obligations include maintaining quality management systems, logging, and documentation. Following the New Legislative Framework (NLF) for product safety, the AI Act defines essential requirements and stipulates that harmonized standards will detail how AI systems should meet these legal requirements. The NLF approach ensures that such standards provide a consistent level of protection while supporting the effective implementation of the AI Act. Additionally, these standards promote fair competition and innovation, particularly for small and medium-sized enterprises (SMEs) developing AI technologies.

Under Article 40(1), high-risk AI systems that comply with harmonized standards—once published in the Official Journal of the European Union under Regulation (EU) No 1025/2012—are presumed to meet the requirements of Articles 9 to 15. To facilitate this, the European Commission issued Implementing Decision C(2023)3215, tasking the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) with developing the relevant standards. Both organizations accepted the request, and standardization work is currently underway.

The Artificial Intelligence Act was published in the Official Journal on July 12, 2024, and entered into force on August 1, 2024. The development of harmonized standards for high-risk AI systems presents technical challenges, as the AI Act’s approach to product safety is novel in the field of AI standardization. For the first time, product safety legislation is being used to define technical specifications—including test methods and verifiable approaches—for identifying and mitigating risks to fundamental rights in high-risk AI systems.

So far, the following standards have been developed:

  • EN ISO/IEC 23894:2024 – Information technology – Artificial intelligence – Guidelines for risk management
  • CEN/CLC ISO/IEC/TR 24027:2023 – Artificial intelligence (AI) – Bias in AI systems and AI-assisted decision-making
  • EN ISO/IEC 8183:2024 – Information technology – Artificial intelligence – Data lifecycle framework
  • EN ISO/IEC 12792 – Information technology – Artificial intelligence – Transparency taxonomy of AI systems
  • ISO/IEC TS 6254 – Objectives and approaches for explainability of ML models and AI systems
  • ISO/IEC TS 8200 – Controllability of automated artificial intelligence systems

Copyright © RCC 2025