Harmonised Standards for the European Artificial Intelligence Act


Written by Kieran Harte

The European Commission’s Joint Research Centre released a policy brief on 24 October 2024 highlighting the essential role that harmonised standards will play in implementing the EU’s Artificial Intelligence Act (the EU AI Act).

The EU AI Act establishes a legal framework to ensure that AI is used appropriately and in a way that respects fundamental rights.  Once assessed and published in the EU’s Official Journal, European harmonised standards for the EU AI Act will be crucial for compliance: AI systems developed in accordance with these standards will benefit from a legal presumption of conformity.  Following a transition period of two to three years, high-risk AI systems will, from 2 August 2026, have to comply with requirements related to risk management, data quality and governance, logging and traceability, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity.  These requirements are intended to ensure that AI systems respect the safety, health, and fundamental rights of individuals, and to address the risks posed by very powerful AI models.

The EU AI Act adopts a market-centric approach, providing clear regulatory guidelines that ensure a safe, innovation-friendly environment for businesses while protecting consumer rights.  It categorises AI systems by risk level in detail and introduces specific regulatory mechanisms for high-risk AI applications.  By differentiating oversight according to the risk posed by an AI application, it fosters compliance and encourages AI development within clear ethical boundaries.  The overriding objective is the creation of a harmonised legal framework with basic common rules and protections for the market placement, deployment, and use of AI systems.  This approach is particularly beneficial for small and medium-sized enterprises (SMEs), offering a consistent compliance pathway without requiring extensive in-house expertise.

The European Standardisation Organisations CEN and CENELEC are drafting the necessary AI standards, following a request from the European Commission in May 2023.  The process emphasises consensus among a wide range of stakeholders from all sectors and types of organisations, including SMEs, societal groups, and industry leaders, to ensure the standards are comprehensive and inclusive.  This collaborative model has made it challenging to reach consensus on foundational issues associated with the EU AI Act requirements and their scope.  However, it is essential for developing robust standards that can keep pace with rapid AI advancements.

This brief describes some of the key characteristics of these future standards.  To ensure compliance, requirements set out in standards will be defined in clear and explicit terms, enabling AI providers to meet straightforward criteria, while being mindful of the implementation burden.  The standards aim to be both horizontal, applicable to AI systems across various sectors, and adaptable, with sector-specific/system-specific requirements for high-risk applications such as computer vision or natural language processing systems.  Guidance will be provided to support application of the standards in specific contexts. 

The standards will provide strong assurance and documented evidence of real-life verifiability, through effective testing protocols, to demonstrate that AI systems will address risks in practice.  Additionally, they will be reflective of state-of-the-art AI techniques and modern AI system architectures in defining requirements for high-risk AI systems, including those integrating general purpose AI models as components, to remain relevant as the industry evolves.  They will also establish clear, precise, and actionable requirements, significantly easing the burden of regulatory compliance, particularly for SMEs developing innovative AI solutions. These requirements will be both cohesive and complementary, ensuring that the many aspects of AI trustworthiness are addressed within the documentation. The standards will feature a logical structure and organisation that promotes easy adoption, explicitly capturing the interdependencies and trade-offs between various requirements.

The European Commission, through its newly established AI Office and the Joint Research Centre, is actively supporting the effort to finalise and publish the standards well in advance of the August 2026 compliance deadline for high-risk AI systems.  This timeline should give AI providers sufficient time to prepare for the Act’s requirements.  As these standards are finalised, they are expected to promote a more transparent and accountable AI environment within the EU, positioning Europe as a global leader in AI regulation.
