The Council of Europe Framework Convention on Artificial Intelligence (CETS 225) is the first legally binding international instrument on artificial intelligence. The European Commission signed the Convention on 5 September 2024 on behalf of the European Union.
The Convention establishes a legal framework focused on safeguarding human rights, democracy, and the rule of law in AI development and usage. It emphasises principles such as transparency, accountability, risk management, and special protection for vulnerable groups. The Convention is fully compatible with Union law in general and with the EU AI Act in particular; the latter is the first comprehensive AI regulation globally. Within the EU, the Convention will be implemented by means of the EU AI Act.
The Convention creates a common framework for AI systems applicable to the US, the UK, the EU and the other signatories, whilst the EU AI Act sets out clear rules for the regulation of AI systems in EU member states.
The EU AI Act excels in its market-centric approach, providing clear regulatory guidelines that ensure a safe, innovation-friendly environment for businesses while protecting consumer rights. It is more detailed in its categorisation of AI systems by risk levels and introduces specific regulatory mechanisms for high-risk AI applications. It takes a differentiated oversight approach based on the risk posed by AI applications, thus fostering compliance and encouraging AI development within clear ethical boundaries. The overriding objective is the creation of a harmonised legal framework with basic common rules and protections for the market placement, deployment, and use of AI systems.
The Council of Europe (CoE) Framework Convention is broader in scope, and focuses on human rights, democracy, and the rule of law. It emphasises transparency, accountability, and inclusivity across all sectors, ensuring AI systems respect fundamental rights. The Convention is also committed to protecting vulnerable groups and fostering international cooperation for global AI governance.
Similarities:
- AI Definition: Both adopt similar definitions of what constitutes an “AI system”, aligned with the definition set out by the OECD.
- Human-centric Focus: Both emphasise safeguarding human rights, democracy, and the rule of law, including ensuring that AI systems do not infringe upon the fundamental rights to privacy, freedom of expression, and non-discrimination.
- Risk-Based Approach: Both adopt a risk-based approach to AI regulation, requiring measures scaled according to potential risks. They both offer support to safe innovation through regulatory sandboxes.
- Transparency, Trustworthiness and Accountability: Both mandate that AI systems be understandable and explainable. They also emphasise accountability, requiring entities deploying AI systems to take responsibility for their impacts, and include oversight mechanisms for the supervision of AI activities.
- Non-Discrimination: Both address the need to prevent and mitigate discrimination, particularly against vulnerable groups.
- International Cooperation: Both recognise the importance of international cooperation in AI governance, aiming to create a harmonised cross-jurisdictional approach to address the global nature of AI.
Differences:
- Legal Scope and Binding Nature: The EU AI Act is a Regulation directly applicable to all EU member states, establishing binding legal obligations. The Convention is an international treaty that countries can choose to ratify. It is open to accession by CoE members, the EU, and other third countries around the world, and provides a broad framework adaptable to national laws. This gives signatory states flexibility in adapting the Convention to their legal systems while still adhering to its overarching principles.
- Objective and Purpose: The provisions of the Convention place a stronger emphasis on ensuring that activities within the lifecycle of AI systems are fully consistent with human rights, democracy and the rule of law. While the EU AI Act also addresses these issues, it is more focused on improving the functioning of the internal market and promoting the uptake of human-centric and trustworthy AI.
Essentially, although they both prioritise safeguarding democracy, fundamental rights, and the rule of law, they diverge in their specific purposes: the EU AI Act focuses on the marketing of products using AI within the EU internal market, while the Convention focuses on protecting the fundamental rights of people affected by AI systems. Unlike the EU AI Act, the Convention will introduce legally binding individual human rights that also apply to citizens of non-EU states. The EU AI Act, however, is designed to work hand-in-glove with the GDPR, which protects data subject rights where AI systems process personal data.
- Enforcement: The EU AI Act includes specific enforcement mechanisms, such as fines for non-compliance, and assigns responsibilities to existing national authorities for implementation. The Council of Europe Convention, on the other hand, establishes a Conference of the Parties to oversee implementation and foster cooperation among the signatories. Both the EU AI Act and the Convention include similar exceptions for research and development, as well as for national security.
- Scope: The EU AI Act applies to providers and deployers that place AI systems on the market in the Union, or whose systems produce outputs used in the Union. Importers, distributors, product manufacturers, authorised representatives and affected persons are also included. It does not apply to individuals using AI in a purely personal, non-professional capacity, nor to AI systems released under free and open-source licences that are low-risk and not prohibited.
In contrast, the Convention covers activities within the lifecycle of artificial intelligence systems undertaken by public authorities, or by private actors acting on their behalf, that have the potential to interfere with human rights, democracy and the rule of law. It also establishes that each signatory must indicate its approach to fulfilling its international obligations, either by applying the principles of the Convention to the activities of private actors or by taking other appropriate measures, without derogating from or limiting the application of those obligations.
- Consultation and Participation by members of the public: The Convention explicitly requires public discussion and multistakeholder consultation in the context of social, economic, legal, ethical, environmental and other relevant implications. The EU AI Act is less emphatic where the focus is more on regulatory compliance. The Convention requires signatory states to encourage and promote adequate digital literacy and digital skills across all populations. In contrast, the EU AI Act restricts the provision of AI literacy by providers and deployers to the staff and other persons dealing with the operation and use of AI systems on their behalf.
- Remedies and Oversight: The EU AI Act provides for complaints about infringements to be submitted to the relevant market surveillance authority. Member states are also obliged to put in place rules on penalties and other enforcement measures applicable to infringements by operators. The penalties should be effective, proportionate and dissuasive, and should also take account of the interests of SMEs, including start-ups, and their economic viability. Infringements may result in penalties of up to €35 million or 7% of the company’s global turnover for the preceding year, whichever is higher, depending on the severity of the breach (prohibited AI practices versus other provisions applicable to operators or notified bodies) and the size of the enterprise.
Chapter IV of the Convention requires signatory states to adopt or maintain measures that ensure accessible and effective remedies for violations of human rights resulting from activities within the lifecycle of AI systems. These measures should include:
- that the relevant information is documented, provided to bodies authorised to access that information, and made available or communicated to affected persons;
- that the information provided is sufficient to enable affected persons to contest the decision(s) made or substantially informed by the use of the system and, where relevant, the use of the system itself;
- an effective possibility for persons concerned to lodge a complaint with the competent authorities.
Both the EU AI Act and the Convention provide strong foundations for regulating AI. Both are technology-neutral and hence lack specificity on how to adapt to rapid technological advancements. AI is evolving quickly, and public trust (“social licence”) may be undermined in the future if societal concerns and risks are not adequately addressed. While international cooperation is emphasised, neither framework offers a clear path for integrating their approaches into a global AI governance system, potentially resulting in fragmented regulations worldwide. Finally, there are real practical difficulties in implementing enforcement mechanisms for cross-border AI applications and for private actors outside the jurisdiction of the EU. Addressing these issues would improve the efficacy of both frameworks in governing AI responsibly.
Council of Europe Framework Convention on Artificial Intelligence
The full text of the OECD 2019 AI principles is available at: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
