Guidance on assessing AI risks in the context of the EU AI Act

Artificial intelligence (AI) can provide a great benefit to society, but it also carries risks. For this reason, the EU AI Act focuses on management of risk as part of AI governance.

While there is no one-size-fits-all approach to conducting a risk assessment, below are steps an organisation may consider as part of the risk management phase of an AI governance programme. While addressing the considerations described in this overview, organisations should also review their existing infrastructure for risk assessments and consider how to leverage their current procedures to address AI. The type of risk assessment an organisation may need to conduct will also depend on the laws it is subject to, such as the specific requirements of the EU AI Act and any future legislation.

The key responsibilities under the AI Act lie with providers and deployers of AI systems. However, there are also responsibilities for other actors in the supply chain: importers, distributors, and authorised representatives.

Identify & Rank AI Risks

To conduct a risk assessment, an organisation’s AI governance team should first identify the risks and rank them as unacceptable (prohibited), high, limited, or minimal; evaluate the probability of harm; implement mitigation measures to reduce or eliminate risks; and document the risk assessment to demonstrate accountability.  If an organisation identifies a risk as unacceptable, it will need to stop engaging in that AI processing activity, in accordance with the EU AI Act. If the risk is not prohibited, the organisation should rank the risk and assess its likelihood of harm, as described below.

Identify AI Risks – Examples of the types of AI risks

Like other types of security risk, AI risk can be understood as a measure of how likely a potential AI-related threat is to affect an organisation and how much damage that threat would do.

If not managed correctly, these risks can expose AI systems and organisations to significant harm, including financial losses, reputational damage, regulatory penalties, erosion of public trust and data breaches.

Data risks

AI systems rely on data sets that might be vulnerable to tampering, breaches, bias or cyberattacks. Organisations can mitigate these risks by protecting data integrity, privacy, security and availability throughout the entire AI lifecycle, from development to training and deployment.

Model Risks

AI models are susceptible to various threats, including theft, reverse engineering, and unauthorised manipulation.  Threat actors can compromise a model’s integrity by tampering with its architecture, weights, or parameters, which are crucial for determining the model’s behaviour and performance.  Some of the most common model risks include:

  • Adversarial Attacks: These include data poisoning, where training data is manipulated with false or misleading information, and adversarial examples, crafted inputs designed to deceive AI systems into making incorrect predictions or classifications.  For example, attackers might create adversarial examples to interfere with decision-making or introduce bias.
  • Hallucination: This occurs when an AI model generates false or corrupted outputs.  The outputs may appear coherent but are fabricated, highlighting the model’s limitations in understanding context and its reliance on learned patterns.  Hallucinations are not confined to compromised datasets; they can also result from insufficient training data or inadequate safeguards.  While mitigation strategies, such as providing clearer instructions or constraints (e.g., “do not include links unless verified”), can reduce the risk, they remain a key reason why human oversight and accountability are essential.
  • Prompt Injections: These attacks target large language models (LLMs) by disguising malicious inputs as legitimate prompts.  This manipulation can lead to the leakage of sensitive data, the spread of misinformation, or other harmful outcomes.
  • Prompt DoS (Denial of Service): Malicious actors can exploit vulnerabilities in AI systems by launching prompt-based DoS attacks.  These attacks aim to overwhelm the model with crafted or compromised prompts, forcing it to generate excessive automated responses and potentially causing system failure.  While such exploits often target models trained on compromised datasets, they can also arise from insufficient guardrails or inadequate input validation.  Implementing robust safeguards, such as strict prompt filtering and rate-limiting, is essential to prevent overload and maintain system resilience.
  • Exfiltration Risks: Hackers may exploit specific words, phrases, or terminologies within prompts to reverse-engineer and extract elements of a model’s training data.  By doing so, they can access and misuse sensitive information.  These risks are heightened when datasets are compromised or when systems lack adequate guardrails, such as strict input validation and output filtering.  Implementing robust security measures is essential to prevent data leakage and protect confidentiality (an illustrative sketch of such guardrails follows this list).
  • Supply Chain Attacks: These occur when threat actors target AI systems at various stages of the supply chain, including at development, deployment, or maintenance phases.  Exploiting vulnerabilities in third-party components used in AI development can lead to data breaches or unauthorised access.
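
The guardrails referred to above can be implemented at the application layer, in front of the model. The Python sketch below is a minimal illustration, not a definitive implementation, of the kind of prompt filtering, rate limiting, and output filtering controls mentioned in the list; the patterns, limits, and function names are hypothetical and would need to be adapted to the specific model and deployment.

```python
import re
import time
from collections import defaultdict, deque

# Hypothetical guardrail parameters - tune to the deployment in question.
MAX_PROMPT_LENGTH = 4000          # reject oversized prompts (prompt DoS mitigation)
MAX_REQUESTS_PER_MINUTE = 30      # per-user rate limit (prompt DoS mitigation)
BLOCKED_PATTERNS = [              # naive input filter (prompt injection mitigation)
    r"ignore (all )?previous instructions",
    r"reveal (your )?system prompt",
]
SENSITIVE_OUTPUT_PATTERNS = [     # naive output filter (exfiltration mitigation)
    r"\b\d{16}\b",                # card-number-like strings
    r"[\w.+-]+@[\w-]+\.[\w.]+",   # e-mail addresses
]

_request_log: dict[str, deque] = defaultdict(deque)

def is_rate_limited(user_id: str, now: float | None = None) -> bool:
    """Sliding-window rate limit: True if the user exceeded the per-minute quota."""
    now = now or time.time()
    window = _request_log[user_id]
    while window and now - window[0] > 60:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        return True
    window.append(now)
    return False

def validate_prompt(prompt: str) -> bool:
    """Reject prompts that are too long or match known injection patterns."""
    if len(prompt) > MAX_PROMPT_LENGTH:
        return False
    return not any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_PATTERNS)

def filter_output(text: str) -> str:
    """Redact output fragments that look like sensitive data before returning them."""
    for pattern in SENSITIVE_OUTPUT_PATTERNS:
        text = re.sub(pattern, "[REDACTED]", text)
    return text
```

In practice such filters sit alongside, not instead of, model-level safeguards and human oversight; simple pattern lists like these are easy to bypass and serve only as one layer of defence.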

Operational Risks

AI models, while impressive, are fundamentally products of sophisticated code and machine learning algorithms. Like all technologies, they are susceptible to operational risks. If left unaddressed, these risks can lead to system failures and security vulnerabilities that threat actors can exploit. Some of the most common operational risks include:

  • Drift or Decay: AI models can experience model drift, where changes in data or the relationships between data points lead to degraded performance. For example, a fraud detection model might become less accurate over time, allowing fraudulent transactions to proceed undetected (a drift-monitoring sketch follows this list).
  • Sustainability Issues: AI systems are complex technologies that require proper scaling and support. Neglecting sustainability can lead to challenges in maintaining and updating these systems, resulting in inconsistent performance over time, increased operating costs, and higher energy consumption.
  • Integration Challenges: Integrating AI systems with existing IT infrastructure can be difficult and resource-intensive. Organisations often encounter issues with compatibility, data silos, and system interoperability. Introducing AI systems can also create new vulnerabilities by expanding the attack surface for cyber threats.
  • Lack of Accountability: With AI systems being relatively new technologies, some organisations lack proper corporate governance structures. As a result, AI systems often lack oversight, leading to potential misuse or mismanagement.
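
Model drift of the kind described in the first bullet above is typically detected by comparing live data or model performance against a training-time baseline. The Python sketch below is a hypothetical illustration using the Population Stability Index (PSI), a common drift metric; the 0.2 threshold is a conventional rule of thumb rather than a regulatory requirement, and the data is synthetic.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline (training) sample and live data.

    PSI = sum((actual% - expected%) * ln(actual% / expected%)) over histogram bins.
    Rule of thumb: < 0.1 stable, 0.1-0.2 moderate shift, > 0.2 significant drift.
    """
    # Bin edges come from the baseline so both samples are compared on the same grid;
    # live values outside the baseline range are ignored in this simple version.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log of zero for empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Example: compare a feature's training distribution against this month's live data.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=10_000)   # training-time distribution
live = rng.normal(loc=0.4, scale=1.2, size=10_000)       # shifted live distribution
psi = population_stability_index(baseline, live)
if psi > 0.2:  # conventional "significant drift" threshold (an assumption, not a legal test)
    print(f"Significant drift detected (PSI={psi:.3f}) - consider retraining or review")
```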

Ethical and Legal Risks

When organisations fail to prioritise safety and ethics in the development and deployment of AI systems, they risk privacy violations and biased outcomes. For example, using biased training data for hiring decisions can reinforce gender or racial stereotypes, leading to AI models that favour certain demographic groups over others. Common ethical and legal risks include:

  • Lack of Transparency: Organisations that are not transparent and accountable with their AI systems risk losing public trust.
  • Noncompliance with Regulatory Requirements: Failure to adhere to government regulations such as the GDPR or sector-specific guidelines like the EU AI Act can result in hefty fines and legal penalties.
  • Algorithmic Biases: AI algorithms can inherit biases from training data. Bias within AI models occurs when the output is prejudiced due to discriminatory assumptions embedded in the dataset it was trained on. This can manifest in various forms, such as racial, gender, socio-economic, or political bias. The source of bias is often the training dataset, which may not be sufficiently neutral or may contain historical bias. These biased outputs can have adverse effects in critical areas where AI decision-making is utilised, such as recruitment, credit evaluations, and criminal justice.  For more information, see EDPB Bias Evaluation (a simple illustrative bias check follows this list).
  • Ethical Dilemmas: AI decisions can raise ethical concerns related to privacy, autonomy, and human rights. Mishandling these dilemmas can damage an organisation’s reputation and erode public trust.
  • Lack of Explainability: Explainability in AI refers to the ability to understand and justify decisions made by AI systems. Complex AI models are often difficult to interpret, making it challenging for users to understand how decisions are reached. This lack of explainability can impede trust and lead to legal scrutiny and reputational damage.
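
A first, simple check for the algorithmic biases described above is to compare outcome rates across the groups a model affects. The Python sketch below is an illustrative example with made-up data that computes the demographic parity difference (the gap in positive-outcome rates between groups) for a hypothetical recruitment screening model; real bias evaluations, such as those discussed in the EDPB material referenced above, involve a much broader set of metrics and qualitative analysis.

```python
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, int]]) -> dict[str, float]:
    """Positive-outcome rate per group from (group, decision) pairs, decision in {0, 1}."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, decision in decisions:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(decisions: list[tuple[str, int]]) -> float:
    """Largest gap in selection rates between any two groups (0 = perfectly even)."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical screening outcomes: (applicant group, 1 = shortlisted, 0 = rejected).
outcomes = [("group_a", 1)] * 60 + [("group_a", 0)] * 40 \
         + [("group_b", 1)] * 35 + [("group_b", 0)] * 65
gap = demographic_parity_difference(outcomes)
print(f"Selection-rate gap between groups: {gap:.2f}")  # 0.60 vs 0.35 -> 0.25
```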

Ranking AI Risks

Unacceptable Risks

When ranking AI risks, organisations should initially assess whether the risk is unacceptable or prohibited by law.  The EU AI Act recognises four different levels of risk associated with AI systems:

[Pyramid graphic: the four levels of risk under the EU AI Act]

Source: European Commission

AI systems that are considered a clear threat to the safety, livelihoods and rights of people are banned.  Based on this, the AI Act prohibits the following AI practices:

  • harmful AI-based manipulation and deception (i.e. subliminal techniques likely to cause a person, or another, significant harm)
  • harmful AI-based exploitation of vulnerabilities (i.e. due to age, disability or social or economic situation)
  • social scoring, i.e. evaluating or classifying natural persons or groups over a certain period based on their social behaviour or personality characteristics
  • individual criminal offence risk assessment or prediction based solely on profiling or personality traits
  • untargeted scraping of the internet or CCTV material to create or expand facial recognition databases
  • emotion recognition in workplaces and education institutions
  • biometric categorisation to deduce certain protected characteristics (i.e. using an individual person’s face or fingerprint to deduce or infer an individual’s characteristics, e.g. political opinions, religion, or sexual orientation)
  • real-time remote biometric identification for law enforcement purposes in publicly accessible spaces (some exceptions apply)

The prohibitions on uses with unacceptable risk applied from 2 February 2025.  Note that fines in this category can reach up to €35 million or 7% of global annual turnover, whichever is higher.

Reference guidelines for the application of these provisions of the AI Act have been published by the European Commission: Guidelines on prohibited AI practices as defined by the AI Act.  The fourth AI Pact webinar on 2 April 2025 explored the guidelines and the definition of an AI system.

High risk

The next category is high risk. AI systems that are considered to be high-risk, such as those used in critical infrastructure, education, healthcare, law enforcement, border management or elections, will have to comply with strict requirements set out in the Act.  There are two categories of high-risk AI usage.

The first category relates to the use of AI systems that are safety components of products, or which are themselves products, falling within the scope of certain Union harmonisation legislation listed in Annex I of the Act.  All of these are product safety laws, such as the EU medical devices legislation.  The provisions on product-linked high-risk AI systems apply from 2 August 2027.

The second category relates to the high-risk use cases set out in Annex III of the AI Act, a list maintained by the European Commission.  These are AI systems that potentially create an adverse impact on people’s safety or their fundamental rights, as protected by the EU Charter of Fundamental Rights, and are therefore classed as high-risk.  An AI system listed in Annex III shall always be considered high-risk if it performs profiling of natural persons.

The European Commission is responsible for making changes to Annex III and adding new systems to the list of high-risk AI. The criteria used to identify a high-risk AI system are flexible and can be adapted.  The provisions on high-risk uses apply from 2 August 2026.

For more information, see obligations for deployers of high-risk artificial intelligence (AI) systems under the EU AI Act.

Limited/transparency risk

AI systems that are not considered high-risk may fall within a lower tier of AI risks.  Limited risk is concerned with a lack of transparency in the use of AI. The AI Act therefore introduces specific transparency requirements for certain AI applications where there is a clear risk of manipulation, such as chatbots and deepfakes: AI systems that interact directly with people must be developed so that the person is informed that they are interacting with an AI system.

AI systems that perform narrow procedural tasks, improve the result of previous human activities, do not influence human decisions, or perform purely preparatory tasks are not considered high-risk.

General-purpose AI has also been added to the list of limited-risk AI systems and now carries a number of additional obligations under the Act. Providers of generative AI have to ensure that AI-generated content is identifiable. On top of that, certain AI-generated content should be clearly and visibly labelled, namely deepfakes and text published to inform the public on matters of public interest.  Such models will most likely be limited to the largest AI developers and, in the short term at least, will not be the focus of the majority of organisations.

Minimal or no risk

The AI Act does not introduce rules for AI that is deemed minimal or no risk. The vast majority of AI systems currently used in the EU fall into this category and, therefore, will not require further mitigation steps under the EU AI Act.   This includes applications such as AI-enabled video games or spam filters.

Identify the Likelihood of Harm

Once an organisation ranks the risks, it should then assess the likelihood that the risk will materialise.  For example, using a risk matrix, an organisation may characterise the severity and likelihood of harm as critical, moderate, or low. Depending on where a risk falls on this spectrum, an organisation may need to implement additional mitigation measures and safeguards.  Alternatively, organisations may use a three-by-three risk matrix, whereby a score of one through three is assigned for the severity of risk and probability of harm, which are then multiplied to provide a risk score.
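
As an illustration of the three-by-three matrix described above, the short Python sketch below multiplies a severity score by a likelihood score and maps the product onto low, moderate, and critical bands; the band boundaries used here are assumptions for illustration, not values prescribed by the EU AI Act.

```python
def risk_score(severity: int, likelihood: int) -> int:
    """Severity (1-3) x likelihood (1-3) -> risk score between 1 and 9."""
    if not (1 <= severity <= 3 and 1 <= likelihood <= 3):
        raise ValueError("severity and likelihood must each be 1, 2 or 3")
    return severity * likelihood

def risk_band(score: int) -> str:
    """Map a 1-9 score to a band (illustrative cut-offs, not prescribed by the AI Act)."""
    if score >= 6:
        return "critical"
    if score >= 3:
        return "moderate"
    return "low"

# Example: severe harm (3) that is moderately likely (2) -> score 6, "critical".
score = risk_score(severity=3, likelihood=2)
print(score, risk_band(score))
```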

[Risk matrix graphics: severity and likelihood scoring]

Source: Bloomberg Law

There is no single method for conducting a risk assessment. Organisations may leverage their existing procedures for risk audits and assessments to assign scores to AI processing activities, depending on the severity and probability of harm. Organisations may also use a more complex risk matrix, which accounts for risk velocity (i.e., time to impact) and risk contagion (i.e., the potential for risk in one area to impact other areas of the organisation).  A list of risk assessment templates is set out in the Resources section below.

Document the Risk Assessment

AI risk assessments should be documented to demonstrate accountability.  At a high level, the documentation should reflect the risks identified during the assessment, the steps taken to mitigate them, and whether, on balance, the mitigation measures are adequate and sufficient to address the risks for the organisation to proceed with the AI processing activity.  The assessment should also factor in ethical considerations.
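
To keep that documentation consistent across assessments, the record can be captured in a simple structured form. The Python dataclass below is a hypothetical sketch of the fields an organisation might capture for each assessed AI processing activity; the field names are illustrative and should be adapted to the organisation's own accountability framework.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRiskAssessmentRecord:
    """Illustrative record of one AI risk assessment, kept for accountability purposes."""
    system_name: str
    processing_activity: str
    risk_tier: str                     # "prohibited", "high", "limited" or "minimal"
    identified_risks: list[str] = field(default_factory=list)
    mitigation_measures: list[str] = field(default_factory=list)
    residual_risk_acceptable: bool = False   # the on-balance judgement described above
    ethical_considerations: str = ""
    assessed_on: date = field(default_factory=date.today)
    reviewer: str = ""

# Example entry for a hypothetical high-risk recruitment system.
record = AIRiskAssessmentRecord(
    system_name="CV screening assistant",
    processing_activity="Shortlisting job applicants",
    risk_tier="high",
    identified_risks=["algorithmic bias", "lack of explainability"],
    mitigation_measures=["bias evaluation before deployment", "human review of rejections"],
    residual_risk_acceptable=True,
    reviewer="AI governance team",
)
```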

Conducting impact assessments

In the context of the AI Act, a DPIA will need to be conducted in respect of the use of high-risk AI systems, and a summary provided to the national authority.  Moreover, under the AI Act, deployers of an AI system may also be required to conduct a Fundamental Rights Impact Assessment (FRIA) and notify the national authority of the results.  In practice, the FRIA requirement can be met where the FRIA elements have been incorporated into one consolidated DPIA that meets the requirements of both the GDPR and the AI Act, meaning one document will suffice.  If you are relying on legitimate interests as a legal basis, you will also need to consider completing a legitimate interests assessment.

For more information, see EU AI Act – Will I need to conduct a Data Protection Impact Assessment? What about a Fundamental Rights Impact Assessment?

Engage in Continuous Monitoring

Organisations should be mindful that conducting a risk assessment is not a one-and-done activity. Rather, organisations should establish, implement, document, and maintain the risk management system throughout the AI lifecycle. This includes monitoring and evaluating emerging risks once the AI system is deployed in a real-world environment.

General-purpose AI models and rules for AI systems

General-purpose AI (GPAI) models, often referred to as foundation models, can be used for a variety of tasks and are becoming the basis for many AI systems in the EU.  Consequently, they can carry systemic risks if they are very capable (i.e. trained using a cumulative amount of compute greater than 10²⁵ floating-point operations (FLOPs)[1]) or widely used.  Accordingly, there are specific provisions and obligations applying to GPAI models under the AI Act. The European Commission, through its AI Office, is responsible for enforcing these provisions.
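
To get a feel for the 10²⁵ FLOP threshold, a commonly used back-of-the-envelope estimate from the machine learning literature (not part of the AI Act itself) is that training compute is roughly 6 × parameters × training tokens. The Python sketch below applies that heuristic; the model size and token count are hypothetical illustrations.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25   # cumulative training-compute threshold in the AI Act

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough heuristic: training compute ~ 6 * parameters * tokens.

    This is an approximation used in the ML community, not a method prescribed by the AI Act.
    """
    return 6 * parameters * training_tokens

# Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(parameters=70e9, training_tokens=15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")              # ~6.3e24
print("Above systemic-risk threshold:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)
```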

The AI Act also introduces rules for certain AI systems placed on the market, or deployed, in the EU to mitigate the risks to people’s health, safety and fundamental rights. These rules relate to certification of AI systems’ conformance with standards; governance of AI systems during development; and supervision of systems when in use.


[1] A floating-point operation (FLOP) is a single arithmetic calculation on floating-point numbers. The cumulative number of FLOPs used to train a model is used as a proxy for the model’s capabilities.


EDPB guidance on AI Privacy Risks & Mitigations for LLMs

In April 2025, the EDPB released a report addressing AI privacy risks and mitigation strategies for LLMs. The document sets out a comprehensive risk management methodology designed to systematically identify, assess, and mitigate privacy and data protection risks associated with LLM systems.

The report aims to provide a thorough understanding of, and state-of-the-art information on, the functioning of LLM systems and their associated risks, to help stakeholders navigate the complexities of AI privacy.  The report also highlights the importance of privacy by design, which can be complex and can border on AI, security, and software engineering, particularly where agentic AI systems are involved.

It includes practical mitigation measures for common privacy risks, ensuring that LLM systems can be deployed responsibly and ethically.

To illustrate the application of this risk management framework, the report presents three real-world use cases:

  1. Virtual Assistant (Chatbot) for Customer Queries: Demonstrating how privacy risks can be managed in customer service scenarios.
  2. LLM System for Monitoring and Supporting Student Progress: Highlighting the importance of data protection in educational settings.
  3. AI Assistant for Travel and Schedule Management: Showcasing the integration of privacy safeguards in personal productivity tools.

Future risks presented by quantum-enhanced AI

The EU AI Act regulates high-risk AI systems but does not specifically address quantum-enhanced AI. Nevertheless, its broad definitions suggest that quantum-powered AI could be included, especially in sectors like finance, healthcare, and security.

Quantum computing has the potential to significantly impact the 10²⁵ FLOP threshold for systemic risk outlined in the AI Act. If quantum advantage in AI training allows this benchmark to be surpassed with fewer physical resources, model capabilities could accelerate beyond classical limits. This might render the current threshold obsolete or necessitate a revised regulatory approach. If scalable, fault-tolerant quantum systems are developed, they could exponentially increase AI processing power, complicating systemic risk assessments and requiring new governance mechanisms to manage unpredictable advancements in AI capabilities.

The GDPR also faces challenges, as quantum computing could compromise anonymisation techniques. The European Data Protection Board has cautioned that quantum AI may re-identify previously anonymised data, raising compliance concerns under EU law.

Resources

Risk assessment templates

  • AI Impact Assessment (AIIA)
    A tool released by the Government of the Netherlands for setting up responsible AI projects; it incorporates considerations related to the AI Act.
  • AI Risk Assessment | TrustArc
    The AI Risk Assessment Template is a structured framework designed to systematically identify, evaluate, and mitigate potential risks associated with AI.
