EU AI Act – Will I need to conduct a Data Protection Impact Assessment? What about a Fundamental Rights Impact Assessment?

Written by Kieran Harte

Essentially, a Data Protection Impact Assessment (DPIA) is mandatory under the GDPR when the processing of personal data is "likely to result in a high risk to the rights and freedoms of natural persons". In the context of the AI Act, deployers will also need to conduct a DPIA in respect of their use of high-risk AI systems and provide a summary to the national authority. A "deployer" is a person using an AI system under its authority.

The list of high-risk use cases is set out in Annex III of the AI Act and will be maintained by the European Commission. AI systems that are considered high-risk, such as those used in critical infrastructure, education, healthcare, law enforcement, border management, or elections, will have to comply with strict requirements set out in the Act. AI systems that perform narrow procedural tasks, improve the result of previous human activities, do not influence human decisions, or carry out purely preparatory tasks are not considered high-risk. However, an AI system shall always be considered high-risk if it performs profiling of natural persons.

The DPIA shall contain at least the information set out in Article 13 of the AI Act (in addition to the information mandated by Article 35 GDPR):

  • identity and contact details of the provider (or its authorised representative)
  • the “characteristics, capabilities and limitations of performance” of the high-risk AI system, including:
    • purpose
    • level of accuracy, including metrics, robustness and cybersecurity, and anything that might impact it (Article 15)
    • possible risks to health and safety or fundamental rights
    • technical capabilities and characteristics of the system sufficient to explain its output
    • target persons on which the system is intended to be used
    • specifications for the input data or any other relevant information in terms of the training, validation and testing data sets used
    • information to enable deployers to interpret the output of the system and use it appropriately.
  • changes to the high-risk AI system and its performance
  • human oversight measures set out in Article 14
  • computational and hardware resources needed, expected lifetime of the system and any necessary maintenance and care measures to ensure the proper functioning of that AI system including software updates
  • a description of the mechanisms for collecting, storing, and interpreting the logs in accordance with Article 12.

A deployer may also have to conduct a Fundamental Rights Impact Assessment (FRIA) under the AI Act and notify the national authority of the results. This requirement covers deployers that are public bodies or private enterprises providing public services, as well as operators providing certain high-risk systems. The assessment should include:

  • a description of the processes in which the system will be used
  • the period of time within which, and the frequency with which, the system is intended to be used
  • persons and groups affected
  • description of the human oversight measures
  • specific risk assessment and the associated mitigation measures

In practice, the FRIA requirement can be met where the FRIA elements have been incorporated into one consolidated DPIA that meets the requirements of both the GDPR and the AI Act – meaning one document will suffice.

The European Artificial Intelligence Office (AI Office) will develop standardised template documents to facilitate compliance with the AI Act and help reduce the administrative burden for deployers.
