Artificial Intelligence (AI) and GDPR

Written by Kieran Harte

Introduction

Artificial Intelligence (AI) systems, particularly generative AI, often process vast amounts of data that frequently includes personal data. This brings them within the scope of the General Data Protection Regulation (GDPR), triggering significant privacy and data protection concerns. GDPR is technology-neutral, so while AI is not specifically referenced in the GDPR, this does not change the data protection obligations for AI system providers and users who process personal data. The GDPR and AI Act are designed to work concurrently, necessitating compliance with both legal frameworks. For this reason, the AI Act references the GDPR and its relevance to AI systems.

Moreover, GDPR compliance will help address and mitigate the risks associated with AI technologies, for example through Data Protection Impact Assessments (DPIAs), legitimate interest assessments, keeping records of processing activities (ROPAs), suitable technical and organisational measures, privacy by design and default, and complying with transparency obligations.

GDPR Principles

The GDPR sets out several key principles that may pose challenges for organisations in the context of AI systems, specifically:

1. Lawfulness

A key aspect of the GDPR is that personal data must be processed lawfully, fairly, and in a transparent manner. To process personal data, a controller must have a legal basis under the GDPR. The applicable and appropriate legal bases to legitimise the processing of personal data by AI systems are highly context-specific and often preconditioned by a “necessity” requirement under the GDPR. You should consider the objectives and aims of your processing to determine if these can be achieved through non-AI technologies or methods – these may prove to be more suitable or less risky for your needs.

If you choose to use legitimate interest as a legal basis, the processing must be necessary, manifestly lawful under the law, defined in a clear and precise manner, and actual and current (i.e., not hypothetical or speculative). The benefits of processing, including anticipated benefits, should be weighed against the potential negative impacts on data subjects in the legitimate interest assessment, considering the nature of the data (such as special category data) and the status of the data subjects (including vulnerable individuals, minors, etc.). If necessary, additional measures should be put in place to limit these risks and protect the rights and freedoms of individuals. The development or use of an AI system does not in itself preclude the use of legitimate interests as a legal basis for processing.

Other considerations include:

  • the type of AI system being proposed, including training and development mechanisms, the data being used, and deployment;
  • the purposes for which the AI system is provided/deployed by the controller, and whether they are compatible with the original purpose, including the reasonable expectations of data subjects about how their data will be used;
  • the relationship of the controller with the data subjects concerned (direct or indirect);
  • and the different data protection roles of each controller/processor involved in the processing at each stage of the AI system’s lifecycle.

2. Fairness

To ensure that the processing is fair, personal data should not be processed in ways that are unjustifiably harmful, unlawfully discriminatory, unexpected, or misleading to individuals. In particular, you should ensure that biases are not introduced that could lead to discrimination or unfair treatment.

It is recommended that AI literacy training be designed to provide insight into, and an understanding of, how biases can be embedded in AI systems, the privacy implications of AI, and the need for transparency and accountability in AI applications.

3. Transparency

Among the obligations under the GDPR is being transparent with data subjects whose personal data you are processing or have already processed, including informing them of what processing you are doing, how you are doing it, and how they can exercise their data protection rights. However, this can be challenging with AI systems, as it isn’t always possible to determine:

  • where the data originated, particularly where it has been web scraped or derived from user-generated content;
  • how the AI system operates and processes personal data, given the opaque and highly technical nature of AI systems (e.g. black-box deep learning models), thus complicating the availability of intelligible and accessible information on the processing;
  • whether the AI system is processing personal data in the first place;
  • or whether the use of the personal data by the AI system is compatible with the original purposes of processing and aligned with data subjects’ reasonable expectations of how the data will be used.

4. Accuracy

AI systems, especially public and ‘off-the-shelf’ generative models, can present challenges to the GDPR’s principle of data accuracy. These systems can produce outputs that appear to be legitimate but are actually factually incorrect (known as AI hallucinations), yet the GDPR mandates that organisations ensure all personal data they process is accurate and current.

Organisations should establish policies that include human oversight and intervention to verify accuracy and quality throughout an AI system’s lifecycle in a reasonable and proportionate way (such as during data collection and analysis). For example, when an AI system is operational, organisations should have a policy that prompts end-users to check:

  • the accuracy and quality of any data output and not presume it is accurate;
  • that biases are not introduced into the data leading to possible discrimination or unfair treatment;
  • and that the data is not otherwise protected by copyright.
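To make the idea of a human-oversight policy concrete, one could imagine a simple review gate that withholds an AI output until a person has recorded each of the checks above. This is a minimal, hypothetical sketch; the class, field, and function names are my own assumptions, not anything prescribed by the GDPR or the article:

```python
from dataclasses import dataclass

@dataclass
class AIOutput:
    text: str
    accuracy_confirmed: bool = False   # end-user has verified factual accuracy
    bias_checked: bool = False         # reviewed for discriminatory bias
    copyright_cleared: bool = False    # checked for copyright concerns

def release(output: AIOutput) -> str:
    """Release the output only once every human check has been recorded."""
    checks = [output.accuracy_confirmed,
              output.bias_checked,
              output.copyright_cleared]
    if not all(checks):
        raise ValueError("Output blocked: human review incomplete")
    return output.text

# A fully reviewed output passes the gate; an unreviewed one is blocked.
reviewed = AIOutput("draft answer", accuracy_confirmed=True,
                    bias_checked=True, copyright_cleared=True)
```

Whether such a gate is reasonable and proportionate will depend on the system and the stage of its lifecycle, as the article notes.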

Another dimension of this principle is ensuring that personal data is kept up to date. It is therefore necessary to assess whether an AI system remains dynamic in operation, that is, continually autonomous, self-learning, adapting, and evolving according to circumstances, as this implies that the underlying data may be continually updated. It should be noted that dynamic AI systems can be more difficult to control, since changes to the system take effect directly.

5. Data Subject Rights

AI systems are required to respect and support the exercise of data subject rights under the GDPR, including the rights of access, rectification, and erasure of personal data. Adopting privacy by design and by default from the start, together with maintaining accurate records of processing activities, is critical for compliance. There are risks to the effectiveness of data subject rights, especially due to technical challenges in identifying data subjects, or difficulties in transmitting requests for the exercise of rights when the dataset or model is shared or available in open source. It can be complex, if not technically impossible, to guarantee data subject rights in respect of certain objects such as trained models.

It is recommended that you provide a discretionary right of opt-out to data subjects, particularly when collecting data directly from data subjects before processing, to enhance data subjects’ control over their data. This is particularly relevant where an organisation deploys an AI system and is intending to reuse usage data to refine and improve the system.
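As an illustration of the opt-out described above, a deployer intending to reuse usage data for model refinement could gate that reuse on an explicit opt-out flag recorded per data subject. This is a hypothetical sketch; the record fields and function name are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class UsageRecord:
    user_id: str       # identifier for the data subject (illustrative)
    prompt_text: str   # usage data that might be reused for refinement
    opted_out: bool    # True if the data subject exercised the opt-out

def select_training_data(records):
    """Keep only records whose subjects have NOT opted out of reuse."""
    return [r for r in records if not r.opted_out]

records = [
    UsageRecord("u1", "query A", opted_out=False),
    UsageRecord("u2", "query B", opted_out=True),
]
usable = select_training_data(records)  # only u1's record is retained
```

Filtering at the point of dataset assembly, before any refinement run, keeps the opt-out effective even where removing data from an already trained model would be technically difficult.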

What are the Issues to Consider and Next Steps?

Determine your Organisation’s Obligations Under the GDPR

  • What is the AI system being used for and who is using it?
  • Review the lawful basis of all processing related to the use and/or training of AI systems within your organisation.
  • Assess the necessity and appropriateness of personal data processing by AI systems, giving precedence to data protection and privacy.
  • Compile a register of AI systems that are either deployed or under development, including those provided by external vendors.
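A register of AI systems can be as simple as one structured record per system. The fields below are illustrative assumptions about what such an entry might capture, drawn from the questions in this section (purpose, vendor, status, lawful basis); they are not a prescribed schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemEntry:
    name: str                       # internal name of the system
    purpose: str                    # what the AI system is used for
    vendor: Optional[str]           # external supplier, if any
    status: str                     # "deployed" or "in development"
    lawful_basis: str               # GDPR legal basis relied upon
    processes_personal_data: bool

register = [
    AISystemEntry("support-bot", "customer support triage",
                  vendor="ExampleVendor Ltd", status="deployed",
                  lawful_basis="legitimate interests",
                  processes_personal_data=True),
    AISystemEntry("code-assist", "internal code suggestions",
                  vendor=None, status="in development",
                  lawful_basis="n/a", processes_personal_data=False),
]

# Entries processing personal data are the ones whose lawful basis
# should be reviewed, per the checklist above.
needs_review = [e.name for e in register if e.processes_personal_data]
```

Maintaining such a register alongside the ROPA gives a single view of which systems trigger GDPR obligations.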

Determine What Impact Assessments Your Organisation Must Conduct

In the context of the AI Act, a DPIA will need to be conducted by deployers in respect of the use of high-risk AI systems, with a summary provided to the national authority. Moreover, under the AI Act, deployers of an AI system may also be required to conduct a Fundamental Rights Impact Assessment (FRIA) and notify the national authority of the results. In practice, the FRIA requirement can be met where the FRIA elements have been incorporated into one consolidated DPIA that meets the requirements of both the GDPR and the AI Act, meaning one document will suffice. If you are relying on legitimate interests as a legal basis, you will need to consider completing a legitimate interest assessment. It is also recommended that organisations review, adapt, and update data protection policies, documents, due diligence questionnaires, and vendor contracts to include AI-specific requirements and safeguards.

Train Your Staff

Under the AI Act, organisations must ensure their staff (and other affected persons) possess sufficient AI literacy taking into consideration their technical knowledge, experience, education and training, and the context that the AI systems will be used in. The training should cover the importance of compliance, ethical considerations, and understanding the opportunities, risks, and potential harms associated with AI. It should also offer an insight and understanding of the technical elements of AI systems, correct application during development and deployment, and interpreting AI outputs correctly. Specific training should also be provided on identifying and mitigating risks associated with prohibited AI practices.

Whether you are an AI provider or deployer, it is critical from the start to be aware of what you are using, your interactions with it, and the potential consequences, regardless of the beneficial or innovative outcomes it may provide. Where personal data is involved, the GDPR and data protection laws apply to both providers and deployers of AI systems. By embracing these considerations and incorporating data protection principles into AI governance, organisations can cultivate trust, accountability, and compliance within AI-driven initiatives.
