Written by Maeve Dunne
The following is a template guide that you can use in your practice as a Data Protection Officer or practitioner.
Introduction
Artificial Intelligence (AI) tools are transforming the way we interact, both at home and at work. Whether it is face recognition on our smartphones, customised experiences on social media, or online ID verification for a mortgage application, this is efficiency we have grown to expect. In a business environment, AI can automate tasks, improve decision-making, support content creation, sharpen targeting, and provide valuable insights.
However, this technology has a dark side, such as ‘deepfakes’: videos or images manipulated to create fake but convincing content, which can be used to spread disinformation, defame individuals, or influence public opinion (fake news!). In a day-to-day business environment, AI tools can present new risks and challenges in terms of intellectual property, ethics and human rights, copyright, information security, inaccurate or libellous content, and data protection, specifically:
- Misinformation about our employees, stakeholders, or business
- Bias and discrimination during automated recruitment
- Disclosure of confidential and commercially sensitive information
This guidance paper is designed to help you keep yourself, your colleagues, and the organisation safe and secure when using AI tools, especially where their use involves sharing information about you, your colleagues, or customers, or sensitive company information.
Summary Background
Because AI is evolving daily, it is difficult to summarise every type, and impossible to catalogue every application by name and use. Today, some of the main types are:
- Machine learning: The ability of a machine to learn from data and improve its performance, eventually learning by itself without explicit programming. An example is facial recognition for ID verification.
- Natural language processing: The ability of a machine to understand and generate natural language, such as speech or text. ChatGPT, for example, is built on a Large Language Model (LLM), a form of natural language processing.
- Computer vision: The ability of a machine to perceive and interpret visual information, such as images or videos. Self-driving cars are a well-known example.
Compliance Steps
When signing up for a business account with social media platforms, online applications, new tech suppliers, or AI applications, we must engage ‘consciously’ with the process:
- Evaluation: Make a conscious evaluation of the security and privacy of any AI tool before using it. This includes reviewing its Privacy Policy and Terms & Conditions. What will the provider do with the information you supply, where will it be stored, and who will it be shared with? Many apps share the data you provide by default.
- Compliance: It is a regulatory requirement that we assess and log all technology the organisation engages with, whether a web developer, HR platform, or AI application. Engage with the Data Protection Team for support and assessment.
- Reputable: As part of the conscious evaluation, only use AI tools from companies with an established reputation. Avoid tools developed by individuals or companies without one, unless you have done your homework on their ethics.
- Protection: Never upload information that is private and personal, commercially confidential, proprietary to the organisation, or protected by regulation. This includes information about colleagues, customers, supporters, partners, or third-party providers. Never share access credentials or login passwords for your AI accounts, and keep the software up to date.
- Accuracy: Fact-check, and fact-check again! When using an AI tool to write content, carry out research, or design promotional material or videos, check all output for reliability and accuracy. While the output can appear polished and professional, it can be out of date, biased, or simply inaccurate. This can lead to embarrassment, loss of credibility, defamation, or negative media coverage.
- Transparency: For the sake of accuracy and accountability, content created by AI should be labelled ‘AI Generated’.
Our Responsibility for Security, Privacy and Ethics
We are committed to ensuring that the use of AI tools remains safe and secure for the organisation. With everyone’s active participation, we can maximise the benefits of AI tools while minimising the potential risks associated with their use.
AI Act
The EU Artificial Intelligence (AI) Act was approved by the European Parliament on 13 March 2024. Its focus is to protect the fundamental rights of individuals while promoting responsible innovation. It takes a risk-based approach to the use of AI, categorising systems as unacceptable-risk (banned), high-risk, limited-risk, or minimal-risk, with requirements and fines getting progressively stricter as the perceived risk level rises.
Banned systems, which will not have an impact on our organisation, include the untargeted scraping of facial images from the internet or CCTV footage (ensure CCTV footage remains safe!). ‘High-risk’ systems include those used in recruitment, health, and policing.
While EU guidance is still taking shape, we know AI presents new risks in terms of intellectual property, ethics and human rights, copyright, information security, inaccurate or libellous content, and data protection, which could include:
- Liability for misinformation
- Bias and discrimination
- Disclosure of sensitive information
Some Recommended Actions to Take Now to Ensure Governance
- Ask the question: who is currently using AI, and for what purpose? Quantify it.
- Identify the potential opportunities and efficiencies in the use of AI.
- Produce a Guidance Document or Policy for the safe use of AI.
- Conduct specific training or information sessions.
- Conduct a Privacy Risk Assessment prior to integrating or introducing a new system.
- Integrate AI applications into the Record of Processing Activities (ROPA) and inventory management.
- Reassess ongoing use as machine learning capabilities grow.
