Written by Kieran Harte
The purpose of a Corporate AI policy is to set out guidelines and regulations that ensure the ethical and responsible use of AI technologies. It is an essential component for organisations aiming to be at the forefront of the AI revolution. The policy should be designed to help meet legal and regulatory standards, safeguard data privacy, and prevent data bias and discrimination. This article provides guidance on how to create an AI policy, which should be written in clear, accessible language and provide practical guidance.
Corporate AI Policy – Elements
- Purpose and scope
Organisations must first determine their primary objectives for adopting AI technology and establish the boundaries for its use. This includes deciding the extent to which personal AI tools will be permitted versus enterprise-wide applications.
These objectives will help shape the overall direction of the policy. This may include improving your organisation’s efficiency, enhancing customer experience, or fostering innovation.
- Who owns the policy
This section should identify who will be responsible for implementing and maintaining the AI policy in your organisation.
- Why the policy is necessary
Legal and Regulatory Compliance
AI technologies must adhere to specific national and EU laws and regulations, including those related to data protection, intellectual property rights, and consumer protection. Implementing a corporate AI policy ensures that your organisation’s use of AI upholds compliance, minimising the risk of legal issues or fines. The relevant legislation impacting the use of AI in your organisation should be set out in the policy.
Data Protection
Generative AI can be trained on proprietary and special category data, and this data can potentially be exposed to hackers or malicious actors. A corporate AI policy helps mitigate this risk by establishing stringent data protection protocols. It outlines how data will be collected, stored, and used, thereby safeguarding the privacy of your customers and employees.
Bias and Discrimination Prevention
AI language models absorb the characteristics of their training data. If the data is biased, the AI output will likely reflect that bias. A corporate AI policy can enforce bias reviews and audits, ensuring AI-generated content does not discriminate based on race, gender, age, or other traits.
Ethical and Responsible AI Use
Generative AI can unintentionally produce content perceived as unethical, offensive, or harmful. This could lead to significant reputational damage or legal implications for your organisation. A corporate AI policy can help prevent this by setting out clear principles and values, such as fairness, transparency, accountability, and human well-being, that guide ethical and responsible development and deployment of AI-generated content.
- Roles and responsibilities
Roles and responsibilities should be assigned to those involved in AI initiatives, such as development, deployment and monitoring. Establishing clear lines of accountability and governance mechanisms should help ensure ethical decision-making and effective risk management throughout the AI lifecycle.
- Provision for employee training and engagement
The policy should be communicated to your organisation’s management and staff through diverse channels like the corporate intranet, emails, town hall meetings, and training presentations. The goal is to educate management and staff about AI tools, their benefits, and potential risks, fostering a culture of informed use and innovation.
- Data management and governance
The policy should describe:
- what information is confidential (both personal and commercially sensitive)
- what is protected by copyright/intellectual property rights
- authorised and prohibited uses (i.e. what data can be collected, stored, processed and disseminated)
- the treatment of sensitive/special category data – this must be compliant with data protection legislation
- mechanisms to monitor an AI system’s performance, impact, and adherence to ethical standards over time
- completion of a Data Protection Impact Assessment (DPIA) or Fundamental Rights Impact Assessment (FRIA) where required.
Records and logs to be kept
Organisations should consider which relevant events, triggers and information elements to incorporate into logging and record-keeping systems, including, for example, criteria defining which actions and data points to log and how logging should be implemented, tested and documented. Logging capabilities should cover monitoring the use of the AI system, as well as any risks to health, safety or fundamental rights.
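To make the logging criteria above concrete, the sketch below shows one possible shape for a structured audit record. The field names and the `build_log_record` helper are illustrative assumptions, not mandated by any regulation; an organisation would adapt them to its own logging criteria.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative audit logger for AI system events.
logger = logging.getLogger("ai_audit")

def build_log_record(event_type: str, action: str, data_points: dict,
                     risk_flags: list) -> str:
    """Serialise one AI usage event as a JSON audit record."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,      # e.g. "inference", "override"
        "action": action,              # what the system or user did
        "data_points": data_points,    # inputs/outputs deemed loggable
        "risk_flags": risk_flags,      # e.g. ["health", "fundamental_rights"]
    }
    return json.dumps(record)

# Usage: emit one record per logged action.
entry = build_log_record("inference", "generated_summary",
                         {"model": "internal-llm", "prompt_id": "p-123"},
                         [])
logger.info(entry)
```

Keeping records as structured JSON rather than free text makes later monitoring, testing and documentation of the logging system considerably easier.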
Human oversight
The policy should describe the human oversight measures, both organisational and technical, that are in place or should be put in place to ensure that AI systems can be effectively overseen, in order to prevent risks to health, safety or fundamental rights. These measures should ensure that AI systems stay within intended operational constraints, and that their outputs can be controlled and overridden if required. They should lead to verifiable outcomes regarding the oversight of AI systems, and the policy should clearly describe the parameters and criteria used to test the effectiveness of the human oversight measures in preventing and minimising risks posed by the AI system.
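As a minimal technical illustration of "outputs can be controlled and overridden", the sketch below gates an AI output behind an explicit human decision. The `approve` callback is a hypothetical stand-in for a real review step (for example, a review queue or approval UI); nothing here is prescribed by the policy itself.

```python
from typing import Callable, Optional

def release_output(ai_output: str,
                   approve: Callable[[str], bool]) -> Optional[str]:
    """Release an AI system's output only with explicit human approval.

    `approve` represents the human reviewer's decision; returning None
    models the reviewer overriding (withholding) the output.
    """
    if approve(ai_output):
        return ai_output  # reviewer accepted: output released
    return None           # reviewer overrode: output withheld

# Usage: a trivial reviewer that rejects outputs containing a flagged term.
released = release_output("draft customer reply",
                          lambda text: "confidential" not in text)
```

The point is structural: the AI system cannot publish on its own, so the override path is always available and testable.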
- Transparency and explainability
Transparency and explainability in AI systems should be promoted by requiring that clear documentation, responsible data practices, and understandable algorithms are provided. It should also ensure that AI-generated content and decisions are clearly identified as such. This approach allows stakeholders, including employees, relevant employees’ representatives and customers, to understand the basis of AI decisions and raise concerns if necessary.
The policy should outline the kinds of information to be provided that will support understanding of the AI system’s workings, characteristics, capabilities, strengths, limitations, and performance. Additionally, it should require organisations to receive information from providers about any potential risks and to understand the system’s outputs. This enables organisations to make informed decisions, oversee the system’s operation, and take corrective action when needed.
- Bias and fairness
The policy should describe the processes, methods, and techniques to reliably measure and report accuracy, adhering to best practices. It should also mandate detailed and explicit information on accuracy levels, performance, and other relevant metrics, ensuring that selected accuracy metrics and thresholds are demonstrably appropriate and effective.
The policy should include:
- verification of input data and output quality
- critical evaluation and assessment of AI system outputs to verify:
- accuracy and quality
- compliance with copyright laws
- absence of biases that could lead to discrimination or unfair treatment (e.g., stereotyping).
Additionally, where necessary, the policy should require background information to contextualise the metrics and enable correct interpretation of results.
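One way to make the bias checks above operational is a simple group-rate comparison such as the demographic parity gap sketched below. The metric, the grouping, and the 0.2 review threshold are all illustrative assumptions; real audits would use metrics chosen for the specific system and context.

```python
from collections import defaultdict

def demographic_parity_gap(outcomes):
    """Largest difference in favourable-outcome rates between groups.

    `outcomes` is a list of (group_label, favourable) pairs, where
    group_label is a protected-attribute category and favourable is a
    bool indicating whether the AI output was favourable for that case.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, favourable in outcomes:
        totals[group] += 1
        if favourable:
            positives[group] += 1
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Usage: flag outputs for human review above an illustrative threshold.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(sample)
needs_review = gap > 0.2
```

A check like this does not prove fairness on its own, but it gives the audit process a measurable, repeatable signal to report against.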
- Robustness
Organisations should implement measures to ensure AI systems maintain performance and stability under varying conditions, such as changes in input data, environmental shifts, and attempts at adversarial interference. The policy should describe the technical and organisational measures that support robustness and address both intrinsic risks (e.g., system limitations and how to correct them) and external risks (e.g., malicious actions, user behaviour, or input data anomalies).
- Cybersecurity
The policy should provide a comprehensive overview of the organisation’s cybersecurity measures, including whether and how products, services, and data are encrypted, the cybersecurity standards implemented, and the best practices followed. This overview can include an evaluation of the AI system’s cybersecurity, highlighting AI-specific vulnerabilities and any unaddressed risks.
Additionally, it should outline the organisational measures taken to ensure cybersecurity and offer advice or best practices for users. This advice can cover topics such as securing personal devices, ensuring only authorised access, and keeping software up to date.
This may be set out in the Corporate AI Policy itself or, where appropriate, in a separate policy referenced by the Corporate AI Policy.
- Incident reporting
The policy should set out how incidents are to be reported, investigated and tracked through the organisation’s established reporting channels. This includes violations of the policy and any potential ethical, legal, or regulatory concerns related to AI use.
- Management and governance of the AI policy
Finally, the policy should describe the organisation’s processes and procedures concerning the policy in relation to:
- Enforcement
- Training
- Exceptions
- Monitoring and compliance
- Reviews and updates to the policy
- Glossary of terms and acronyms
AI has immense potential to revolutionise workplace processes, spanning recruitment, talent management, employee engagement, and decision-making. However, the path to fully harnessing this potential faces challenges, such as ethical considerations, data protection concerns, and the necessity for robust governance frameworks.
Organisations must be proactive in comprehending AI’s capabilities and limitations, safeguarding data privacy, and vigilantly monitoring AI applications to avert unlawful or unethical practices. Developing a clear Corporate AI Policy, informed by the key considerations and best practices discussed here, is crucial for ensuring AI’s ethical, lawful, and efficient use.
Links:
Knowledge Centre Data & Society – Template Instructions for Use in accordance with the AI Act
