AI-Driven Dark Patterns: What Does the Future Hold?

Written by Dr Maria Moloney

The rapid development of generative AI (artificial intelligence) has put a spotlight on the immediate importance of addressing the ethical and privacy concerns associated with these technologies in various fields. Data protection conferences over the past year have consistently highlighted AI’s growing presence in the data protection and privacy arenas and the urgent need for Data Protection Officers (DPOs) to address its associated challenges for their organisations.

These challenges include sensitive personal information leaking into model outputs; inherent bias in generative algorithms; confidently presented but inaccurate output (referred to as AI hallucinations), which often concerns real individuals; and the creation of deepfakes and synthetic content that could manipulate public opinion or pose risks to specific individuals and to the public in general.

Apart from these concerns, generative AI introduces intricate data protection risks for individuals, organisations, and society. Regular reports of leaked chat histories underline the immediate need for strong data protection and security measures during the development and deployment of generative AI technologies. It can be difficult, and sometimes impossible, to determine the origin of content generated by AI models. This makes it extremely challenging to hold anyone accountable for the biases or misinformation these models may output. We have seen time and again the negative consequences of such biases, especially in areas like news generation and social media content. By now, we have all heard of the racist chatbot or the ageist AI hiring tool. What we may not have heard of, however, is the idea of AI-driven dark patterns. These can be harder to spot and, given the right circumstances, potentially more detrimental.

So, what are AI-driven dark patterns? They are deceptive tactics in user interfaces (UIs) that leverage AI to manipulate users into making decisions that benefit the company rather than the user. These patterns exploit user psychology and behaviour in even more sophisticated ways than traditional dark patterns, as the sketch below illustrates.
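
To make the distinction concrete, here is a minimal, hypothetical sketch of how such a pattern might work. Every name in it (NudgeVariant, pickVariant, the prediction function) is invented for this article; it is not code from any real product.

```typescript
// A static dark pattern shows every user the same manipulative copy.
// An AI-driven one scores several variants against a behavioural profile
// and serves whichever is predicted to work best on *this* user.

type NudgeVariant = {
  id: string;
  message: string;       // copy shown to the user
  urgencyLevel: number;  // 0 = neutral, 1 = maximum pressure
};

const variants: NudgeVariant[] = [
  { id: "neutral", message: "Manage your preferences", urgencyLevel: 0.0 },
  { id: "fomo",    message: "Only 2 left! Act now!",   urgencyLevel: 0.8 },
  { id: "guilt",   message: "No thanks, I don't want to save money", urgencyLevel: 0.9 },
];

// predictClickProbability stands in for a learned model of the user.
function pickVariant(
  predictClickProbability: (variantId: string, profile: unknown) => number,
  profile: unknown,
): NudgeVariant {
  return variants.reduce((best, v) =>
    predictClickProbability(v.id, profile) > predictClickProbability(best.id, profile)
      ? v
      : best,
  );
}
```

The point of the sketch is the selection step: the manipulation is no longer a fixed design choice but a per-user prediction, which is precisely what makes these patterns harder to spot and to audit.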

Imagine a world where you receive a video call from your bank manager (in reality a deepfake) informing you about suspicious activity on your account. The AI tailors the call to your specific bank branch, your bank manager’s voice patterns, and even their appearance, making it incredibly believable. This deepfake call could entice you to reveal sensitive information or click on malicious links.

Another frightening example of AI-driven dark patterns is malicious actors using AI to create highly targeted social media profiles that exploit your child’s vulnerabilities. The AI could analyse your child’s online behaviour and fabricate friendships or relationships so believable that the child could be manipulated into revealing personal information, or even their location, to these individuals.

The question, then, is what we can do now to mitigate these ills. How do we prevent these future scenarios, in which cybercriminals, and even ill-intentioned corporations, reach us and our loved ones through the devices we have come to depend on for our daily activities?

Unfortunately, the answer is not a simple one. Mitigating AI-driven dark patterns requires a multi-pronged approach involving users, developers, and regulatory bodies.

The globally recognised privacy principles, encompassing data quality, collection limitation, purpose specification, use limitation, security, transparency, accountability, and individual participation, apply universally to all systems that handle personal data, including the training of algorithms and generative AI. We must now put these principles to the test to see if they can truly protect us from this new, and oftentimes thrilling, technology.

First and foremost, we need to educate users about AI-driven dark patterns and their deceptive tactics. This can be done through public awareness campaigns, educational resources at all levels of the education system, and warnings integrated within user interfaces, especially on social media sites frequented by young people. Cigarette companies are required to highlight the harms of their product; so too should the AI-driven services to which our children are exposed.

We should look at ways to encourage users, especially young and vulnerable users, to be critical consumers of the information they encounter online, particularly when interacting with AI systems. In the twenty-first century, our education systems should teach members of society to question, far more than they do today, the source and purpose of content generated by AI.

We should also give the younger generation, and indeed the older ones too, the options they need to take control of their data and to personalise their interactions with AI systems. This could include settings that allow users, or the parents of young users, to opt out of AI-driven recommendations or data collection, along the lines of the sketch below.
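
As a purely illustrative sketch, such settings might look like the following. The interface and field names are invented for this article, and privacy-protective defaults are assumed rather than mandated by any particular standard.

```typescript
// Hypothetical user controls for interaction with an AI system.
interface AIInteractionSettings {
  aiRecommendationsEnabled: boolean;   // opt in/out of AI-driven suggestions
  behaviouralDataCollection: boolean;  // opt in/out of profiling data collection
  managedByParent: boolean;            // a guardian controls these settings
}

// Privacy-protective defaults: everything off until the user, or a
// parent on behalf of a young user, explicitly opts in.
const defaultSettings: AIInteractionSettings = {
  aiRecommendationsEnabled: false,   // off until explicitly enabled
  behaviouralDataCollection: false,  // off until explicitly enabled
  managedByParent: false,            // set to true for a young user's account
};
```

The design choice worth noting is the default: having to opt in, rather than hunt for a buried opt-out, is what shifts control back to the user.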

Developers should be as transparent as possible about the use of AI in user interfaces and about the data their AI systems collect. This allows trust to grow and helps users better understand how organisations use their data.

Organisations should also prioritise data protection by design during the development phase of their AI systems, including consideration for user well-being and ethics. Regular audits and tests should be conducted to identify potential biases or dark patterns embedded within the algorithms; a simple example of such a check is sketched below.
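
To illustrate what an automated check might look like, here is a minimal, hypothetical sketch of a dark-pattern audit that could run alongside a test suite. The ConsentForm shape and the two rules are invented for illustration, not taken from any standard or real codebase.

```typescript
// Hypothetical audit of a consent dialog for two classic dark patterns:
// pre-ticked boxes and asymmetric effort between accepting and refusing.
interface ConsentForm {
  preTicked: boolean;        // consent boxes selected before the user acts
  acceptStepCount: number;   // clicks needed to accept
  declineStepCount: number;  // clicks needed to refuse
}

function auditConsentForm(form: ConsentForm): string[] {
  const findings: string[] = [];
  if (form.preTicked) {
    findings.push("Consent is pre-ticked: not a freely given opt-in.");
  }
  if (form.declineStepCount > form.acceptStepCount) {
    findings.push("Refusing takes more effort than accepting (obstruction).");
  }
  return findings; // an empty list means this check found no issues
}
```

Checks like this cannot catch a model that personalises its manipulation at run time, but they make the static layer of a UI auditable, which is a necessary starting point.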

Governments and regulatory bodies also play a crucial role in establishing clear guidelines and regulations for the development and use of AI. The first such regulation, the long-awaited EU AI Act, is due to be published this summer by the European Union and codifies many of these data protection and ethical considerations. This is a positive start.

This new AI regulation, along with our existing data protection regulation, the GDPR (General Data Protection Regulation), can be leveraged to ensure responsible data collection and use in AI development across Europe. Hopefully, it will also encourage collaboration between stakeholders, including developers, researchers, and user advocacy groups, to identify and address emerging challenges related to AI-driven dark patterns.
