When you first hear the word “DECEPTICON”, it may make you think of the animated series Transformers, in which robotic lifeforms known as Decepticons can manipulate their form to turn into something different.
Metaphorically speaking, not too far off that idea is a project of the same name, headed up by two departments of the University of Luxembourg, namely the Interdisciplinary Centre for Security, Reliability and Trust (SnT) and the Human-Computer Interaction Research Group, in partnership with LIST.
DECEPTICON stands for “Deceptive Patterns Online” and tackles what are commonly known as dark patterns. These are deceptive, manipulative design elements that can push you to make decisions without being conscious of their consequences.
“Dark patterns can be text formulated in a way to trick you. It can be, for example, on the internet when cookies are requested. When it becomes complicated and annoying to the user, it can be considered a dark pattern, because when it becomes more complicated there’s a very strong possibility that the user will just accept rather than refuse, to make it go away,” explained Philippe Valoggia, the project leader for LIST.
One of the main challenges the project faces is actually being able to identify dark patterns, as they can appear in many shapes and sizes. “I work with everything that is data protection, privacy etc., and from that perspective it puts into question the requirements of everything related to fairness and transparency. The first problem we have with dark patterns is that we need to concretely identify their traits,” Philippe stated. “The idea in the framework of this project is, firstly, to identify that, for example, a phrase is suspiciously or wrongly written, and then to identify the user, because depending on the user it will not have the same impact”.
Indeed, certain dark patterns will have a big impact on young people, while others will affect older generations more. This means that even once you have identified a dark pattern, it is still a challenge to know what impact it would have, and this needs to be tested to confirm whether it is really manipulative or not.
The objectives of the project are in fact four-fold.
So how do you go about detecting dark patterns? With a programme or an app? Philippe outlined why this is extremely difficult to achieve. “At the start we thought it would be good to have a programme for this, and the ideal would be to have an add-on to your browser that would highlight and say, ‘attention, on this page there are elements that are considered dark patterns’, and give you confidence scores,” he began, “but they are all so different, sometimes just in a text. We need an advanced analysis of text, and need technology like NLP (Natural Language Processing), which allows a machine to understand the text. Today, though, there is no solution with the capacity to correctly detect whether there is manipulative intention. So we have to do the evaluation manually”. But there is an added complication to the equation that Philippe highlighted. “Even if I detect an element that is a dark pattern, the site I discover it on could be aimed at people who are sufficiently informed, so in this case it is not manipulative for them”.
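To make the envisioned add-on concrete, a toy version of such a detector might scan a page's text for phrases associated with known dark patterns and combine the matches into a confidence score. This is a minimal sketch, not the project's method: the phrase list, the weights and the scoring rule are all illustrative assumptions, and, as Philippe notes, real manipulative intent cannot be caught by simple matching.

```python
# Hypothetical phrase list with made-up weights; a real detector would
# need NLP models, not keyword matching. Purely illustrative.
SUSPECT_PHRASES = {
    "no thanks, i hate saving money": 0.9,  # "confirmshaming" copy
    "only 2 left": 0.5,                     # false-urgency wording
    "accept all": 0.2,                      # weak signal on its own
}

def dark_pattern_score(page_text: str) -> float:
    """Return a naive confidence score in [0, 1] that the page text
    contains dark-pattern wording, based on phrase matches."""
    text = page_text.lower()
    score = 0.0
    for phrase, weight in SUSPECT_PHRASES.items():
        if phrase in text:
            # Treat each match as independent evidence:
            # combined = 1 - product of (1 - weight)
            score = 1.0 - (1.0 - score) * (1.0 - weight)
    return round(score, 2)
```

A browser add-on along these lines would run such a scorer over the rendered page and flag anything above a threshold; the hard part, as the article explains, is that real dark patterns rarely reduce to fixed phrases.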
When a text is extremely well structured, it can prove very difficult to detect malicious intent, owing to the subtleties of language. Feelings expressed through devices such as irony or sarcasm are very hard for a machine to be programmed to recognise. Philippe gave an example of how sentiment is currently used in practice in programming. “When you have an interaction with a seller online, the first contact you often have is with a bot. Often they work with perception — is the person angry? Happy? — This is done by identifying certain words. If the phrase is turned in one way it shows more anger, so those who are angry are generally treated as a priority over those who are happy”. However, that’s about as far as these bots can go.
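The word-spotting approach Philippe describes can be sketched in a few lines. The word lists below are invented for illustration; production bots use trained sentiment models, but the triage idea — count emotion-laden words and prioritise angrier customers — is the same.

```python
import re

# Illustrative word lists only; real systems learn these from data.
ANGRY_WORDS = {"angry", "furious", "unacceptable", "terrible", "refund"}
HAPPY_WORDS = {"great", "thanks", "love", "perfect", "happy"}

def triage(message: str) -> str:
    """Classify a customer message by counting emotion-laden words,
    so angrier customers can be placed higher in the queue."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    anger = len(words & ANGRY_WORDS)
    joy = len(words & HAPPY_WORDS)
    if anger > joy:
        return "priority"
    if joy > anger:
        return "standard"
    return "neutral"
```

Such a classifier would happily misread a sarcastic “Great, thanks a lot” as a happy customer, which is exactly the limitation the article points to.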
The goal of the project is to be able to detect the traits of dark patterns. “We have a lot of examples but, in the end, they simply function as examples. The aim is to be able to distinguish dark patterns by their characteristics, not just by examples. This is something that has not been done before, so it is a real aim of this project,” clarified Philippe.
The project also wants to be able to show that one type of dark pattern is likely to have a manipulatory effect on a specific category of person. Once this type of knowledge exists, the next objective is to integrate this knowledge in practice.
Although the University and LIST work together on the project, they cover different aspects of dark pattern analysis. The University aims to measure the effects of dark patterns on people and is very much focused on the academic side. “At LIST we deal more with how that knowledge can be applied by different stakeholders, such as data controllers, privacy engineers and supervisory bodies, and see how aware a data subject is when faced with different dark patterns,” said Philippe.
LIST joined the DECEPTICON project in June, “so we are really at the start, having entered the project this year, and we are present because we see it as important that we are associated in the discussion and in what we can extract from dark patterns”.
Highlighting the complexity of the project, Philippe concluded, “dark patterns are not just something mechanical, not just a process, not just visualisations — you also have text, and honestly, with artificial intelligence today there are very few solutions with the finesse of analysis to say whether something is manipulative or not. Maybe in a few years we will have something that performs better, but today it is not here”.