Do you ever wonder what Google knows about you? Maybe you've visited Google's Ad Settings page to see which interests it infers from your search behavior, or checked Facebook to see what information the platform has collected about you. I once did this rather by accident, and I was a bit shocked, even though it shouldn't really have come as a surprise. Still, I use Facebook at least occasionally, and Instagram far more often, and I don't feel entirely comfortable with it. Add up all the location data, search queries, usage data, and purchasing behavior that Google, Meta, and the like have on me by now, and you could probably reconstruct most of my life.
It's an uncomfortable thought, especially once you realize that even seemingly innocuous data can enable serious identity theft: a name and a stolen photo are already enough to create a fake Instagram profile that, for example, sends spam messages in your name.
Like me, many people claim that their privacy is important to them. Yet in most cases, no corresponding behavior, no proactive effort to protect personal data, can actually be observed.
The phenomenon of carelessly handling one's own data while at the same time worrying about the misuse of the information disclosed is known as the Privacy Paradox.
Why do we worry - and at the same time behave in ways that make it more likely that our fears will come to pass?
Intentions and probabilities of success
According to the theory of planned behavior, several factors precede an action. The intention to act in a certain way (e.g., 'I protect my personal data from unauthorized access.') is only one of them. The subjectively perceived social pressure to behave in a certain way, such as the expectation to have a profile on a particular platform, also plays an important role.
Finally, the perceived probability of success of an action also significantly determines how motivated we are to perform it. The probability of success is divided into the expected effectiveness of a measure and the perceived probability of being able to implement this measure effectively ourselves. A data protection measure could, for example, be to revoke unnecessary permissions from apps. However, anyone who assumes that this is only cosmetic and cannot really prevent the corresponding accesses will assess the general effectiveness of this step as correspondingly low. It is also possible that someone does not even know how to navigate through the settings menu to manage app permissions. In that case, they lack the conviction to be able to perform the measure themselves. Of course, these two things can also coincide, in the sense of 'Don't know how, but it doesn't matter because it won't do any good anyway.'
Here, a tension can quickly arise between good data protection intentions and perceived self-efficacy. In the conflict with the behavioral intention ('protect data'), the presumed general and/or individual probability of success (low) gains the upper hand, and one finally gives in to the perceived pressure ('log on to platform X').
How we all perceive the world a little differently (and most likely incorrectly) - and make decisions based on that.
Cognitive distortions (also called biases) are systematic errors in information processing. They affect how we perceive our environment, how we remember, think, and make decisions. Characteristically, virtually all people are subject to them to some degree. One can become aware of their existence and try to observe how they play out, but because they operate subconsciously, counteracting their influence on cognition is difficult, though possible.
Optimism bias leads people to tend to assume that negative consequences are less likely for them than for others. It is thus a kind of 'it won't happen to me' mentality.
Another cognitive bias considered relevant is called 'hyperbolic discounting.' This refers to the observation that advantages and disadvantages that lie in the future are perceived as significantly less drastic than those that occur immediately. Thus, if one receives an immediate (e.g., financial) benefit from sharing data, that naturally seems significantly more relevant in the moment than the possibility of becoming a victim of a potential data leak in the distant future.
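The trade-off described above is often formalized with the hyperbolic discounting formula V = A / (1 + kD), where A is the value of an outcome, D its delay, and k an individual discount rate. A minimal sketch of that arithmetic, with the amounts, delay, and discount rate k all invented purely for illustration:

```python
def hyperbolic_value(amount, delay_days, k=0.05):
    """Perceived present value of a delayed outcome under hyperbolic
    discounting: V = A / (1 + k * D). The discount rate k here is an
    illustrative assumption, not an empirically measured value."""
    return amount / (1 + k * delay_days)

# A hypothetical immediate 5-euro discount for sharing personal data...
immediate_benefit = hyperbolic_value(5, delay_days=0)    # 5.0

# ...versus a possible 100-euro loss from a data leak that might
# occur, if at all, perhaps two years (730 days) from now:
future_harm = hyperbolic_value(100, delay_days=730)      # ~2.67

print(immediate_benefit > future_harm)  # True
```

Even though the potential loss is twenty times larger, its discounted present value falls below that of the small immediate reward, which is exactly the "now beats later" pattern the bias describes.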
Our human self-image is that of rational, reason-driven beings. But our rationality is bounded by psychological constraints: because our cognitive resources are limited, we cannot possibly weigh all relevant information when making decisions, and even knowledge we do have may not be accessible at the right moment. For this reason, people often rely on simple decision rules (heuristics) in everyday life, using the available resources economically and saving as much brain power as possible for the really important decisions.
The availability heuristic, for example, means that events are perceived as more likely the easier it is to remember or think of an example. So if someone in your circle of friends has already experienced a data protection incident, or if you have a particularly strong imagination, you consider the possibility of getting into such a situation yourself to be much more likely.
According to the affect heuristic, people base their decisions strongly on how they feel about something. As a result, people underestimate risks when they are associated with things they like. Conversely, people tend to overestimate risks when they are associated with things they don't like.
According to the feelings-as-information theory, these feelings need not even relate to the decision at hand: people generally assume that their emotions refer to whatever currently holds their attention. A positive overall mood can thus be mistakenly transferred to a specific party that is asking for one's data.
Ultimately, this means that gut feelings largely determine the extent to which people feel it is safe to pass on information. Positive moods and feelings increase trust and belief in greater data protection, while negative ones have the opposite effect.