The Privacy Paradox

Why we worry about privacy but choose not to care

Do you ever wonder what Google knows about you? Maybe you've visited Google's Ad Settings page to find out what interests it has inferred from your search behavior. Or looked at Facebook to see what information the platform has collected about you. I once did this rather by accident, and was a bit shocked. And that's despite the fact that it shouldn't really have been a surprise. Nevertheless, I still use Facebook at least occasionally, and Instagram much more often. I don't feel entirely comfortable with it. If you add up all the information about location, search queries, usage data and purchasing behavior that Google, Meta and the like have on me by now, you could probably reconstruct most of my life.

It's an uncomfortable thought when you realize that even seemingly innocuous data can enable serious cases of identity theft: a name and a stolen picture are enough, for example, to create a fake Instagram profile that sends spam messages.

Like me, many people claim that their privacy is important to them. Yet in most cases, no corresponding behavior to proactively protect personal data can be observed.

The phenomenon of carelessly handling one's own data while at the same time worrying about the misuse of the information disclosed is known as the Privacy Paradox.

Why do we worry - and at the same time behave in ways that make it more likely that our fears will come to pass?


Intentions and probabilities of success


According to the theory of planned behavior, several factors precede an action. The intention to act in a certain way (e.g., 'I protect my personal data from unauthorized access') is only one of them. The subjectively perceived pressure to behave in a certain way, such as to have a profile on a certain platform, also plays an important role.

Finally, the perceived probability of success of an action also significantly determines how motivated we are to perform it. This probability of success breaks down into the expected effectiveness of a measure and the perceived likelihood of being able to implement that measure effectively oneself. A data protection measure could, for example, be to revoke unnecessary permissions from apps. Anyone who assumes that this is merely cosmetic and cannot really prevent the corresponding access will rate the general effectiveness of this step as correspondingly low. It is also possible that someone simply does not know how to navigate the settings menu to manage app permissions; in that case, they lack the conviction that they can carry out the measure themselves. Of course, the two can also coincide, in the sense of 'I don't know how, but it doesn't matter, because it wouldn't do any good anyway.'

A tension can quickly arise here between good data protection intentions and perceived self-efficacy. In the conflict between the behavioral intention ('protect my data') and the presumed general and/or individual probability of success (low), the latter gains the upper hand, and one finally gives in to the perceived pressure (sign up to platform X).


How we all perceive the world a little differently (and most likely incorrectly) - and make decisions based on that


Cognitive distortions (also called biases) are systematic errors in information processing. They affect how we perceive our environment, how we remember, think and make decisions. Characteristically, virtually all people are subject to them to some degree. One can become aware of their existence and try to observe how they play out; but since they operate subconsciously, it is difficult, though possible, to counteract their influence.

Optimism bias leads people to tend to assume that negative consequences are less likely for them than for others. It is thus a kind of 'it won't happen to me' mentality.

Another cognitive bias considered relevant is called 'hyperbolic discounting.' This refers to the observation that advantages and disadvantages that lie in the future are perceived as significantly less drastic than those that occur immediately. Thus, if one receives an immediate (e.g., financial) benefit from sharing data, that naturally seems significantly more relevant in the moment than the possibility of becoming a victim of a potential data leak in the distant future.
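Hyperbolic discounting is often illustrated with the simple model V = A / (1 + kD), where A is the value of an outcome, D is its delay, and k is an individual discount rate. A minimal sketch, with purely illustrative numbers:

```python
def discounted_value(amount: float, delay_days: float, k: float = 0.05) -> float:
    """Subjective present value under hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1 + k * delay_days)

# A small voucher received right now keeps its full subjective value...
instant_benefit = discounted_value(5.0, delay_days=0)      # 5.0
# ...while a much larger potential loss a year away feels almost negligible.
future_leak_cost = discounted_value(50.0, delay_days=365)  # ~2.6
```

Even though the objective loss is ten times larger than the reward, the immediate benefit subjectively outweighs it, which is exactly the asymmetry described above.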

Our human self-image is that of rational, reason-driven beings. But our rationality is bound by psychological constraints. Because our cognitive resources are limited, we cannot possibly consider all relevant information when making decisions. Even if we have relevant knowledge, we may not be able to access it at the right moment. For this reason, people often use simple decision rules (heuristics) in everyday life to use the available resources economically and to leave as much brain power as possible for the really important decisions.

The availability heuristic, for example, means that events are perceived as more likely the easier it is to remember or think of an example. So if someone in your circle of friends has already experienced a data protection incident, or if you have a particularly strong imagination, you consider the possibility of getting into such a situation yourself to be much more likely.

According to the affect heuristic, people base their decisions strongly on how they feel about something. As a result, people underestimate risks when they are associated with things they like. Conversely, people tend to overestimate risks when they are associated with things they don't like.

According to the feelings-as-information theory, these feelings do not even have to relate to the decision at hand: people generally assume that their emotions refer to whatever is currently the focus of their attention. A positive underlying mood can thus be erroneously transferred to a specific counterpart who is asking for data.

Ultimately, this means that gut feelings largely determine the extent to which people feel it is safe to pass on information. Positive moods and feelings increase trust and the belief that one's data is well protected, while negative ones have the opposite effect.



Trust and risk


How willingly we hand over our data depends to a large extent on how much we trust the other party. Is the online store a well-established company or does the whole thing seem rather dubious?

Not only rational considerations play a role here. Even a website's design can increase the trust placed in its operator. As a result, providers can be credited with more competence in the responsible handling of personal data, sometimes without justification. General distrust of online services can be tempered by increased trust in the specific counterpart that wants to collect data. For example, trust in Apple on privacy issues may be higher than in Android devices because of the latter's link to Google. Conversely, Android users in particular, unsurprisingly, had more trust in Android devices because they felt Google was more transparent about privacy issues.

This example shows how general trust in a medium interacts with the attitude toward the specific counterpart. And it also makes clear that people find reasons to trust (more) what they are already used to.

This can go so far that pre-existing attitudes can be completely overwritten due to certain situational effects.


It is assumed that users try to strike a compromise between the possible risks and the potential benefits associated with an action. They thus treat their personal information as a currency that can be exchanged for certain benefits such as personalization, entertainment or financial advantages.

Often, this trade-off ends with the benefits of disclosing data being weighted higher than the potential harm of compromised privacy, especially if the other party is particularly trusted. Or the other way around: the perceived risk is so low that there is no motivation to avoid it. Here again, the perceived probability of success of protective measures plays a role. If these seem generally ineffective, or effective only at a disproportionately high cost, it seems 'not worth trying at all.' It should also not be forgotten that in quite a few cases the use of a service is possible at all only by providing certain (sensitive) data. Accordingly, the decision is often at most between consenting to data collection and not using the service at all.
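This trade-off is sometimes called the 'privacy calculus'. It can be sketched as a toy decision rule; all names, weights and the trust adjustment below are illustrative assumptions, not an established model:

```python
def discloses(benefit: float, harm: float, risk_probability: float, trust: float) -> bool:
    """Toy privacy calculus: disclose data if the perceived benefit
    outweighs the expected harm, discounted by trust in the counterpart."""
    expected_harm = harm * risk_probability * (1.0 - trust)
    return benefit > expected_harm

# The same small benefit tips the scales once trust is high enough.
print(discloses(benefit=2.0, harm=100.0, risk_probability=0.1, trust=0.9))  # True
print(discloses(benefit=2.0, harm=100.0, risk_probability=0.1, trust=0.2))  # False
```

The point of the sketch is only that trust acts as a multiplier on perceived risk: with a trusted counterpart, even a large potential harm barely registers in the decision.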


Privacy cynicism


It is often assumed that users are simply too poorly informed about data protection issues, and that as a result they underestimate the risks of careless data sharing and take no steps to protect themselves online. The supposedly simple solution would then be better education about where data is collected and how it is used. There is even the assumption that simply informing users that they own their information leads to better risk perception and thus to safer decisions.

More recently, an observation known as privacy cynicism has contradicted this assumption. According to it, people are well aware of privacy risks in the digital sphere; users are neither clueless nor indifferent. Permanent surveillance merely seems so inevitable to them that they see no point in taking countermeasures. Resigning oneself to a situation over which one seems to have no control then appears to be the best solution. The feeling of being at the mercy of large tech companies is followed by resignation and, ultimately, apathy (Hoffmann, Lutz & Ranzini, 2016).


What now?


Anyone who uses the Internet leaves their data there. The safest strategy from a data protection perspective would therefore probably be to avoid the digital sphere altogether. But that's not really practical today, of course. Without a mobile device and an e-mail address, it's virtually impossible to participate in work or organize one's private life.

Fortunately, the options for moving around the Internet are not limited to divulging all data or simply not using the Internet at all. Various measures are relatively easy to implement. Although none of them can guarantee absolute security, they can, especially in combination, help to protect one's privacy.


Install updates regularly

Simple, but often neglected: regular updates of operating systems. In the private sphere in particular, these are often postponed because the device is supposed to be available all the time. At the latest, however, when the device is also used for work, as in 'Bring Your Own Device' (BYOD) setups, updates become mandatory, since they close known security gaps.


E-mail addresses for different purposes

It's also worth creating different email addresses for different purposes. These should include one that can function as a 'throwaway address', i.e. in particular one that does not allow any conclusions to be drawn about the user's real name.

A separate password should be used for each of these e-mail addresses (and for every other online service) so that a single data leak does not allow unauthorized persons to easily gain access to several or, in the worst case, all of your accounts. All passwords should be as abstract as possible and not guessable. The name of a beloved pet is usually not a secure password; at most it might be if it is very unusual and you keep it secret. Likewise, passwords that refer to the platform on which they are used to log in (e.g. Instagram123) are not secure, even in 2022. If you want to be on the even safer side, you can change your passwords regularly.
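A random, per-service password of the kind recommended above can be generated with a few lines using Python's standard `secrets` module; the length and alphabet here are just a reasonable default, not a prescription:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits and punctuation,
    using the cryptographically secure 'secrets' module."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 16-character password on every call
```

In practice, a password manager does the same job and also remembers the result for you, which is what makes one password per service feasible at all.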

Even if it's old news by now, it still pays to exercise caution with emails from unknown senders. If in doubt, avoid opening attachments and clicking on links. It is still true that banks, for example, will never ask for sensitive account data by e-mail. Spam is best ignored; replies such as 'No thanks' or 'Not interested' merely confirm that the e-mail address exists and is being used. It also makes sense to check regularly whether your e-mail address was part of a data leak. This can be done, for example, at
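For passwords specifically, there is a well-documented way to check for leaks without revealing the secret: the Pwned Passwords range API uses k-anonymity, meaning only the first five characters of the password's SHA-1 hash are ever sent. A minimal sketch of the local part (the network call itself is only indicated in a comment):

```python
import hashlib

def pwned_range_parts(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest of a password into the
    5-character prefix sent to the API and the suffix compared locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = pwned_range_parts("Instagram123")
# A client would now fetch https://api.pwnedpasswords.com/range/<prefix>
# and search the response lines for <suffix>; the full password hash
# never leaves the machine.
```

The design choice matters: because the server only ever sees a 5-character hash prefix shared by many passwords, even the leak-checking service cannot learn which password you checked.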


Put a stop to data collection

The above measures are primarily aimed at preventing unauthorized access to personal data by hackers. They cannot, however, stop the collection and analysis of data by service providers such as Google or Facebook. To counter this as well, a VPN can be used. A VPN routes your traffic through an encrypted tunnel and masks your IP address, so that third parties cannot see what you are doing on the Internet. VPNs are particularly worthwhile for anyone who regularly uses public networks, for example in a café or on the train, but they are also suitable for surfing at home if you want to remain anonymous. It should be noted, however, that while a VPN can provide increased security and greater anonymity, it is not free of risks either: data leaks can never be ruled out with absolute certainty even on VPN servers, and the VPN provider now has access to your usage history instead of Google.

If a VPN is not an option for you, the most important measure is to revoke permissions wherever possible. Google, Facebook and Co. also collect data that their apps do not need in order to function. It is therefore always worth asking: does this app really need this particular permission? If I don't want to send photos via Facebook Messenger, for example, it doesn't need access to my gallery either.


Taking a close look at cookie boxes

Everybody knows them, nobody likes them: cookie boxes. Unfortunately, most cookie boxes make it seem as if nothing can be personalized or as if the default setting is already the one that protects data in the best possible way. This is not the case.

If possible, always try to disable all cookies that are not necessary for the site to function. This way you will also protect yourself from unwelcome advertising.




Despite its contradictory appearance, the Privacy Paradox can be explained logically by various approaches. It can be assumed that no single one of the aspects mentioned is decisive, but that all of them play a certain role. While most could be addressed through education, privacy resignation in particular is likely to be harder to combat. At the moment, users often face the choice of using an online service only by revealing their personal data, or not at all. Those who want to participate in online life thus find it difficult to defend themselves against data collection. Stricter legal regulation would be needed here; it could, for example, require that the purposes for which data is used be explained more clearly, or enable users to exclude the collection of certain data and/or certain purposes of use in advance. Unfortunately, such solutions are not currently in sight.

In this respect, the apathy that often accompanies the topic of data protection is understandable. However, complete capitulation to the circumstances is not appropriate. Even if it is a shame that it is the only option, self-protection is possible and important.


You want to protect the data of your customers and employees? Learn how we can help you.