
How Facebook uses the ‘privacy paradox’ to keep users sharing

Privacy on Facebook: how much sharing is too much? Elijah Hiett/Unsplash

A group of US privacy, civil liberties and human-rights organisations have launched a website campaign demanding that tech companies improve their protection of users’ privacy. Twenty companies have been listed on the site and asked to pledge that they will:

  • Ensure users have access to and control over their data

  • Protect users’ data

  • Limit the data that is collected

  • Ensure that all communities receive equal protections

  • Resist improper government access and support pro-privacy laws

The companies listed include technology leaders such as Facebook, Google, Apple, Microsoft and Amazon, and so far, not one has taken the pledge.

The move by privacy groups to get tech companies to take a stronger stance on privacy comes in the wake of Facebook’s Cambridge Analytica data breach scandal, where the personal details of 87 million people were siphoned without their knowledge. Cambridge Analytica allegedly used that information to target ads and social-media posts as part of the 2016 US presidential election.

Few users are acting on Facebook privacy concerns

The data breach, and the general anger directed at Facebook’s poor response, resulted in a Twitter campaign, #DeleteFacebook, calling on users to delete their Facebook accounts. On April 10 and 11, Facebook CEO Mark Zuckerberg was called to testify before Congress, and his repeated apologies did little to calm the storm. Despite the scandal and social-media campaign, however, it seems that relatively few Facebook users have actually deleted their accounts.

In a survey conducted by financial firm Raymond James, only 8% of respondents claimed they would stop using Facebook as a result of the data breach. A much greater 48% said they would not change their usage at all. Even those who said they would use Facebook less were switching to Instagram, which is also owned by Facebook.

The privacy paradox

Given how little care Facebook and other technology companies have shown for their users’ privacy, it is worth asking why users, despite their stated concerns, are so unwilling to take concrete steps to protect themselves.

This is actually a well-known phenomenon and is referred to as the “privacy paradox”. It was first described by HP employee Barry Brown in 2001. In an early study of participants’ online shopping behaviours, Brown noted that although people expressed concern for their privacy, they were still happy to use loyalty cards that tracked their behaviour. More recently, a survey by the Pew Research Center found that 51% of respondents did not think it acceptable that social media platforms use personal information to target ads at them in exchange for free access.

The reasons for this paradox are multifaceted. In part, it is because the concern people express for their privacy is an abstract feeling that many find hard to articulate or think about in specific terms. As a consequence, we’re often not very good at putting an absolute value on our privacy, nor are we good at evaluating in real terms the potential harms that could follow if that privacy is violated.

Researchers have shown that when asked to put a monetary value on private information, people valued sensitive information at 25 euros and browsing information at only 10 euros. Associating such a low value with personal information means that free services like those provided by Google or Facebook are seen as even more worthwhile, given the low perceived costs and high perceived benefits.

Even when users do care about their privacy, they often lack the expertise needed to protect it, and they rarely understand the potential consequences of its being violated. Even when users do know about Facebook’s privacy settings, research has shown that their sharing behaviour is unrelated to their privacy concerns or their knowledge of those settings.

People are bad at working out privacy risk

Whether or not we take action to protect our privacy depends in large part on the calculation we make, even subconsciously, of the risks versus the benefits. The problem with this equation is that people are particularly bad at assessing risks, especially when it comes to privacy.

One model of how people evaluate risk suggests that most of the time, they do so on the basis of intuitive reactions to danger that are heavily influenced by emotions. This way of judging risk contrasts with the slower, more analytical approach that uses logic, reason and evidence. The problem with judging risk through a rapid intuitive process is that it has many flaws and is subject to cognitive biases. The important thing to stress here is that this remains true even when these flaws and biases are explained to people.

In calculating privacy risk, most people will downplay the likelihood of the threats they face as well as their potential impact. When users see a platform like Facebook in a positive way, they estimate the benefits as high and the risks low. The converse is also true: if a technology is perceived as negative, the benefits are seen as being lower and the risks greater.

What this means is that as long as users of Facebook and Google believe that they’re beneficial, the risks and loss of privacy will be seen as simply the cost of getting free access.

Mark Zuckerberg has created a platform that encourages users to share their personal information, which Facebook exploits for profit. Maurizio Pesce/Flickr, CC BY

Facebook markets privacy to keep people sharing

As part of Facebook’s preparations for the new European privacy regulation known as the GDPR, the company is planning to implement an improved system for users to control their privacy settings. This will be available to all users, not just those within Europe. While this will help, Facebook will not be encouraging users to change how much they share – it is this indiscriminate sharing that provides the detailed personal information driving Facebook’s ability to target ads. The campaign to keep talking about privacy, and about how much Facebook cares, is really a marketing strategy to get past this latest crisis.

While it is possible to get people to think about their privacy and to give them tools to protect it, Facebook’s advertising business model will always drive the need to exploit users’ personal data. Mark Zuckerberg has talked up the fact that Facebook needs to be “free” to users and implied that personal privacy is a low cost to pay in exchange for the platform’s use.

This article was originally published in French.
