So, Facebook has been in the dock after publishing details of a supposedly sinister experiment it oversaw several years ago. It involved monitoring the moods of around 700,000 users based on their posts. The research also established that it was possible to affect those moods by posting positive or negative content in the users’ news feeds.
The reaction has been highly negative, with many people raising concerns about the implications for privacy online. Whether or not you think those critics are right, they could probably do with an update on what has been happening online over the past few years.
We are entering an era where data is king, where our every move, every emotion and every contact can be tracked. With the increasing analysis of social media activity, there is very little we do that can be hidden from the organisations we interact with online.
So long as an online company can drop a cookie onto our machine, it can track our behaviour online. That now includes logging how we react to advertising material and, especially, what makes us click on content. Increasingly, these companies are learning our behaviour as a result.
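As a rough illustration of how such a tracking cookie works, here is a sketch using Python's standard library. The cookie name and lifetime are invented for the example; real ad networks do not publish their implementations.

```python
# A minimal sketch of cookie-based tracking, using Python's standard
# library. The "visitor_id" name and one-year lifetime are invented.
import uuid
from http.cookies import SimpleCookie

def set_tracking_cookie():
    """Build the Set-Cookie header a server might send on a first visit."""
    cookie = SimpleCookie()
    cookie["visitor_id"] = uuid.uuid4().hex  # random ID naming this browser
    cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # persist a year
    return cookie.output(header="Set-Cookie:")

def read_tracking_cookie(cookie_header):
    """Parse the Cookie header the browser sends back on later visits."""
    cookie = SimpleCookie(cookie_header)
    return cookie["visitor_id"].value if "visitor_id" in cookie else None

# On every later request the browser returns the same ID, letting the
# server stitch those requests into one behavioural profile.
print(read_tracking_cookie("visitor_id=abc123"))  # -> abc123
```

The essential point is that the identifier persists across visits, so individually innocuous page views accumulate into a profile.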
The need to gain permission, of the kind research teams must obtain when they involve human participants, has been slowly eroding online. Tracking is coming to be seen as a natural extension of existing practices, in which advertising content is targeted at particular groups.
Users often freely offer their data to the internet, to be used in ways they would frequently never expect. For example, a tweet about a local event will time-stamp where a person was at a given moment. It may reveal information about their movements and perhaps even who they had contact with along the way.
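To see how much a single post can reveal, here is a small Python sketch of the kind of metadata such a message can carry. The payload below is invented for illustration; the field names only mirror the sort of data public APIs expose, and are not a real API response.

```python
# Extracting time and place from a hypothetical tweet payload.
# The JSON below is invented; it is not a real API response.
import json
from datetime import datetime

raw = '''{
  "text": "Great turnout at the village fete!",
  "created_at": "2014-07-01T14:30:00+00:00",
  "coordinates": {"lat": 51.752, "lon": -1.2577}
}'''

tweet = json.loads(raw)
when = datetime.fromisoformat(tweet["created_at"])
where = (tweet["coordinates"]["lat"], tweet["coordinates"]["lon"])

# One innocuous post yields a precise time-and-place record.
print(when.strftime("%d %b %Y %H:%M"), where)
```

String together a few such records and you have a movement trail, without the author ever intending to publish one.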
Mining for sense and emotion
One vital ingredient in following users’ data is for advertising agencies to understand the emotions that lie behind messages posted online. Advertisers have always sought to mine the emotions of large populations as they respond to adverts; this is arguably just an extension of that practice.
Studies that may previously have taken months or even years can now be done within minutes – witness the tweet monitoring of World Cup fans for a piece on this site last week, for example.
Making sense of people’s emotions on social media brings new challenges. It might be fairly easy to make sense of a tweet that says:
“I am so happy that the sun is shining today :)”
But just by placing a different emoticon on it, you can change the sentiment:
“I am so happy that the sun is shining today ;)”
And then you can change it completely if you add the dreaded exclamation mark:
“I am so happy that the sun is shining today!”
Such a subtle change can imply sarcasm, giving the impression that the writer is in fact very unhappy about the weather.
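A toy sentiment scorer makes the point: the same words score very differently once the trailing symbol is taken into account. The word list and the sarcasm rule below are deliberately crude inventions for illustration, not how any real sentiment engine works.

```python
# A toy rule-based sentiment scorer showing how a single trailing
# emoticon or punctuation mark can flip the reading of identical text.
# The word list and weights are invented for illustration.
POSITIVE_WORDS = {"happy", "sunny", "good", "great"}

def score(message):
    # Strip any trailing emoticon characters before splitting into words.
    words = message.lower().rstrip("!:;)(").split()
    base = sum(1 for w in words if w.strip(".,!") in POSITIVE_WORDS)
    if message.endswith(":)"):
        return base + 1          # smiley reinforces the positive words
    if message.endswith(";)") or message.endswith("!"):
        return -base             # wink or exclamation read as sarcasm
    return base

print(score("I am so happy that the sun is shining today :)"))  # -> 2
print(score("I am so happy that the sun is shining today!"))    # -> -1
```

Real systems use trained classifiers rather than hand-written rules, but they face exactly this problem: sarcasm hinges on tiny surface cues.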
Part of a great experiment
Facebook’s January 2012 experiment took this sort of emotion research a step further when its data scientist Adam Kramer spent two weeks following 689,003 users to see whether emotions were contagious within social networks.
It came to the almost obvious conclusion that users feel generally happy when they are fed good news (“the economy is looking good and the weather is nice”) and depressed when they get bad news (“a bomb has gone off injuring many people and it looks as if snow is on the way”).
Users generally don’t seem to mind having their data mined, to the extent that they are aware of it. Facebook and many other internet companies, notably Google and Amazon, do it extensively as they try to make sense of people’s behaviour (although monitoring the emotional content of posts is still a new frontier and does not yet appear to be a major focus). Data mining is usually part of the trade-off we agree to in exchange for something we want, such as free email or the chance to distribute our messages across a social network.
Data mining is also an established part of how affiliate networks operate. These exist to help websites sell advertising space by saving them the bother of doing it themselves. Instead, the website outsources its advertising space to a network, which analyses the user, tries to push them advertising content from one of the network’s affiliates, then monitors the response using analytics such as dwell time, click-through rate and actual purchases.
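As a rough sketch of those analytics, the network’s side of the bookkeeping might look something like this. The field names and figures are invented for the example; real networks track far more.

```python
# A sketch of the response metrics an ad network might compute per
# campaign. The records and field names below are invented.
impressions = [
    {"user": "u1", "clicked": True,  "dwell_seconds": 45, "purchased": True},
    {"user": "u2", "clicked": True,  "dwell_seconds": 3,  "purchased": False},
    {"user": "u3", "clicked": False, "dwell_seconds": 0,  "purchased": False},
]

clicks = sum(1 for i in impressions if i["clicked"])
click_through_rate = clicks / len(impressions)          # clicks per impression
conversion_rate = sum(1 for i in impressions if i["purchased"]) / clicks
avg_dwell = sum(i["dwell_seconds"] for i in impressions if i["clicked"]) / clicks

print(f"CTR {click_through_rate:.0%}, conversion {conversion_rate:.0%}, "
      f"avg dwell {avg_dwell:.0f}s")
```

From numbers like these, the network decides which affiliate’s content each profile of user should see next.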
The only difference in the case of the Facebook experiment was that none of the users had any idea they were guinea pigs. Even so, it is quite a subtle distinction: in many cases the general public are simply not aware of the extent to which their behaviour is monitored when they go online.
Like it or not, we are all part of an ongoing experiment which is mining our data on a continual basis. It is pushing customised content our way and monitoring how we use it. There is generally no need for informed consent for this type of so-called push advertising. Should that change? If so, and if it were even possible to do so at this stage, you’d be aiming at a much wider target than just this one Facebook experiment.