Aeroplan, a popular Canadian loyalty program, found itself in hot water recently after launching a survey that asked participants about contentious issues, such as whether immigration “threatens the purity of the country” or whether men have a “natural superiority over women.”
CROP, the marketing research company that designed the survey for Aeroplan, defended the practice, stating that learning about consumers’ values can help companies predict their preferences and choices.
The loyalty program apologized and announced it was deleting all the data collected through the survey. But the move is not likely to placate consumers who are already angry with the company.
As a marketing professor who teaches consumer behaviour and researches loyalty programs, I believe consumers perceived the survey as problematic on several distinct levels.
First, the scope of the questions seemed out of line with the mission of a travel rewards program.
Second, the questions were offensive to many respondents because they appeared to normalize intolerant views.
And third, there is the concern that simply asking such charged questions can change respondents’ attitudes and render them more xenophobic, prejudiced and sexist.
All information is precious
To many, these questions seemed completely irrelevant to Aeroplan’s mission and business model.
As a coalition reward program, Aeroplan sells its miles to a very diverse set of partners like Costco, Esso and TD. The partners hand out the points to their customers as a token of appreciation for their business and as an incentive to remain loyal.
Finally, the customers swap the miles they have accumulated for rewards offered by Aeroplan, like vacation packages or merchandise, thus closing the loop.
Consumer research has a long tradition of using rich, varied information that’s not limited to dry demographic data to understand customers and their needs and preferences.
Back in the 1960s, researchers would use information ranging from the number of airplane trips one took to whether customers used hemorrhoid remedies to build consumer profiles for two competing beer brands.
Some of the apparently irrelevant characteristics turned out to distinguish between consumers with different types of lifestyles — somebody who spends a lot of time outdoors, for example, versus somebody who travels abroad frequently.
Furthermore, a consumer’s lifestyle was correlated with their propensity to drink one of the two beers — hence the relevance of the questions that seemed to have nothing at all to do with the product.
Different lifestyles, values, attitudes, interests and preferences tend to be clustered together — for example, Swiss consumers who are price-insensitive to wine tend to be older, well-informed, live in a single household and are more likely to prefer both Italian and German wines than other consumers.
The fact that different lifestyles are associated with clusters of attitudes and consumption patterns means that seemingly irrelevant details about a person can be diagnostic of their preferences in a wide variety of contexts. No surprise, then, that Cambridge Analytica was so keen on gathering apparently benign and trivial information, such as one’s Facebook likes.
In the era of machine learning, marketing researchers need not only more data, but more diverse, comprehensive data that contain new, non-redundant information. The contentious questions in CROP’s survey were designed precisely to tap into a fresh pool of insights.
No question is neutral
The contentious questions were framed in the affirmative, asking respondents about the extent to which they agreed with the statements. They could have just as easily been presented in the negative — for example, “immigration does not threaten the purity of the country.”
It may seem a superficial difference, but research shows that people are prone to acquiescence bias. That is, they tend to agree, rather than disagree, with statements presented in surveys. So the framing of the question will impact the way people respond to it.
Furthermore, respondents expect that “agree” is the correct answer, the one the questionnaire’s designers anticipated. Applying this logic to Aeroplan’s case, it appears the company is implicitly backing the statements and thus legitimizing them.
We also know that measurement tools like surveys or interviews that are commonly used in social sciences can interfere with and change the attitudes that we are trying to measure. The concern here is that by floating these statements, the company may actually contribute to the creation and proliferation of bigoted and narrow viewpoints.
Humans like consistency
For example, if someone has no clear attitudes towards immigration, they might be more likely to agree with the statement that immigration threatens “the purity of the country” due to the way the question was framed and to acquiescence bias.
Since humans also like to maintain consistency across time, the tiny act of having answered this question in the affirmative may lead them to perceive themselves as the type of person who disagrees with immigration-friendly policies.
Of course, the chain of events I just described is probabilistic rather than deterministic, and so subject to chance. But applied to a large number of respondents, the effects may still be material.
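To make that aggregate effect concrete, here is a minimal simulation sketch. All of the numbers in it are made-up illustrative assumptions, not figures from the Aeroplan survey: a fraction of respondents with no clear prior attitude, a small chance each one agrees because of acquiescence bias, and a further chance that agreeing once shifts their later self-perception.

```python
import random

random.seed(42)

# Hypothetical parameters -- illustrative assumptions, not survey data.
N_RESPONDENTS = 100_000       # people who answered the contentious question
P_NO_PRIOR_ATTITUDE = 0.30    # fraction with no clear prior attitude
P_ACQUIESCE = 0.10            # chance such a respondent agrees due to acquiescence bias
P_CONSISTENCY_SHIFT = 0.50    # chance that agreeing once shifts later self-perception

shifted = 0
for _ in range(N_RESPONDENTS):
    if random.random() < P_NO_PRIOR_ATTITUDE:          # undecided respondent
        if random.random() < P_ACQUIESCE:              # agrees because of framing
            if random.random() < P_CONSISTENCY_SHIFT:  # later acts consistently with that answer
                shifted += 1

# Each step is individually unlikely, but the expected number of shifted
# attitudes is N * 0.30 * 0.10 * 0.50 = 1,500 people.
print(f"Respondents with a shifted attitude: {shifted}")
```

Under these assumed probabilities, only about 1.5 per cent of respondents end up shifted, yet across a large survey pool that is still on the order of a thousand people.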
In the era of artificial intelligence, companies can capitalize on any type of information they gather on their customers or prospective customers. At the same time, consumers are becoming more sensitive about the data they share with merchants and also to the specific questions they are being asked.
It’s not a zero-sum game: Sometimes consumers can benefit by letting companies know more about what they really want. But finding that sweet spot where companies can learn about consumers without invading their privacy or offending them is not easy.