Political pollsters got the 2020 presidential election wrong, and in much the same way as the 2016 race. Regardless of who won these elections, the candidates either didn’t carry battleground states as forecasted or didn’t win those states by the margins the polls predicted. Just as polls are used to predict an election’s outcome, life sciences organizations rely on market research to predict future customer behavior, which in turn guides business decision-making and brand strategy. Getting market research “right” can inform decisions that drive a brand’s success. Getting it wrong can have the opposite effect: Whether it’s missing a forecast, serving up a value proposition that doesn’t resonate or triggering a poor customer experience, poor market research can drag down brand performance. The key is to make a deeper connection between what people say and what they do.
The disciplines of cognitive and behavioral science can help shed some light on the persistent gap between what people say and what they do. The foundational concept is that rational decision-making is very expensive for our brains, so we have all evolved to use a second, unconscious system in parallel when we make decisions. This second system is fast and efficient because it relies on cognitive biases: simple, rule-based decision-making shortcuts. Here are two examples that help explain the say vs. do gap, and how market researchers compensate to avoid the kind of miss seen by political pollsters.
- Social desirability: the tendency to under-report things that we think others won’t approve of. A Public Opinion Quarterly study found that students who were interviewed over the telephone by a computer significantly underreported undesirable activities, like failing a class, compared with students who completed an online survey. Similarly, we know from anecdotal reports that some people who voted for President Trump were concerned that friends and family might disapprove of their choice. A ZS study revealed a similar tendency: Patients were randomly divided into two groups. Group A was told that doctors appreciate patients who discuss their symptoms, while Group B was told that doctors are frustrated by patients who consistently discuss their symptoms. Next, we asked all patients how likely they were to discuss their symptoms with their own doctor. The patients in Group B, who thought talking about symptoms was “undesirable,” were significantly less likely to say they would talk about their symptoms with their own doctors. Findings like these identify a real barrier between what people say and what they do, help quantify how large that barrier is and allow us to recalibrate our expectations.
- Bandwagon effect: the tendency to report behaviors or attitudes because everyone else seems to hold them. In a study presented at the American Political Science Association Conference in 2015, 2,500 respondents were asked whether they support mandatory vaccinations for measles. Half of the respondents were told that, in a previous poll, most people said mandatory vaccinations are a good idea. The respondents in that group reported significantly more support for the vaccinations than the group that was simply asked for its perspective. In this way, polls can become a self-fulfilling prophecy: Undecided voters saw that Biden led in previous polls, which may have influenced what they reported to pollsters. In a ZS study with physicians, we told half of the respondents that their peers preferred a treatment, Brand X, in a previous poll. We then asked everyone which treatment they would choose for a sample patient. The physicians who thought their peers preferred Brand X were more likely (35%) to choose that treatment than the respondents who didn’t see the poll data (22%).
Understanding how cognitive biases can impact the gap between what people do and what they say is just one critical part of doing market research well. Here are four additional elements that market researchers in the life sciences space need to take into consideration when designing and executing research:
- Anticipate and correct for sample biases. Market researchers need to anticipate biases that could affect key behaviors and design tests to uncover those influences. To do this, you can embed bias triggers into the research via A/B testing and determine which biases have the greatest impact on behavior.
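The A/B approach described above can be sketched in a few lines: show the bias trigger to one arm only, then test whether the behavior rates in the two arms differ. The arm sizes and counts below are hypothetical, chosen to mirror the 35% vs. 22% Brand X example, and the sketch uses only the Python standard library.

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Two-proportion z-test: is the behavior rate in arm A (bias trigger
    shown) significantly different from arm B (no trigger)?"""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p_pool = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical arms of 200 physicians each: 70 chose Brand X after seeing
# the peer-poll trigger vs. 44 who did not see it (35% vs. 22%).
z = two_proportion_z(70, 200, 44, 200)
significant = abs(z) > 1.96  # 5% two-sided threshold
```

A |z| above 1.96 suggests the trigger genuinely shifts behavior, flagging that bias as one worth measuring and correcting for in the main study.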
- Tailor the sample to match business needs. Market researchers must intentionally control for sample biases that they’ve encountered in the past. In our experience with healthcare market research, physicians in certain practice settings and geographic areas can be less likely to respond to long surveys. To overcome these limitations, the research needs to include thoughtful sub-quotas for key elements like practice setting, decile and region, or use a specific target list to ensure the sample is representative of the group that will ultimately determine the “outcome of interest,” or our version of election results.
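One way to enforce sub-quotas during fielding is a screener that admits a respondent only while their quota cell is still open. The cells (practice setting by region) and the targets below are hypothetical, as a minimal sketch of the idea:

```python
from collections import Counter

def make_quota_screener(targets):
    """Return an accept() function that admits a respondent only while
    their quota cell (e.g., practice setting x region) is still open."""
    filled = Counter()
    def accept(cell):
        if filled[cell] < targets.get(cell, 0):
            filled[cell] += 1
            return True
        return False
    return accept, filled

# Hypothetical targets for two quota cells.
targets = {("academic", "Northeast"): 2, ("community", "South"): 3}
accept, filled = make_quota_screener(targets)
first = accept(("academic", "Northeast"))   # cell open -> accepted
second = accept(("academic", "Northeast"))  # still open -> accepted
third = accept(("academic", "Northeast"))   # cell full -> screened out
```

In practice the cell definitions come from the target list or universe file, but the mechanism is the same: hard stops per cell keep any one practice setting or region from dominating the sample.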
- Weight results based on population skews. Even with careful sampling, it can be difficult to achieve a perfectly representative sample. In this year’s election, the pre-election and exit polls were aligned, confirming the hypothesis that the same skew existed in both samples. An effective market researcher anticipates and corrects for skews through data weighting. By down-weighting the groups that are over-represented and up-weighting the groups that are under-represented, the reported averages better reflect the universe.
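The down-weighting and up-weighting described above is post-stratification in miniature. A minimal sketch with hypothetical shares: suppose high-decile physicians make up 60% of the sample but only 40% of the universe, so their answers need to be down-weighted.

```python
def poststrat_weights(sample_shares, population_shares):
    """Weight each group by population share / sample share: over-represented
    groups are down-weighted, under-represented groups are up-weighted."""
    return {g: population_shares[g] / sample_shares[g] for g in sample_shares}

def weighted_mean(group_means, sample_shares, weights):
    """Average the group means using the adjusted group weights."""
    num = sum(group_means[g] * sample_shares[g] * weights[g] for g in group_means)
    den = sum(sample_shares[g] * weights[g] for g in group_means)
    return num / den

# Hypothetical: high-decile physicians are over-represented in the sample.
sample = {"high_decile": 0.6, "low_decile": 0.4}
universe = {"high_decile": 0.4, "low_decile": 0.6}
means = {"high_decile": 0.50, "low_decile": 0.20}  # stated adoption intent
w = poststrat_weights(sample, universe)
adjusted = weighted_mean(means, sample, w)  # approx. 0.32 vs. unweighted 0.38
```

With these made-up numbers, the naive sample average of 38% overstates likely adoption; weighting back to the universe shares pulls the estimate down to about 32%.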
- Discount future behavior based on past evidence of overstatement. Predicting future behavior is never going to be a perfect science. A market research environment introduces artificial factors that don’t exist in the real world: the absence of the people who normally influence you, different datasets and information to consider, and the rational side of the brain taking over from the “non-rational” side. In healthcare market research, this can be handled by analyzing the discrepancy between stated and actual behavior across hundreds of studies to determine how to adjust for the “overstatement” that naturally occurs. Stated responses also can be triangulated with secondary data sources and calibrated as appropriate. The key is to understand and address the overstatement, rather than taking the answers at face value as the pollsters did.
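The discounting step above can be sketched simply, assuming you have stated and actual behavior rates from past studies (all numbers below are hypothetical): pool the history into an overstatement factor, then apply it to new stated intent.

```python
def overstatement_factor(stated_rates, actual_rates):
    """Ratio of observed behavior to stated intent, pooled across past studies."""
    return sum(actual_rates) / sum(stated_rates)

def calibrated_forecast(stated_intent, factor):
    """Discount new stated intent by the historical overstatement factor."""
    return stated_intent * factor

# Hypothetical history: stated adoption ran well ahead of actual uptake.
past_stated = [0.42, 0.38, 0.40]
past_actual = [0.26, 0.24, 0.25]
factor = overstatement_factor(past_stated, past_actual)  # 0.75 / 1.20 = 0.625
forecast = calibrated_forecast(0.32, factor)             # approx. 0.20
```

Real calibration would segment the factor by study type, behavior and respondent group rather than pooling everything, but the principle is the same: let the historical say-do gap, not the raw stated number, drive the forecast.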
Even though the subject matter isn’t a perfect parallel, market researchers and pollsters share a common goal: to inform decisions and predict outcomes or behaviors. However, the inaccuracies of recent political polls underscore the importance of having a robust approach that accounts for biases both in the respondents and in the design and execution of the research.