The Quantum Mechanics of Brand:
What You See – and Don’t See – with Derived Importance Analysis

by Susan Schwartz McDonald, PhD and Michael Polster, PhD, Naxion Research Consulting

Derived importance is a hammer that, if aimed wrong, hits not the nail but the fingernail. This piece discusses the assumptions and limitations of derived importance in order to help market researchers and marketers make thoughtful decisions about whether and when to rely on it. Our focus is particularly on “professional purchase decisions,” but most of the points made here apply to consumer products research as well.

In an effort to gain competitive advantage, market researchers deploy a variety of tools to detect attributes and customer impressions that differentiate brands and drive product selection. Those nuances sometimes operate at a level we like to call “quantum brand mechanics,” where the subtlest distinctions and motivations reside. Understanding the invisible force fields that draw customers to products can be the key to success — especially in markets where product differentiation is slight and the competition fierce. But despite billions spent annually to uncover drivers of behavior, the industry continues to struggle with a fundamental paradox: our need to rely on customers to explain themselves — and our deep doubts about whether they are reliable reporters.

Measures of “derived importance” aim to circumvent the problem by using correlation or regression models to infer links between stated brand perceptions and purchasing behavior. These techniques have become ubiquitous because they are widely assumed to reveal relationships not captured in the traditional “stated importance” exercises that ask respondents to rate the importance of attributes in accounting for purchase decisions. (Another form of “derived importance” is also used to prioritize attributes in conjoint tasks, but the underlying assumptions and methods of calculation in that setting are different, and not part of this discussion.) The results of derived and stated importance analyses are often quite similar, but when derived and stated importance exercises appear to contradict one another, there is a tendency — especially among strong partisans — to assume that the derived motivators are more illuminating. That assumption is subject to challenge, however, because inferential statistics do not invariably lift the veil on motivational relationships, and can, on occasion, create opacity or distortions of their own.
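
To make the mechanics concrete, here is a minimal sketch of the most common form of the calculation: correlating each attribute rating with an overall outcome measure such as purchase intent. The data, column names, and the choice of a simple Pearson correlation are illustrative assumptions, not a reconstruction of any particular vendor’s model.

    # Minimal sketch: "derived importance" as the correlation between each
    # attribute rating and an overall outcome (e.g., purchase intent).
    # All names and data are hypothetical.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 200  # respondents
    ratings = pd.DataFrame({
        "efficacy":    rng.integers(1, 8, n),  # 1-7 rating scales
        "ease_of_use": rng.integers(1, 8, n),
        "brand_trust": rng.integers(1, 8, n),
    })
    # Simulated outcome, partly driven by ease_of_use
    outcome = ratings["ease_of_use"] * 0.6 + rng.normal(0, 1.5, n)

    # Derived importance: each attribute's correlation with the outcome
    derived = ratings.corrwith(outcome)
    print(derived.sort_values(ascending=False))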

The question of whom to trust — people or statistics — is especially challenging in the context of professional purchase decisions, made by individuals whose job description and training require them to use specialized decision criteria. Those decisions can include everything from capital investments in a B2B environment to prescribing decisions made by physicians. By their nature, these decisions often involve explicit, rational criteria that buffer them to some degree from the effect of emotional factors.

Of course, even decisions made by “rational” customers can be influenced by irrational factors – especially when product offerings are poorly differentiated, virtually inviting emotional decisions. Arguably, the very success of marketing often depends on the ability to “amplify” certain subtle brand signals that customers might not otherwise detect. Still, the assumption that statistical inference should always trump customer self-report can be a dangerous one. Indeed, findings from analyses of derived importance in such markets are sometimes not only implausible, but actually misleading.

So what can go wrong with derived importance? First and most obvious, these analyses all assume a causal relationship. That assumption flagrantly disregards the first rule of correlational analysis: causation cannot be inferred from correlation alone. As much as we might want to believe that attitudes or perceptions of product attributes explain purchase decisions, we cannot use correlation to support that conclusion. Notably, in most derived importance analyses, ratings are collected for attributes that are very similar or nested, with scant attention paid to whether those attributes are linked in a web of inter-correlations.
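
One practical safeguard implied above is simply to inspect the attribute inter-correlations before ranking anything. The sketch below, with hypothetical attribute names and simulated data, flags pairs whose correlation exceeds an arbitrary cutoff; no such pair’s derived importance scores can be read independently.

    # Sketch: flag attribute pairs so inter-correlated that their derived
    # importance scores are not independently interpretable.
    # Attribute names, data, and the 0.6 cutoff are illustrative.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    shared = rng.normal(0, 1, 200)
    ratings = pd.DataFrame({
        "efficacy":         shared + rng.normal(0, 0.3, 200),  # two nearly
        "overall_efficacy": shared + rng.normal(0, 0.3, 200),  # nested items
        "ease_of_use":      rng.normal(0, 1, 200),
    })

    corr = ratings.corr()
    for i, a in enumerate(corr.columns):
        for b in corr.columns[i + 1:]:
            if abs(corr.loc[a, b]) > 0.6:
                print(f"{a} / {b}: r = {corr.loc[a, b]:.2f} (entangled)")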

Another problem with causal inference is that respondents sometimes fool not just us, but themselves, in ways that can create misleading statistical relationships. If we believe that respondents cannot be trusted to report their own motivations accurately, then it is equally plausible (as Attribution Theory would suggest) to believe that respondents are also vulnerable to self-delusion and self-justification about their own brand perceptions (e.g., “I buy Product X, therefore, it must be a good product, and so I will give it high ratings”).

Causation issues aside, there is also a pervasive lack of regard for statistical significance when ranking correlation coefficients. In several recent derived importance analyses we conducted, fewer than 20% of the correlations were statistically significant. What’s more, the respondents’ randomly generated identification numbers produced correlations of a similar magnitude to those of half the product variables tested – a sobering reminder that lots of statistical testing produces lots of meaningless significance.
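
The random-ID check described above is easy to build into any analysis. The following sketch, using simulated data, tests each correlation for significance and includes a shuffled respondent ID as a placebo predictor; with ten or more attributes in play, the placebo will occasionally rank alongside real variables.

    # Sketch: rank correlations WITH their p-values, and include a random
    # respondent ID as a placebo predictor. All data are simulated.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(2)
    n = 150
    outcome = rng.normal(0, 1, n)
    predictors = {f"attribute_{i}": rng.normal(0, 1, n) for i in range(10)}
    predictors["random_id"] = rng.permutation(n).astype(float)  # placebo

    for name, values in predictors.items():
        r, p = pearsonr(values, outcome)
        flag = "significant" if p < 0.05 else "n.s."
        print(f"{name:>12}: r = {r:+.3f}, p = {p:.3f} ({flag})")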

There are also grounds to critique the practice of using a quadrant (or Kano) analysis, in which derived importance is plotted on one axis of a grid and stated importance on the other. In this type of graphic, attributes that are relatively high on derived importance and low on stated importance are often identified as hidden drivers, since inference is presumed to trump assertion. Unfortunately, interpretation can be distorted by decisions made in the service of graphic display – especially if the picture is engineered to ensure membership in each of the four quadrants (e.g., via a median split on each variable) rather than grounded in any scientific or statistical method. Indeed, the significance of the “distance” between two attributes is rarely, if ever, reported.
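
To see how much the quadrant boundaries owe to analyst choices, consider the sketch below: it plots invented stated and derived scores and draws a median-split line on each axis. Move the split lines, or rescale an axis, and the “hidden driver” quadrant acquires new tenants.

    # Sketch: a stated-vs-derived quadrant chart built with median splits.
    # The attribute names and scores are invented for illustration.
    import numpy as np
    import matplotlib.pyplot as plt

    attrs   = ["efficacy", "price", "support", "brand", "design", "speed"]
    stated  = np.array([6.5, 5.8, 4.9, 3.2, 4.1, 5.5])        # mean ratings
    derived = np.array([0.12, 0.25, 0.31, 0.28, 0.08, 0.19])  # correlations

    fig, ax = plt.subplots()
    ax.scatter(stated, derived)
    for name, x, y in zip(attrs, stated, derived):
        ax.annotate(name, (x, y))
    ax.axvline(np.median(stated), linestyle="--")   # an analyst's choice,
    ax.axhline(np.median(derived), linestyle="--")  # not a statistical test
    ax.set_xlabel("Stated importance (mean rating)")
    ax.set_ylabel("Derived importance (correlation)")
    plt.show()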

Although there may be unstated motivations at work with any customer, the ability to tap those motivations via derived importance analysis — or direct importance ratings, for that matter — is ultimately contingent on the researcher’s ability and willingness to frame them as brand attributes. We have encountered some derived importance analyses in which the only attributes explicitly rated are rational and concrete ones, leaving no opportunity to uncover the relative role played by the factors most vulnerable to camouflage, denial, and distortion (e.g., advertising, logos, brand personality).

The implication is that if you are going to hunt for hidden motivators, they have to be in plain enough sight so that you can articulate them as attributes that respondents can rate.

No matter how you depict it, derived importance analysis (like any correlation) depends on the presence of sufficient variance in the ratings of each attribute for any relationship patterns to emerge. As a result, it will overlook or discredit price-of-entry attributes – a point that some partisans argue is the crux of the case against derived importance and others argue is actually its justification. All else being equal, whenever the ratings for an individual attribute converge due to market consensus and limited differentiation, there is less opportunity to observe correlational relationships of any kind. A product attribute that receives similarly high or low ratings from all respondents is therefore likely to show low derived importance simply for lack of variability. This might seem like a useful marketing insight insofar as it encourages marketers to focus on more differentiating features that really do vary — even if they deliver only second-order benefits.
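
The arithmetic behind that point takes only a few lines to demonstrate. In the simulated example below, a price-of-entry attribute that nearly everyone rates at the top of the scale produces a near-zero correlation with the outcome, however much it actually matters, while a genuinely varying attribute does not.

    # Sketch: a uniformly high-rated (price-of-entry) attribute yields a
    # near-zero correlation for lack of variance. Data are simulated.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 200
    outcome = rng.normal(5, 1, n)                    # overall purchase intent
    table_stakes = np.full(n, 7.0)                   # consensus: top rating
    table_stakes[:5] = 6.0                           # a touch of noise
    differentiator = outcome + rng.normal(0, 1, n)   # truly varying attribute

    print(np.corrcoef(table_stakes, outcome)[0, 1])    # ~0
    print(np.corrcoef(differentiator, outcome)[0, 1])  # substantial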

There are, however, marketing perils associated with placing all your bets on differentiating predictive features. For instance, in the pharmaceutical market, where efficacy is the price of entry, woe to the brand that ceases to tout it (even though it is not differentiating). Experience confirms that even very effective products can quickly lose ground if marketers cease to beat that drum. The same sort of statistical anomaly can lead technology marketers astray when they are simultaneously competing against another category and against rivals within their own. The factors that position a tablet PC against a more traditional laptop will look marginal in a derived importance analysis confined to tablet competitors.

So here’s where we land on derived importance: It’s a tool for seeing more or making small things look bigger — but like any magnifying lens, it creates distortions. Making sense of what you see with derived importance requires looking from multiple perspectives through a wide-angle lens.

  • Be wary of mechanical approaches to derived importance that remove it from context and judgment, and prioritize “hidden” subtleties over straightforward but compelling truths.
  • Be conservative about assumptions of causation, and mindful about the mischief of inter-correlations – both in developing a plan of analysis and drawing conclusions from the data.
  • Think of Kano graphics as a useful and attractive heuristic, but exercise caution before making strategic decisions based entirely on that sort of display — or on correlations so tiny that they don’t necessarily justify marketing action.
  • Don’t automatically disregard attributes that emerge as lower-priority drivers simply because of limited variance if the market tells you they matter.

About the Authors

The authors are Michael Polster, PhD, Senior Vice President in Naxion Research Consulting’s Healthcare Practice, and Susan Schwartz McDonald, PhD, President and CEO of Naxion Research Consulting. To add to this conversation, contact Michael Polster at mpolster@naxionthinking.com.


© 2011, Naxion Research Consulting. All rights reserved.