Choice Modeling Analytics—Benefits of New Methods
by John Colias, Ph.D.

Over the past two decades, the marketing research industry has witnessed rapid growth in the use of choice models. These types of models are valuable in their ability to do the following:

  • Optimize marketing strategy.
  • Determine optimal pricing strategy.
  • Create effective promotional offers.
  • Maximize the appeal of product features.
  • Optimize product lines.
  • Define bundles of features and benefits that maximize profitability.
  • Predict market share and source of volume for new brands or products.

This brief article aims to help marketing researchers understand the benefits of several technical advances in choice analysis:

  • Improved experimental design algorithms.
  • Latent Class and Hierarchical Bayes models.
  • Model calibration.

Advances in these areas have provided significant benefits, summarized in the diagram below. The sections that follow explain how the benefits of new methods are realized.

[Diagram: Benefits of New Choice Modeling Methods]

Improved Experimental Design Algorithms

Experimental designs select combinations of attributes and levels for each alternative in a market scenario. The combinations are selected to ensure that the relative value to customers of each part of a brand or product (e.g., price, size, and packaging) can be measured with maximum reliability. Improved experimental design software has enabled researchers to produce more realistic scenarios to test in survey choice tasks. For example, suppose a wireless communications provider offers multiple service plans. It would be unrealistic for the same brand to offer two wireless plans that are identical in all respects, except that one includes more minutes and a lower monthly fee than the other. Today, experimental design software can avoid such combinations of attributes while still producing experimental designs with high reliability. Another benefit of improved experimental design software is that survey choice tasks can be made easier for the respondent, while still handling complicated products that have many features.
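As a concrete illustration of the first benefit, here is a minimal sketch in Python of how a candidate set of choice tasks can be screened to exclude unrealistic same-brand combinations like the one described above. The attribute levels and the dominance rule are invented for this example and are not taken from any particular design package; a real design algorithm would also optimize statistical efficiency, not just screen candidates.

```python
import itertools
import random

# Hypothetical attribute levels for a wireless-plan study (illustrative only).
brands = ["Brand A", "Brand B"]
minutes = [300, 600, 900]
fees = [29.99, 39.99, 49.99]   # monthly fee in USD

# Full factorial candidate set of plan profiles: (brand, minutes, fee).
candidates = list(itertools.product(brands, minutes, fees))

def dominates(p, q):
    """True if plan p is at least as good as plan q on both attributes
    (more-or-equal minutes, lower-or-equal fee) and strictly better on one."""
    return (p[1] >= q[1] and p[2] <= q[2]) and (p[1] > q[1] or p[2] < q[2])

def realistic(scenario):
    """Reject scenarios in which one brand offers a plan that dominates
    another plan from the same brand -- the unrealistic case in the text."""
    return not any(p[0] == q[0] and (dominates(p, q) or dominates(q, p))
                   for p, q in itertools.combinations(scenario, 2))

# Draw choice tasks of four alternatives each, keeping only realistic ones.
random.seed(42)
tasks = []
while len(tasks) < 10:
    scenario = random.sample(candidates, 4)
    if realistic(scenario):
        tasks.append(scenario)
print(f"Generated {len(tasks)} realistic choice tasks.")
```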

For example, respondent survey choice tasks that elicit a choice from among multiple wireless phone service brands, where each brand is described by 9 or more attributes, can be quite tiring. Imagine a respondent evaluating 10 or more scenarios such as the Example Choice Task with 9 Attributes shown below.

Example Choice Task with 9 Attributes

With an improved design, the same category can be tested using smaller, easier tasks, such as the Example Choice Task with 5 Attributes shown below.

Example Choice Task with 5 Attributes

Latent Class and Hierarchical Bayes Models

Latent Class models have unique parameters (e.g., price response, brand preference) for each sub-segment of the total population of customers. During model development, segments of customers (who share similar market responses to changes in product prices and features) are discovered and separate model parameters are produced for each segment.
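To make this concrete, here is a minimal sketch in Python of the latent class logic, using simulated data and invented segment coefficients (nothing here comes from the article's models). Given candidate coefficients for each segment, each respondent's sequence of choices implies a posterior probability of belonging to each segment, which is the quantity a latent class fit iterates on.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, J, K, S = 200, 8, 3, 2, 2   # respondents, tasks, alternatives, attributes, segments

# Simulate two segments that differ mainly in price sensitivity (first coefficient).
true_beta = np.array([[-2.0, 1.0],    # price-sensitive segment
                      [-0.5, 1.0]])   # less price-sensitive segment
segment = rng.integers(0, S, size=N)
X = rng.normal(size=(N, T, J, K))                      # attribute levels
v = np.einsum("ntjk,nk->ntj", X, true_beta[segment])   # systematic utility
p = np.exp(v) / np.exp(v).sum(-1, keepdims=True)       # logit choice probabilities
y = np.array([[rng.choice(J, p=p[n, t]) for t in range(T)] for n in range(N)])

def loglik(beta):
    """Log-likelihood of each respondent's choices under one segment's coefficients."""
    u = np.einsum("ntjk,k->ntj", X, beta)
    logp = u - np.log(np.exp(u).sum(-1, keepdims=True))
    chosen = np.take_along_axis(logp, y[..., None], -1).squeeze(-1)
    return chosen.sum(axis=1)   # one value per respondent

# E-step of a latent class fit: posterior segment membership for each respondent.
# (A full fit would alternate this with re-estimation of the segment coefficients.)
shares = np.array([0.5, 0.5])                       # assumed segment sizes
ll = np.stack([loglik(b) for b in true_beta])       # shape (S, N)
post = shares[:, None] * np.exp(ll - ll.max(0))     # numerically stabilized
post /= post.sum(axis=0, keepdims=True)
print("Mean posterior weight on the true segment:", post[segment, np.arange(N)].mean())
```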

Hierarchical Bayes models have unique parameters for each individual customer or survey respondent. Since every individual truly has unique tastes and preferences, customer-level choice models are more realistic. For example, one customer might be very price sensitive and brand loyal, while another might be moderately sensitive to price but not loyal to any brand.

Customer-level modeling uses survey responses to (a) determine the most likely distributions (across customers) for price and brand preference parameters and (b) estimate each individual respondent's price sensitivity and brand preferences.
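A minimal illustration of the hierarchical idea follows; it is not the full MCMC machinery of a Hierarchical Bayes fit, and every number is invented. Each respondent's noisy individual-level estimate is pulled, or "shrunk," toward the population distribution, with the amount of shrinkage determined by how noisy the individual data are.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, tau = -1.0, 0.4     # population mean and spread of price coefficients (invented)
sigma = 0.6             # noise in each respondent's individual-level estimate

beta_true = rng.normal(mu, tau, size=500)              # each respondent's true coefficient
beta_hat = beta_true + rng.normal(0, sigma, size=500)  # noisy per-respondent estimate

# Normal-normal posterior mean with known variances: a weighted average of the
# individual estimate and the population mean (the essence of partial pooling).
w = tau**2 / (tau**2 + sigma**2)
beta_post = w * beta_hat + (1 - w) * mu

rmse = lambda est: np.sqrt(np.mean((est - beta_true) ** 2))
print(f"RMSE unpooled: {rmse(beta_hat):.3f}, partially pooled: {rmse(beta_post):.3f}")
```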

In the following diagram, price elasticity is a parameter derived from choice model coefficients; it represents the percentage change in demand resulting from a one percent increase in price, i.e., price sensitivity.
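For reference, in a standard multinomial logit model with a linear price term (a textbook result, not a formula given in this article), the own-price elasticity of product j has a simple closed form:

```latex
% Own-price elasticity of product j in a multinomial logit model, where
% beta_price is the price coefficient, p_j the price of product j, and
% P_j its predicted choice probability (market share):
E_{jj} = \frac{\partial \ln P_j}{\partial \ln p_j}
       = \beta_{\text{price}} \, p_j \, (1 - P_j)
```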

Choice Model Price Sensitivity

The three histograms represent the movement from a single total-population price elasticity at the left (an aggregate model, the traditional approach before Latent Class and Hierarchical Bayes), to segment-level price elasticities in the middle histogram (a Latent Class model), to customer-level price elasticities at the right (a Hierarchical Bayes model).

As the distribution of price elasticities is more fully modeled with Latent Class and Hierarchical Bayes, marketers can more effectively target price promotions to customers who are most sensitive to price.

Latent Class and Hierarchical Bayes models use very different statistical algorithms to produce the final model parameters, yet in many cases the final results are similar. This author has estimated both Latent Class and Hierarchical Bayes choice models using the same source data and found similar patterns of responses; however, experts still debate the relative merits of the two techniques.

In general, Hierarchical Bayes methods enable researchers to investigate more complex decision-making processes. For example, a recent application (Gilbride and Allenby, 2004) applies a Hierarchical Bayes model with two decision-making stages. First, consumers use a screening process to decide which products to consider. Second, consumers make a purchase decision among the products that are considered. This Hierarchical Bayes model not only delivers relative preferences for the various product features, but also estimates customer-level threshold values for price and feature functionality that must be exceeded in order for a product to be considered. As this example shows, Hierarchical Bayes gives sophisticated researchers great flexibility to try out new models of consumer behavior (a minimal sketch of such a two-stage process follows the list below).

Segment- and customer-level models have enabled companies to:

  • Develop new products and services for targeted subgroups of the total population (based on customer-level model parameters).
  • Improve retention and acquisition campaigns by targeting segments or individuals that exhibit high preferences for particular product features (based on customer-level model parameters).
  • Test more complete and complex models of purchase decision making.
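As promised above, here is a minimal sketch of a screen-then-choose decision process. It is purely illustrative: the thresholds, coefficients, and products are invented, and a real Hierarchical Bayes implementation would estimate the thresholds per customer from choice data rather than assume them.

```python
import numpy as np

# Hypothetical products: (name, price, feature_score). Values are invented.
products = [("A", 44.0, 0.9), ("B", 39.0, 0.4), ("C", 29.0, 0.2)]

# Stage 1: conjunctive screening -- a product enters the consideration set
# only if it clears the customer's price ceiling AND feature floor.
price_ceiling, feature_floor = 45.0, 0.3
considered = [p for p in products if p[1] <= price_ceiling and p[2] >= feature_floor]
# Product C is the cheapest but fails the feature threshold, so it is never chosen.

# Stage 2: logit choice among the considered products only.
beta_price, beta_feature = -0.1, 2.0
utils = np.array([beta_price * price + beta_feature * feat
                  for _, price, feat in considered])
shares = np.exp(utils) / np.exp(utils).sum()
for (name, *_), s in zip(considered, shares):
    print(f"Product {name}: predicted share {s:.2f} (among considered products)")
```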
Calibration of Choice Models

With really new products—that is, new concepts yet to be introduced to category buyers—choice models based on survey data will usually produce biased results. For example, placing a new product into an existing competitive set can produce a predicted market share that is too low. On the other hand, exposing a new product concept to respondents before showing choice scenarios will almost always produce a predicted market share that is too high.

For existing products, price and feature elasticities can be biased if the survey questionnaire's choice scenarios (a) provide too much or too little information relative to real market scenarios or (b) omit real-market influences on choice, such as busy lifestyles and attitudes toward change.

Choice models can be calibrated to reduce bias in model predictions. The mathematics behind calibration of choice models can be explained in terms of the random utility model, the utility specification most widely used by practitioners of marketing research. The random utility model assumes that total utility (the attractiveness of a product in terms of its attributes) is the sum of a measurable component (systematic utility) and a random component (random utility).

Total Utility of Brand A =
Systematic Utility + Random Utility

In its simplest form, choice models specify systematic utility to be a sum of part-worth utilities (worth of each part of the product) minus the worth of the money required to purchase. For example, the total utility for a $2.00 bottle of Heinz ketchup would be the sum of part-worth utilities for brand name, type of bottle, and size of bottle minus the worth of $2.00.

Systematic Utility =
Part Worth of Heinz Brand + Part Worth of Glass + Part Worth of 14 oz - Part Worth of $2.00
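As a toy numerical illustration of this equation (the part-worth values below are invented, not estimated from data), systematic utility is simple arithmetic over part-worths, and a logit model then converts total utilities into choice probabilities:

```python
import math

# Invented part-worth utilities for illustration only.
part_worths = {"Heinz brand": 1.2, "glass bottle": 0.3, "14 oz": 0.5}
worth_per_dollar = 0.4   # worth of one dollar (invented scale)
price = 2.00

systematic_utility = sum(part_worths.values()) - worth_per_dollar * price
print(f"Systematic utility of Heinz 14 oz glass at $2.00: {systematic_utility:.2f}")

# Logit choice shares for a two-product scenario (the competitor is invented).
u_heinz, u_other = systematic_utility, 0.9
denom = math.exp(u_heinz) + math.exp(u_other)
print(f"Heinz predicted share: {math.exp(u_heinz) / denom:.2f}")
```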

When part-worth utilities are estimated from survey responses, the utilities may be biased. To reduce or eliminate bias that causes inaccurate predictions, researchers can calibrate choice models by adjusting utilities to better predict actual market choices.

Virtually all serious practitioners acknowledge that choice models can produce market shares and price and feature responses that differ substantially from those of actual markets; accordingly, a variety of calibration solutions have been implemented.

Traditional calibration solutions include the following (the second and third are sketched in code after this list):

  • Not calibrating, but using the choice model results as valuable inputs for strategic and tactical decision making.
  • Calibrating brand part-worth utilities: adjusting part-worth utilities for brands to force a choice model to reproduce market shares from an external source, for example, scanner data or a forecast.
  • Rescaling price or feature part-worth utilities: proportionately rescaling price and feature part-worth utilities based on the relative variability of random utility in survey responses vs. actual market choices.
  • Calibrating brand part-worth utilities and rescaling price or feature part-worth utilities: not only adjusting brand utilities but also rescaling price and feature utilities.
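Here is a minimal sketch of the second and third approaches; all target shares, utilities, and scale factors are invented for illustration. Brand calibration adds log(target/predicted) to each brand constant, a standard fixed-point update; rescaling multiplies the price coefficient by a scale factor.

```python
import numpy as np

# Invented uncalibrated utilities: brand constant + beta_price * price.
brand_const = np.array([0.5, 0.2, 0.0])
beta_price = -0.8
prices = np.array([2.0, 1.8, 1.5])
target_shares = np.array([0.50, 0.30, 0.20])   # e.g., from scanner data

def shares(const, beta, p):
    u = const + beta * p
    e = np.exp(u)
    return e / e.sum()

# Calibrate brand constants. For a single aggregate logit this update is exact
# in one step; with respondent-level utilities it must be iterated.
const = brand_const.copy()
for _ in range(20):
    const += np.log(target_shares / shares(const, beta_price, prices))
print("Calibrated shares:", shares(const, beta_price, prices).round(3))

# Rescale the price coefficient to temper exaggerated survey price response.
# The scale factor is invented; in practice it is estimated against market data,
# and brand constants would typically be re-calibrated afterward.
scale = 0.6
print("Shares after rescaling:", shares(const, scale * beta_price, prices).round(3))
```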

The following figure illustrates how calibrating brand part-worth and rescaling price utilities alters the shape of a demand curve. In this example, the Uncalibrated Demand Curve exhibits an exaggerated price sensitivity and, at the highest price, an upward-biased estimate of market share. Calibration corrects these biases to produce a more realistic demand curve, enabling the researcher to predict market share more accurately.

Choice Model Analytics Calibration

Several solutions are being investigated in academic and business circles to improve choice model calibration. First, recent research on rescaling of price and feature utilities includes a very detailed comparison of survey choice models with household scanner data (Renkin, Rogers, and Huber, 2004). This research has focused on how much to rescale price utilities so as to minimize differences between the predictions of survey choice models and household scanner data models.

In this author's experience, market share predictions for really new products can be greatly improved by incorporating additional survey responses. For example, survey responses that measure positive attitudes toward a new brand or product concept statement can be combined with choice model simulations to deliver more reliable first-year market predictions.

Finally, laboratory experiments have been proposed (Allenby et al., 2004) to understand the amount of adjustment of brand, price, and feature utilities for different types of customers, bringing calibration to the individual customer level.

All of these calibration approaches have a goal of increasing the accuracy and reliability of market share and revenue predictions from choice models.

Implications for Marketing Researchers

Recent advances in choice modeling enable marketing researchers to do the following:

  • Reduce survey length for choice modeling research.
  • Deliver segmentation algorithms that increase ROI for target marketing programs.
  • Deepen understanding of customer purchase decision processes.
  • Handle extremely complicated products with many features.
  • Predict bottom-line revenue and profit impacts.
  • Recommend market strategy options.
  • Deliver DecisionSimulators (interactive decision tools).

Market simulators enable researchers to investigate hundreds of "what if" pricing and competitive scenarios. For example, one can answer "what if" questions such as: What happens to my brand's market share if I increase price, if I add a product to my brand's product line, or if a competitor drops its price?
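At its core, a market simulator is a loop over such scenarios. The sketch below uses invented calibrated utilities for a three-brand market; a production simulator (such as the DecisionSimulators mentioned above) would wrap this logic, typically with customer-level utilities, in an interactive interface.

```python
import numpy as np

# Invented calibrated utilities: brand constants and a price coefficient.
brand_const = np.array([0.8, 0.4, 0.0])      # My Brand, Competitor 1, Competitor 2
beta_price = -0.5
base_prices = np.array([3.0, 2.8, 2.5])

def market_shares(prices):
    """Logit market shares given a vector of prices."""
    u = brand_const + beta_price * prices
    e = np.exp(u)
    return e / e.sum()

print("Base shares:", market_shares(base_prices).round(3))

# "What if" scenario 1: my brand raises its price by 10%.
scenario1 = base_prices * np.array([1.10, 1.0, 1.0])
print("My brand +10% price:", market_shares(scenario1).round(3))

# "What if" scenario 2: Competitor 2 drops its price by $0.25.
scenario2 = base_prices - np.array([0.0, 0.0, 0.25])
print("Competitor 2 price cut:", market_shares(scenario2).round(3))
```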

References
  • Allenby, Greg; Fennell, Geraldine; Huber, Joel; Eagle, Thomas; Gilbride, Tim; Horsky, Dan; Kim, Jaehwan; Lenk, Peter; Johnson, Rich; Ofek, Elie; Orme, Bryan; Otter, Thomas; and Walker, Joan (2004). "Adjusting Choice Models to Better Predict Market Behavior," Working Paper.
  • Gilbride, Timothy J. and Allenby, Greg G. (2004). "A Choice Model with Conjunctive, Disjunctive, and Compensatory Screening Rules," Marketing Science, Vol. 23, No. 3.
  • Renkin, Tim; Rogers, Greg; and Huber, Joel (2004). "A Comparison of Conjoint and Scanner Data-Based Price Elasticity Estimates," presented at Advanced Research Techniques Forum 2004, Whistler, BC.

About the Author

John Colias (jcolias@decisionanalyst.com) is a Senior Vice President and Director of Advanced Analytics at Dallas-Fort Worth based Decision Analyst. He may be reached at 1-800-262-5974 or 1-817-640-6166.



Copyright © 2016 by Decision Analyst, Inc.
This article may not be copied, published, or used in any way without written permission of Decision Analyst.