Choice Model Calibration
by John Colias, Ph.D.

In the early 1990s, applications of the joint revealed preference and stated preference (RPSP) model were investigated by Ben-Akiva and Morikawa (1990)1 and Adamowicz, Louviere, and Williams (1994)2, among others. More recently, the RPSP model has been applied within the realistic context of individual customer heterogeneity (Brownstone, Bunch, and Train 2000)3.

In simple terms, the RPSP model uses in-market purchase data (revealed preferences) to calibrate choice model coefficients derived from hypothetical purchase data (stated preferences) collected in a survey.

Calibrating stated preferences based on revealed preferences offers important benefits:

  • Addresses the criticism that survey choice data fail to account for real-market constraints, such as lack of time or means to investigate competitive alternatives.
  • Overcomes the criticism that in-market purchase data do not cover the full range of product features and prices under consideration, which forces extrapolation outside the range of current experience.

Let's take price elasticity to demonstrate the importance of calibration. Price elasticity is a parameter derived from choice model coefficients and represents the percent change in demand resulting from a one percent increase in price; i.e., price sensitivity. If a choice model produces a biased price elasticity, then poor business decisions may result. Suppose that the modeled price elasticity is three times too large in magnitude; that is, a price increase from $50 to $55 is predicted to cause a 15% decline in market share (20% to 17%), instead of the actual 5% decline (20% to 19%).


In this case, revenue is predicted to fall from $10 million to $9.4 million, when in fact it would increase from $10 million to $10.5 million. This example illustrates the potential value of calibration. If the business were to maintain the price at $50, as suggested by the uncalibrated choice model, then potential revenue would be lost. Choice model calibration could prevent this poor pricing decision.
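A quick check of this arithmetic, sketched in R (the language used later in this article). The total market size of 1,000,000 units is an assumption implied by the stated figures (a 20% share at $50 yielding $10 million in revenue):

    # Quick check of the revenue arithmetic above.
    # Assumption: a total market of 1,000,000 units, implied by the stated
    # figures (a 20% share at $50 yielding $10 million in revenue).
    market_units <- 1e6
    revenue <- function(price, share) price * share * market_units

    revenue(50, 0.20)  # base case:          $10.00 million
    revenue(55, 0.17)  # biased prediction:   $9.35 million (revenue falls)
    revenue(55, 0.19)  # actual response:    $10.45 million (revenue rises)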

Three Types of Choice Model Calibration

Choice models estimate the worth, or utility, of each part of a product. Three important parts of a product are brand, features, and price. So, for example, the total utility for a $2.00 bottle of Heinz ketchup would be the sum of part-worth utilities for brand name, type of bottle, and size of bottle minus the worth of $2.00.

Total Utility = Part-Worth of Heinz Brand + Part-Worth of Glass + Part-Worth of 14 Oz - Part-Worth of $2.00
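As a concrete illustration, here is a minimal R sketch of how part-worth utilities combine into total utility and then into logit choice shares. All part-worth values are illustrative assumptions, not estimates from this article:

    # Illustrative part-worth utilities for three competing ketchup offerings
    # (brand + bottle type + size - price disutility); all values are assumed.
    u_heinz  <- 1.2 + 0.3 + 0.4 - 0.9   # e.g., Heinz, glass, 14 oz, $2.00
    u_brandB <- 0.5 + 0.3 + 0.4 - 0.7
    u_brandC <- 0.0 + 0.1 + 0.4 - 0.5

    u <- c(heinz = u_heinz, brandB = u_brandB, brandC = u_brandC)

    # Multinomial logit choice shares: exp(utility), normalized across brands.
    shares <- exp(u) / sum(exp(u))
    round(shares, 3)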

Choice model calibration adjusts part-worth utilities so that the model better predicts actual market choices. Three ways to adjust these utilities are:

  • Adjust Brand Part-Worth–Adjust part-worth utilities for brands to force a choice model to produce market shares from an external source; for example, scanner data or a forecast.
  • Rescale Price and Feature Part-Worth Utilities–Proportionately rescale price and feature part-worth utilities based on the relative variability of random utility from survey responses vs. actual market choices.
  • Calibrate Brand Part-Worth and Rescale Price and Feature Part-Worth Utilities–Not only adjust brand utilities but also rescale price and feature utilities.

All of these calibration approaches share the goal of increasing the accuracy and reliability of market share and revenue predictions from choice models.

Adjust Brand Part-Worth

Adjusting only the brand part-worth is the simplest type of calibration to execute. Orme and Johnson (2006)4 term this approach "individual-level utility adjustment." This type of calibration forces the choice model simulator to predict target market shares for a particular scenario; the current market is usually the ideal scenario from which to source target market shares. It is important to realize that calibration of the brand part-worth does impact price sensitivity.


For example, suppose an uncalibrated choice model simulates a market share of 50% at $10, so that a 50% price reduction ($10 to $5) causes simulated market share to increase by 46% (50% to 73%). After calibrating the brand part-worth, the choice model simulates a market share of 88% at $10, and the same 50% price reduction causes only an 8% increase in purchase probability.

The first step in adjusting brand part-worth is to obtain target market shares for a particular scenario. The second step is the calibration itself, which can be accomplished with an iterative procedure that repeatedly updates a brand calibration factor until the simulated shares converge to the target market shares. Train (1986, 104-106)5 explains the iterative procedure, sketched below.
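Here is a minimal R sketch of that iterative procedure, under assumed data: simulated individual utilities stand in for an estimated choice model, and the target shares are arbitrary. The update adds log(target/simulated) to each brand's calibration factor, a standard rule for recalibrating alternative-specific constants:

    # A minimal sketch of the iterative brand-constant adjustment, using
    # simulated data. 'other_utility' stands in for each respondent's
    # estimated non-brand utility for each of three brands; the target
    # shares are assumed (e.g., from scanner data).
    set.seed(42)
    n_resp  <- 500
    n_brand <- 3
    other_utility <- matrix(rnorm(n_resp * n_brand), n_resp, n_brand)
    target <- c(0.50, 0.30, 0.20)

    alpha <- rep(0, n_brand)                       # brand calibration factors
    for (iter in 1:100) {
      u <- sweep(other_utility, 2, alpha, "+")     # add brand constants
      p <- exp(u) / rowSums(exp(u))                # logit choice probabilities
      simulated <- colMeans(p)                     # aggregate simulated shares
      if (max(abs(simulated - target)) < 1e-8) break
      alpha <- alpha + log(target / simulated)     # standard update rule
    }
    round(simulated, 4)   # matches the target shares at convergence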

Calibration can also be implemented by solving a system of simultaneous equations, where each equation predicts market share for a brand. If there are, say, three brands, then there are three equations with three unknown brand calibration factors. The Solver routine in Microsoft Excel can easily be adapted to implement this procedure and solve for the three calibration factors, as sketched below.
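The same calibration factors can be recovered in R with a general-purpose optimizer, mirroring what Excel's Solver does. This sketch reuses 'other_utility' and 'target' from the iterative example above:

    # Solve the share-matching equations with a general-purpose optimizer,
    # mirroring the Excel Solver approach. Reuses 'other_utility', 'target',
    # and 'n_brand' from the iterative sketch above.
    objective <- function(alpha) {
      u <- sweep(other_utility, 2, alpha, "+")
      p <- exp(u) / rowSums(exp(u))
      sum((colMeans(p) - target)^2)       # squared deviation from targets
    }
    solution <- optim(rep(0, n_brand), objective)
    # Calibration factors are identified only up to an additive constant,
    # so center them before comparing with the iterative result:
    solution$par - mean(solution$par)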

The Train iterative and the Excel solver procedures produce identical brand calibration factors.

Rescale Price and Feature Part-Worth Utilities

The second approach to calibration is to rescale price and feature part-worth utilities by a constant scaling factor. A relative scale factor (survey vs. actual market) of less than 1.0 reduces, and of greater than 1.0 increases, the impact of price and feature changes on market share.

A below-one relative scale factor results when respondents are more certain about their survey purchase decisions than they are about real-world purchase decisions. For example, when a brand's price drops, survey respondents may be certain that competitor brands have not also dropped in price, simply because they are shown a complete set of competitor prices. In the actual market, where competitor prices are often unknown at the point of purchase, customers are less certain about buying the brand with the price drop.

Although less common, an above-one scale factor applies when respondents are less certain about their survey purchase decision than they would be about the real-world purchase decision. Such a situation might occur when a new product is described to respondents in a concept, but the respondents are not given the product itself and are, hence, uncertain about the new product's quality. In this case, compared to the real world, where consumers learn about product quality through word of mouth, survey respondents might be less certain about their purchase intent.

The most common method of adjusting the relative scale parameter is simply to search for the value that minimizes the absolute deviation between the target and simulated shares at the aggregate, or market, level. This search can be accomplished most quickly with a nonlinear optimization program such as Excel's Solver tool; an equivalent search is sketched below.
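A minimal R sketch of this one-dimensional search, under assumed utilities and targets. Brand constants are held fixed while the price and feature utilities are multiplied by the candidate scale factor; optimize() plays the role of Excel's Solver:

    # Search for the relative scale factor that minimizes the absolute
    # deviation between target and simulated shares. Utilities and targets
    # are assumed for illustration.
    set.seed(7)
    n_resp  <- 500
    n_brand <- 3
    brand_const <- matrix(rep(c(0.5, 0.0, -0.3), each = n_resp),
                          n_resp, n_brand)
    feat_price  <- matrix(rnorm(n_resp * n_brand), n_resp, n_brand)
    target <- c(0.45, 0.35, 0.20)

    deviation <- function(scale) {
      u <- brand_const + scale * feat_price  # rescale only price/feature utilities
      p <- exp(u) / rowSums(exp(u))
      sum(abs(colMeans(p) - target))         # aggregate absolute deviation
    }

    fit <- optimize(deviation, interval = c(0.01, 5))
    fit$minimum   # the fitted relative scale factor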

Calibrate Brand Part-Worth and Rescale Price and Feature Part-Worth Utilities

In order to both match target market shares exactly for a base scenario and rescale part-worth utilities, the joint revealed preference and stated preference (RPSP) model was developed in the late 1980s by Ben-Akiva et al. (1994)6. Brownstone, Bunch, and Train (2000)3 applied this approach in a more complex model that estimates individual-level price elasticity. More recently, Decision Analyst developed a Hierarchical Bayes RPSP model (2007). Gilbride, Lenk, and Brazell (2008)7 developed a new approach to calibration using the "Loss" function, the theoretical starting point for Bayesian models.

The Decision Analyst Hierarchical Bayes RPSP model combines actual purchase data with survey-based hypothetical purchases to estimate individual-level calibration parameters. Using the R-language rhierMnlRwMixture8 program as the starting point, we modified the code to estimate the calibration (relative scale) parameters. The model was applied to an over-the-counter (OTC) healthcare product, producing a calibrated distribution of price elasticity, described below alongside the uncalibrated distribution for comparison.
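For orientation, here is a minimal sketch of the unmodified starting point: fitting a hierarchical Bayes multinomial logit with rhierMnlRwMixture from the bayesm package. The calibration (relative scale) modification described above is custom code and is not reproduced here; the data are simulated placeholders, not the OTC study data:

    # Fit a hierarchical Bayes multinomial logit with bayesm's
    # rhierMnlRwMixture -- the unmodified starting point described above.
    # Data are simulated placeholders, not the OTC study data.
    library(bayesm)

    set.seed(1)
    n_resp <- 100; n_alt <- 3; n_task <- 10; n_var <- 4

    lgtdata <- lapply(1:n_resp, function(i) {
      X <- matrix(rnorm(n_alt * n_task * n_var), n_alt * n_task, n_var)
      beta <- rnorm(n_var)                        # this respondent's coefficients
      u <- matrix(X %*% beta, n_task, n_alt, byrow = TRUE)
      eps <- matrix(-log(-log(runif(n_task * n_alt))), n_task, n_alt)
      y <- apply(u + eps, 1, which.max)           # simulated choice per task
      list(y = y, X = X)
    })

    out <- rhierMnlRwMixture(
      Data  = list(p = n_alt, lgtdata = lgtdata),
      Prior = list(ncomp = 1),
      Mcmc  = list(R = 2000, keep = 10)
    )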

The uncalibrated price elasticity estimates are skewed to the left, with about 20% of survey respondents showing very high price sensitivity (price elasticity <= -5.0). The average price elasticity was -2.1; that is, a 1% increase in price causes a 2.1% decrease in market share. In contrast, calibration with the Hierarchical Bayes RPSP model eliminated the unrealistic leftward skew of the distribution, resulting in an average price elasticity of -1.1.
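For a logit model, the individual-level own-price elasticity has a closed form: E_i = beta_i x price x (1 - P_i), where beta_i is the individual's price coefficient and P_i the individual's choice probability. A sketch of summarizing such a distribution, with assumed inputs rather than the OTC study estimates:

    # Distribution of individual-level own-price elasticities for a logit
    # model: E_i = beta_i * price * (1 - P_i). All inputs are assumed
    # values, not the OTC study estimates.
    set.seed(3)
    n_resp     <- 1000
    beta_price <- rnorm(n_resp, mean = -0.4, sd = 0.2)  # individual price coefficients
    P          <- runif(n_resp, 0.1, 0.6)               # individual choice probabilities
    price      <- 5                                     # price point of interest

    elasticity <- beta_price * price * (1 - P)

    mean(elasticity)              # average price elasticity
    mean(abs(elasticity) < 0.5)   # share of customers with low price sensitivity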


This practical example demonstrates the value of the Bayesian RPSP model. The average price elasticity dropped to about half of its uncalibrated value. The calibration of the choice model resulted in a much lower price elasticity, making a price increase more viable, since a price increase is predicted to reduce revenue far less than under the uncalibrated model. Furthermore, the calibrated RPSP model reveals that 47% of customers have very low price sensitivity, with a price elasticity of less than 0.5 in absolute value (vs. 38% from the uncalibrated model). The knowledge that nearly half of customers have such a low price elasticity solidifies the conclusion that targeted price increases can raise revenue.

Conclusion

Calibration of survey-based choice models can make a substantial difference in predicted demand and revenue resulting from price changes. Calibration of brand part-worth utilities based on in-market data such as that derived from store scanner data can deliver more accurate measurement of price elasticity and better market predictions of demand response due to price changes. A Bayesian RPSP (revealed preference and stated preference) model was used to demonstrate that price elasticity should be calibrated downward in an OTC healthcare product category. Decision Analyst's Bayesian RPSP model not only calibrated the average market price elasticity downwards (in absolute value), but also shifted the distribution of estimated price elasticity, increasing the estimated percent of customers with low price sensitivity.

References
  1. M. Ben-Akiva and T. Morikawa (1990), "Estimation of Switching Models from Revealed Preferences and Stated Intentions," Transportation Research A, 24, 485-495.
  2. W. Adamowicz, J. Louviere, and M. Williams (1994), "Combining Revealed and Stated Preference Methods for Valuing Environmental Amenities," Journal of Environmental Economics and Management, 26, 271-292.
  3. D. Brownstone, D. Bunch, and K. Train (2000), "Joint Mixed Logit Models of Stated and Revealed Preferences for Alternative-Fuel Vehicles," Transportation Research Part B: Methodological, 34(5), 315-338.
  4. Bryan Orme and Rich Johnson (2006), "External Effect Adjustments in Conjoint Analysis," Sawtooth Software Research Paper Series.
  5. Kenneth Train (1986), Qualitative Choice Analysis, The MIT Press, Cambridge.
  6. M. Ben-Akiva, M. Bradley, T. Morikawa, J. Benjamin, and T. Novak (1994), "Combining Revealed and Stated Preferences Data," Marketing Letters, 5(4).
  7. Timothy Gilbride, Peter Lenk, and Jeff Brazell (2008), "Market Share Constraints and the Loss Function in Choice Based Conjoint Analysis," Marketing Science (forthcoming November/December 2008).
  8. Peter E. Rossi, Greg Allenby, and Rob McCulloch (2005), Bayesian Statistics and Marketing, John Wiley and Sons.

About the Author

John Colias (jcolias@decisionanalyst.com) is a Senior Vice President and Director of Advanced Analytics at Dallas-Fort Worth based Decision Analyst. He may be reached at 1-800-262-5974 or 1-817-640-6166.

 
Copyright © 2009 by Decision Analyst, Inc.
This article may not be copied, published, or used in any way without written permission of Decision Analyst.