Moving Tracking Research from Telephone to Internet Data Collection: To Compare or Not to Compare?
by Felicia Rogers

With the rapid spread of Internet usage throughout the late 1990s and into the 21st century, and the increased difficulty and cost of conducting research via telephone, many companies have begun to adopt the Internet as the preferred platform for conducting consumer research. In 2002, Internet access had reached 71% of the U.S. population. That is, 71% of respondents to a recent study* (all aged 12 and above) reported having gone online in the previous 12 months, whether at home, work, school, or some other location. Because of this widespread adoption of the Internet, researchers are becoming more comfortable with it as a platform for surveys, including tracking research.

So when a company with a long-term telephone tracking history considers transitioning its data collection from telephone-based to Internet-based tracking, how can it be sure it is doing the right thing? Perhaps more importantly, how will it know what to expect from the subsequent survey results? Some might argue that side-by-side comparisons of the two methods, for some time period (six months to a year), are a must. However, we are not convinced that side-by-side comparisons are necessary or even effective.

There is no argument about the value of tracking data. Tracking is a very important tool for monitoring consumers’ awareness of, perceptions of, and interest in the products we produce and hope to sell more of each and every day. When we consider making a major change in the way we collect those data, a very important question has to be addressed: “Should we keep the historical data and attempt to calibrate the two methods as we make the transition, or should we cut the cord?”

I would argue we need to consider cutting the cord. Make a clean break. Here’s why.

Telephone tracking data collected in 2003 is not directly comparable to telephone data collected in 2000 or 1995. Why should we expect Internet data to be comparable to something that is not even comparable to itself? In recent years, researchers’ confidence in the stability of trend data from telephone tracking studies has declined, and there are several reasons for this rising uncertainty.

Because of advances in telecommunications technology over the past decade, telephone data has not remained truly comparable to past data. Here’s what I mean.

  • In today’s society, with the technology available, a large percentage of telephone households in random-digit-dial (RDD) samples select themselves out of survey samples. They do so through the use of telephone company services such as Caller ID, call waiting, and call blocking. Many also screen calls through answering systems; this has been true for more than 10 years. The most recent development, state and federal “Do Not Call” lists, is not designed to affect the marketing research industry. However, these lists are having an impact. If someone has registered on such a list, it means they are not interested in receiving unsolicited telephone calls, and it is very likely that the line between telemarketing and marketing research is too fuzzy for consumers to distinguish or to make an exception for.
  • There has been explosive demand for unique telephone lines. With the addition of data lines (fax and Internet connections) in virtually every office and in many homes and other locations, telephone companies have had to add new area codes rapidly. New area codes are often “overlaid” on existing ones, and when this happens the well-defined geographic boundaries of area codes become extremely blurred. This has a negative impact on our ability to create RDD samples for specific geographic areas. The result is a possible (and potentially unrecognized) disruption or deterioration in the comparability of trend data, based solely on the changing sample.
  • A very recent development in the U.S. telecommunications industry is the growing adoption of mobile phones as primary phones. In other words, people are getting rid of their “land lines.” Since RDD samples are built from databases of seed numbers that include only traditional telephone lines, mobile telephones are excluded from RDD samples. So now we have excluded those “early adopters” from telephone samples as well.
  • Americans are becoming busier all the time and, as a result, many are becoming less and less willing to participate in surveys, especially those seen as interruptions. Over the years this has led to a severe decline in participation rates. When I began my marketing research career in 1989, about 65% of the individuals my company contacted randomly for telephone surveys would agree to participate. That number has declined steadily and has now reached a rate close to 25%. In other words, only one-fourth of the few consumers who answer their telephones when we call will agree to listen to the interviewer long enough to begin answering the screening questions. This means we are interviewing fewer consumers than in the past, which means telephone samples are much less representative of the population than they were five, 10, or 15 years ago.

Another very important issue is the difference between hearing a questionnaire read over the telephone and seeing a questionnaire on your computer screen. This fundamental difference affects survey results. Telephone tracking data is based on responses from consumers who are listening to an interviewer read a questionnaire over telephone lines. In this age of sophisticated technology, “land line” connections are typically very high quality, so the clarity of the transmission is not an issue. What does matter, though, is the unavoidable fact that every human interviewer reads differently: male and female voices, different accents, varying enunciation, mispronunciations, fluctuations in tone, faster and slower readers, and so on. All of these variations affect respondents’ ability to hear and understand questions clearly and consistently. And that is before considering the occasional interviewer who offers an interpretation of what he or she has just read, despite the strict instruction to reread a question until the respondent understands rather than to paraphrase it.

When respondents see and read Internet questionnaires for themselves, they make their own interpretations, without the influence of a third party (the interviewer). In addition, seeing a scale is much more effective than hearing it read over the telephone and jotting it down or trying to remember it. Over the Internet, long lists of attributes can be read and reread as necessary, rather than having to listen carefully, ask a telephone interviewer to repeat them, or simply guess. So one could argue that Internet data (reading/seeing) is better than telephone data (listening/hearing). Many of the differences between telephone and Internet data will be caused by this unavoidable difference between hearing and seeing.

Questionnaire changes also affect comparability. Transitioning a tracking study presents clients with the perfect opportunity to make questionnaire changes: it is a perfectly reasonable time to add new brands, change attributes, add visual stimuli, ask questions that have been on everyone’s mind for months or even years, and so on. However, questionnaire changes mean that new data may no longer be comparable to past data. This situation is virtually unavoidable and is yet another factor limiting our ability to compare trend data across methods.

Finally, I will mention the “attention differential” as an important contributor to differences in data, and an advantage of Internet research over telephone research. Telephone interviewers notoriously call during dinnertime; it is a reputation our industry has to live with. Internet data collection eliminates this issue. With Internet panels, invitations are emailed to willing members who are free to participate at their leisure. Since the Internet is available 24/7, consumers can complete screeners and surveys at any time of day or night. They don’t have to balance the telephone while cooking dinner and holding the baby all at the same time (and don’t forget trying to hear the interviewer clearly). Because respondents complete Internet questionnaires on their own time, they are able to give the interview their full attention, taking as much time as they deem necessary to provide well-thought-out, complete answers. In our survey data sets, we often see lengthy, elaborate responses to open-ended questions, for example, providing reassurance that our panelists do take care to provide comprehensive feedback.

As we consider moving tracking studies from the telephone to the Internet, Decision Analyst feels obligated to share these critical thoughts and observations. Each of these issues has an impact on data comparability from one method to the other. If the need exists for calibration or interpretation of the differences (and it often does), there are a couple of options. First, you could go ahead with a side-by-side comparison of both methods. Or you could conduct a detailed comparison after making a clean break, in which case the final quarter’s telephone data would be compared to the first quarter’s Internet data. Since it is unlikely that any significant trends would develop during that six-month period, we can assume that the differences discovered can be attributed in large part to the methodology, and we will know how to deal with them. This is a low-cost way to accomplish a rough calibration of results from the two methods. Ultimately, you and your team should be the final judge of whether to compare results, and how.
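To make that “clean break” calibration concrete, here is a minimal sketch, written in Python with purely hypothetical numbers, of how the final telephone quarter might be compared to the first Internet quarter for a single tracked measure (aided brand awareness, treated as a simple proportion). The sample sizes, awareness figures, and the additive-adjustment approach are illustrative assumptions, not Decision Analyst’s prescribed procedure.

    # Minimal illustrative sketch (hypothetical numbers): rough calibration between
    # the final telephone quarter and the first Internet quarter of a tracking study,
    # for one measure reported as a proportion (e.g., aided brand awareness).
    from math import sqrt

    def two_proportion_z(p1, n1, p2, n2):
        # Two-sample z-statistic for the difference between two proportions.
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    # Hypothetical quarterly results for one tracked measure
    phone_awareness, phone_n = 0.62, 400   # final telephone quarter
    web_awareness, web_n = 0.55, 400       # first Internet quarter

    gap = phone_awareness - web_awareness
    z = two_proportion_z(phone_awareness, phone_n, web_awareness, web_n)
    print(f"Mode gap: {gap:+.1%} (z = {z:.2f})")

    # If the gap exceeds what sampling error would explain (|z| > ~1.96 at 95%
    # confidence), note it as a method effect when reading the historical
    # telephone trend line against the new Internet data.
    if abs(z) > 1.96:
        print(f"Treat roughly {gap:+.1%} as a method adjustment for this measure.")
    else:
        print("Difference is within sampling error; no adjustment indicated.")

In practice, the same comparison would be repeated measure by measure, and any gaps larger than sampling error would be documented as method effects before the telephone trend line is retired.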

* Source: UCLA Internet Report—“Surveying The Digital Future” (April – June 2002)

About the Author

Felicia Rogers (frogers@decisionanalyst.com) is an Executive Vice President at Dallas-Fort Worth-based Decision Analyst. She may be reached at 1-800-262-5974 or 1-817-640-6166.


Copyright © 2004 by Decision Analyst, Inc.
This article may not be copied, published, or used in any way without written permission of Decision Analyst.