Research on Research: Scale Order Impact

Context
Much of good research practice is grounded in statistical and behavioral theory and in known human biases, which over the years have given us many rules of thumb about the BEST way to do something. However, one of the interesting things that DOESN’T happen as often as it should is “research on research” – in many areas there is no governing body or textbook that tells us to “do it this way,” so “best practices” are often learned through experience.

Purpose
An MMR client partnered with us to determine the potential impact of “scale order” differences being advised by a new vendor selected for the relaunch of a long-running customer satisfaction program. Some “limitations” of this vendor’s processes weren’t discovered until the project was well underway. This “research on the research” would help the client assess the potential impact that scale reversal might have on responses, so they could decide how hard to “push back” on the vendor.

Solution/Approach:
As was discovered early in the migration to online surveys, scale display order can and does affect responses for some measures, particularly for longer scales. We know from experience that scales that begin with a positive end point generate higher top-box percentages… but by how much in this case?

To assist our client, we designed and executed a large two-cell study that included several different types of scale questions. Survey participants were existing customers of the client, randomly assigned to one of two cells, each of which saw the scale questions in a different order. Samples were large, well over 1,000 per cell.
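For readers who want to run this kind of cell comparison themselves, the sketch below shows one common way to test whether a top-box difference between two randomly assigned cells is statistically meaningful (a two-proportion z-test). It is a minimal illustration with made-up counts; the function name and numbers are hypothetical and are not the client’s data or the vendor’s platform.

```python
import math

def topbox_ztest(topbox_a, n_a, topbox_b, n_b):
    """Two-proportion z-test comparing top-box rates between two survey cells."""
    p_a, p_b = topbox_a / n_a, topbox_b / n_b
    p_pool = (topbox_a + topbox_b) / (n_a + n_b)              # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF (erfc form avoids a scipy dependency)
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return p_a - p_b, z, p_value

# Hypothetical counts for illustration only -- not the study's actual figures
diff, z, p = topbox_ztest(topbox_a=620, n_a=1050, topbox_b=560, n_b=1040)
print(f"Top-box lift: {diff:+.1%}, z = {z:.2f}, p = {p:.4f}")
```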

 

Shopping Intent Scales (5-point): As shown below, “Definitely will” responses increased for an intent scale such as “Likelihood to Shop Again” when the scale was ordered from positive (left) to negative (right).

 

[Figure: ScaleOrder1 – shopping intent scale results]

 

Satisfaction Scales (5-point): Scale order reversal also impacted satisfaction questions. “Extremely satisfied” response rates increased for the satisfaction grid attributes shown below. In addition, for attribute grids, the scale reversal impact diminished for attributes shown later in the list, suggesting both an order bias and a learning process by respondents (see the sketch after the figure below).

[Figure: ScaleOrder2 – satisfaction grid results]
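To make the “diminishing impact down the grid” pattern concrete, the sketch below computes the per-attribute top-box gap between the two cells. The attribute labels and rates are hypothetical, chosen only to illustrate how such a pattern would look; they are not the study’s results.

```python
# Hypothetical per-attribute "Extremely satisfied" (top-box) rates, illustration only.
# Cell A saw the positive-first scale; Cell B saw the negative-first (reversed) scale.
attributes = ["Attribute 1", "Attribute 2", "Attribute 3", "Attribute 4", "Attribute 5"]
topbox_positive_first = [0.58, 0.55, 0.52, 0.50, 0.49]
topbox_negative_first = [0.51, 0.50, 0.49, 0.49, 0.48]

deltas = [a - b for a, b in zip(topbox_positive_first, topbox_negative_first)]
for pos, (attr, d) in enumerate(zip(attributes, deltas), start=1):
    print(f"{pos}. {attr}: top-box delta {d:+.1%}")

# A gap that shrinks toward zero for later rows is the pattern described above:
# an order bias that fades as respondents learn the scale while working down the grid.
print("Gap narrows down the grid:", deltas[0] > deltas[-1])
```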

 

Comparative Scales: There was no significant impact on responses for comparative scales, such as the “Shopping Experience Compared to Other Stores” scale shown below.

[Figure: ScaleOrder3 – comparative scale results]

 

Actionable Results:

Practical – Scale order matters

This study confirmed what is already known about survey bias: scale order has a modest but measurable impact on results. The differences found above were large enough to affect both trends and interpretation of results, particularly on attribute batteries, where order bias compounded the problem.

But are the differences enough to raise a red flag and halt the project? If trending were critical, yes. If not, the answer is less clear… can the client live with the results? Maybe so. A known positive bias is certainly not desirable, but what is it worth in time, cost, and effort to fix? It’s a difficult question; however, the client is now better prepared to discuss trade-offs and interpret results, both internally and with the vendor.

Strategic – It’s important to look beneath the hood

Some vendors who have gained great traction and name recognition by automating certain study types have succeeded on the strength of a system that did not originate in the marketing research world. These systems can lack flexibility and sometimes omit standard research-industry best practices. It’s well known that because VoC studies often rely on self-selected respondents, results can trend positive; designing a tool that amplifies this trend makes the tool and its results less useful. Users of “packaged” research tools should ensure that research-industry best practices are integrated into the tool, or that its results account for possible response bias.

 

N.B.: Our client noted their surprise that “a major, leading vendor was unable to apply standard, best-practice research methods, even in simple areas like list randomization (a topic which did not come up in their sales pitch, focused more on the great reporting tool).”

If you would like to read more, here’s an article with another fairly objective analysis that corroborates our findings: http://www.measuringu.com/blog/left-side-bias.php