Journal of Retailing 88 (4, 2012) 542–555
http://dx.doi.org/10.1016/j.jretai.2012.08.001

Common Method Bias in Marketing: Causes, Mechanisms, and Procedural Remedies

Scott B. MacKenzie (Department of Marketing, Kelley School of Business, Indiana University, Bloomington, IN 47405, United States; corresponding author, [email protected])
Philip M. Podsakoff (Department of Management, Kelley School of Business, Indiana University, Bloomington, IN 47405, United States; [email protected])

Abstract

There is a great deal of evidence that method bias influences item validities, item reliabilities, and the covariation between latent constructs. In this paper, we identify a series of factors that may cause method bias by undermining the capabilities of the respondent, making the task of responding accurately more difficult, decreasing the motivation to respond accurately, and making it easier for respondents to satisfice. In addition, we discuss the psychological mechanisms through which these factors produce their biasing effects and propose several procedural remedies that counterbalance or offset each of these specific effects. We hope that this discussion will help researchers anticipate when method bias is likely to be a problem and provide ideas about how to avoid it through the careful design of a study.

© 2012 New York University. Published by Elsevier Inc. All rights reserved.

Keywords: “Research Dialog” on common method bias; Research methods in retailing

The dangers of the effects of method bias have long been recognized in the research literature (e.g., Arndt and Crane 1975; Bagozzi 1980, 1984; Campbell and Fiske 1959; Cote and Buckley 1987, 1988; Fiske 1982; Greenleaf 1992; McGuire 1969). For example, over 50 years ago in the psychology literature, Campbell and Fiske (1959) called attention to the fact that any measuring instrument inevitably has: (a) systematic trait/construct variance due to features that are intended to represent the trait/construct of interest, (b) systematic error variance due to characteristics of the specific method being employed which may be common to measures of other traits/constructs, and (c) random error variance. In this research, they (Fiske 1982, p. 82) defined the term “method” broadly to encompass several key aspects of the measurement process, including: “the content of the items, the response format, the general instructions and other features of the test-task as a whole, the characteristics of the examiner, other features of the total setting, and the reason why the subject is taking the test.”

Twenty-five years later in the marketing literature, Bagozzi (1984, p. 24) further warned that it is important to understand the sources of systematic measurement error because they can “induce regular or irregular changes in the means, variances, and/or covariances of observations. This especially becomes a problem when a subset of measurements is so affected. By suppressing or enhancing variation in selected observations in an orderly way, sources of systematic error may influence key phenomena of interest over and above the true causes and random error.” A few years later, Cote and Buckley (1987, p. 317) conducted a meta-analysis of 70 MTMM studies to obtain an estimate of the amount of systematic measurement error present in different types of measures and concluded that, although trait/construct variance often is assumed to be large in relation to systematic and random measurement error, “Our findings indicate the general assumption of minimal measurement error is highly questionable. Measurement error, on average, accounts for most of the variance in a measure. This observation raises questions about the practice of applying statistical techniques based on the assumption that trait variance is large in relation to measurement error variance.”

There are two detrimental effects produced by systematic method variance. First, systematic method variance “biases” estimates of construct validity and reliability (e.g., Bagozzi 1984; Baumgartner and Steenkamp 2001; Buckley, Cote, and Comstock 1990; Cote and Buckley 1987; Doty and Glick 1998; Lance et al. 2010; MacKenzie, Podsakoff, and Podsakoff 2011; Podsakoff et al. 2003; Podsakoff, MacKenzie, and Podsakoff 2012; Williams, Cote, and Buckley 1989). Because a latent


Table 1
Empirical evidence of the effects of method bias.

Nature of method bias effect | Evidence | Conclusion
Effects of method bias on item reliability or validity. | CFA of MTMM matrices (Buckley, Cote, and Comstock 1990; Cote and Buckley 1987; Doty and Glick 1998; Lance et al. 2010; Williams, Cote, and Buckley 1989). | Evidence suggests that between 18 percent and 32 percent of the variance in items is attributable to method factors.
Effects of method bias on the covariation between constructs. | Estimates based on MTMM meta-analytic studies (Buckley, Cote, and Comstock 1990; Lance et al. 2009). | Meta-analytic evidence suggests that the true correlation between traits in these studies was inflated between 38 percent and 92 percent by method bias.
Effects of obtaining measures of predictor and criterion variables from the same versus different sources. | Summarized by Podsakoff, MacKenzie, and Podsakoff (2012). | Summary of evidence from meta-analytic studies suggests that correlations between many widely studied constructs are inflated from 133 percent to 304 percent when the predictor and criterion variables are obtained from the same source as opposed to different sources.
Effects of response styles. | Baumgartner and Steenkamp (2001). | Evidence suggests that 27 percent of the variance in the magnitude of correlations between fourteen consumer behavior constructs was attributable to five response styles.
Effects of proximity. | Weijters, Geuens, and Schillewaert (2009). | The correlation between items measuring unrelated constructs increased by 225 percent when they were positioned next to each other compared to when they were positioned six items apart.
Effects of item wording. | Harris and Bladen (1994). | Evidence suggests that the correlation between constructs was 0.21 when item wording bias was controlled, but 0.50 when it was not controlled (an increase of 238 percent).

Source: Adapted from evidence summarized in Podsakoff, MacKenzie, and Podsakoff (2012).

construct captures systematic variance among its measures, if systematic method variance is not controlled, this variance will be lumped together with systematic trait variance in the construct. This can lead to: (a) incorrect conclusions about the adequacy of a scale’s reliability and convergent validity (Baumgartner and Steenkamp 2001; Lance in Brannick et al. 2010; Williams, Hartman, and Cavazotte 2010); (b) underestimates of corrected correlations in meta-analyses because the reliability estimates will be artificially inflated due to method variance (Le, Schmidt, and Putka 2009); and (c) biased estimates of the effects of other correlated predictors on the criterion variable (Bollen 1989).

The second detrimental effect of systematic method variance is that it can “bias” parameter estimates of the relationship between two different constructs. It has been widely recognized that method bias can inflate, deflate, or have no effect on estimates of the relationship between two constructs (e.g., Bagozzi 1984; Baumgartner and Steenkamp 2001; Cote and Buckley 1988; Podsakoff et al. 2003; Podsakoff, MacKenzie, and Podsakoff 2012; Siemsen, Roth, and Oliveira 2010). This is a serious problem because it can: (a) bias hypothesis tests and cause Type I or Type II errors, (b) lead to incorrect conclusions about the proportion of variance accounted for in a criterion variable, and (c) alter conclusions about the nomological and/or discriminant validity of a scale.

Recently, Podsakoff, MacKenzie, and Podsakoff (2012) reviewed the empirical evidence of the effects that method bias has on item reliability and validity as well as on the covariation between constructs. This evidence is summarized in Table 1. As indicated in this table, an average of 18–32 percent of the variance in a typical measure is attributable to method variance. In addition, estimates of the covariation between constructs are

typically inflated 27–304 percent by method variance depending upon the nature of the method factor examined. They concluded that (2012, p. 10.27), “the evidence shows that method biases can significantly influence item validities and reliabilities as well as the covariation between latent constructs. This suggests that researchers must be knowledgeable about the ways to control method biases that might be present in their studies.”

Several researchers (Bagozzi 1984; Baumgartner and Steenkamp 2001; Podsakoff et al. 2003; Podsakoff, MacKenzie, and Podsakoff 2012; Williams, Hartman, and Cavazotte 2010) have noted that there are two fundamental ways to control for method biases. One way is to statistically control for the effects of method biases after the data have been gathered; the other is to minimize their effects through the careful design of the study’s procedures. Far more has been written about the post hoc ways to statistically control for method bias than about the procedural controls. Based on their recent review of this extensive literature, Podsakoff, MacKenzie, and Podsakoff (2012) recommended that researchers try to use what they called the directly measured latent factor technique (Bagozzi 1984; Podsakoff et al. 2003; Williams, Gavin, and Williams 1996) or the measured response style technique (Baumgartner and Steenkamp 2001; Weijters, Schillewaert, and Geuens 2008). The advantage of these techniques is that they identify the nature of the method bias being statistically controlled. If the specific source of the method bias is unknown or cannot be measured, then they suggested the use of the CFA marker technique (Williams, Hartman, and Cavazotte 2010) or the common method factor technique (Bagozzi 1984; Podsakoff et al. 2003). Although neither of these techniques is as good as the previously recommended ones, in some situations they may be the best a researcher can do, and they are better than relying on the weaker, conceptually

flawed statistical control procedures (i.e., Lindell and Whitney’s correlation-based marker variable technique or Harman’s one-factor test) that have been used in some recent retailing research (e.g., see Arnold and Reynolds 2009; Babakus, Ugur, and Ashill 2009; Grace and Weaven 2011; Spralls, Hunt, and Wilcox 2011).
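The logic behind the stronger, measurement-based controls can be illustrated with a deliberately simplified sketch. The techniques the authors recommend are implemented in a latent-variable (CFA/SEM) framework; the Python example below is only a regression-based analogue with invented variable names and loadings, in which a directly measured method variable (say, a social desirability score) is residualized out of both the predictor and the criterion before their correlation is re-estimated.

```python
# Simplified analogue (not the latent-variable implementation the paper
# recommends): partial a measured method variable out of both the predictor
# and the criterion, then compare raw and method-adjusted correlations.
import numpy as np

rng = np.random.default_rng(1)
n = 10000

method = rng.normal(size=n)             # e.g., a measured social desirability score
x = 0.6 * method + rng.normal(size=n)   # predictor contaminated by the method variable
y = 0.6 * method + rng.normal(size=n)   # criterion contaminated the same way (true r = 0)

def partial_corr(a, b, control):
    """Correlate a and b after regressing the control variable out of both."""
    def residual(v):
        beta = np.cov(v, control)[0, 1] / np.var(control, ddof=1)
        return v - beta * control
    return np.corrcoef(residual(a), residual(b))[0, 1]

r_raw = np.corrcoef(x, y)[0, 1]          # inflated by shared method variance
r_adjusted = partial_corr(x, y, method)  # close to the true correlation of zero
print(f"raw r = {r_raw:.2f}, method-adjusted r = {r_adjusted:.2f}")
```

The latent-variable techniques described above partition method variance at the level of the measurement model rather than through observed-score regression, which is one reason they are preferred when the method source can be measured or modeled.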

In contrast, the literature provides less guidance about what kinds of procedural controls should be implemented in a given study to control for method bias. In part this may be because, although many of the sources of method bias have been identified, the mechanisms through which they produce their effects are not well understood. In view of this, the current paper contributes to the literature by: (a) examining the conditions under which method bias is likely to be a particularly serious problem, (b) identifying the potential mechanisms responsible for it, and (c) suggesting several procedural techniques that can be used to diminish it. Hopefully, this information will provide researchers with better tools to avoid or minimize the detrimental effects of method bias.

When is method bias likely to be a problem?

Several researchers (Krosnick 1991, 1999; Sudman, Bradburn, and Schwarz 1996; Tourangeau, Rips, and Rasinski 2000) have speculated that, because the cognitive effort required to generate an optimal answer to a long series of questions on a wide range of topics is often substantial, respondents cope with these demands by seeking easier ways to generate their answers. More specifically, when the difficulty of the task of generating an optimal answer is high, and a respondent’s ability, natural predisposition, or motivation to expend the required amount of cognitive effort is low, Krosnick (1991, 1999) has argued that respondents may “satisfice” by being less thorough in question comprehension, memory retrieval, judgment, and response selection. In other words, they may expend less effort: thinking about the meaning of a question; searching their memories for information to answer the question; integrating the information that has been retrieved to form a judgment; and matching their judgments to the response options presented in the question.

Our fundamental hypothesis is that responses will be more strongly influenced by method bias when respondents cannot provide accurate responses and/or when they are unwilling to try to provide accurate responses. In other words, we expect that when respondents are satisficing rather than optimizing, they will be more likely to respond stylistically, and their responses will be more susceptible to method bias. This is consistent with Krosnick (1999, p. 548), who observes that when respondents are unwilling or unable to expend the cognitive effort required to accurately answer a question, they

“. . . can use a number of possible decision heuristics to arrive at a satisfactory answer without expending substantial effort. A person might select the first reasonable response he or she encounters in a list rather than carefully processing all possible alternatives. Respondents could be inclined to accept assertions made in the questions regardless of content, rather than performing the cognitive work required to evaluate those assertions. Respondents might offer safe answers, such as the neutral point of a rating scale, endorsement of the status quo, or saying don’t know so as to avoid expending the effort necessary to consider and possibly take more risky stands. In the extreme, respondents could randomly select a response from those offered by a closed-ended question.”

Therefore, the key to understanding when method bias will be a problem is to identify when respondents are likely to be satisficing rather than optimizing. Generally speaking, as shown in Fig. 1, respondents will optimize when they are able to provide accurate answers and they are motivated to provide accurate answers. Both are necessary. If respondents are able to provide accurate answers, but unwilling to try to do so, then satisficing will result. Similarly, if respondents are motivated to provide accurate answers, but are unable to do so, once again satisficing may be the result. The ability of respondents to answer accurately is itself jointly determined by the match between the respondent’s capabilities and the difficulty of the task of answering the question. If a respondent lacks the experience, intelligence, and so forth needed to answer accurately, or the task is too difficult, then satisficing will be the likely result. This suggests that method bias is more likely to be a problem when factors are present that: (a) undermine the capabilities of the respondent; (b) make the task of responding accurately more difficult; (c) decrease the motivation to respond accurately; and (d) make it easier for respondents to satisfice (i.e., decrease the difficulty of the task of satisficing).

Krosnick (1991, 1999) reviews a considerable amount of empirical evidence that is consistent with these general propositions. For example, there is a great deal of evidence that acquiescence bias is more common among respondents with limited cognitive capabilities (Elliott 1961; Jackson 1959; Schuman and Presser 1981), for items that are more difficult to answer (Trott and Jackson 1967), and for items that appear toward the end of a long questionnaire when respondents are presumably fatigued (Clancy and Wachsler 1971). Similarly, there is evidence that a nondifferentiated response style and random measurement error are more common when respondents have limited cognitive capabilities and toward the end of a long questionnaire when respondents are fatigued (Herzog and Bachman 1981; Kraut, Wolfson, and Rothenberg 1975; Krosnick and Alwin 1988). The same is true for response order effects (i.e., the tendency to select the first or the last response alternative). More specifically, several researchers have found that response order effects are stronger for: less educated respondents (Krosnick and Alwin 1987; McClendon 1986, 1991); respondents with limited cognitive skills (Krosnick 1991; Krosnick and Alwin 1987); and questions that are more difficult and when respondents are fatigued (Mathews 1927).

Finally, empirical evidence also demonstrates that respondents are less likely to say “don’t know” or “no opinion” when motivation is high due to: (a) interest in the topic (Francis and Busch 1975; Rapoport 1982); (b) the perceived ability to process and understand information relevant to the topic (Krosnick and Milburn 1990); and (c) inducements to optimize (McDaniel and Rao 1980; Wotruba 1966). In contrast, “no opinion” or “don’t know” responses are more common for questions at the end of a long questionnaire when motivation is low due to fatigue (Dickinson and Kirzner 1985; Ferber 1966), and when intrinsic motivation to optimize has been undermined (Hansen 1980).

Fig. 1. When is method bias likely to be a problem?

Therefore, in the sections below we will identify some of the factors that may increase method bias by increasing the likelihood of satisficing, whether by undermining the capabilities of respondents, decreasing their motivation to respond accurately, making the task of responding accurately more difficult, or making the task of satisficing easier. In addition, we will attempt to link these mechanisms to the stages in the question response process (Krosnick 1999; Tourangeau, Rips, and Rasinski 2000). Finally, we will suggest procedural changes researchers can make to offset these biases.

Remedies for factors that decrease the ability to respond accurately

To answer a question accurately, the difficulty of the task must not exceed the capabilities of the respondent. Therefore, factors that limit a respondent’s capabilities and/or make the task of answering accurately more difficult are potential sources of method bias. There are a number of factors that may cause biased responding by decreasing the ability of the respondent to answer accurately. These factors relate to the respondent’s ability, experience, and/or characteristics of the items that s/he is asked to answer (Table 2).

Lack of ability

Several researchers (Krosnick 1991, 1999; Krosnick and Alwin 1987; Schuman and Presser 1981) have noted that low ability (e.g., as reflected in a lack of verbal skills, education, or cognitive sophistication) increases the likelihood of satisficing because respondents who lack these abilities have more difficulty comprehending the meaning of the questions, retrieving relevant information from memory, and making judgments. To mitigate this problem, researchers need to do a better job of aligning the difficulty of the task of answering the questions with the capabilities of the respondents. For example, if the concern is about the cognitive sophistication of respondents, researchers need to ensure that the questions are written at a level that the respondents can comprehend. This is something that should be assessed through pretesting. Alternatively, if the concern is about the literacy of respondents, questions can be recorded and presented to respondents in audio form to augment the written form (Bowling 2005). This can be done using audio computer-assisted self-administered interviewing (ACASI) techniques.

Lack of experience thinking about the topic

Several researchers have suggested that a lack of experience thinking about a given topic may impair a respondent’s ability to answer relevant questions about that topic (Fiske and Kinder 1981; Krosnick 1991). This can produce several detrimental effects. Comprehension may be impaired because a lack of experience reduces the respondent’s ability to link key terms to relevant concepts. Information retrieval may also be more difficult because there is less information to retrieve or there has been less practice retrieving it. Finally, a lack of experience thinking about a topic may make it more difficult to draw inferences needed to fill in missing information, and to integrate material that is retrieved. One obvious way to remedy this is to make sure that you do not ask respondents to “tell more than they can know” (Ericsson and Simon 1980; Nisbett and Wilson 1977). This can be avoided by exercising caution when asking respondents about the motives for their behavior, the effects of situational factors on their behavior, or other things pertaining to cognitive processes that they are unlikely to have attended to or stored in short-term memory. Perhaps more importantly, this suggests that it is crucial to select respondents who have the necessary experience thinking about the issues of interest in the questionnaire. In instances where several diverse topics are of interest, this may require asking one set of respondents (e.g., employees) about some topics, and another set of respondents (e.g., managers) about other topics.

Complex or abstract questions

Complex, abstract questions are more difficult for respondents to answer (Doty and Glick 1998) and increase the likelihood of satisficing (Krosnick 1991) because they: (a) decrease the ability of respondents to comprehend the meaning of the questions, (b) make it more difficult for respondents to know what information to retrieve from memory to answer the question, and (c) make judgments more difficult because they make it harder for respondents to assess the completeness of what has been recalled and to identify and fill in gaps in what is recalled. Obvious ways to diminish this problem are to: avoid referring to vague concepts without providing clear examples; simplify complex or compound questions; and use language, vocabulary, and syntax that match the reading capabilities of the respondents.

Table 2
Factors that increase method bias by decreasing the ability to respond accurately.

Conditions that cause method bias | Mechanism | Potential remedies
Lack of verbal ability, education, or cognitive sophistication (Krosnick 1999; Krosnick and Alwin 1987; Schuman and Presser 1981). | May increase the difficulty of the task of comprehending the meaning of the questions, retrieving information, and making judgments (Krosnick 1991). | Align the difficulty of the task with the capabilities of the respondents by: (a) pretesting questions to ensure they are written at a level the respondents can comprehend; and/or (b) presenting the questions in audio form to augment the written form (e.g., audio computer-assisted self-administered interviewing (ACASI)).
Lack of experience thinking about the topic (e.g., Fiske and Kinder 1981; Schwarz, Hippler, and Noelle-Neumann 1992). | May impair a respondent’s ability to answer because it: (a) hinders comprehension by reducing the respondent’s ability to link key terms to relevant concepts, (b) makes information retrieval more difficult (less to retrieve, less practice retrieving), and (c) makes it harder to draw inferences needed to fill in gaps and to integrate material that is retrieved. | Select respondents who have the necessary experience thinking about the issues of interest. Exercise caution when asking respondents about the motives for their behavior, the effects of situational factors on their behavior, or other things pertaining to cognitive processes that are unlikely to have been attended to or stored in short-term memory.
Complex or abstract questions (Doty and Glick 1998; Krosnick 1991). | May increase the difficulty of comprehending the meaning of the questions, retrieving relevant information, and making judgments. | Avoid referring to vague concepts without providing clear examples; simplify complex or compound questions; and use language, vocabulary, and syntax that match the reading capabilities of the respondents.
Item ambiguity (Krosnick 1991; Podsakoff et al. 2003; Tourangeau, Rips, and Rasinski 2000). | May increase the difficulty of comprehending the questions, retrieving relevant information, and making judgments. Can also increase the sensitivity of answers to context effects. | Use clear and concise language; avoid complicated syntax; define ambiguous or unfamiliar terms; and label all response options rather than just the end points.
Double-barreled questions (Bradburn, Sudman, and Wansink 2004; Krosnick 1991; Sudman and Bradburn 1982). | Make the retrieval task more demanding (Krosnick 1991) and introduce ambiguities into the response selection task by making it unclear whether respondents should: (a) answer only one part of the question, or (b) average their responses to both parts of the question. | Avoid double-barreled questions.
Questions that rely on retrospective recall (Krosnick 1991). | May increase the difficulty of the retrieval process and the likelihood of satisficing because questions that require retrospective recall are more difficult to answer due to the relative remoteness of the relevant information in memory. | Refocus the questions to ask about current states because this reduces the effort required for retrieval. Take steps to increase the respondent’s motivation to expend the effort required to retrieve the information necessary to answer the question accurately by explaining why the questions are important and how accurate responses will have useful consequences for the respondent and/or the organization. Make it easier for respondents to recall the information necessary to answer the question accurately.
Auditory-only presentation of item (telephone) versus written presentation of item (print or web). | Increases the memory load because respondents must keep the meaning of the question and all response options in short-term memory before responding. | Simplify questions and/or response options. Present long, complex questions with many response options in written form or with visual aids.

Item ambiguity

Item ambiguity decreases the ability of respondents to generate an accurate response (MacKenzie, Podsakoff, and Podsakoff 2011; Podsakoff et al. 2003), and thus increases the likelihood that respondents will rely on stylistic response tendencies to generate a merely satisfactory answer (Krosnick 1991) and increases the sensitivity of their answers to measurement context effects (see Tourangeau, Rips, and Rasinski 2000). This can happen because item ambiguity impairs the respondent’s ability to comprehend the meaning of the question, which makes it difficult for the respondent to generate an effective retrieval strategy, assess the completeness of what has been recalled, and fill in gaps in what is recalled. This problem can be addressed by using clear and concise language; avoiding complicated syntax; defining ambiguous or unfamiliar terms; and labeling all response options rather than just the end points.

Double-barreled items

Double-barreled questions are ones in which opinions about two subjects are joined together so that respondents must answer two questions with one answer (Bradburn, Sudman, and Wansink 2004; Krosnick 1991; MacKenzie, Podsakoff, and Podsakoff 2011; Podsakoff et al. 2003; Sudman and Bradburn 1982). Krosnick (1991, p. 221) notes that this type of question can make the retrieval task more demanding: "A question that asks respondents how much they like spinach requires only that cognitions about spinach be retrieved. But a question that asks whether an individual prefers spinach or turnips requires that information about both vegetables be recalled. The more difficult the required retrieval task is, the more likely satisficing is to occur." In addition, because double-barreled questions decrease a respondent's ability to generate an accurate response, they may increase the likelihood that respondents will answer only one part of the question, or average their responses to both parts of the question. Finally, this type of question can also make it difficult to accurately map judgments onto response categories. The simple solution is to avoid double-barreled questions by splitting the question up into two separate questions.

Questions that rely on retrospective recall

Questions that require the retrieval of information about current states are easier to answer than questions that require retrospective recall because of the relative remoteness of the relevant information in memory (Krosnick 1991). The more difficult the retrieval process, the less complete the information retrieval, the less able the respondent is to fill in missing details and gaps in what is recalled, and the greater the motivation to satisfice. To compensate for this, one obvious strategy would be to refocus the questions to ask about current states because this reduces the effort required for retrieval. However, this may not always be possible. In these instances, another strategy would be to take steps to increase the respondent's motivation to expend the effort required to retrieve the information necessary to answer the question accurately (Cannell, Miller, and Oksenberg 1981), perhaps by explaining why the questions are important and how their accurate responses will have useful consequences for them and/or their organization. Yet another strategy would be to make it easier for respondents to recall the information necessary to answer the question accurately. Depending upon the nature of the question of interest, Schröder (2011, p. 15) notes three things that can be done to make this information easier for respondents to recall: (a) to facilitate top-down retrieval, specify large topics first and then move within these topics to the more specific events; (b) to facilitate temporal retrieval, start with the first (or the last) event and then move forward (or backward) chronologically; and (c) to facilitate parallel retrieval, ask about contemporaneous events all at the same time.

Auditory versus written presentation of items

The most common data collection methods used in marketing studies are: (a) auditory only presentation of items via telephone or face-to-face interviews or (b) written presentation of items by means of traditional pencil and paper self-completion questionnaires or computer-assisted self-completion questionnaires. These methods confound two key factors that potentially affect method bias. One is the modality through which the items are presented – auditory mode or visual mode. The other is who administers the questions – an interviewer or self-administration by the respondent. The former primarily affects the difficulty of the task of answering the question accurately, whereas the latter primarily affects a respondent's motivation to answer accurately. Consequently, we will discuss the issues related to the modality through which the items are presented in this section and the issues related to the presence of an interviewer in the next section.

The modality through which the items are presented (auditory or visual) can potentially influence method bias because it can affect the cognitive demands placed on the respondent. Compared to visual presentation, auditory only presentation of the items may increase the difficulty of the task of responding accurately because respondents must keep the meaning of the question and all response options in short-term memory before answering. In addition, an auditory only presentation does not permit the respondent to control the rate of information exchange, and no visual cues are present to reinforce the meaning of the items or the response options. Indeed, Tourangeau, Rips, and Rasinski (2000, pp. 292–293) have noted that, "Auditory presentation (without simultaneous visual display) of long or complicated questions may overtax the respondents' listening ability, reducing their comprehension. Moreover, when the question has numerous response options, it may be difficult for respondents to keep them all in mind . . . [and] When the questions are only read aloud, respondents have little control over the pace at which questions and response options are presented and may begin by considering the options that come at the end of the list." One way to diminish this problem would be to present the items in visual form in addition to auditory form (e.g., computer assisted self-completion questionnaire with audio), or present long, complex questions with many response options with visual aids. Beyond this, it would help to simplify the questions and/or response options to the extent possible.

Remedies for factors that decrease the motivation to respond accurately

To answer accurately, it is essential for respondents to have not only the ability to answer questions correctly, but also the motivation to do so. Consequently, factors that diminish the motivation of respondents are potential sources of method bias. Broadly speaking, many of these factors identified in previous research are dispositional characteristics of the respondent, self-referential characteristics, or characteristics of the measurement context (Table 3).

Low personal relevance of the issue

Both the Elaboration Likelihood Model (Petty and Cacioppo 1986) and the Heuristic Systematic Model (Chaiken, Liberman, and Eagly 1989) imply that when an issue is perceived to have little personal relevance to respondents, it may decrease their motivation to exert cognitive effort to answer the question and increase the desire to satisfice. Consequently, this may decrease their willingness to assess the completeness and accuracy of information retrieved, fill in gaps in what is recalled, and integrate the information retrieved. Similarly, Krosnick (1999) has noted that questions that are perceived to be unimportant or that are not expected to have useful consequences may undermine motivation and result in poorer comprehension, less thorough retrieval, less careful judgment and mapping of judgments onto response categories, and can lead to responding in a stylistic or nondifferentiated manner. To compensate for this, an attempt could be made to increase motivation by: (a) explaining to respondents why the questions are important

Table 3
Factors that increase method bias by decreasing the motivation to respond accurately.

Conditions that cause method bias Mechanism Potential remedies

Low personal relevance of the issue (Chaiken, Liberman, and Eagly 1989; Krosnick 1999; Petty and Cacioppo 1986).

May decrease a respondent's motivation to exert cognitive effort and result in: poorer comprehension; less thorough retrieval; and less careful judgment and mapping of judgments onto response categories.

Explain to respondents why the questions are important and how their accurate responses will have useful consequences for them and/or their organization; and/or promise feedback to respondents to motivate them to respond more accurately so that they can gain greater self-understanding, enhance self-efficacy, and improve performance.

Low self-efficacy to provide a correct answer (Chaiken, Liberman, and Eagly 1989).

May decrease motivation to exert cognitive effort which decreases a person's willingness to: assess the completeness and accuracy of information retrieved, fill in gaps in what is recalled, and trust his/her own inferences based on partial retrieval.

Emphasizing to respondents that it is their personal opinions that are important, and only their personal experience or knowledge is required to answer the questions.

Low need for cognition (Cacioppo and Petty 1982).

May decrease motivation to exert cognitive effort and thereby diminish: (a) the thoroughness of information retrieval and integration processes, and (b) the filling in of gaps in what is recalled.

Enhance motivation to exert cognitive effort by emphasizing the importance of the issues; reminding respondents of how research can benefit them or help the organization; or increasing personal relevance of the task.

Low need for self-expression, self-disclosure, or emotional catharsis (Krosnick 1999).

May decrease motivation to exert cognitive effort and thereby decrease (a) the thoroughness of information retrieval and (b) the filling in of gaps in what is recalled. As a result, these factors may cause people to respond to items carelessly, randomly, or nonpurposefully.

Enhance the motivation for self-expression by explaining in the cover story or instructions that "we value your opinion," "we need your feedback," or that we want respondents to "tell us what you think," and so forth. Similarly, enhance willingness to self-disclose by emphasizing the personal benefits of the research to them (e.g., improved performance and increased self-awareness) in the instructions.

Low feelings of altruism (Krosnick 1999; Orne 1962; Viswanathan 2005).

May decrease intentions to exert cognitive effort on behalf of the researcher which can decrease the thoroughness of information retrieval and the filling in of gaps in what is recalled.

Explain how much the respondent's help is needed, indicating that others are depending upon the accuracy of the responses; suggest that no one else can provide the needed information (or you are one of the few that can); and/or remind them how research can improve the quality of life for others or help the organization.

Agreeableness (Costa and McCrae 1992; Knowles and Condon 1999).

Increases the tendency to uncritically endorse or acquiesce to statements, search for cues suggesting how to respond, and edit responses for acceptability. According to the dual process theory, acquiescence results from a premature truncation of the reconsideration stage of comprehension.

Through instructions, stress the fact that the best way to help the researcher is to answer the questions as accurately as possible. Enhance motivation by emphasizing the importance of the issues; reminding respondents of how research can benefit them or help the organization; or increasing personal relevance of the task. In addition, measure acquiescence response style and control for it.

Impulsiveness (Couch and Keniston 1960; Messick 1991).

May: (a) impair comprehension by decreasing attention to questions and instructions; (b) diminish the tendency to assess the completeness and accuracy of information retrieved, fill in gaps in what is recalled, and integrate the information retrieved; and (c) result in carelessness in mapping judgments onto response categories.

Stress the importance of conscientiousness and accuracy, and encourage respondents to carefully weigh the alternatives before responding.

Dogmatism, rigidity, or intolerance of ambiguity (Baumgartner and Steenkamp 2001; Hamilton 1968).

Dogmatism (or rigidity) may heighten feelings of certainty and thus increase willingness to: make an estimate based on partial retrieval; and/or draw inferences based on accessibility or to fill in gaps in what is recalled. Dogmatism (or intolerance for ambiguity) can cause people to view things as either black or white, thus increasing the likelihood that they will map judgments onto extreme response categories.

Stress the importance of conscientiousness and accuracy, and encourage respondents to carefully weigh the alternatives before responding. Measure extreme response style and control for it.

Implicit theories (Lord et al. 1978; Podsakoff et al. 2003; Staw 1975).

May motivate respondents to edit their responses in a manner that is consistent with their theory.

Introduce a temporal, psychological, or spatial separation; and/or obtain the information about the predictor and criterion variables from separate sources.


Table 3 (Continued)

Conditions that cause method bias Mechanism Potential remedies

Repetitiveness of the items (Petty and Cacioppo 1986).

May decrease motivation to maintain the cognitive effort required to provide optimal answers and increase the tendency to respond in a nondifferentiated manner or stylistically.

Increase motivation by minimizing the repetitiveness of the items, making the questions seem less repetitive by reversing some items (i.e., polar opposites not negations), or changing the format.

Lengthy scales (Krosnick 1999).

May decrease motivation to maintain the cognitive effort required to provide optimal answers, and result in poorer comprehension, less thorough retrieval, less careful judgment and mapping of judgments onto response categories, and/or stylistic responding.

Increase motivation by minimizing the length of the survey, simplifying the questions, making the questions seem less repetitive by reversing some items (i.e., polar opposites not negations), or changing the format.

Forced participation (Brehm 1966).

May increase psychological reactance and consequently decrease the motivation to exert cognitive effort to generate accurate answers or to faithfully report those answers.

Solicit participation by promising rewards rather than by threatening punishment. Treat participants in a respectful manner, show that you value their time, and express appreciation for their participation.

Presence of an interviewer (Bowling 2005).

May motivate respondents to edit their answers to make them more socially desirable to avoid any social consequences of expressing their true judgments.

If appropriate, utilize a self-administered method of data collection (e.g., traditional paper and pencil or computer-assisted questionnaire). If this is inappropriate, assure respondents in the cover story or instructions that there are no right or wrong answers, that people have many different opinions about the issues addressed in the questionnaire, that their responses will only be used for research purposes, and that their individual responses will not be revealed to anyone else.

Source of the survey is disliked (Krosnick 1999).

May decrease: the desire to cooperate; willingness to exert the cognitive effort required to generate optimal answers; or motivation to faithfully report those answers.

Treat participants in a respectful manner, show that you value their time, and express appreciation for their participation. If the dislike relates to an impersonal source of the survey, you can attempt to disguise the source.

Contexts that arouse suspicions (Baumgartner and Steenkamp 2001; Schmitt 1994).

May motivate respondents to conceal their true opinion by editing their responses. They might do this by using the middle scale category regardless of their true feelings, or by responding to items carelessly, randomly, or nonpurposefully.

Suspicions may be mitigated by explaining how the information will be used, why the information is being requested, who will see the responses, and how the information will be kept secure. In addition, one could assure participants that their responses will be used only for research purposes, will be aggregated with the responses of others, and that no one in their organization will see their individual responses. Measure midpoint response style and control for it.

Measurement conditions that make the consequences of a response salient (see Paulhus 1984; Steenkamp, DeJong, and Baumgartner 2010).

May increase desire to edit answers in order to provide a socially acceptable response or to avoid undesirable consequences.

Can be diminished by guaranteeing anonymity, telling respondents there are no right or wrong answers, and assuring them that people have different opinions about the issues addressed in the questionnaire.

and how their accurate responses will have useful consequences for them and/or their organization; or (b) promising feedback to respondents (where appropriate) so that they can gain greater self-understanding, enhance self-efficacy, and/or improve their performance.

Low self-efficacy to provide a correct answer

Chaiken, Liberman, and Eagly (1989) noted that a respondent's perceived self-efficacy to provide a correct answer can influence how much cognitive effort s/he is willing to exert to answer a question. This suggests that low perceived self-efficacy to answer a question can decrease motivation to exert cognitive effort, which can subsequently decrease a respondent's willingness to: (a) assess the completeness and accuracy of information retrieved, (b) fill in gaps in what is recalled, and (c) trust his/her own inferences based on partial retrieval. In addition, low perceived self-efficacy may also increase the likelihood that respondents will use the middle scale category regardless of their true feelings because they lack confidence in their responses. One remedy for this perceived lack of self-efficacy would be to emphasize to respondents that it is their personal opinions that are important and that only their personal experience or knowledge is required to answer the questions.

Low need for cognition

Cacioppo and Petty (1982) demonstrated that if need for cognition (i.e., chronic motivation to exert cognitive effort) is low, it is likely to decrease the thoroughness of information retrieval and integration processes and the extent to which the respondent attempts to fill in missing details and gaps in what is recalled. More generally, it can decrease a respondent's motivation to exert the cognitive effort required to provide optimal answers and increase his/her desire to satisfice by responding stylistically or in a nondifferentiated manner. This chronically low level of motivation to exert cognitive effort might be temporarily compensated for by enhancing motivation through other means. This might be done by emphasizing the importance of the issues; reminding respondents of how research can benefit them or help their organization; or increasing the personal relevance of the task of answering the questions.

Low need for self-expression, self-disclosure, or emotional catharsis

Krosnick (1999) noted that these self-referential factors can decrease a respondent's motivation to exert the cognitive effort required to provide optimal answers. This is likely to decrease the thoroughness of information retrieval and the extent to which the respondent attempts to fill in missing details and gaps in what is recalled. Thus, these factors may have a tendency to cause people to respond to items carelessly, randomly, or nonpurposefully. To counter this tendency, the desire for self-expression or emotional catharsis may be enhanced by explaining in the cover story or instructions that "we value your opinion," "we need your feedback," or that we want you to "tell us what you think," and so forth. Similarly, to increase the willingness of respondents to self-disclose, the instructions could emphasize the personal benefits of the research to them (e.g., improved performance and increased self-awareness).

Low feelings of altruism

Low feelings of altruism toward the sponsor of the research project can decrease intentions to exert cognitive effort to help the sponsor, and thereby decrease the thoroughness of information retrieval and the respondent's willingness to try to fill in missing details and gaps in what is recalled. More generally, this may increase the respondent's tendency to satisfice by responding in a nondifferentiated manner or stylistically (Krosnick 1991, 1999). Feelings of altruism toward the sponsor of the research might be increased by explaining why the respondent's help is needed, indicating that others (who the respondent cares about) are depending upon the accuracy of the responses, or suggesting that no one else can provide the needed information (or the respondent is one of the few that can). It might also help to remind the respondents of how research can improve the quality of life for others and/or help the organization.

Agreeableness

Costa and McCrae (1992) have noted that agreeableness is a tendency to be compassionate and cooperative rather than suspicious and antagonistic toward others, and that agreeable people are especially concerned with maintaining or enhancing social harmony. Consequently, people high in agreeableness may have a tendency to uncritically endorse or acquiesce to statements, search for cues that suggest how they should respond, and edit their responses for acceptability. One way to compensate for this would be to stress the fact that the best way for respondents to help the researcher is to answer the questions as accurately as possible. This could be done through instructions that encourage respondents to "tell us what you honestly think" and that "we need your opinion." Another way to control this bias might be to measure the respondent's acquiescence response style and control for it using the procedures recommended by Baumgartner and Steenkamp (2001) and Weijters, Schillewaert, and Geuens (2008).
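To make the "measure and control for it" step concrete, the sketch below computes a simple acquiescence response style (ARS) index and partials it out of a substantive scale score. This is a deliberately simplified illustration with hypothetical data and variable names, not the full structural-model procedures of Baumgartner and Steenkamp (2001) or Weijters, Schillewaert, and Geuens (2008): ARS is approximated here as the proportion of agreement responses across a set of content-heterogeneous marker items, and the control is a simple regression residualization.

```python
# Illustrative sketch (hypothetical data): compute an acquiescence
# response style (ARS) index as the proportion of agreement responses
# across content-heterogeneous 5-point Likert marker items, then
# residualize a substantive scale score on ARS via simple OLS.

def ars_index(responses, agree_threshold=4):
    """Proportion of items rated at or above the agreement threshold."""
    return sum(r >= agree_threshold for r in responses) / len(responses)

def residualize(y, x):
    """Residuals of y regressed on x (simple OLS with an intercept)."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Hypothetical respondents: 5-point ratings on 8 unrelated marker items.
marker_items = [
    [4, 5, 4, 4, 5, 4, 5, 4],   # agrees with nearly everything
    [2, 3, 4, 1, 2, 3, 2, 4],   # mixed responses
    [1, 2, 1, 3, 2, 1, 2, 2],   # rarely agrees
]
ars = [ars_index(r) for r in marker_items]   # [1.0, 0.25, 0.0]
scale_scores = [4.2, 3.1, 2.8]               # hypothetical construct scores
adjusted = residualize(scale_scores, ars)    # scores with ARS partialled out
```

The adjusted scores can then be carried forward into substantive analyses; in a full application the ARS measure would come from a dedicated set of heterogeneous items administered alongside the focal scales.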

Impulsiveness

Impulsiveness causes respondents to react to questions quickly, with little reflection, and little monitoring of judgments. This trait has been linked to response bias in several studies (Couch and Keniston 1960; Messick 1991). Impulsiveness can: (a) impair comprehension by decreasing attention to questions and instructions; (b) diminish the tendency to assess the completeness and accuracy of information retrieved, fill in gaps in what is recalled, and integrate the information retrieved; and (c) result in carelessness in mapping judgments onto response categories. One way to offset this tendency might be to stress the importance of conscientiousness and accuracy in the instructions to respondents and/or to encourage them to carefully weigh the alternatives before responding.

Dogmatism, rigidity, or intolerance for ambiguity

Research (Baumgartner and Steenkamp 2001; Hamilton 1968) suggests that dogmatism, rigidity, and intolerance for ambiguity can produce biased responding for several reasons. First, rigidity and dogmatism can heighten a person's willingness to make judgments or fill in gaps based on only partial retrieval, and cause people to be more willing to draw inferences based on information accessibility, because people who are high in these traits tend to feel certain of everything. Second, dogmatism (or intolerance for ambiguity) can make people view things as either black or white, thus increasing the likelihood that they will map judgments onto extreme response categories. One way to manage the latter problem would be to measure extreme response style and control for it (Baumgartner and Steenkamp 2001). Managing the former problem is more difficult, but might be mitigated to some degree by stressing the importance of conscientiousness and accuracy in the instructions to respondents and/or by encouraging respondents to carefully weigh the alternatives before answering.
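The extreme response style (ERS) index itself is straightforward to compute. As a hedged sketch with hypothetical data (Baumgartner and Steenkamp (2001) embed the index in a larger measurement model), ERS can be approximated as the share of a respondent's answers that fall on the scale endpoints:

```python
# Illustrative sketch (hypothetical data): extreme response style (ERS)
# measured as the proportion of responses on the endpoints of a 5-point scale.

def ers_index(responses, low=1, high=5):
    """Proportion of responses at either scale endpoint."""
    return sum(r in (low, high) for r in responses) / len(responses)

ratings = [5, 1, 5, 3, 4, 1, 5, 2]   # one hypothetical respondent
score = ers_index(ratings)           # 5 of 8 answers are endpoints -> 0.625
```

As with the acquiescence index, the resulting per-respondent score can be entered as a covariate or partialled out of the substantive measures.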

Implicit theories

If respondents have an implicit theory (Lord et al. 1978; Staw 1975) that two constructs are related, they may be motivated to fill in gaps in what is recalled in a manner that is consistent with their implicit theory, or to edit their responses in a manner that is consistent with their theory. Podsakoff et al. (2003) and Podsakoff, MacKenzie, and Podsakoff (2012) have argued that one way to diminish this tendency is to obtain the information about the two constructs linked by the implicit theory from separate sources, or introduce a temporal, psychological, or spatial separation between the measures of the two constructs linked by the implicit theory. Introducing a temporal separation between the measures of two constructs linked by an implicit theory decreases the likelihood that the answers to the first set of measures will be available in the respondent's short-term memory at the time s/he answers the second set of measures. Introducing a psychological separation between the measures of two constructs linked by an implicit theory decreases the diagnosticity of the answers to the first set of measures as cues to how to respond to the second set of measures (cf. Feldman and Lynch 1988). Finally, introducing a spatial separation between the measures of two constructs linked by an implicit theory can (under some circumstances) make the answers to the first set of measures physically unavailable at the time the respondent answers the second set of measures.

Repetitiveness

According to Petty and Cacioppo's (1986) Elaboration Likelihood Model, excessive repetition of a message decreases the motivation to centrally process it. This suggests that the repetitiveness of the items on a questionnaire may decrease a respondent's motivation to maintain the cognitive effort required to provide optimal answers and increase the desire to satisfice by responding in a nondifferentiated manner or stylistically. The obvious solution is to maintain the respondent's motivation to provide accurate answers by minimizing the repetitiveness of the items, making the questions seem less repetitive by reversing some items (i.e., polar opposites but not negations), or varying the scale format.

Lengthy scales

Several researchers have noted that a seemingly unending stream of questions may cause respondents to become fatigued or irritated (Clancy and Wachsler 1971; Schuman and Presser 1981). This may decrease a respondent's motivation to maintain the cognitive effort required to provide optimal answers and increase the desire to satisfice (e.g., by mechanically acquiescing to items). This may also result in poorer comprehension due to careless reading of items, less thorough retrieval, less careful judgment and mapping of judgments onto response categories, and can lead to responding in a stylistic or nondifferentiated manner. This can be mitigated by increasing the motivation to provide accurate answers by shortening the length of the survey, simplifying the questions, making the questions seem less repetitive by reversing some items (i.e., polar opposites but not negations), or changing the format.

Forced participation

Compelling respondents to participate in the survey (e.g., to fulfill a course requirement or a request by top management) can increase psychological reactance (Brehm 1966) and the desire to rebel. This may subsequently decrease a respondent's motivation to exert the cognitive effort required to generate optimal answers, or decrease his/her motivation to faithfully report those answers. Consequently, the respondent may be less likely to cooperate by: carefully attending to the questions and instructions; thoroughly retrieving relevant information from memory and filling in gaps in what is recalled; and conscientiously mapping judgments onto response categories. Perhaps the most effective way to avoid this problem would be to solicit participation by promising rewards rather than by threatening punishment. Beyond this, the tendency to rebel might also be diminished by treating participants in a respectful manner, showing that you value their time (i.e., by not wasting it), and expressing appreciation for their participation.

Presence of an interviewer

The mere presence of an interviewer may motivate respondents to edit their answers to make them more socially desirable to avoid any social consequences of expressing their true judgments. Indeed, Bowling's (2005) review of the empirical literature demonstrates that social desirability bias and yea-saying bias are higher, and willingness to disclose sensitive information is lower, for interviews (face-to-face or telephone) than for self-administered questionnaires (paper and pencil or computer-assisted). Perhaps the most obvious solution to this potential problem would be to avoid the use of an interviewer by utilizing a self-administered method of data collection (e.g., traditional paper and pencil or computer-assisted questionnaire). However, in those cases where this is infeasible, or undesirable, one could partially defuse this issue by assuring respondents in the cover story or instructions that there are no right or wrong answers, that people have many different opinions about the issues addressed in the questionnaire, that their responses will only be used for research purposes, and that their individual responses will not be revealed to anyone else.

Source of survey is disliked

If the interviewer, experimenter, or sponsor of the survey is disliked by respondents it may decrease the respondents' desire to cooperate, which may decrease their motivation to exert the cognitive effort required to generate optimal answers or faithfully report those answers. Consequently, respondents may be less likely to expend the cognitive effort necessary to carefully attend to the questions and instructions, retrieve relevant information from memory, fill in gaps in what is recalled, and select the appropriate response categories. The remedy for this depends upon the source that is disliked by respondents. If the dislike is for an interviewer or experimenter, steps could be taken to establish rapport with the respondents. If the dislike relates to the sponsor of the survey, an attempt could be made to disguise the source. In addition, it is always important to treat participants in a respectful manner, show that you value their time (i.e., by not wasting it), and express appreciation for their participation.

Contexts that arouse suspicions

Several researchers (Baumgartner and Steenkamp 2001; Schmitt 1994) have noted that when respondents are suspicious about how the data will be used, they may be motivated to conceal their true opinions by editing their responses. They might do this by using the middle scale category regardless of their true feelings, or perhaps by responding to items carelessly, randomly, or nonpurposefully. These suspicions may be mitigated by explaining (to the extent possible) why the information is being requested, how the information will be used, and how the information will be kept secure. Beyond this, if possible, it would also be beneficial to assure participants that their responses will be used only for research purposes, will be aggregated with the

Table 4Factors that increase method bias by decreasing the difficulty of satisficing.

Conditions that cause method bias Mechanism Potential remedies

Common scale attributes (e.g., the same scaletypes, scale points, and anchor labels).

May heighten the perceived similarity andredundancy of items; and encourage respondentsto be less thorough in item comprehension,memory retrieval, and judgment. This alsomakes it easier to edit answers for consistencywhich decreases difficulty of satisficing.

Explain to respondents that although some questionsmay seem similar, each is unique in important ways.Encourage them to read each item carefully. Vary thescale types and anchor labels and/or reverse the wordingof some of the items to disrupt undesirable responsepatterns.

Grouping related items together. May heighten the perceived similarity andredundancy of items; and encourage respondentsto be less thorough in item comprehension,memory retrieval, and judgment. This alsomakes it easier to edit answers for consistencywhich decreases difficulty of satisficing; andmakes it easier to use previously recalledinformation and prior answers to respond to thecurrent question.

Disperse similar items throughout the questionnaire,separated by unrelated buffer items.

The availability of answers to previousquestions (physically or in memory).

This may make it easier to: (a) use previouslyrecalled information and answers to respond tothe current question (i.e., judgment referral); and(b) provide answers that are consistent with eachother or with an implicit theory.

Memory availability can be diminished by introducing atemporal separation between the measurement of thepredictor and criterion variables; and the diagnosticityof the previous answers (as a cue to how to respond) canbe diminished by introducing a psychological

rs

Mr

dcSbstnrnpq

Rs

watweta

C

ehlmwtiidtt(2wtr

G

iAdt

esponses of others, and that no one in their organization willee their individual responses.

easurement conditions that make the consequences of aesponse salient

When the social or professional consequences of a respon-ent’s answers are potentially serious, and the measurementonditions make these consequences salient (see Paulhus 1984;teenkamp, DeJong, and Baumgartner 2010), respondents maye motivated to edit their answers in order to provide aocially acceptable response, rather than saying what they reallyhink. This tendency to respond in a socially desirable man-er may be diminished by guaranteeing anonymity, tellingespondents in the cover story or instructions that there areo right or wrong answers, and assuring respondents that peo-le have different opinions about the issues addressed in theuestionnaire.

emedies for factors that decrease the difficulty ofatisficing

The factors above decrease the likelihood that a respondent

ill answer accurately by: decreasing the motivation to respond

ccurately, undermining the respondent’s capabilities, or makinghe task of responding accurately more difficult. In this sectione will shift our focus to discuss characteristics of items that

ncourage satisficing by making the task of generating alterna-ive answers easier, rather than by making the task of answeringccurately harder (Table 4).

estmaop

separation. Physical availability can be diminished byrestricting access to previous answers.
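The careless, random, or nonpurposeful responding discussed above can also be screened for after data collection. One common screen from the careless-responding literature (not a procedure proposed in this paper) is the "longstring" index: the length of the longest run of identical consecutive answers. A minimal sketch:

```python
def longest_identical_run(responses):
    """Longstring index: length of the longest run of identical consecutive
    responses. Unusually long runs (relative to the scale and questionnaire
    length) suggest careless, straight-line responding."""
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best
```

Respondents whose index is far above the sample norm can then be flagged for sensitivity analyses rather than automatically deleted.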

Common scale attributes

It seems plausible that some forms of satisficing would be easier to implement if the measures share a common scale type, have the same number of scale points and common anchor labels, and do not have any reverse-worded items. These common characteristics make it easier to edit answers for consistency, which decreases the difficulty of satisficing. These characteristics also heighten the perceived similarity and redundancy of items, which may encourage respondents to be less thorough in item comprehension, memory retrieval, and judgment. These detrimental tendencies can be diminished by varying the scale types and anchor labels and/or reversing the wording of some of the items to balance the positively and negatively worded items (Baumgartner and Steenkamp 2001; Weijters and Baumgartner 2012). However, the latter is only a good idea if it can be done without altering the content validity or conceptual meaning of the scale, and if the reverse-worded items are not confusing to respondents.
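When reverse-worded items are used, they must be re-coded before scale scores are computed so that all items run in the same direction. A minimal sketch of the standard reverse-scoring arithmetic, x -> (low + high) - x; the item names and the 7-point range are illustrative, not taken from this paper:

```python
def reverse_score(response, low=1, high=7):
    """Re-code a reverse-worded item so that it runs in the same direction
    as the positively worded items: x -> (low + high) - x."""
    if not low <= response <= high:
        raise ValueError(f"response {response} outside {low}-{high} scale")
    return (low + high) - response

# Hypothetical 7-point scale in which items 2 and 4 are reverse-worded.
raw = {"item1": 6, "item2_rev": 2, "item3": 7, "item4_rev": 1}
scored = {k: reverse_score(v) if k.endswith("_rev") else v
          for k, v in raw.items()}
```

Forgetting this step is one way confusing reverse-worded items damage reliability estimates.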

Grouping related items together

It is not uncommon for researchers to group items measuring the same or similar constructs together on a questionnaire. Although this has the benefit of diminishing the cognitive demands of the task, it may also heighten perceptions of the similarity and redundancy of the items, and consequently encourage respondents to be less thorough in item comprehension, memory retrieval, and judgment. Grouping similar items together also makes it easier to use previously recalled information and prior answers to respond to the current question, and to edit answers for consistency, which decreases the difficulty of satisficing. The obvious solution to this problem is to disperse similar items throughout the questionnaire, separated by unrelated buffer items (see Weijters, Schillewaert, and Geuens 2008).
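The dispersion remedy is easy to automate when a questionnaire is assembled programmatically. A minimal sketch, assuming the construct names, item texts, and buffer items are all hypothetical placeholders:

```python
import random

def disperse_items(construct_items, buffer_items, seed=1):
    """Order questionnaire items so that, as long as enough buffer items
    are supplied, no two items measuring the same construct are adjacent.

    construct_items: dict mapping construct name -> list of item texts
    buffer_items: list of unrelated filler items
    Returns a list of (label, item) tuples; buffers are labeled "buffer".
    """
    rng = random.Random(seed)  # fixed seed so the ordering is reproducible
    pairs = [(name, item) for name, items in construct_items.items()
             for item in items]
    rng.shuffle(pairs)
    buffers = list(buffer_items)
    rng.shuffle(buffers)

    ordered, last = [], None
    for name, item in pairs:
        # Break up any run of same-construct items with a buffer item.
        if name == last and buffers:
            ordered.append(("buffer", buffers.pop()))
        ordered.append((name, item))
        last = name
    ordered.extend(("buffer", b) for b in buffers)  # leftover fillers at the end
    return ordered
```

With k constructs of n items each, at most (n - 1) * k buffers are ever needed, so the non-adjacency guarantee is easy to budget for.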

Availability of answers to previous questions

It is easier for respondents to satisfice by providing answers that are consistent with each other or with an implicit theory if the answers to previous questions are readily available, either physically or in memory, at the time of answering a later question (cf. Feldman and Lynch 1988). Answers are likely to be physically available in a self-administered paper and pencil questionnaire and may be (but need not be) for online questionnaires. This can be remedied by using computer-presented questionnaires that prevent subjects from scrolling backwards to consult previous answers. In addition, as noted above, availability is also greater when questions are grouped together in close proximity by construct on the questionnaire. This can be avoided by separating items on the questionnaire to decrease the availability of previous answers. More generally, whenever the predictor and criterion variables are measured at the same point in time by means of a single questionnaire, the likelihood that the answers to previous questions will be available is greater. In these instances, availability might be diminished by introducing a temporal separation between the measurement of the predictor and criterion variables; and/or the diagnosticity of the previous answers (as a cue to how to respond to subsequent questions) might be diminished by introducing a psychological separation.
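In computer-administered formats, the "no scrolling back" remedy amounts to one-item-at-a-time presentation with no back control. A minimal console sketch of that flow; the items and the injectable read/write hooks are illustrative, not any particular survey platform's API:

```python
def administer(items, read=input, write=print):
    """Present one item per screen; once an answer is recorded there is no
    command to revisit it, so previous answers are not physically available
    to the respondent."""
    answers = []
    for i, item in enumerate(items, 1):
        write(f"Q{i}. {item}")
        answers.append(read("Your answer (1-7): "))
    return answers
```

Commercial survey tools typically expose the same behavior as a "disable back button" setting; the point is simply that the flow never re-displays a recorded answer.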

Conclusion

The purpose of this article has been to discuss the causes of method biases, the mechanisms through which they produce their biasing effects, and how to minimize or prevent these effects by implementing appropriate procedural controls. More specifically, we identified a series of factors that may cause method biases by undermining the capabilities of the respondent, making the task of responding accurately more difficult, decreasing the motivation to respond accurately, and making it easier for respondents to satisfice. In addition, we tried to advance our understanding of the mechanisms through which these factors produce their biasing effects by discussing how they influence a respondent's desire to provide optimal versus satisfactory answers to the questions, and how this subsequently affects question comprehension, memory retrieval and inference processes, the mapping of judgments onto response categories, and the editing of responses. Finally, based on this understanding of the mechanisms involved, we tried to propose procedural remedies that would counterbalance or offset each of these specific effects. For example, to increase the likelihood that respondents are able to answer accurately it is important to: align the capabilities of respondents with the difficulty of the task; select respondents who have the necessary experience thinking about the issues of interest; avoid referring to vague concepts; and use clear and concise language. It is also important to enhance the motivation of respondents to answer accurately, perhaps by: providing an explanation of why the questions are important and have useful consequences for the respondent, organization, and so forth; explaining why their answers are important; assuring respondents of the confidentiality of their answers; encouraging respondents to carefully weigh the alternatives before responding; and minimizing the length and repetitiveness of the questionnaire to the extent possible. It also may be a good idea to make it more difficult for respondents to satisfice by: varying the scale types and anchor labels when appropriate; and introducing a temporal, psychological, or spatial separation between items measuring key constructs when possible. Our hope is that this discussion will help researchers anticipate when method bias is likely to be a problem and provide ideas about how to avoid it through the careful design of a study.

A final cautionary note

That being said, it is important to note that all research involves inevitable tradeoffs and it is impossible to design a study that completely rules out all possibility of method bias. This implies that the procedural remedies discussed in this research should be viewed as a much needed complement to, but not a substitute for, the statistical remedies that have already been developed (Bagozzi 1984; Baumgartner and Steenkamp 2001; Podsakoff et al. 2003; Podsakoff, MacKenzie, and Podsakoff 2012; Weijters, Schillewaert, and Geuens 2008; Williams, Hartman, and Cavazotte 2010). This also implies that some method bias may be present even in a well designed study. Consequently, the researcher's goal should be to thoughtfully assess the research setting, try to identify the most likely causes of method bias, and take concrete steps to mitigate these problems in order to reduce the plausibility of method bias as a rival explanation for the relationships observed in a study. Indeed, as we have said before (Podsakoff et al. 2003, p. 899),

The key point to remember is that the procedural and statistical remedies selected should be tailored to fit the specific research question at hand. There is no single best method for handling the problem of common method variance because it depends on what the sources of method variance are in the study and the feasibility of the remedies that are available.

In addition, we would also like to make it clear that nothing we have said here should be construed as a general indictment of survey research. Indeed, many of the sources of method bias identified in this research are also present in experimental investigations of mediating effects models, and in research based on archival datasets gathered through single-source surveys (e.g., including official government statistics). Moreover, we believe that survey research is an essential complement to experimental research for two reasons. First, survey research is needed to examine the extent to which causal relationships observed in experimental studies hold over variations in persons, settings, treatment manipulations, and outcome variables. This is important because there are some phenomena that consistently show up in a lab setting that do not generalize. Second, there are some important phenomena that can only be studied in field settings because: they cannot be effectively or ethically manipulated in a lab setting; the real world setting to which one wishes to generalize is complex and cannot be adequately replicated in a lab setting; some subject populations may be unwilling to participate in a lab study; and/or some important outcome variables (i.e., sales, profitability, loyalty, repeat purchase, satisfaction based on product usage, etc.) are difficult to capture in lab settings.

Therefore, our objective is not to discourage the use of survey research methods in marketing, but rather to encourage researchers to think carefully about how to control for method biases in the design of their studies. Moreover, we caution researchers not to "throw the baby out with the bathwater" by rejecting survey research simply because method bias potentially provides a rival explanation for the relationships observed. The goal of this paper has been to suggest some things researchers can do to make this explanation less plausible.

References

Arndt, Johan and Edgar Crane (1975), "Response Bias, Yea-Saying, and the Double Negative," Journal of Marketing Research, 12 (May), 218–20.

Arnold, Mark J. and Kristy E. Reynolds (2009), "Affect and Retail Shopping Behavior: Understanding the Role of Mood Regulation and Regulatory Focus," Journal of Retailing, 85 (3), 308–20.

Babakus, Emin, Ugur Yavas and Nicholas J. Ashill (2009), "The Role of Customer Orientation as a Moderator of the Job Demand–Burnout–Performance Relationship: A Surface-Level Trait Perspective," Journal of Retailing, 85 (4), 480–92.

Bagozzi, Richard P. (1980), Causal Models in Marketing, New York: John Wiley.

Bagozzi, Richard P. (1984), "A Prospectus for Theory Construction in Marketing," Journal of Marketing, 48 (1), 11–29.

Baumgartner, Hans and Jan-Benedict E.M. Steenkamp (2001), "Response Styles in Marketing Research: A Cross-National Investigation," Journal of Marketing Research, 38 (2), 143–56.

Bollen, Kenneth A. (1989), Structural Equations with Latent Variables, New York, NY: Wiley.

Bowling, Ann (2005), "Mode of Questionnaire Administration Can Have Serious Effects on Data Quality," Journal of Public Health, 27 (3), 281–91.

Bradburn, Norman M., Seymour Sudman and Brian Wansink (2004), Asking Questions: The Definitive Guide to Questionnaire Design – For Market Research, Political Polls, and Social and Health Questionnaires, San Francisco, CA: Jossey-Bass.

Brannick, Michael T., David Chan, James M. Conway, Charles E. Lance and Paul E. Spector (2010), "What Is Method Variance and How Can We Cope with It? A Panel Discussion," Organizational Research Methods, 13 (3), 407–20.

Brehm, John W. (1966), A Theory of Psychological Reactance, New York, NY: Academic Press.

Buckley, M. Ronald, James A. Cote and S. Mark Comstock (1990), "Measurement Errors in the Behavioral Sciences: The Case of Personality Attitude Research," Educational and Psychological Measurement, 50 (September), 447–74.

Cacioppo, John T. and Richard E. Petty (1982), "The Need for Cognition," Journal of Personality and Social Psychology, 42 (1), 116–31.

Campbell, Donald T. and Donald W. Fiske (1959), "Convergent and Discriminant Validation by the Multitrait–Multimethod Matrix," Psychological Bulletin, 56 (2), 81–105.

Cannell, Charles F., Peter V. Miller and Lois F. Oksenberg (1981), "Research on Interviewing Techniques," in Sociological Methodology, Vol. 11, Leinhardt Samuel ed. San Francisco, CA: Jossey-Bass, 389–437.

Chaiken, Shelley, Akiva Liberman and Alice H. Eagly (1989), "Heuristic and Systematic Processing Within and Beyond the Persuasion Context," in Unintended Thought: Limits of Awareness, Intention, and Control, Uleman James S. and Bargh John A., eds. New York, NY: Guilford, 212–52.

Clancy, Kevin J. and Robert A. Wachsler (1971), "Positional Effects in Shared-Cost Surveys," Public Opinion Quarterly, 35, 258–65.

Couch, Arthur and Kenneth Keniston (1960), "Yeasayers and Naysayers: Agreeing Response Set as a Personality Variable," Journal of Abnormal and Social Psychology, 60 (2), 151–72.

Costa, Paul T. and Robert R. McCrae (1992), NEO PI-R Professional Manual, Odessa, FL: Psychological Assessment Resources, Inc.

Cote, James A. and Ronald Buckley (1987), "Estimating Trait, Method, and Error Variance: Generalizing Across 70 Construct Validation Studies," Journal of Marketing Research, 24 (3), 315–8.

Cote, James A. and Ronald Buckley (1988), "Measurement Error and Theory Testing in Consumer Research: An Illustration of the Importance of Construct Validation," Journal of Consumer Research, 14 (4), 579–82.

Dickinson, John R. and Eric Kirzner (1985), "Questionnaire Item Omission as a Function of Within-Group Question Position," Journal of Business Research, 13 (1), 71–5.

Doty, D. Harold and William H. Glick (1998), "Common Methods Bias: Does Common Methods Variance Really Bias Results?," Organizational Research Methods, 1, 374–406.

Elliott, Lois L. (1961), "Effects of Item Construction and Respondent Aptitude on Response Acquiescence," Educational and Psychological Measurement, 21 (2), 405–15.

Ericsson, K. Anders and Herbert A. Simon (1980), "Verbal Reports as Data," Psychological Review, 87 (3), 215–57.

Feldman, Jack M. and John G. Lynch (1988), "Self-Generated Validity and Other Effects of Measurement on Belief, Attitude, Intention, and Behavior," Journal of Applied Psychology, 73 (3), 421–35.

Ferber, Robert (1966), "Item Nonresponse in a Consumer Survey," Public Opinion Quarterly, 30 (3), 399–415.

Fiske, Donald W. (1982), "Convergent-Discriminant Validation in Measurements and Research Strategies," in Forms of Validity in Research, Brinberg David and Kidder Louise H., eds. San Francisco, CA: Jossey-Bass, 77–92.

Fiske, Susan T. and Donald R. Kinder (1981), "Involvement, Expertise, and Schema Use: Evidence from Political Cognition," in Personality, Cognition and Social Interaction, Cantor Nancy and Kihlstrom John, eds. Hillsdale, NJ: Erlbaum, 171–90.

Francis, Joe D. and Lawrence Busch (1975), "What We Don't Know About 'I Don't Knows'," Public Opinion Quarterly, 39 (2), 207–18.

Grace, Debra and Scott Weaven (2011), "An Empirical Analysis of Franchisee Value-in-Use, Investment Risk and Relational Satisfaction," Journal of Retailing, 87 (3), 366–80.

Greenleaf, Eric A. (1992), "Improving Rating Scale Measures by Detecting and Correcting Bias Components in Some Response Styles," Journal of Marketing Research, 29 (May), 176–88.

Hamilton, David L. (1968), "Personality Attributes Associated with Extreme Response Style," Psychological Bulletin, 69 (March), 192–203.

Hansen, Robert A. (1980), "A Self-Perception Interpretation of the Effect of Monetary and Nonmonetary Incentives on Mail Survey Respondent Behavior," Journal of Marketing Research, 17 (1), 77–83.

Harris, Michael M. and Amy Bladen (1994), "Wording Effects in the Measurement of Role Conflict and Role Ambiguity: A Multitrait–Multimethod Analysis," Journal of Management, 20 (4), 887–901.

Herzog, A. Regula and Jerald G. Bachman (1981), "Effects of Questionnaire Length on Response Quality," Public Opinion Quarterly, 45 (4), 549–59.

Jackson, Douglas N. (1959), "Cognitive Energy Level, Acquiescence, and Authoritarianism," Journal of Social Psychology, 49 (1), 65–9.

Knowles, Eric S. and Christopher A. Condon (1999), "Why People Say 'Yes': A Dual-Process Theory of Acquiescence," Journal of Personality and Social Psychology, 77 (2), 379–86.

Kraut, Allen I., Alan D. Wolfson and Alan Rothenberg (1975), "Some Effects of Position on Opinion Survey Items," Journal of Applied Psychology, 60 (6), 774–6.

Krosnick, Jon A. (1991), "Response Strategies for Coping with the Cognitive Demands of Attitude Measures in Surveys," Applied Cognitive Psychology, 5 (3), 213–36.

Krosnick, Jon A. (1999), "Survey Research," Annual Review of Psychology, 50, 537–67.

Krosnick, Jon A. and Duane F. Alwin (1987), "An Evaluation of a Cognitive Theory of Response-Order Effects in Survey Measurement," Public Opinion Quarterly, 51 (2), 201–19.

Krosnick, Jon A. and Duane F. Alwin (1988), "A Test of the Form-Resistant Correlation Hypothesis: Ratings, Rankings, and the Measurement of Values," Public Opinion Quarterly, 52 (4), 526–38.

Krosnick, Jon A. and Michael A. Milburn (1990), "Psychological Determinants of Political Opinionation," Social Cognition, 8 (1), 49–72.

Lance, Charles E., Lisa E. Baranik, Abby R. Lau and Elizabeth A. Scharlau (2009), "If It Ain't Trait It Must Be Method: (Mis)Application of the Multitrait–Multimethod Design in Organizational Research," in Statistical and Methodological Myths and Urban Legends: Doctrine, Verity, and Fable in the Organizational and Social Sciences, Lance Charles E. and Vandenberg Robert L., eds. New York: Routledge, 337–60.

Lance, Charles E., Bryan Dawson, David Birkelbach and Brian J. Hoffman (2010), "Method Effects, Measurement Error, and Substantive Conclusions," Organizational Research Methods, 13 (3), 407–20.

Le, Huy, Frank L. Schmidt and Dan J. Putka (2009), "The Multifaceted Nature of Measurement Artifacts and Its Implications for Estimating Construct-Level Relationships," Organizational Research Methods, 12 (1), 165–200.

Lord, Robert G., John F. Binning, Michael C. Rush and Jay C. Thomas (1978), "The Effect of Performance Cues and Leader Behavior on Questionnaire Ratings of Leadership Behavior," Organizational Behavior and Human Decision Processes, 21 (1), 27–39.

MacKenzie, Scott B., Philip M. Podsakoff and Nathan P. Podsakoff (2011), "Construct Measurement and Validity Assessment in Behavioral Research: Integrating New and Existing Techniques," MIS Quarterly, 35 (2), 293–334.

Mathews, C.O. (1927), "The Effect of Position of Printed Response Words Upon Children's Answers to Questions in Two-Response Types of Tests," Journal of Educational Psychology, 18 (7), 445–57.

McClendon, McKee J. (1986), "Response-Order Effects for Dichotomous Questions," Social Science Quarterly, 67 (1), 205–11.

McClendon, McKee J. (1991), "Acquiescence and Recency Response Order Effects in Interview Surveys," Sociological Methods & Research, 20 (1), 60–103.

McDaniel, Stephen W. and C.P. Rao (1980), "The Effect of Monetary Inducement on Mailed Questionnaire Response Quality," Journal of Marketing Research, 17 (2), 265–8.

McGuire, Robert J. (1969), "The Nature of Attitudes and Attitude Change," in The Handbook of Social Psychology, Lindzey Gardner and Aronson Eliot, eds. Reading, MA: Addison-Wesley, 136–314.

Messick, Samuel (1991), "Psychology and Methodology of Response Styles," in Improving the Inquiry in Social Science: A Volume in Honor of Lee J. Cronbach, Snow Richard E. and Wiley David E., eds. Hillsdale, NJ: Lawrence Erlbaum, 161–200.

Nisbett, Richard E. and Timothy D. Wilson (1977), "Telling More than We Can Know: Verbal Reports on Mental Processes," Psychological Review, 84 (3), 231–59.

Orne, Martin T. (1962), "On the Social Psychology of the Psychological Experiment: With Particular Reference to Demand Characteristics and Their Implications," American Psychologist, 17 (11), 776–83.

Paulhus, Delbert L. (1984), "Two-Component Models of Socially Desirable Responding," Journal of Personality and Social Psychology, 46 (3), 598–609.

Petty, Richard E. and John T. Cacioppo (1986), Communication and Persuasion: Central and Peripheral Routes to Attitude Change, New York, NY: Springer-Verlag.

Podsakoff, Philip M., Scott B. MacKenzie, Jeong-Yeon Lee and Nathan P. Podsakoff (2003), "Common Method Biases in Behavioral Research: A Critical Review of the Literature and Recommended Remedies," Journal of Applied Psychology, 88 (5), 879–903.

Podsakoff, Philip M., Scott B. MacKenzie and Nathan P. Podsakoff (2012), "Sources of Method Bias in Social Science Research and Recommendations on How to Control It," Annual Review of Psychology, 63, 539–69.

Rapoport, Ronald B. (1982), "Sex Differences in Attitude Expression: A Generational Explanation," Public Opinion Quarterly, 46 (1), 86–96.

Schmitt, Neal (1994), "Method Bias: The Importance of Theory and Measurement," Journal of Organizational Behavior, 15 (5), 393–8.

Schröder, Mathis (2011), "Concepts and Topics," in Retrospective Data Collection in the Survey of Health, Ageing and Retirement in Europe: SHARELIFE Methodology, Schröder Mathis ed. Mannheim, Germany: Mannheim Research Institute for the Economics of Ageing, 11–9.

Schuman, Howard and Stanley Presser (1981), Questions and Answers in Attitude Surveys, New York: Academic Press.

Schwarz, Norbert, Hans-Juergen Hippler and Elisabeth Noelle-Neumann (1992), "A Cognitive Model of Response-Order Effects in Survey Measurement," in Context Effects in Social and Psychological Research, Schwarz Norbert and Sudman Seymour, eds. New York: Springer-Verlag.

Siemsen, Enno, Aleda Roth and Pedro Oliveira (2010), "Common Method Bias in Regression Models with Linear, Quadratic, and Interaction Effects," Organizational Research Methods, 13 (3), 456–76.

Spralls, Samuel A. III, Shelby D. Hunt and James B. Wilcox (2011), "Extranet Use and Building Relationship Capital in Interfirm Distribution Networks: The Role of Extranet Capability," Journal of Retailing, 87 (1), 59–74.

Staw, Barry M. (1975), "Attribution of the 'Causes' of Performance: A General Alternative Interpretation of Cross-sectional Research on Organizations," Organizational Behavior and Human Decision Processes, 13 (3), 414–32.

Steenkamp, Jan-Benedict E.M., Martijn G. De Jong and Hans Baumgartner (2010), "Socially Desirable Response Tendencies in Survey Research," Journal of Marketing Research, 47 (2), 199–214.

Sudman, Seymour and Norman M. Bradburn (1982), Asking Questions: A Practical Guide to Questionnaire Design, San Francisco, CA: Jossey-Bass.

Sudman, Seymour, Norman M. Bradburn and Norbert Schwarz (1996), Thinking About Answers: The Application of Cognitive Processes to Survey Methodology, San Francisco, CA: Jossey-Bass.

Tourangeau, Roger, Lance J. Rips and Kenneth A. Rasinski (2000), The Psychology of Survey Response, Cambridge, UK: Cambridge University Press.

Trott, D. Merilee and Douglas N. Jackson (1967), "An Experimental Analysis of Acquiescence," Journal of Experimental Research in Personality, 2 (4), 278–88.

Viswanathan, Madhu (2005), Measurement Error and Research Design, Thousand Oaks, CA: Sage Publications.

Weijters, Bert and Hans Baumgartner (2012), "Misresponse to Reversed and Negated Items in Surveys: A Review," Journal of Marketing Research, 49, http://dx.doi.org/10.1509/jmr.11.0368. Ahead of Print.

Weijters, Bert, Maggie Geuens and Niels Schillewaert (2009), "The Proximity Effect: The Role of Inter-Item Distance on Reverse-Item Bias," International Journal of Marketing Research, 26 (1), 2–12.

Weijters, Bert, Niels Schillewaert and Maggie Geuens (2008), "Assessing Response Styles Across Modes of Data Collection," Journal of the Academy of Marketing Science, 36 (3), 409–22.

Williams, Larry J., James A. Cote and M. Ronald Buckley (1989), "Lack of Method Variance in Self-Reported Affect and Perceptions at Work: Reality or Artifact?," Journal of Applied Psychology, 74 (3), 462–8.

Williams, Larry J., Mark B. Gavin and Margaret L. Williams (1996), "Measurement and Nonmeasurement Processes with Negative Affectivity and Employee Attitudes," Journal of Applied Psychology, 81 (1), 88–101.

Williams, Larry J., Nathan Hartman and Flavia Cavazotte (2010), "Method Variance and Marker Variables: A Review and Comprehensive CFA Marker Technique," Organizational Research Methods, 13 (3), 477–514.

Wotruba, Thomas R. (1966), "Monetary Inducements and Mail Questionnaire Response," Journal of Marketing Research, 3 (4), 398–400.