
Journal of Behavioral Decision Making

J. Behav. Dec. Making, 17: 173–190 (2004)

Published online 11 June 2004 in Wiley InterScience

(www.interscience.wiley.com) DOI: 10.1002/bdm.468

Improving Judgement with Prepaid Expert Advice

JANET A. SNIEZEK, GUNNAR E. SCHRAH and REESHAD S. DALAL*

University of Illinois at Urbana-Champaign, USA

ABSTRACT

Decision makers ("Judges") often make decisions after obtaining advice from an Advisor. The two parties often share a psychological "contract" about what each contributes in expertise to the decision and receives in monetary outcomes from it. In a laboratory experiment, we varied Advisor Expertise and the opportunity for monetary rewards. As expected, these manipulations influenced advice quality, advice taking, and Judge post-advice decision quality. The main contribution of the study, however, was the manipulation of the timing of monetary rewards (before or after the advising interaction). We found, as predicted, that committing money for expert—but not novice—advice increases Judges' use of advice and their subsequent estimation accuracy. Implications for advice giving and taking are discussed. Copyright © 2004 John Wiley & Sons, Ltd.

Key words: advice giving; decision quality; monetary rewards; estimation accuracy

People solicit advice for many reasons, one of which is to improve the quality of their judgements and decisions. The Judge Advisor System (JAS; Sniezek & Buckley, 1995; Sniezek & Van Swol, 2001) is one paradigm that has been used to study the giving and taking of advice for decision making. A JAS consists of a decision maker, called the Judge, who receives advice from one or more Advisors. Advice can come in many forms, one of which is a recommendation about the decision problem. Although the Advisor may share the consequences of the decision, the Judge alone is responsible for making the decision. The JAS model captures many aspects of business and work relationships surrounding decisions, such as those between a client and consultant or supervisor and subordinate, that individual and group decision theory and research do not address. For example, several studies have controlled or manipulated Advisor behavior to focus exclusively on advice taking by the Judge (Budescu & Rantilla, 2000; Harvey, Harries, & Fischer, 2000; Kuhn & Sniezek, 1996; Rantilla & Budescu, 1999; Schrah, 2000; Stone & Price, 2000; Yaniv & Kleinberger, 2000; Yates, Price, Lee, & Ramirez, 1996). Other research has compared decision strategies of Judge and Advisor (Kray & Gonzalez, 1999), or emphasized the parties' joint behavior (Hedlund, Ilgen, & Hollenbeck, 1998; Sniezek & Van Swol, 2001; Dalal, 2001). The JAS paradigm, in short, allows the study of both the Judge's and Advisor's unique roles in the decision-making process, as well as their interaction. An important goal in this and most JAS studies is achieving an understanding of the processes responsible for enhancing or diminishing the accuracy of the Judge and Advisor.

*Correspondence to: Reeshad S. Dalal, Department of Psychological Sciences, Purdue University, 703 Third Street, West Lafayette, IN 47907-2081, USA. E-mail: [email protected]

Contract/grant sponsor: Advance Displays and Interactive Displays Consortium, US Army Research Laboratory Cooperative Agreement; contract/grant number: DAAL01-96-2-0003.

The present research examines aspects of the "psychological contract" between the Judge and a single Advisor, and how changes in these aspects of the contract influence the Judge's use of advice and the quality of his or her final decision. A psychological contract is an informal and implicit understanding between parties in a relationship that defines acceptable behavior (Robinson, 1996; Robinson, Kraatz, & Rousseau, 1994). Central to the contract is a social-exchange agreement whereby each party perceives benefits from and obligations toward the other. The nature of the contract will vary across situations (Rousseau, 1995); in a JAS, it can reasonably be expected to concern what the parties will each contribute to the task and receive in outcomes from it. For example, each party has an idea of the parties' relative expertise vis-à-vis the task at hand, and how outcomes from the Judge's decision will be divided between them. Despite its significance in organizations, the psychological contract has not been experimentally varied in decision-making studies of advice giving and taking. Manipulating selected aspects of the contract enables us to better examine advice giving and taking processes.

Several aspects of the psychological contract are manipulated in the current study. First, we manipulate the relative amount of expertise each person brings to the task. One research goal is to learn how advice giving and taking change when the Judge–Advisor relationship is one between two novices, or between a novice and an expert. Unlike most prior research on expertise, we experimentally manipulate the factor.

The second component of the contract concerns performance-contingent rewards. The Judge's task is to estimate prices of various models of a consumer product. In some experimental conditions, JASs have the potential to win monetary outcomes contingent on the Judge's post-advice decision accuracy. The Judge decides what proportion of the money to give to the Advisor, but the timing of this reward allocation decision is manipulated to occur either before or after the advising interactions. This makes it possible to understand how advice giving and taking change as a function of whether the Judge has made, or is yet to make, a commitment to pay for advice. This examination of Reward Allocation Timing represents the main contribution of this paper. As we shall see, a focal prediction is the greater use of expert advice once the decision to allocate reward money has already been made.

Advisor Expertise

Knowing each other's expertise level is assumed to be important to members of a JAS, because people are generally motivated to engage in social comparison regarding their performance (Suls & Tesch, 1978; Brickman & Berman, 1971). In JASs, comparisons of expertise suggest how to tailor advice for a Judge or weigh advice from an Advisor. Judges, being responsible for making the final judgements as well as allocating money, should be especially interested in comparing the expertise of themselves and their Advisor. In an organizational context, they might notice the label of "expert" provided in the initial introduction of the Advisor, or infer expertise from the Advisor's credentials and training, occupation, etc. From a psychological contract perspective, expertise is a contribution (or "input") that either or both parties can bring to the JAS. A Judge is likely to expect highly accurate advice from an expert Advisor, especially if the Judge is "paying" the Advisor for the advice.

Of course, expert and novice are relative terms. We use these to distinguish between two levels of expertise, keeping in mind that characteristics of "experts" in judgement and decision making vary widely over domains (cf. Shanteau, 1992). We define expertise as domain- or task-specific knowledge (Ericsson & Smith, 1991). According to this view, a person with domain-specific training can be assumed to be an expert while someone without such training is considered a novice.


We manipulated Advisor Expertise by training some Advisors on the relationship between product attributes and price. Advisors who had been trained were introduced to their respective Judges as experts, while the others were presented as peers who, like the Judges, were untrained in the price-estimation task.

We allowed the quality of human Advisors' advice to vary across judgements within each Advisor, across Advisors within the same expertise condition, and across expertise conditions. Expert Advisors had training credentials, reputations as experts, and better judgement than novices.1 Our approach thus provided a more realistic representation of real-world expertise than did previous research (e.g., Harvey & Fischer, 1997) that fabricated Advisor "expertise" levels by controlling advice quality and impressions of expertise. To ensure that the ascription of expertise to expert Advisors in this study was valid, we conducted the following manipulation check.

Expertise manipulation check: Expert Advisors will have more accurate recommendations than will novice Advisors.

A critical feature to keep in mind, however, is that there is no necessary relation between advice quality and its use by the Judge (Harvey et al., 2000). For instance, Yaniv and Kleinberger (2000) report that decision makers engage in "egocentric discounting" of advice (i.e., underweighting of advice relative to one's own opinion) even if the advice is more accurate than their own judgement (see also Harvey & Fischer, 1997; Sniezek & Buckley, 1995). Consequently, Judges tend to underutilize advice, and this phenomenon is reduced but not eliminated when Advisors are experts. It is therefore necessary to examine other aspects of the psychological contract that interact with the Advisor's expertise level in influencing Judges' utilization of advice. In this regard, we investigate the presence and timing of allocation of monetary rewards.

Monetary rewards

Long an accepted practice in the experimental economics literature, the use of monetary performance incentives in psychological research is increasing (Zwick, Erev, & Budescu, 1999). Empirical research (e.g., Camerer & Hogarth, 1999; Smith & Walker, 1993) has found, in support of expectancy theories (e.g., Naylor, Pritchard, & Ilgen, 1980), that performance-contingent financial incentives improve average performance when effort increases due to the presence of incentives, and when performance increases as a consequence of effort.

Decisions in real-world JAS analogues often involve monetary outcomes, but JAS laboratory studies involving rewards have been limited in scope. Rewards have typically been available to only Judges (e.g., Yaniv & Kleinberger, 2000) or have not varied across conditions (e.g., Sniezek & Buckley, 1995). Yet, important new theoretical and practical issues arise when rewards are used with JASs. Compared to independent individuals, JASs and groups face additional uncertainty about rewards when there is outcome interdependence (i.e., when all outcomes from joint work are distributed among members; see Sniezek, May, & Sawyer, 1990). It is also not clear whether Advisors and Judges react comparably to the presence of rewards.

In the current study, care was taken to ensure that the task was one in which initial performance of unrewarded individuals would leave room for improvement, perhaps by increased effort (Harvey & Fischer, 1997). In reward conditions, the amount of money provided to a JAS was based on the accuracy of the Judge's post-advice estimates. The Judge allocated a proportion of this overall sum of money to the Advisor, and kept the rest. There were no reward opportunities and thus no reward allocation decisions in the "no reward" control conditions. The possibility of winning rewards is likely to motivate Advisors and Judges to greater effort in their recommendations and pre-advice estimates respectively. From the perspective of the psychological contract, both parties are likely to increase their contributions (inputs) to the task.

1 Unlike their real-world counterparts, our experts were admittedly inexperienced in the task. However, Camerer and Johnson (1991) and Garb (1989) have argued that the correlation between experience and performance on judgement and decision-making tasks is approximately zero when controlling for amounts of training and feedback.

Reward Presence manipulation check: Reward money will increase the accuracy of Judges' pre-advice estimates and Advisors' recommendations.

Whereas Reward Presence alone is expected to improve the quality of Advisors' recommendations and Judges' pre-advice estimates, higher advice utilization and Judges' post-advice quality will depend upon the presence of both rewards and Advisor Expertise. That is, advice utilization and Judge post-advice accuracy should be higher when both rewards and Advisor Expertise are present than when either or both is absent. In the absence of rewards, Judges may not be motivated to expend the effort to evaluate and incorporate Advisors' recommendations into their own final estimates even when the Advisors are experts. Thus, when there is no possibility of rewards, advice utilization may or may not vary significantly as a function of Advisor Expertise (Dalal, 2001). Conversely, in the absence of Advisor Expertise, Judges cannot expect an asymmetry in judgement quality between themselves and their Advisors even in the presence of rewards. It is therefore unclear whether such Judges would increase the weighting of their Advisors' recommendations relative to their own pre-advice estimates merely because rewards are present. In the presence of both rewards and Advisor Expertise, however, Judges have the motivation and the rationale (i.e., superior Advisor accuracy) for utilizing advice.

Reward Presence hypothesis 1: Advice utilization will be higher in the condition in which monetary rewards are present and Advisors are experts than in all other conditions.

The influence of Reward Presence and Advisor Expertise on Judges' post-advice accuracy follows directly from their corresponding effects on advice utilization. Advisor recommendations will be more accurate because Advisors are experts and because they are motivated to win rewards. Judges' pre-advice estimates will also be more accurate because of Reward Presence. Finally, as hypothesized above, Judges will be more likely to use the expert advice (i.e., they will engage in less egocentric discounting). Their post-advice accuracy is therefore expected to be higher than if rewards or Advisor Expertise or both were absent.

Reward Presence hypothesis 2: Judges' post-advice accuracy will be higher in the condition in which monetary rewards are present and Advisors are experts than in all other conditions.

Reward Allocation Timing

An important factor in organizational and market versions of JASs that involve money is the timing of the allocation of financial incentives (Sniezek & Van Swol, 2001). An investigation of this aspect of the psychological contract represents the major contribution of the present research. We manipulate whether the Judge divides the anticipated incentive before ("pre-task") or after receiving advice ("post-task"), and we propose that the timing of the allocation relative to receiving advice changes the way in which the Judge evaluates and uses the advice.

In numerous real-world cases, Advisors provide information only after having established the percentage, amount, or rate of financial remuneration they will receive. Most consultants and agents, for example, function in this manner. However, this is not the only financial arrangement according to which advice is given and received. In certain situations, decision makers provide remuneration after they have already received advice.

Real-world examples of such post-task reward allocations are readily available. For instance, servers at restaurants frequently provide advice about the quality of items on the menu; they are aware that the amount of their tip depends partly on the extent to which customers enjoy the recommended entrees. Consider also the example of a large academic research project, on which the primary investigator is required to adjudicate the issue of authorship order for collaborators on the project according to his or her perception of the collaborators' respective contributions. Our "post-task" reward allocation condition is, in fact, a special case of any situation in which one person provides another with goods or services, and expects to be compensated based on the recipient's evaluations of their value. Despite the existence of many real-world psychological contracts that incorporate pre-task or post-task (or both) reward allocations, to our knowledge the effects of Reward Allocation Timing have not previously been investigated from either a theoretical or an empirical standpoint.

One potential difference between the two Reward Allocation Timing conditions is how much Judges will pay their Advisors in the different conditions. Quite simply, however, we do not predict any such difference, because previous research (e.g., Sniezek & Van Swol, 2001; Dalal, 2001) suggests that Judges in short laboratory experimental tasks usually follow equality norms in making allocations.

There is, however, another (and more interesting) potential distinction between the two Reward Allocation Timing conditions. This pertains to the extent to which Judges from the different timing conditions will differentially utilize advice and hence be accurate in their post-advice decisions. A central premise of this paper is that the influence of Reward Allocation Timing on advice utilization and post-advice decision accuracy will depend on the expertise level of the Advisor.

A number of studies involving overjustification, compliance, contrast effects, and behavioral commitment have supported the notion that a prior, seemingly relevant, judgement or behavior can influence subsequent judgements and behavior (Bastardi & Shafir, 1998). The prior act of having allocated a proportion of the reward to the Advisor may have two related and complementary effects on the pre-task Judge. First, his or her future information processing and behavior is likely to be biased in the direction of maintaining consistency with the allocation (e.g., Staw, 1997). Consistency is maintained and dissonance avoided when he or she utilizes the advice, thereby justifying the prior investment. The post-task Judge, in contrast, has not already "paid" for the advice. He or she, therefore, does not experience behavioral consistency pressures and, as a consequence, will not feel as compelled to use the advice. In other words, he or she is expected to engage in standard "egocentric discounting" practices.

The second effect differs slightly from the above "commitment" effect. It is predicted via a logical extension of the "mere ownership effect" (e.g., Beggan, 1992), the "endowment effect" (e.g., Kahneman, Knetsch, & Thaler, 1990), and the research on price as a signal of quality (e.g., Teas & Agarwal, 2000; Erickson & Johansson, 1985; Monroe, 1973).2 The "mere ownership effect" and the "endowment effect" predict that, ceteris paribus, objects are viewed more favorably if they are owned than if they are not. The price–quality relationship literature reveals that price is positively correlated with perceived quality. Now, pre-task Judges have already "paid" for advice and therefore, in a sense, own it. Moreover, we expect these Judges to allocate a substantial portion of their reward—approximately 50%, as suggested by past research (Dalal, 2001; Sniezek & Van Swol, 2001). Because the advice is "owned" and because Judges paid a high "price" for it, they are likely to believe that it is of high quality. As a consequence, they are likely to use it.

Post-task Judges do not experience the "ownership effect" and are not influenced by price–quality considerations. Their advice utilization is therefore likely to be lower. In fact, the salience of deferring allocation till the end of the task may lead these Judges to believe that the advice will not be of high quality. Thus, advice utilization in the post-payment case may be lower not just due to a lack of pressures generated by pre-paying, but also because of active under-valuation of advice.

The research reviewed above supports the contention that pre-task Judges will utilize more advice than post-task Judges, all else being equal. However, we do not presume that Judges' advice taking is as uncomplicated as this. According to Erickson and Johansson (1985), price loses some importance as a predictor of quality when other cues predicting quality are also available. For the pre-task Judge, the expertise level ascribed to the Advisor is a potent cue vis-à-vis the quality of advice the Advisor is capable of giving. The psychological contract for pre-task Judges with expert Advisors essentially entails the provision of a fair proportion of the anticipated rewards to the Advisor in exchange for advice that achieves the potential for high quality created by the Advisor's Expertise training. Advice utilization is therefore expected to be high in this condition.

2 We thank an anonymous reviewer for alerting us to the marketing literature on the price–quality relationship, and for influencing our thinking about the mechanism(s) by which pre-payment influences Judges' advice utilization and post-advice accuracy.

Pre-task Judges with novice Advisors, in contrast, cannot reasonably expect that high reward allocations will motivate their Advisors to provide estimates more accurate on average than the Judges' own pre-advice estimates. Although not substantially different in magnitude from those to expert Advisors, these allocations will merely remain part of a common norm to share under such circumstances. Because of this, pre-task Judges with novice Advisors are unlikely to exhibit high levels of advice utilization.

Thus, we predict that advice utilization is not increased by pre-payment per se, but rather by pre-payment for expert advice. Advice utilization should therefore be highest in this condition. Neither pre-payment nor Advisor Expertise alone would influence advice utilization to the same extent.

Reward Timing hypothesis 1: Advice utilization will be higher in the condition in which reward allocations are made before the task and Advisors are experts than in all other conditions.

Because utilization of expert advice is generally desirable in increasing the accuracy of post-advice decisions, we predict that pre-payment for expert advice will also boost the accuracy of the Judge's post-advice decision.

Reward Timing hypothesis 2: Judges' post-advice accuracy will be higher in the condition in which reward allocations are made before the task and Advisors are experts than in all other conditions.

Although the hypotheses above are germane, they do not account for the complexity of the task and the consequent dependency of several measures of interest on each other. Manipulating aspects of the psychological contract is likely to influence both Judge and Advisor behavior. This change in both parties' behavior is then likely to influence the quality of the Judge's post-advice decisions. For instance, a stated assumption in the rationale for the hypotheses is that advice utilization influences the accuracy of the Judge's post-advice decisions. Yet, this assumption is not explicitly tested in the hypotheses. In addition to tests of hypotheses, therefore, the influence of Advisor Expertise and Reward Allocation Timing on advice utilization and Judge post-advice error is examined via path analyses.3

METHOD

The study involved the manipulation of the psychological contract between the Judge and Advisor. Judges and Advisors were given a veridical understanding of the relative expertise of the Judge and Advisor, and the conditions determining how rewards, if any, would be given to each member. As in many natural JASs, the Judge and Advisor were physically separated and communicated via transmission of text. This procedure controls for effects of several factors present in face-to-face or video-mediated interactions (e.g., non-verbal behavior, emotional expression, racial/ethnic and gender stereotypes) that are unrelated to the hypotheses of interest.

Participants

Participants were 182 undergraduate students at a large public university in the United States. The present study was one of several for which participants could volunteer in order to fulfill a research participation requirement for an introductory psychology course.

3 We thank an anonymous reviewer for suggesting this.


Judgement task

The task was standardized across participants and involved estimating the prices of seven backpack models based on information about their features. Backpacks were chosen because the participant population purchases this product and should therefore find the task interesting (albeit not trivial, given the complexities of current market data on pricing). Judges were required to supply price estimates both before and after consulting their Advisor's recommendation (referred to as Judge pre-advice and Judge post-advice estimates, respectively—not to be confused with pre-task and post-task reward allocations). Advisors recorded their recommended price estimates, which were then given to the Judge.

Information about the backpacks was based on data from Consumer Reports. A linear model for price prediction was developed (R² = 0.91, R²adj = 0.71, p < 0.05) on the basis of five cues (backpack features): capacity (the number of standard-size paperback books that the backpack is capable of holding); convenience of loading (panel-loading versus top-loading); back padding (quilted versus plain); shoulder straps (webbed versus regular); and underside (leather versus same material as rest of backpack). Individual regression coefficients for all five cues used were significant at the 0.05 level. To avoid contaminating the performance measure by random errors associated with actual prices in the relatively small sample of seven backpacks, the price predicted by the model served as the criterion in computing accuracy. The model is also a better representation of expert knowledge regarding backpack prices than are the actual prices for the seven models.
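The five-cue criterion can be illustrated with a short sketch. Note that the coefficients and intercept below are hypothetical: the paper reports only the model's fit (R² = 0.91), not its estimated weights.

```python
# Sketch of the price-prediction criterion described above.
# All coefficient values are HYPOTHETICAL illustrations; the paper
# does not report the fitted weights of its linear model.

def predict_price(capacity, panel_loading, quilted_padding,
                  webbed_straps, leather_underside):
    """Model price from the five backpack cues (binary cues coded 0/1)."""
    intercept = 20.0    # hypothetical base price in dollars
    b_capacity = 0.8    # per paperback book of capacity (hypothetical)
    b_panel = 5.0       # panel-loading vs. top-loading (hypothetical)
    b_quilted = 4.0     # quilted vs. plain back padding (hypothetical)
    b_webbed = 3.0      # webbed vs. regular shoulder straps (hypothetical)
    b_leather = 6.0     # leather vs. same-material underside (hypothetical)
    return (intercept
            + b_capacity * capacity
            + b_panel * panel_loading
            + b_quilted * quilted_padding
            + b_webbed * webbed_straps
            + b_leather * leather_underside)

def estimate_error(estimate, model_price):
    """Accuracy is scored against the model-predicted price, not the
    observed market price, to avoid noise from the small sample."""
    return abs(estimate - model_price)
```

For example, a 20-book, panel-loading, quilted, regular-strap, leather-bottomed backpack would be priced at `predict_price(20, 1, 1, 0, 1)` under these invented weights, and a Judge's estimate would be scored by its absolute deviation from that model price.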

The seven backpacks were presented to participants in random order. Because the accuracy of price estimates on any single backpack may have been unduly influenced by features unique to that backpack, and because there was nothing inherently meaningful (from a statistical or methodological standpoint) about any individual backpack, each participant's judgements on the seven backpacks were averaged; analyses were conducted on these averages.

Design

The design was a 2 × 3 × 3 factorial with the factors of Advisor Expertise (Expert or Novice Advisor), Reward (No Reward, Pre-task reward allocation, Post-task reward allocation) and Role (Judge pre-advice estimate, Advisor recommendation, Judge post-advice estimate). The first two factors were varied between participants. Assignment to the six treatment conditions, engendered by fully crossing the two Advisor Expertise and three Reward categories, was random by experimental session. Role was treated as a within-participants factor due to repeated observations on the Judge (pre-advice and post-advice estimates) as well as potential dependencies between the observations for the Judge and Advisor of the same JAS.

Advisor Expertise

In sessions assigned to expert Advisor conditions, Advisors viewed a short video on backpack pricing. Training was designed to increase estimation accuracy without being so complete as to eliminate estimation error and create a ceiling effect for performance. Judges with expert Advisors were informed of the Advisor's Expertise training and its potential benefits. Judges with novice Advisors were informed that their Advisors possessed, on average, no more expertise than they themselves.

Rewards

Session numbers were randomly assigned to be in the ‘‘reward’’ or ‘‘no reward’’ treatment conditions. All

participants in the same session were assigned to the same condition. After the conclusion of all data collec-

tion for the study, 50% of the JASs in the reward conditions were randomly chosen for actual performance-

contingent monetary payments. This was done because it permitted larger, more attractive rewards without

increasing the overall cost. It can also be argued that this payment scheme realistically represented various

chance factors in determining actual outcomes in real-world contexts.


Thus, the monetary reward should be understood to mean the dollar amount that the JAS had a 50:50

chance of receiving. The amount of the potential reward could vary from $0 to $21, with $3 awarded for

each problem in which the Judge’s post-advice estimate was within five dollars of the price predicted by

the linear model. Participants in the reward conditions were truthfully informed about all the reward con-

tingencies described above. Rewards were not available to participants in the ‘‘no reward’’ conditions. These

participants were also unaware that rewards were available to participants in the other conditions.
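The reward rule can be sketched in a few lines (illustrative only; the function name is ours, not the authors'):

```python
def potential_reward(post_advice_estimates, model_prices,
                     tolerance=5.0, per_problem=3.0):
    """Dollar amount the JAS had a 50:50 chance of receiving: $3 for each
    problem on which the Judge's post-advice estimate fell within $5 of the
    price predicted by the linear model (so $0 to $21 over seven problems)."""
    hits = sum(abs(est - price) <= tolerance
               for est, price in zip(post_advice_estimates, model_prices))
    return per_problem * hits

# A Judge within $5 of the model price on all seven backpacks earns the maximum.
print(potential_reward([50.0] * 7, [50.0] * 7))  # 21.0
```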

Reward Allocation Timing

Judges were given the power to distribute the reward between themselves and their Advisors. The allocation

to the Advisor could vary from 0 to 100%; i.e., anywhere between the extremes of giving the Advisor nothing

and giving all of the reward to the Advisor. Previous studies (e.g., Sniezek & Van Swol, 2001; Dalal, 2001)

have found, however, that undergraduates in laboratory experiments tend to distribute monetary rewards to

themselves and other parties in a relatively equal manner. Thus, the reward allocation percentage is unlikely

to be a major variable in the current study; rather, the presence and timing of reward allocations are the main

variables of interest. The timing of allocation decisions was varied (randomly by experimental session) so

that Judges determined the Advisor’s reward percentage either before task inception (Pre-task) or after task

completion (Post-task). Note that feedback about their own or their Advisors’ performance was not available

to Judges in either timing condition.

Dependent variables

Accuracy was defined as the absence of estimation error, which, in turn, was computed as the mean absolute

difference between the estimates and the criterion prices predicted by the optimal linear model over the

seven problems.4 To quantify Advisor social influence, we followed Harvey and Fischer’s (1997) method.

Advice utilization was operationalized as the difference between Judge pre-advice and post-advice estimates

as a proportion of the difference between Judge pre-advice estimates and Advisor recommendations. For

each Judge, advice utilization was the mean change index across the seven problems.
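Under the definitions above, both dependent variables reduce to simple computations; a minimal sketch (function names are ours):

```python
def estimation_error(estimates, model_prices):
    """Accuracy measure: mean absolute difference between estimates and the
    criterion prices predicted by the optimal linear model, over problems."""
    return sum(abs(e - p) for e, p in zip(estimates, model_prices)) / len(estimates)

def advice_utilization(pre_advice, post_advice, advice):
    """Harvey and Fischer's (1997) change index, averaged over problems:
    the pre- to post-advice shift as a proportion of the pre-advice-to-advice
    distance (0 = advice ignored, 1 = advice fully adopted)."""
    indices = [(post - pre) / (adv - pre)
               for pre, post, adv in zip(pre_advice, post_advice, advice)
               if adv != pre]  # index is undefined when advice equals the prior
    return sum(indices) / len(indices)

# A Judge who moves halfway from each pre-advice estimate toward the advice
# has a mean change index of 0.5.
print(advice_utilization([40, 60], [45, 55], [50, 50]))  # 0.5
```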

Procedure

Participants in each session were randomly paired (as a JAS) and, within the pair, randomly assigned the role

of Judge or Advisor. These random role assignments were done publicly. This allowed every participant to

appreciate that his or her partner was indeed a real person—specifically, a peer in the same undergraduate

psychology course—participating concurrently in the task. Yet, the random assignment to roles also con-

trolled for participants’ perceptions and inferences about their partners’ task performance.

Judges and Advisors completed their respective portions of the task in different rooms. The experimenter

passed the necessary written information between members of the Judge–Advisor pairs. Apart from this, no

verbal or non-verbal communication between Judges and their Advisors was allowed until the end of the

task. This was to ensure that only the Advisors’ written recommendations could be considered as ‘‘advice’’

by their Judges.

The first step in the Expert Advisors condition was the training of Advisors. Next, Judges in the pre-task

allocation condition recorded the Advisor’s reward percentage, which was then conveyed to the Advisor.5

The price-estimation task followed. Judges made pre-advice estimates while Advisors made recommenda-

tions. Judges were then given their Advisors’ written advice. Having access to the backpack information,

4 Solely for the purpose of providing rewards to participants, we computed a binary measure of accuracy that determined whether each estimate was within $5 of the price determined by the linear model.
5 This was done after the training session for expert Advisors to reduce dilution of its impact by the training and to ensure that the reward percentage amount would not affect Advisor learning during training.


their own written pre-advice estimates, and the written advice, Judges then made post-advice estimates. Fol-

lowing the task, Judges in the Post-task allocation condition decided their Advisors’ reward percentage. All

participants were then debriefed and thanked.

RESULTS

After showing support for the two manipulation checks, we describe the effects of two factors of the psy-

chological contract, Advisor Expertise and Reward presence, on the behavior of the Judge. Next, we verify

that, as expected, actual reward allocations did not differ across the four treatment conditions created by

manipulating the levels of Advisor Expertise and the third contract factor, Reward Allocation Timing. We

then report analyses designed to test the hypotheses about Advisor Expertise and Reward Allocation Timing

on advice utilization and Judges’ post-advice estimation accuracy. Finally, we model the effects of Advisor

Expertise and Reward Allocation Timing using path analysis.

Manipulation checks

The first step was a manipulation check (Manipulation Check 1) to verify that Advisors given expertise train-

ing made more accurate predictions than novice Advisors. Accuracy is defined in terms of (the minimization

of) estimation error, which in turn is the absolute difference between the price predicted by the linear model

and the participant's price judgement. Results were supportive; expert Advisors (M = $5.56) had lower estimation error than novice Advisors (M = $8.79), F(1, 83) = 29.18, p < 0.05. Thus, we conclude that the training improved performance, and that the expertise ascribed to expert Advisors was justified.

Analyses also supported Manipulation Check 2, which predicted that Reward presence would lead to

higher accuracy for both Judges and Advisors. The recommendations of Advisors contained significantly

less error when rewards were present (M = $6.64) than when they were absent (M = $8.34), F(1, 83) = 7.42, p < 0.05. In addition, Judges' pre-advice estimates contained marginally less error when rewards were present (M = $6.51) than when they were absent (M = $8.08), F(1, 83) = 3.34, p < 0.10. We therefore conclude

that, in general, the presence of monetary rewards contingent on accuracy provides a motivational effect

resulting in decreased error in judgement.

Reward Presence

Reward Presence Hypothesis 1 proposed that advice utilization would be greatest for the Judges with expert

Advisors and rewards. Mean advice utilization for all four conditions is shown in Figure 1. A planned com-

parison between conditions revealed that, as hypothesized, advice utilization was higher when Judges had

expert Advisors and rewards (M = 57.7%) than when either expert Advisors or monetary rewards or both were absent (M = 46.4%), F(1, 85) = 6.26, p < 0.05.6

Reward Presence Hypothesis 2 proposed that Judges with expert Advisors and rewards would have the

highest post-advice accuracy. Results were supportive, and can be seen in Figure 2. A planned comparison

found the post-advice estimation error of Judges with expert Advisors and rewards (M = $5.78) to be lower than when expert Advisors or rewards or both were absent (M = $7.71), F(1, 85) = 8.21, p < 0.05.

Reward Allocation proportions

Before testing the hypotheses associated with Reward Allocation Timing, it is important to verify that

rewards were not shared in different proportions over the four experimental conditions involving rewards.

6 Tests of this and subsequent hypotheses involved planned contrasts of data from one treatment condition versus the pooled data from the other three relevant treatment conditions.


Comparison of the mean allocations to Advisors in the Pre-task (M = 58.32%) and Post-task (M = 58.04%) reward allocation conditions did not show any effect of timing, F(1, 55) = 0.23, p > 0.10. Further, the difference between reward allocations to expert and novice Advisors in the Pre-task condition (M difference = +5.10%) was not significantly greater than the analogous difference in the Post-task condition (M difference = +2.23%), F(1, 55) = 0.08, p > 0.10.7

Figure 1. Advice utilization as a function of Advisor Expertise and Reward Presence.

Figure 2. Judge accuracy (plotted in terms of mean absolute estimation error) as a function of Advisor Expertise andReward Presence. Note: Lower estimation errors indicate higher accuracy.

7 With regard to the negligible role of the reward allocation percentage, the present findings (mean allocation to Advisor = 58.16%, standard deviation = 18.52%; 55.9% of allotments were exactly 50%) were entirely consistent with those of previous studies (e.g., Sniezek & Van Swol, 2001; Dalal, 2001). The mean percentage allocation was not significantly higher than 50%, and percentage allocations did not vary significantly as a function of Advisor Expertise or decision timing. Additionally, results of hypothesis tests were not appreciably altered when the reward allocation percentage was used as a covariate. Now, in the foregoing analyses, expertise was a manipulated variable with two levels: novice and expert. An alternative approach is to assess ''expertise'' as an individual difference variable, specifically in terms of Advisor accuracy. The correlation between Advisor error and reward allocation percentage was non-significant for both the Pre-task (r = 0.03, p > 0.10) and Post-task (r = −0.02, p > 0.10) conditions, meaning, respectively, that Advisors rewarded with larger shares of money were not subsequently more accurate, and that more accurate Advisors were not subsequently rewarded with larger shares of money. Thus, reward allocation amounts did not vary significantly with either ascribed expertise level or actual advice quality.


Reward Allocation Timing

Hypothesis tests. Reward Allocation Timing Hypothesis 1 proposed that advice would be used most when it

came from an expert and when it had been paid for before the task. Data were supportive. A planned com-

parison between conditions revealed that, as hypothesized, utilization of expert advice was higher when

Judges allocated rewards pre-task and had expert Advisors (M = 65.87%) than in all other cases (M = 53.74%), F(1, 55) = 7.33, p < 0.05. Figure 3 illustrates advice utilization as a function of Expertise

and Reward Allocation Timing.

Reward Allocation Timing Hypothesis 2 proposed that Judges' post-advice judgements would be most

accurate when they were advised by experts and had paid for their advice up front. As hypothesized, Judge

post-advice error was lower in the condition with pre-task allocations and expert advice (M = 5.13) than in the other conditions (M = 6.81), F(1, 55) = 3.97, p < 0.05. Figure 4 displays mean post-advice Judge error by

level of Expertise and Timing condition.

Why was advice utilization highest and Judge post-advice estimation error lowest when Judges pre-paid

for expert advice? Post-hoc pairwise comparisons revealed that, as we had speculated, advice utilization was

Figure 3. Advice utilization as a function of Advisor Expertise and timing of the reward decision.

Figure 4. Judge accuracy (plotted in terms of mean absolute estimation error) as a function of Advisor Expertise and timing of the reward decision. Note: Lower estimation errors indicate higher accuracy.


higher for expert versus novice Advisors in the pre-payment, but not post-payment, condition [M Pre-payment, Expert Advisors = 65.87%, M Pre-payment, Novice Advisors = 48.63%, F(1, 30) = 5.24, p < 0.05; M Post-payment, Expert Advisors = 51.28%, M Post-payment, Novice Advisors = 49.54%, F(1, 25) = 0.01, p > 0.05]. The same pattern occurred with regard to Judge post-advice error as well [M Pre-payment, Expert Advisors = 5.13, M Pre-payment, Novice Advisors = 7.25, F(1, 30) = 5.15, p < 0.05; M Post-payment, Expert Advisors = 6.28, M Post-payment, Novice Advisors = 7.09, F(1, 25) = 0.01, p > 0.05].

Path analyses. To account for the inter-dependencies in the data, two path models—a full model and a

reduced (fully-mediated) model—were fitted (see Figure 5). The full model used Advisor Expertise and

Reward Allocation Timing as proximal causes of both advice utilization and Judge post-advice accuracy

(error). In addition, advice utilization predicted Judge post-advice error. Advisor Expertise and Reward Allo-

cation Timing both predicted Advisor error, and Reward Allocation Timing predicted Judge pre-advice error.8

In a structural equation modeling (SEM) framework, there are several well-known problems related to

modeling non-nominal interaction terms with more than two categories. These include non-normal variable

distributions, unreliability of indicator variables, nonlinearity of parameters, misspecified mean structures,

and lack of invariance across variable transformations (Cortina, Chen, & Dunlap, 2001; Moosbrugger,

Schermelleh-Engel, & Klein, 1997). Therefore, only a main-effects model was fitted. Because the interaction

between Advisor Expertise and Reward Allocation Timing was not modeled, the path-analytic results cannot

be directly compared to the ANOVA results reported earlier.

As can be seen in Table 1, the full model provided a good fit to the data. This is of course unsurprising,

considering that almost all the possible paths were freely estimated in the model. Note also, from Figure 5,

that Reward Allocation Timing and Advisor Expertise were essentially uncorrelated, thereby verifying the

efficacy of the random assignment procedures used.

The full model appeared to indicate that both Advisor Expertise and Reward Allocation Timing influenced

the amount of error in the Judge’s post-advice decision. The direct effects, however, were not significant. The

effect of Advisor Expertise on Judge post-advice error was mediated by Advisor error and, to a much lesser

Figure 5. Path model showing the influence of Reward Timing and Advisor Expertise on Judge post-advice error. *p < 0.05; †p < 0.10. Note: Paths fixed to zero in the reduced model are in dashed lines. Numbers outside parentheses reflect coefficient values for the full model, whereas numbers inside parentheses reflect coefficient values for the reduced model. Reward (allocation) timing was scored such that a positive path from it to another variable indicates that higher values of that variable are associated with pre-task reward allocation.

8 To prevent a perfectly fitting model that would result from the inclusion of all possible paths, the path from Advisor Expertise to Judge pre-advice error was omitted. A one-way ANCOVA did not reveal a significant effect of Advisor Expertise on Judge pre-advice error when controlling for Reward Allocation Timing, F(1, 56) = 2.63, p > 0.10. This indicates that omitting the path in question was unlikely to have caused a large decrement in model-data fit.


extent, by advice utilization. Likewise, the effect of Reward Allocation Timing on Judge post-advice error

was mediated by Judge pre-advice error and, to a lesser extent, by Advisor error and by advice utilization.

Relationships between Advisor Expertise and Reward Allocation Timing, on the one hand, and advice uti-

lization on the other, tended not to be strongly mediated by Advisor error and Judge pre-advice error. In

general, both expert Advisors and pre-task reward allocations were associated with increased advice utiliza-

tion and (via the mediators) decreased Judge post-advice error.

A reduced model was also constructed by fixing the direct effects of Advisor Expertise and Reward Allo-

cation Timing on Judge post-advice error to zero. In other words, the manipulated variables (Advisor Exper-

tise and Reward Allocation Timing) were only modeled as distal causes of the ultimate criterion (Judge post-

advice error), because the relationship between manipulated variables and ultimate criterion was mediated

by other variables. All other paths remained similar to their counterparts in the full model. Fit statistics for

the reduced model are also presented in Table 1.

The chi-square difference test for ''nested models'' revealed that the fit of the reduced model was not significantly worse than that of the full model, Δχ² (Δdf = 2) = 1.95, p > 0.05. In addition, because the models

were nested, their fit could be compared by directly comparing the values of the parsimony-adjusted ‘‘abso-

lute’’ fit indices (Hoyle & Panter, 1995)—here, the AGFI, which penalizes model complexity. The AGFI

was higher for the reduced model than for the full model. It therefore appeared reasonable to conclude that,

at least when the interaction between Advisor Expertise and Reward Allocation Timing was not taken into

account, the predominant effect of these variables on Judge post-advice error was indirect.

DISCUSSION

Whenever there is a need for accurate decision making, there is an incentive for decision makers to adapt

their strategies to improve the accuracy of their judgements and choices. The JAS paradigm is based on the

assumption that a common strategy for increasing the quality of important decisions is the communication of

advice from another party who shares some stake in the consequences of the decision. Regardless of whether

this communication is initiated by Judge or Advisor, the primary objective of advice giving and taking is

frequently to improve the Judge’s decision making. This study demonstrates that the nature of the psycho-

logical contract influences the behavior of both Judge and Advisor, which in turn influences Judges’ post-

advice decision accuracy. We now discuss the effects of each of our psychological contract manipulations.

The psychological contract

The joint activities of a Judge and Advisor are assumed to be guided by their shared understanding of what

each of them will contribute to and receive as outcomes from the decision-making process. The data show

that the details of this psychological contract affect various behaviors of the Judge and Advisor, in both pre-

dicted and unexpected ways.

Table 1. Fit of the path models to the JAS data

Model     χ² (df)    p-value   GFI    AGFI   RMSEA   CFI
Full      2.54 (2)   0.28      0.99   0.88   0.060   0.99
Reduced   4.49 (4)   0.33      0.98   0.89   0.041   0.99

GFI = Goodness-of-fit index; AGFI = Adjusted goodness-of-fit index; RMSEA = Root mean square error of approximation; CFI = Comparative fit index.
Note: In the reduced (fully mediated) model, the direct effects of Advisor Expertise and Reward timing on Judge post-advice error were fixed to zero.


Before discussing these findings, it is worth contemplating what our operationalization of the psycholo-

gical contract did and did not entail. We kept the potential scope for psychological contracts straightforward,

addressing only the Judge and Advisor’s relative expertise levels and their opportunity for, and timing of

allocation of, monetary rewards. The beliefs composing the contract were the same for Judge and Advisor,

and compatible with the realities of the context in which they were operating. This was accomplished by

providing the same truthful information about relative expertise and rewards to both parties. In practice,

however, beliefs about contributions and outcomes can be erroneous or can differ between Advisor and

Judge. The very circumstances that lead to role differentiation may encourage different expectations about

the advice transaction on the part of the Judge and Advisor. In controlling these aspects of the psychological

contract to maximize internal validity, we were compelled to attenuate external validity. Further advances in

understanding of decision making in JASs will, however, require broader construals of psychological con-

tracts in future research. We proceed to discuss the results with respect to the various features of the contract.

Expertise

One purpose of this research was to compare the decision processes in two types of JASs—those in which

the Advisor’s expertise was superior to that of a novice Judge, and those in which it was equivalent. Results

show that the expertise manipulation affected the behavior of Advisors, making expert Advisors more

accurate than novice Advisors. Similarly, Judges with expert Advisors made more accurate post-advice

judgements.

Perhaps the most interesting issue is the connection between Advisor Expertise and the way in which the

advice influences estimation by the Judge. At the extremes, Judges could adhere to their own pre-advice

price estimate and ignore the advice, or simply opt for the price recommended by their Advisor. However,

neither strategy of zero or maximum social influence was used by Judges, regardless of the Advisor Expertise level. Instead, Judges used a ''combination strategy,'' meaning that their judgements could be described

as a weighted combination of their own and another’s opinions, where neither weight is zero or one.

A striking feature of advice utilization in both Harvey and Fischer (1997) and Yaniv and Kleinberger

(2000) was the tendency for Judges to weight their own pre-advice judgements much higher than Advisor

recommendations. Yaniv and Kleinberger showed that this ‘‘egocentric discounting’’ of advice is lessened,

but not eliminated, with exceptionally good advice. By comparison, Judges in our study engaged in less ego-

centric discounting, with equal weighting on average (49.5% own versus 50.5% Advisor), and preferred the

advice if it came from an expert (46.2% own versus 53.8% Advisor). One explanation for the disparity

between the findings of previous studies and those of the present one is that Judges found the multiple fea-

tures of expertise level in the present study—i.e., reputation, training, and advice quality—more believable

than the expertise manipulations in previous studies.

Yet, it can be argued here, as in Yaniv and Kleinberger (2000), that expert advice was still inappropriately

discounted relative to its potential to improve Judge accuracy. Judges’ post-advice judgements

(M error = $6.20) in the present study failed to reach the accuracy levels of their expert Advisors' recommendations (M error = $5.56), F(1, 45) = 4.61, p < 0.05. Had Judges merely used the simple heuristic of always

adopting the expert Advisor’s recommendation rather than going to the extra effort of combining their judge-

ment with the advice, they would have lowered their error by an additional 10.3%. Since previous studies

have shown that underutilization of advice is not eliminated but only decreased when the Advisor is an expert,

we investigated other factors that interact with the Advisor’s expertise level to increase advice utilization.
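The 10.3% figure follows directly from the two error means reported above; a quick arithmetic check:

```python
judge_post_advice_error = 6.20   # Judges' mean post-advice error, dollars
expert_advisor_error = 5.56      # expert Advisors' mean recommendation error

# Proportional reduction in error had Judges simply adopted the expert advice.
reduction = (judge_post_advice_error - expert_advisor_error) / judge_post_advice_error
print(f"{reduction:.1%}")  # 10.3%
```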

Rewards

An important feature of the social context in this study was the fact that both the Judge and the Advisor knew

that the Judge had the power to distribute the reward money. As in past studies with live dyads (Sniezek &


Van Swol, 2001; Dalal, 2001), reward allocations tended to be shared, presumably due to notions of fairness

between two partners. Nevertheless, rewards had a pervasive influence on the behavior of Judges and Advi-

sors at many points in the decision process. Results for the measure of estimation error were straightforward:

the accuracy of Advisor recommendations and Judge pre-advice estimates was significantly improved by the

presence of rewards.

Theoretical accounts of the effects of performance-contingent rewards predict improvements in perfor-

mance due to increased allocation of effort to the performance task (e.g., Naylor et al., 1980). The positive

effects of rewards on Judge and Advisor accuracy in the present study are all the more impressive given that

contingencies between individual performance and rewards were, as in much of life, imperfect. Rewards for

Judges were contingent on their own performance, and luck in being selected to receive money. Rewards to

Advisors were governed by additional uncertainties about how their performance would actually affect the

post-advice performance of the Judge, and, in the Post-task conditions, how the Judge would decide to share

the reward. Despite the differences due to roles, rewards affected the accuracy of both Judges (pre-advice)

and Advisors positively. Yet, advice utilization and Judge post-advice accuracy were increased most when

both rewards and Advisor Expertise were present. The timing of allocation of the reward also played an

important role.

Reward Allocation Timing

One of the main contributions of the paper was in identifying the role of Reward Allocation Timing in the

psychological contract. Despite the ready availability of examples testifying to the existence of both pre-task

and post-task allocations in ‘‘real-world’’ decision settings, the timing of allocations has never been exam-

ined in laboratory studies of advice giving and taking processes. We demonstrated empirically that allocation

timing influenced advice utilization and Judge post-advice accuracy when the Advisor was an expert.

Indeed, the data showed that expert advice was used more when monetary allocations preceded rather than

followed receipt of advice. Usage of novice advice was not influenced by Reward Allocation Timing. Pat-

terns of Judges’ post-advice decision accuracy followed those of advice utilization. That is, the quality of the

Judges’ post-advice decisions was also highest when Advisors were experts and Judges had pre-paid for this

expert advice.

It is precisely because Judges normally underweight expert advice that Reward Allocation Timing

becomes an important variable. Expert advice was weighted significantly higher when Judges pre-paid

(65.87%) than when they post-paid (51.28%). The latter weighting was barely greater than the weighting

of novice advice in either the pre-task (48.63%) or post-task (49.54%) reward allocation conditions. It could

of course be argued that, even in the pre-payment condition, expert advice utilization was insufficient from a

normative perspective. Future research should therefore examine other aspects of the psychological contract

that influence advice utilization and post-advice decision accuracy.

Taking the Judge–Advisor system seriously

The present research has several features that differentiate it from much of the literature on decision making

in organizations. Perhaps the most important is the incorporation of social context, a neglected part of

decision-making research and theory (Fox & Irwin, 1998; Harvey & Fischer, 1997; Heath & Gonzalez,

1995; Kray, 2000; Kray & Gonzalez, 1999; Sniezek & Buckley, 1995; Sniezek & Van Swol, 2001). Research

that is focused solely on the Judge cannot succeed in fully explicating the dynamics of the Judge–Advisor

system. If Advisors are not part of the experimental context, they cannot react to or attempt to influence the

behavior of the Judge, and vice versa. Knowing what Judges want in terms of advice is interesting and impor-

tant (e.g., Stone & Price, 2000; Yates et al., 1996), but of little use if we do not understand the conditions that

lead to the desired advice. The present study showed (see Figure 5, for example) that manipulating aspects of


the psychological contract leads to changes in both Judge and Advisor behavior that influence the post-

advice decision quality of the Judge.

For research on advice giving and taking processes, the JAS—if taken seriously—could represent an ideal

compromise between internal and external validity. Decisions in naturalistic settings are based on a complex

interplay of numerous factors and contingencies. The JAS was originally developed in recognition of the fact

that few truly important ‘‘real-world’’ decisions are made without some level of consultation with one or

more people. Application of the psychological contract to the JAS represents another important step in repli-

cating the complexity of real decisions in the controlled environs of the laboratory. The present study makes

a beginning in this regard. As can be seen in the path analyses, the advising interaction in the present study

involved several interdependent variables. An interesting finding was that the influence of Reward Allocation

Timing and Advisor Expertise on Judge post-advice error was largely indirect. This would not have been

observable on the basis of the ANOVA analyses alone.

Much remains, however, for future research to explore. One consequence of the incomplete information

chronically observed in naturalistic settings is, as alluded to earlier, that Judges and Advisors will be likely to

have different, often incompatible, perceptions of the psychological contract. For instance, the findings with

regard to the impact of Reward Allocation Timing on Judges’ post-advice decision accuracy may be very

different if Judges are unable to accurately compare their own expertise to that of their Advisors. Future

research should therefore involve multidimensional tasks with Judges and Advisors provided training on

different dimensions of the task. Another move in the direction of greater generalizability would be a fairly

straightforward extension of the present study to a situation where Judges themselves choose whether to

allocate a proportion of rewards to expert or novice Advisors before or after the task (rather than Reward

Allocation Timing being an experimental manipulation). Alternatively, the psychological contract could be

altered by removing or reducing interdependency of outcomes, perhaps by having Judges choose to allocate,

up front, a certain amount of money to their Advisors. Unlike a proportion of anticipated rewards, the fixed

amount would not be contingent on the Judge’s post-advice performance. In general, we suggest that future

JAS studies examine aspects of the psychological contract in the context of numerous potential (interdependent) consequences such as the frequency and quality of advice, the extent to which the advice is utilized, the

quality of the Judge’s post-advice decision, and the satisfaction of both parties with the advising interaction.

AUTHOR NOTES

This project was funded by the Advanced Displays and Interactive Displays Consortium sponsored by the

US Army Research Laboratory under Cooperative Agreement DAAL01-96-2-0003.

The authors gratefully acknowledge the assistance of Gregory Brake in development of the task, and feedback from Michael Baumann, David Budescu, and Carol Gohm.

Portions of these data were presented at the annual meeting of the Judgment and Decision Making Society, Los Angeles, November 1999, and the International Symposium on Forecasting, Lisbon, Portugal, June 2000.

The second and third authors contributed equal work to this project. The ordering of their names is, therefore, arbitrary.

REFERENCES

Bastardi, A., & Shafir, E. (1998). On the pursuit and misuse of useless information. Journal of Personality and Social Psychology, 75, 19–32.


Beggan, J. K. (1992). On the social nature of nonsocial perception: the mere ownership effect. Journal of Personality and Social Psychology, 62, 229–237.

Brickman, P., & Berman, J. J. (1971). Effects of performance expectancy and outcome certainty on interest in social comparison. Journal of Experimental Social Psychology, 7, 600–609.

Budescu, D. V., & Rantilla, A. K. (2000). Confidence in aggregation of expert opinions. Acta Psychologica, 104, 371–398.

Camerer, C. F., & Hogarth, R. M. (1999). The effects of financial incentives in experiments: a review and capital-labor-production framework. Journal of Risk and Uncertainty, 19, 7–42.

Camerer, C. F., & Johnson, E. J. (1991). The process-performance paradox in expert judgment: how can experts know so much and predict so badly? In K. A. Ericsson, & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 195–217). Cambridge, UK: Cambridge University Press.

Cortina, J. M., Chen, G., & Dunlap, W. P. (2001). Testing interaction effects in LISREL: examination and illustration of available procedures. Organizational Research Methods, 4, 324–360.

Dalal, R. S. (2001). The prisoner's dilemma game with role-differentiated groups: Expert advice and performance-contingent financial incentives. Unpublished master's thesis, University of Illinois, Urbana-Champaign.

Ericsson, K. A., & Smith, J. (1991). Toward a general theory of expertise: Prospects and limits. New York: Cambridge University Press.

Erickson, G. M., & Johansson, J. K. (1985). The role of price in multi-attribute product evaluations. The Journal of Consumer Research, 12, 195–199.

Fox, C. R., & Irwin, J. R. (1998). The role of context in the communication of uncertain beliefs. Basic and Applied Social Psychology, 20(1), 57–70.

Garb, H. N. (1989). Clinical judgment, clinical training, and professional experience. Psychological Bulletin, 105, 387–396.

Harvey, N., & Fischer, I. (1997). Taking advice: accepting help, improving judgment, and sharing responsibility. Organizational Behavior and Human Decision Processes, 70, 117–133.

Harvey, N., Harries, C., & Fischer, I. (2000). Using advice and assessing its quality. Organizational Behavior and Human Decision Processes, 81, 252–273.

Heath, C., & Gonzalez, R. (1995). Interaction with others increases decision confidence but not decision quality: evidence against information collection views of interactive decision making. Organizational Behavior and Human Decision Processes, 61, 305–326.

Hedlund, J., Ilgen, D. R., & Hollenbeck, J. R. (1998). Decision accuracy in computer-mediated versus face-to-face decision-making teams. Organizational Behavior and Human Decision Processes, 76, 30–47.

Hoyle, R. H., & Panter, A. T. (1995). Writing about structural equation models. In R. H. Hoyle (Ed.), Structural equation modeling: Concepts, issues, and applications (pp. 158–176). Thousand Oaks, CA: Sage Publications.

Kahneman, D., Knetsch, J. L., & Thaler, R. H. (1990). Experimental tests of the endowment effect and the Coase theorem. Journal of Political Economy, 98, 1325–1348.

Kray, L. J. (2000). Contingent weighting in self-other decision making. Organizational Behavior and Human Decision Processes, 83, 82–106.

Kray, L., & Gonzalez, R. (1999). Differential weighting in choice versus advice: I'll do this, you do that. Journal of Behavioral Decision Making, 12, 207–217.

Kuhn, K. M., & Sniezek, J. A. (1996). Differential effects of scenario generation on confidence and uncertainty. Journal of Behavioral Decision Making, 9, 231–247.

Monroe, K. B. (1973). Buyers' subjective perceptions of price. Journal of Marketing Research, 10, 70–80.

Moosbrugger, H., Schermelleh-Engel, K., & Klein, A. (1997). Methodological problems of estimating latent interaction effects. Methods of Psychological Research Online, 2, 95–111. Retrieved 28 June 2003, from http://www.mpr-online.de

Naylor, J. C., Pritchard, R. D., & Ilgen, D. R. (1980). A theory of behavior in organizations. New York: Academic Press.

Rantilla, A., & Budescu, D. V. (1999). Aggregation of expert opinions. In Proceedings of the 32nd Hawaii International Conference on System Sciences (HICSS-32). Maui, HI: Institute of Electrical and Electronics Engineers, Inc. (IEEE). CD-ROM.

Robinson, S. L. (1996). Trust and breach of the psychological contract. Administrative Science Quarterly, 41, 574–599.

Robinson, S. L., Kraatz, M. S., & Rousseau, D. M. (1994). Changing obligations and the psychological contract: a longitudinal study. Academy of Management Journal, 37, 137–152.

Rousseau, D. M. (1995). Psychological contracts in organizations: Understanding written and unwritten agreements. Thousand Oaks, CA: Sage.

Schrah, G. E. (2000). Task complexity and strategy selection in Judge–Advisor decision making: A cost–benefit model of advice utilization. Unpublished master's thesis, University of Illinois, Urbana-Champaign.


Shanteau, J. (1992). Why study expert decision making? Some historical perspectives and comments. Organizational Behavior and Human Decision Processes, 53, 95–106.

Smith, V. L., & Walker, J. M. (1993). Monetary rewards and decision cost in experimental economics. Economic Inquiry, XXXI, 245–261.

Sniezek, J. A., & Buckley, T. (1995). Choice accuracy and confidence in Judge–Advisor decision-making systems. Organizational Behavior and Human Decision Processes, 62, 159–174.

Sniezek, J. A., May, D., & Sawyer, J. E. (1990). Social uncertainty and interdependence: a study of resource allocation decisions in groups. Organizational Behavior and Human Decision Processes, 46, 155–180.

Sniezek, J. A., & Van Swol, L. (2001). Trust, confidence, and expertise in Judge–Advisor systems. Organizational Behavior and Human Decision Processes, 84, 288–307.

Staw, B. M. (1997). The escalation of commitment: an update and appraisal. In Z. Shapira (Ed.), Organizational decision making (pp. 191–251). New York: Cambridge University Press.

Stone, E. R., & Price, P. C. (2000). Overconfidence from the judgment consumer's perspective. Presentation at the annual meeting of the Judgment and Decision Making Society, New Orleans, LA.

Suls, J. M., & Tesch, F. (1978). Students' preferences for information about their test performance: a social comparison study. Journal of Applied Social Psychology, 8, 189–197.

Teas, R. K., & Agarwal, S. (2000). The effects of extrinsic product cues on consumers' perceptions of quality, sacrifice, and value. Journal of the Academy of Marketing Science, 28, 278–290.

Yaniv, I., & Kleinberger, E. (2000). Advice taking in decision making: egocentric discounting and reputation formation. Organizational Behavior and Human Decision Processes, 83, 260–281.

Yates, J. F., Price, P. C., Lee, J. W., & Ramirez, J. (1996). Good probabilistic forecasters: the ''consumers''' perspective. International Journal of Forecasting, 12, 41–56.

Authors’ biographies:

Janet A. Sniezek was a professor of psychology at the University of Illinois, Urbana-Champaign. A leading researcher in the decision-making field, her recent research focused on the overconfidence phenomenon and the social context surrounding decision-making. She died of cancer at the age of 51, while this article was under review.

Gunnar E. Schrah is a doctoral student at the University of Illinois at Urbana-Champaign. His research interests include the use of information and advice in organizational decision making, performance measurement, personnel selection and recruitment, organizational justice, and policy analysis.

Reeshad S. Dalal is now an assistant professor of industrial/organizational psychology at Purdue University. He studiescitizenship and counterproductive behavior, affect, and the giving and taking of advice.

Authors’ addresses:

Gunnar E. Schrah, Department of Psychology, University of Illinois at Urbana-Champaign, 603 E. Daniel Street, Champaign, IL 61820, USA.

Reeshad S. Dalal, Department of Psychological Sciences, Purdue University, 703 Third Street, West Lafayette, IN 47907-2081, USA.
