This article was downloaded by: [York University Libraries]
On: 22 November 2014, At: 02:00
Publisher: Taylor & Francis
Informa Ltd Registered in England and Wales Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK
Behaviour & Information Technology
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/tbit20
Persuasiveness of expert systems
Jaap J. Dijkstra (a), Wim B. G. Liebrand (b) & Ellen Timminga (b)
(a) Social Science Information Technology, University of Groningen, Grote Rozenstraat 15, Groningen, 9712 TG, The Netherlands. E-mail:
(b) Social Science Information Technology, University of Groningen, Grote Rozenstraat 15, Groningen, 9712 TG, The Netherlands
Published online: 08 Nov 2010.
To cite this article: Jaap J. Dijkstra, Wim B. G. Liebrand & Ellen Timminga (1998) Persuasiveness of expert systems, Behaviour & Information Technology, 17:3, 155-163, DOI: 10.1080/014492998119526
To link to this article: http://dx.doi.org/10.1080/014492998119526
Persuasiveness of expert systems
JAAP J. DIJKSTRA, WIM B. G. LIEBRAND and ELLEN TIMMINGA
Social Science Information Technology, University of Groningen, Grote Rozenstraat 15, 9712 TG Groningen,
The Netherlands; email j.j.dijkstra@ppsw.rug.nl
Abstract. Expert system advice is not always evaluated by
examining its contents. Users can be persuaded by expert
system advice because they have certain beliefs about advice
given by a computer. The experiment in this paper shows that
subjects (n = 84) thought that, given the same argumentation,
expert systems are more objective and rational than human
advisers. Furthermore, subjects thought a problem was easier
when advice on it was said to be given by an expert system while
the advice was shown in production rule style. Such beliefs can
influence expert system use.
1. Introduction
Do users judge advice of expert systems by examining
its contents? This paper argues that users' beliefs about
expert systems influence their evaluation of the advice.
It is important to know how users assess advice of expert
systems, because they must be able to reject the advice when
the expert system is incorrect (Waldrop 1987, Hollnagel
1989, Chen 1992, Suh and Suh 1993, Berg 1995).
Therefore, modern expert systems are equipped with
explanation functions that give users the opportunity to
examine the inference (Berry and Broadbent 1987,
Waldrop 1987, Wognum 1990, Bridges 1995, Ye and
Johnson 1995). But do users consult these explanation
functions? Are users interested in the argumentation
behind the advice? Theory on persuasion shows that
advice is often judged without an examination of the
arguments. Beliefs held by a person about the adviser
can influence his or her attitudes towards the advice
(Petty and Cacioppo 1986a, b). Therefore, it is possible
that user beliefs and attitudes influence the use of expert
system advice.
In Management Information System research the
topic of user beliefs and attitudes has been studied
extensively (Robey 1979, Ginzberg 1981, Baroudi et al.
1986, Barki and Hartwick 1989, 1994, Igbaria 1993,
Hartwick and Barki 1994, Pare and Elam 1995).
Especially user satisfaction and perceived usefulness
are believed to be important factors for system use (Ives
et al. 1983, Davis 1989, 1993, Davis et al. 1989, Galletta
and Lederer 1989, Igbaria and Nachman 1990, Gatain
1994, Igbaria et al. 1994, Iivari and Ervasti 1994, Keil et
al. 1995). But user satisfaction will not always indicate
whether computer support is desirable (Srinivasan 1985,
Melone 1990, Szajna 1993). Some studies underline that
users can be wrong in their judgement on the usefulness
of intelligent computer support. Kottemann et al. (1994)
show that redundant what-if help increases users'
confidence, and Davis et al. (1994) point out that
irrelevant information in an information system weakens
performance but increases the users' confidence.
Similar results were reported by Aldag and Power (1986)
and Will (1992). Cornford et al. (1994) warn of the
influence an imperfect medical expert system might have
on medical professionals in developing countries.
Dijkstra (1995) found that the incompleteness and
incorrectness of argumentations made by a legal expert
system were hardly noticed by the lawyers who used it.
These studies show that users sometimes make
judgements on expert system advice that are not
supported by the contents of the argumentation. User
acceptance of expert system advice is not always based
on a thorough examination of the advice. This notion is
supported by the Elaboration Likelihood Model (Petty
and Cacioppo 1986a, b), a theory on persuasion that
states that people are likely to make their judgement on
peripheral cues if they are less motivated or unable to
judge a message on its contents. Persuasiveness of the
source of the message is an important peripheral cue.
For example, a doctor's advice is believed to be
important because it is made by a doctor, not because
of a positive evaluation of the message by the patient.
According to the Elaboration Likelihood Model, user
acceptance of expert system advice without thorough
examination of correctness or applicability might be
explained by persuasiveness of the expert system.
The motivation to examine the contents of a message
is triggered by personal relevance, responsibility, and
need for cognition. Need for cognition indicates whether
a person likes to examine an argument (Cacioppo and
BEHAVIOUR & INFORMATION TECHNOLOGY, 1998, VOL. 17, NO. 3, 155-163
0144-929X/98 $12.00 © 1998 Taylor & Francis Ltd.
Petty 1982, Priester and Petty 1995). A person who does
not like to study an argument is more likely to make a
judgement on peripheral cues. Todd and Benbasat
(1992, 1994a, b) found that imperfect use of decision
support systems occurs more often when users want to
reduce their cognitive effort. Thus, need for cognition
can be an important indicator of how a person will use
an expert system.
The research presented in this paper is on persuasiveness
of expert systems. It tries to explain why users of
expert systems sometimes put their trust into redundant
or even incorrect information presented by an expert
system. In line with the Elaboration Likelihood Model,
it is thought that users have certain beliefs about advice
given by an expert system that, regardless of its contents,
make them agree easily. This research tries to identify
some of these beliefs that people might have about
advice given by an expert system.
In the experiment, subjects had to judge advice on
several cases. Some subjects were told that the arguments
were made by expert systems and some were told they
were made by humans. In the expert system group, half
of the subjects got arguments in natural language (the
same as in the human group), the other half got them in
production rule format (see table 1). Subjects were asked
to fill out a questionnaire on what they thought of the
arguments and conclusions. If subjects base their
judgement only on the contents of the advice, this
evaluation will not differ when the adviser is said to be a
human or an expert system. However, it is hypothesized
that an expert system is a persuasive message source in
comparison with human advisers. Research has shown
that the advice of an expert system can be convincing
even when its arguments are weak (Kottemann and
Remus 1987, Dijkstra 1995). Thus, it is expected that the
expert system groups will score higher on the persuasiveness
factors (H1). Furthermore, arguments presented
in a production rule format are more difficult to judge
because people are not used to reading production rules.
Therefore, it is hypothesized that subjects who got the
argument of the expert system in a production rule
format will score higher on the persuasiveness factors
than subjects who got the argument in natural language
(H2). In addition, it is expected that subjects with a low
need for cognition are less motivated to study the
argument and, consequently, are more likely persuaded
by an expert system. Thus, subjects with a low need for
cognition will score higher on the persuasiveness factors
in the expert system conditions than subjects with a high
need for cognition (H3).
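The production rule style manipulated in Condition 3 is the IF-THEN format classic expert systems use internally. As a rough illustration (the study's actual case rules appear in its Appendix; the rule and fact names below are invented), advice in this format might look like the following, together with the naive forward chaining an expert system would run over such rules:

```python
# Invented illustration of production-rule-style advice; the study's
# actual rules are in its Appendix. A rule reads:
#   IF all conditions hold THEN add the conclusion.
def fire_rules(facts, rules):
    """Naive forward chaining: apply rules until no new fact appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# IF ticket bought on the train AND no surcharge paid THEN a fine is due
rules = [({"ticket_bought_on_train", "no_surcharge_paid"}, "fine_is_due")]

print(fire_rules({"ticket_bought_on_train", "no_surcharge_paid"}, rules))
```

Subjects in Conditions 1 and 2 saw the same argumentation as running prose; only this surface form differed.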
2. Method
2.1. Subjects
Students were asked to participate in the experiment
through an advertisement in a local university news-
paper. They were paid 12 Dutch guilders (8 US dollars).
Eighty-five students of several faculties participated.
After randomly assigning the subjects to the conditions,
28 subjects were told that the arguments were made by a
human, 28 subjects were told that the arguments were
made by an expert system, and 29 subjects received the
same arguments in a production rule style and were told
that they were made by an expert system.
2.2. Apparatus and procedure
Table 2 gives an overview of the five phases of the
procedure. The procedure started with a pretest on
subjects’ experience with computers and the knowledge
domains involved (Phase 1). Subjects’ need for cognition
was tested with a questionnaire developed by Cacioppo
and Petty (1982) and translated into Dutch by Pieters et al.
(1987). After the pretests, subjects carried out a simple
number-circling task (Phase 2). Subjects were asked to
circle every 1, 5, and 7 in a table of 875 random
numbers. This test was used as a distraction to reduce
the effect of the pretest. The test was also used as a check
on the need for cognition test (see Phase 4).
In the main part of the test (Phase 3), four cases with a
conclusion and argument were handed out. In each case,
a person's problem was presented. The problem was
described, with advice on it said to be given by either
an expert system or a human (for an example see the
Appendix). The cases were on dyslexia, cardiology, law,
and train tickets. After each case, the subject filled out a
questionnaire on the conclusion and argument. All
subjects evaluated, in random order, the same cases with
the same argument and conclusions. The only difference
between Conditions 1 and 2 was the message source
mentioned (a human in Condition 1 and an expert
system in Condition 2). In Condition 3 the same
argumentation was presented as in Conditions 1 and 2
but in a production rule style and with an expert system
as message source mentioned (see table 1 and the
Appendix).
Table 1. The experimental groups.

                  Human adviser    Expert system adviser
                                   Natural language    Production rules
Condition         1                2                   3
The items used in the questionnaire were based on
research on interpersonal persuasiveness and on qualities
people expect from expert systems. Important interpersonal
persuasiveness factors are credibility (competence
and trustworthiness), liking, similarity, and consensus
(Petty and Cacioppo 1986b, Worchel et al. 1988, O’Keefe
1990). Qualities people associate with expert systems are
objectivity and rationality (Boden 1990, Berg 1995).
Items were designed for all these constructs, with two
items on the difficulty of the case, making a questionnaire
of 23 items (see table 3 for item wordings). A factor
analysis was used to distinguish factors thought to be
important for judging expert system advice.
At the end, subjects carried out a complex number-
circling task (Phase 4). They had to circle every 3, every
6 when a 7 followed, and every second 4 in a table of 600
numbers. It was expected that subjects with a high need
for cognition liked the difficult number-circling task
better than the simple one. The final questionnaire
(Phase 5) was a recall test on the information presented
in the dyslexia, cardiology, law, and train ticket cases.
2.3. Construction of the need for cognition scale
For the construction of the need for cognition scale, 13
of the 18 items from a test developed by Cacioppo et al.
(1984) were used. Pieters et al. (1987) dropped Items 5,
13, and 16 after translating and testing the scale in Dutch
(for item wording see Cacioppo et al. 1984). Item 18 was
dropped because, in a small pilot study, subjects thought
it was incomprehensible. Of the 14 items used in the test,
Item 9 was dropped because it correlated very low with
the other items (the corrected item total correlation was
0.11). Cronbach's alpha was 0.79 for the 13 items used.
As suggested by Cacioppo and Petty (1982), the need for
cognition scale was constructed by collapsing the item
scores. Three 33% percentiles were used to make groups
with a high, average, and low need for cognition.
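The collapsing and grouping just described can be sketched as follows, assuming the scale score is simply the sum of a subject's item scores (the paper says "collapsing" without giving the exact formula, and the scores below are invented):

```python
# Hedged sketch of the grouping described above: collapse each
# subject's item scores into one scale score, then split subjects at
# the 33rd and 67th percentiles into low / average / high
# need-for-cognition groups. All scores here are invented.
def scale_scores(item_scores_per_subject):
    """Collapse each subject's item scores into a single sum score."""
    return [sum(items) for items in item_scores_per_subject]

def tertile_groups(scores):
    """Label each score low/average/high by its tertile."""
    ranked = sorted(scores)
    lo_cut = ranked[len(ranked) // 3]
    hi_cut = ranked[2 * len(ranked) // 3]

    def label(s):
        if s < lo_cut:
            return "low"
        if s < hi_cut:
            return "average"
        return "high"

    return [label(s) for s in scores]

# nine invented subjects with already-collapsed scale scores
print(tertile_groups([13, 21, 34, 45, 28, 52, 17, 39, 47]))
```

With the study's 84 retained subjects, this split yields three groups of 28.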
With two 6-point Likert scale items subjects were
asked whether they liked the difficult number-circling
task better than the simple one (Phase 4, see table 2).
Cacioppo and Petty (1982) used a similar test to verify
the need for cognition scale. As predicted, subjects
with a high need for cognition (M = 8.1) liked the
difficult task better than subjects with a low need for
cognition (M = 6.8), t(49) = 2.44, P < 0.01. Also, scores
on the recall questions (Phase 5, see table 2) were used
to test the need for cognition scale. According to the
Elaboration Likelihood Model, a high need for
cognition would invoke a thorough examination of
the cases. Therefore, subjects with a high need for
cognition were expected to score better on recall in
comparison with subjects with a low need for cognition.
The recall scale was constructed by taking the
number of correct answers on eight right/wrong recall
questions. As predicted, subjects with a high need for
cognition (M = 6.1) scored better on recall than
subjects with a low need for cognition (M = 5.5),
t(49) = 2.02, P < 0.05.
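Both checks above compare high and low need-for-cognition groups with a two-sample t test. A minimal sketch of such a statistic, in Welch's unpooled form (the paper does not state its exact pooling, and the group scores below are invented):

```python
# Hedged sketch: a two-sample t statistic of the kind used above to
# compare high vs. low need-for-cognition groups (e.g. M = 8.1 vs.
# M = 6.8). Welch's unpooled form is shown as an assumption; the
# liking scores below are invented for illustration.
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

high_nfc = [9, 8, 8, 7, 9, 8]   # invented liking scores, difficult task
low_nfc = [7, 6, 7, 6, 8, 7]
print(round(welch_t(high_nfc, low_nfc), 2))
```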
2.4. Construction of the computer experience scale
A scale for computer experience was constructed by
using four questions from the pretest (Phase 1, see table
2). The first question was on frequency of computer use
(never, monthly, several times a week, weekly, daily;
scoring 0-4). The second question was on diversity of
computer use (`Do you use a computer for: word
processing, databases, electronic mail, programming,
games'; scoring the number of applications: 0-5). The
other two questions were 6-point Likert scale items on
computer expertise and expert system knowledge. A
Principal Component Analysis (PCA) showed one
factor, accounting for 65% of the variance. Cronbach's
alpha was 0.81. The PCA factor scores were taken as an
indication of computer experience. Although not
hypothesized, a significant correlation was found
between need for cognition and computer experience,
r = 0.36, P < 0.001; subjects with a high need for
cognition had more computer experience.
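The internal-consistency figures reported above (alpha = 0.79 and 0.81) come from the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch using population variances and invented 4-item data, not the study's data:

```python
# Hedged sketch of Cronbach's alpha as used for the 4-item computer
# experience scale. Population variances are an assumption; the item
# scores below are invented, not the study's data.
from statistics import pvariance

def cronbach_alpha(items):
    """items: list of k lists, each one item's scores across subjects."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # per-subject totals
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# four items scored by five invented subjects
items = [
    [3, 1, 4, 2, 4],
    [2, 1, 4, 3, 4],
    [3, 2, 5, 2, 5],
    [2, 1, 3, 2, 4],
]
print(round(cronbach_alpha(items), 2))
```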
Table 2. An overview of the procedure.

Phase 1: Pretests. Sex, age, study (3 items); knowledge on dyslexia, cardiology, train tickets, law (4 items); computer experience (4 items); need for cognition (14 items).
Phase 2: Distraction. Simple number-circling test.
Phase 3: Main test. Four cases in random order, with after each case a questionnaire of 23 items.
Phase 4: Distraction. Difficult number-circling test and a short questionnaire on the number-circling test.
Phase 5: Recall. Eight recall questions on the cases.
2.5. Construction of the persuasion scales
In the main part of the test, subjects had to judge four
cases. After each case they filled out a questionnaire
(Phase 3, see table 2). Thus, subjects scored each item
four times. Simultaneous Component Analysis (SCA;
Kiers and Ten Berge 1989, Kiers 1990) with a varimax
rotation was used for summarizing the items by means
of a small number of components. In SCA, component
weights are found that define components that optimally
account for the variance in all four cases simultaneously.
These components have the same interpretation in all
four cases, because their weights are equal and apply to
the same items. The results are presented in table 3.
The SCA showed four factors: compliance, objectivity/rationality,
easiness of the case, and authority. The
compliance and authority factors were based on the
interpersonal persuasiveness constructs. Items 2, 6, 10,
20, and 23 were used to measure authority. They
constitute the fourth factor. The other items based upon
interpersonal persuasiveness constructs constitute the
first factor: compliance. Its items are on trustworthiness
(Items 8, 11, and 21), liking (Items 9 and 22), similarity
(Item 19), and consensus (Items 3, 4, 5, 7, and 13).
Objectivity (Items 15, 16, and 18) and rationality (Items
1 and 17) make the second factor. Easiness of the case
(Items 12 and 14) is the third factor. Table 4 gives an
overview of the variance accounted for by the factors
found.
A Mokken scale analysis reliability test (Molenaar et
al. 1994) on the selected items per case supported the
constructed factors. Reliabilities found were about 0.95
for compliance, 0.65 for objectivity/rationality, 0.75 for
authority, and 0.40 for easiness of the case. In the
Table 3. SCA factor weights for all cases and factor loadings for the law case (between brackets) of the items used in the main test. Absolute weights lower than 0.15 and absolute loadings lower than 0.35 are left out.

Factor 1: Compliance
 3.  I fully agree with the way the problem is solved.                       0.34 (0.88)
 4.  This way of problem solving is tempting.                                0.26 (0.78)
 5.  I wouldn't have any questions after this explanation.                   0.23 (0.68)
 7.  More often problems should be solved this way.                          0.24 (0.84)
 8.  I have full confidence in the way the problem is solved.                0.33 (0.91)
 9.* I wouldn't have tolerated such a treatment if I had a similar problem.  0.33 (0.73)
11.  The problem is solved reliably.                                         0.34 (0.82)
13.* This conclusion is a bad one.                                           0.29 (0.74)
19.  I would have solved the case in a similar way.                          0.31 (0.83)
21.  Inference and conclusion are trustworthy.                               0.20 (0.61)
22.  The problem is dealt with sympathetically.                              0.24 (0.56)

Factor 2: Objectivity/rationality
 1.  The problem is solved in a rational way.                                0.33 (0.46)
15.  The conclusion is objective.                                            0.45 (0.67)
16.* The problem is dealt with in an emotional way.                          0.51 (0.56)
17.  The problem is solved step by step.                                     0.26 (0.57)
18.  The problem is dealt with unbiased.                                     0.48 (0.70)

Factor 3: Easiness of the case
12.  This case is easy.                                                      0.48 (0.55)
14.  A child could have come to this conclusion.                             0.69 (0.86)

Factor 4: Authority
 2.  The solution of the problem involves a lot of thinking.                      (0.60)
 6.  The problem is analysed thoroughly.                                     0.19 (0.42)
10.  The analysis shows great expertise.                                     0.32 (0.62)
20.  The problem is solved in a powerful way.                                0.41 (0.69)
23.  The solution is authoritative.                                          0.60 (0.73)

* Reverse scoring is used on these items.
dyslexia case, reliabilities were a bit lower for objectivity/rationality and easiness of the case.
3. Results
All subjects finished their tasks in about one hour.
One subject did not fill out a questionnaire completely
and was left out of the analyses. No differences could be
explained by sex, age, or study. For examining the
hypotheses, a 3 (condition) × 3 (need for cognition)
repeated measurement MANOVA was performed on
compliance, objectivity/rationality, easiness of the case,
and authority SCA scores, taking the four cases as
within factor. For the analysis, SCA factor scores were
used. Scores mentioned in figures 1-4 are standardized
SCA factor scores. Results are reported below.
3.1. Compliance and authority
The SCA showed that compliance and authority were
highly correlated; the scores on the four cases showed
correlations between 0.51 and 0.66. An SCA can find
constructs that correlate because several data sets (the
scores on the four cases) are factorized simultaneously.
Although compliance and authority are two separate
constructs (this was confirmed in an analysis on the
indicators of the compliance and the authority construct),
they are related and, therefore, they are discussed together.

Compliance and authority are both interpersonal
persuasiveness factors. If these factors were important
for explaining persuasiveness of expert systems, their
scores should be significantly higher in the expert system
conditions. However, there was only a small within
effect of the interaction between condition and case
difference on compliance, F(6,225) = 2.22, P < 0.05 (see
figure 1). For the train ticket and law cases the
compliance scores were a bit higher in the production
rule condition, but for the dyslexia and cardiology cases
the scores were lower in the production rule condition
when compared with the expert system with natural
language condition. The dyslexia and cardiology cases
were more technical and difficult to understand than the
train ticket and law cases. Maybe this caused negative
feelings when production rules made the advice even
more technical. However, as figure 1 shows, the effect is
very small. On authority there was no significant
between or within effect from the conditions.

Contrary to the expectation that subjects with a low
need for cognition would score higher for compliance
and authority in the expert system conditions, they
scored lower (see figure 2). Apparently, subjects with a
low need for cognition assigned less authority to the
expert systems than subjects with a high need for
cognition. This holds especially for the expert systems
with a production rule argumentation (t(12) = 2.85,
P < 0.01).

The cases had a significant within effect on both
factors (P < 0.001): compliance F(3,225) = 28.96;
authority F(3,225) = 5.51. Scores on compliance and
authority were much affected by the cases.
3.2. Objectivity/rationality
The objectivity/rationality factor seems to be an
expert system persuasiveness factor. There was a
significant between effect of the conditions on the
scores, F(2,75) = 6.01, P < 0.01 (see figure 3). Subjects
thought the expert system to be more objective and
rational, especially when production rules were used.
There was a significant within effect of the case
difference on the objectivity/rationality scores,
Figure 1. Compliance with the four cases.
Table 4. Percentage of variance accounted for by the SCA factors.

Case           Total   Factor 1:    Factor 2:      Factor 3:       Factor 4:
                       Compliance   Objectivity/   Easiness case   Authority
                                    rationality
Dyslexia       59.9    40.9          8.6            7.1            25.3
Law            58.7    36.3         10.8            7.6            17.8
Cardiology     61.3    37.7         16.5            8.5            22.3
Train ticket   64.9    43.1         18.6            7.2            26.8
F(6,225) = 4.68, P < 0.01; thus, case difference also
influenced objectivity/rationality scores.

Low need for cognition did not increase the effect of
the conditions. However, need for cognition seemed to
influence the evaluation. Figure 3 suggests that subjects
with a high need for cognition expected objective and
rational advice when an expert system was mentioned as
message source; subjects with a low need for cognition
increased their expectations of objectivity and rationality
when production rules were shown as well. The
difference between high and low need for cognition was
significant in the expert system with natural language
condition (t(19) = 2.17, P < 0.05).
3.3. Easiness of the cases
As expected, there was a significant within effect of
case difference on easiness of the case, F(3,225) = 36.70,
P < 0.01, but there was no interaction with other factors.
The train ticket case was judged to be easier than the
other cases. More interesting, however, was the between
effect of the experimental conditions on easiness of the
case, F(2,75) = 6.74, P < 0.01. As shown in figure 4,
cases were judged to be easier in the expert system with
production rules condition. Especially subjects with a
high need for cognition thought cases to be easier in the
production rule condition.
3.4. Computer experience
In a 2 (computer experience) × 3 (condition) repeated
measurement MANOVA, computer experience did not
have an effect on compliance, objectivity/rationality, case
easiness, or authority. Thus, computer experience did not
affect user attitudes towards expert system advice.
4. Conclusions and discussion
Given the same advice, subjects thought an expert
system to be more objective and rational than a human
adviser, especially when the expert system advice was
Figure 2. The effect of need for cognition on compliance (top) and on authority (bottom).
Figure 3. Collapsed scores on objectivity/rationality (top) and the effect of need for cognition on objectivity/rationality (bottom).
Figure 4. Collapsed scores on easiness of the case.
given in a production rule format. Thus, H1 and H2 hold
for the objectivity and rationality construct. Objectivity
and rationality seem to be persuasive cues (beliefs) that
can make users accept advice of expert systems without
examining it. Experimental conditions did not seriously
affect scores on other, interpersonal, persuasiveness
factors (compliance and authority); only on compliance
was there a small within effect of condition by case.
Thus, H1 and H2 were not supported for compliance and
authority, which indicates that these factors are
relatively unimportant persuasive expert system cues.
The experimental conditions did have an effect on
perceived easiness of the cases. Surprisingly, subjects
found cases to be easier when the advice of the expert
system was given in a production rule style. Thus, a case
was thought to be less difficult when the advice was
more difficult to understand.
The experiment shows that studying an expert system
merely as a persuasive message source can give an
interesting new view on expert system use. The fact that
people are told that information comes from an expert
system is enough to influence their evaluation of the
information. This might explain why users of expert
systems sometimes overlook the incompetence of the
expert system. It is very likely that actual interaction
with an expert system will strengthen this persuasive
effect. However, more research has to be done to
examine this hypothesis.
The effect of need for cognition was different than
expected: H3 was not supported. It was hypothesized
that subjects with a low need for cognition would favour
the expert system advice more than subjects with a high
need for cognition. However, the effects of need for
cognition on the authority and compliance scores
indicate that people with a low need for cognition do
not like expert systems. A significant correlation
between computer experience and need for cognition
supports this interpretation. This does not mean that
these people will disagree with the expert system.
According to Todd and Benbasat (1992, 1994a, b),
reduction of cognitive effort is a reason to accept expert
system advice. Most likely, people with a low need for
cognition want to reduce cognitive effort and, therefore,
accept advice of an expert system, even when they do
not like it.
The cases were thought to be easier in the production
rule condition, especially by subjects with a high need
for cognition. It seems likely that these subjects put an
effort into trying to understand the production rules and,
unfortunately, neglect the difficulties of the cases
involved. This implies that expert system use can
decrease attention paid towards problems. This would
make need for cognition an interesting but difficult
factor when studying the attitudes towards expert
systems. People with a low need for cognition may be
uncritical towards expert system advice because they
want to reduce cognitive effort. People with a high need
for cognition may be uncritical because they pay too
much attention to the argument of the expert system and
forget to think about the case. Personal concern and
responsibility, two other factors that influence a user's
motivation to examine the advice (Petty and Cacioppo
1986a, b), will probably affect judgements on expert
system advice in a similar way. This makes these factors
interesting aspects for further research on attitudes of
expert system users.
References
Aldag, R. J. and Power, D. J. 1986, An empirical assessment of computer-assisted decision analysis. Decision Sciences, 17, 572-588.
Barki, H. and Hartwick, J. 1989, Rethinking the concept of user involvement. Management Information Systems Quarterly, 13, 53-63.
Barki, H. and Hartwick, J. 1994, Measuring user participation, user involvement, and user attitude. Management Information Systems Quarterly, 18, 59-82.
Baroudi, J. J., Olson, M. H. and Ives, B. 1986, An empirical study of the impact of user involvement on system usage and information satisfaction. Communications of the ACM, 29, 232-238.
Berg, M. 1995, Rationalizing medical work, Ph.D. dissertation, University of Limburg.
Berry, D. C. and Broadbent, D. E. 1987, Expert systems and the man-machine interface, part two. Expert Systems, 4, 18-28.
Boden, M. 1990, Artificial intelligence and images of man, in R. Ennals and J. Gardin (eds), Interpretation in the Humanities: Perspectives from Artificial Intelligence (Cambridge: Cambridge University Press), 10-21.
Bridges, S. M. 1995, A model for justification production by expert planning systems. International Journal of Human-Computer Studies, 43, 593-619.
Cacioppo, J. T. and Petty, R. E. 1982, The need for cognition. Journal of Personality and Social Psychology, 42, 116-131.
Cacioppo, J. T., Petty, R. E. and Kao, C. F. 1984, The efficient assessment of need for cognition. Journal of Personality Assessment, 48, 306-307.
Chen, Z. 1992, User responsibility and exception handling in decision support systems. Decision Support Systems, 8, 537-540.
Cornford, T., Doukidis, G. I. and Forster, D. 1994, Experience with a structure, process and outcome framework for evaluating an information system. Omega, 22, 491-504.
Davis, F. D. 1989, Perceived usefulness, perceived ease of use, and user acceptance of information technology. Management Information Systems Quarterly, 13, 319-340.
Davis, F. D. 1993, User acceptance of information technology: system characteristics, user perceptions and behavioral impacts. International Journal of Man-Machine Studies, 38, 475-487.
Davis, F. D., Bagozzi, R. P. and Warshaw, P. R. 1989, User acceptance of computer technology: a comparison of two theoretical models. Management Science, 35, 982-1003.
Persuasiveness of expert systems 161
Downloaded by [York University Libraries] at 02:00 22 November 2014
J. J. Dijkstra et al. 162
Appendix
Law case wordings used in the expert system with
natural language argument (the original text was in
Dutch).
Tenancy case
About four years ago, Karel Biegman (age 27) moved in with his mother again, after having lived on his own for 13 months. Mrs. Biegman, a widow, rents her house from the rental company 'Sun house', but Karel pays the rent. Mrs. Biegman is disabled and rather old; soon she will move to a Senior Citizens' Home. Karel would like to be the new tenant of his mother's house, but the rental company does not agree. Karel thinks he has a right to stay and looks for advice. Therefore, at City Hall, Karel consults the expert system 'Tenant Advice'. After Karel has answered some questions on the case, the computer concludes:
As a non-tenant you want to claim tenancy rights, thus
you want to plead for joint tenancy (section 1623 Civil
Law).
To claim joint tenancy according to section 1623 Civil Law
you should have established a long-term household with the
current tenant. A parent and child cannot set up a long-
term household together, because, for the law, a child is
thought to live with its parents only temporarily (DeWit vs
Koning 1972). Usually, a child leaves home when he or she
is grown up. However, there is the possibility that, after
living on your own, you move in with your parents again. In
such a case, it is possible to set up a long-term household
together; that is when you really share the housekeeping
and you plan to live with your parents permanently
(Eleveld vs DeHaan 1981). Unfortunately, the period of
13 months you have lived on your own is too short to claim
joint tenancy after moving in with your mother again.
A bit depressed, Karel goes home. He cannot believe that the period he has lived on his own influences his tenancy rights. Two months later the rental company orders Karel to leave the house.
In the human adviser condition, the last two sentences of the first paragraph read:

'Therefore, at City Hall, Karel consults Mr. Stienstra for advice. After Karel has answered some questions on the case, Mr. Stienstra concludes:'
In the expert system with production rules condition, the case material reads:
Tenancy case
About four years ago, Karel Biegman (age 27) moved in with his mother again, after having lived on his own for 13 months. Mrs. Biegman, a widow, rents her house from the rental company 'Sun house', but Karel pays the rent. Mrs. Biegman is disabled and rather old; soon she will move to a Senior Citizens' Home. Karel would like to be the new tenant of his mother's house, but the rental company does not agree. Karel thinks he has a right to stay and looks for advice. Therefore, at City Hall, Karel consults the expert system 'Tenant Advice'. After Karel has answered some questions on the case, the computer concludes:
A bit depressed, Karel goes home. He cannot believe that the period he has lived on his own influences his tenancy rights. Two months later the rental company orders Karel to leave the house.
GOAL joined_tenancy {section 1623 Civil Law}
{Comment: a non-tenant wants to claim tenancy rights}

RULE long-term-household_parent_child
IF child_living_with_parent = TRUE
THEN long-term_household := FALSE
{Source: DeWit vs Koning (1972). A parent and child cannot set up a long-term household together, because, for the law, a child is thought to live with its parents only temporarily. Usually, a child leaves home when he or she is grown up.}

INFERENCE child_living_with_parent = TRUE
THUS long-term_household = FALSE

RULE exception_long-term-household_parent_child
IF child_has_lived_on_its_own = TRUE
AND period_child_living_on_its_own > 2 {year}
AND parent_child_share_housekeeping = TRUE
AND intention_living_together_permanently = TRUE
THEN long-term_household := TRUE
{Source: Eleveld vs DeHaan (1981)}

INFERENCE period_child_living_on_its_own < 2
THUS long-term_household = FALSE

RULE joined_tenancy
IF long-term_household = TRUE
THEN joined_tenancy := TRUE
ELSE joined_tenancy := FALSE

INFERENCE long-term_household = FALSE
THUS joined_tenancy = FALSE

CONCLUSION: you cannot claim joint tenancy
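The rule base above can be read as a simple forward evaluation: the default rule denies a long-term household between parent and child, the exception overrides it, and joint tenancy follows from the long-term-household conclusion. A minimal sketch in Python (hypothetical rendering; the function name `advise`, its parameters, and the month-based threshold are illustrative translations of the listing's identifiers, not part of the original system):

```python
def advise(months_on_own, share_housekeeping, intend_permanent):
    """Evaluate the 'Tenant Advice' rule base for one case (sketch)."""
    # Default rule (DeWit vs Koning, 1972): a child living with a
    # parent is assumed not to form a long-term household.
    long_term_household = False

    # Exception (Eleveld vs DeHaan, 1981): more than two years of
    # independent living, shared housekeeping, and the intention to
    # live together permanently do establish a long-term household.
    if months_on_own > 24 and share_housekeeping and intend_permanent:
        long_term_household = True

    # RULE joined_tenancy (section 1623 Civil Law): joint tenancy can
    # be claimed exactly when a long-term household exists.
    return long_term_household

# Karel's case: 13 months on his own, shared housekeeping, plans to stay.
print(advise(13, True, True))   # False: cannot claim joint tenancy
print(advise(30, True, True))   # True: the exception would apply
```

This reproduces the inference chain shown in the listing: with 13 months of independent living the exception rule fails, so the default conclusion (no long-term household, hence no joint tenancy) stands.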