
www.elsevier.com/locate/dsw

Information & Management 43 (2006) 157–178

User satisfaction from commercial web sites: The effect of design and use

Moshe Zviran a,*, Chanan Glezer b, Itay Avni a

a Faculty of Management, Leon Recanati School of Business Administration, Tel Aviv University, Tel Aviv 69978, Israel
b Department of Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel

Received 16 August 2004; received in revised form 16 October 2004; accepted 23 April 2005
Available online 7 July 2005

* Corresponding author. Tel.: +972 3 6409671; fax: +972 3 6407741. E-mail address: [email protected] (M. Zviran).
doi:10.1016/j.im.2005.04.002

Abstract

We empirically investigated the effect of user-based design and Web site usability on user satisfaction across four types of commercial Web sites: online shopping, customer self-service, trading, and publish/subscribe. To this end, a Web-based survey questionnaire was assembled, based on previously reported instruments for measuring user satisfaction, usability, and user-based design. Three hundred and fifty-nine respondents used the questionnaire to rate a collection of 20 popular commercial Web sites. Data collected were analyzed to test four hypotheses on the relationships among the attributes examined. The Web site attributes were also plotted on bi-dimensional perceptual maps in order to visualize their interactions. The two techniques yielded the same result, namely that trading sites are the lowest rated and that online shopping and customer self-service sites should serve as models for Web site developers. These findings are especially useful for designers of electronic commerce (EC) Web sites and can aid in the development and maintenance phases of Web site creation.

© 2005 Elsevier B.V. All rights reserved.

Keywords: User satisfaction; User-based design; Usability; World Wide Web

1. Introduction

The rapid development of the World Wide Web has allowed people, as never before, to access information and interact globally with new markets and products [38,75]. This year, the Web is expected to increase to 200 million sites. According to Nielsen [67–69], the number of Web pages is projected to grow to 50 billion by the end of the year, and in 2007 some 880 million Internet access devices of various kinds may be sold worldwide (www2.cio.com/metrics). Considering the turbulence and size of these developments, it is not surprising that there has been growing interest in identifying design principles and features that can enhance user satisfaction and loyalty to the proliferation of electronic commerce (EC) sites that use the Web as their underlying technological platform [52] and enable the long-term business relationships critical to the success of these ventures.


This claim is further supported by a survey, which found that three of the five main concerns about IT are related to poor user satisfaction [17].

User satisfaction with EC applications has been found to be significantly associated with usability and design features unique to the Web, such as download delay, navigation, content, interactivity, and responsiveness [72]. In addition, online shopping invokes methods of information gathering that are different from those of the traditional shopping experience, raising questions about user satisfaction with the information quality (IQ) and software quality (SQ) of EC applications and resulting in discrepancies between prior expectations and perceived performance. In stock trading sites, other design principles such as convenience, delightfulness, reliability, and technological advance have all been found to affect the level of user satisfaction and loyalty.

The literature indicates that measuring user satisfaction with EC applications is an important but complex task. Many factors affect the satisfaction of users with EC Web sites.

The purpose of this study is therefore to address the following questions:

1. What are the major factors that drive user satisfaction from Web sites?
2. Are there differences among different types of Web sites with regard to user satisfaction?

2. Web success measures

Measuring IS success has received much attention in the IS literature (e.g. [14,26,31,34,45,74,78,85]). These studies view user satisfaction, in terms of system use and acceptance, as the practical measure of IS success. User participation, involvement, and attitude have also been adopted as success measures [5].

For EC there is no way of directly measuring the success of an application [35]. Measures such as total business attracted, site usability, design features, information and Web site quality, user characteristics, and fundamental objectives appear to be relevant indicators [20,65]. There is also difficulty in measuring intentions and actual usage of online shopping; this was addressed by developing an exhaustive literature-derived model of online shopping, classified into: consumer characteristics, Web site and product characteristics, and perceived characteristics of the Web as a sales channel [21].

One approach to coping with the complexity of the issue has been to estimate the quality of EC Web sites using Web site ranking methods. Thus, the Webtango project (webtango.ischool.washington.edu/papers) proposed and tested a quality ranking system to profile Web sites and provide insights for design improvements [46]. This, however, cannot replace usability testing but complements it by identifying aspects to be assessed during application acceptance tests. An extension of the Kano Quality Model [48] found that the quality factors seemed to change over time, and that the same quality factor may have different quality designations in different domains [92]. WebQual is a popular index calculated on the basis of user perceptions on dimensions of usability, information quality, and service interaction quality; it has evolved via a process of iterative refinement [8–13].

Another alternative exploited automated tools that analyze logs of Web servers [84]. These were easy to use and highly effective in capturing the volume of activity on a Web site in the form of page views, hits, and even return visits (using cookie technology), but they did not provide any reliable indication of the value of the published content to the end-user [62]. This is a serious drawback: user satisfaction is critical in establishing long-term client relationships [73] and in increasing profitability [86].

Researchers have investigated various aspects of success. Aladwani and Palvia [2] reported on the development of an instrument that captured key characteristics of Web site quality from a user's perspective. Their 25-item instrument measured four dimensions: specific content, content quality, appearance, and technical adequacy. Shih's [81] extended model to predict acceptance of electronic shopping (e-shopping) indicated that user satisfaction with the WWW and perceptions of information, system, and service significantly affected user acceptance. Ranganathan and Ganapathy [76] surveyed online shoppers and found that security, privacy, design, and information content had an impact on online purchase intent. Liu and Arnett [53] surveyed Webmasters from Fortune 1000 companies and found four factors that are critical to success: information and service quality, system use, playfulness, and system design quality. Lu [54] proposed a triangular conceptual framework for evaluating Web-based business-to-consumer EC applications: EC cost/benefit, EC functionality, and user satisfaction factor arrays. The study revealed that for B2C, most main benefits were fully dependent on or related to the improvement of the relationship with consumers, and that satisfaction was determined by EC functionality and maintenance expense.

Nevertheless, the application functionality categories used by Lu were mainly focused on B2C scenarios (advertising, e-mail ordering, user payment registration, and online shopping). In addition, the user satisfaction construct was measured using only a single item and not by adopting standard instruments [4,30]. Thus, there appears to have been little methodical evaluation of the usability of commercial Web sites [16]. Moreover, the focus on Web site characteristics was on the site as an end-product and did not address the process of its construction for user satisfaction.

In an attempt to fill these gaps, Lu's [55] triangular evaluation framework of EC applications (Fig. 1) was adopted as a reference model, with the aim of zooming in and elaborating on the relationship between Web site capabilities (v1) and customer assessment (v2). This goal was achieved by adopting a prominent and "richer" instrument for measuring user satisfaction; adopting a commercial typology of EC applications; and by introducing usability and user-centered design constructs as moderators.

Fig. 1. An evaluation framework for EC applications (Lu [55]).

3. Research constructs

We investigated the relationship among four constructs: user satisfaction, usability, user-based design, and Web site type.

3.1. User satisfaction

User satisfaction is a common measure of IS success [93] for which several standardized instruments have been developed and tested. User satisfaction is a critical construct because it is related to other important variables in systems analysis and design [50]. It has been used to assess IS success and effectiveness [7,60,77], the success of decision support systems (DSS) [6], office automation success [90], and the utility of IS in decision-making [70]. Definitions incorporate overarching constructs ranging from IS appreciation [87] and user attitudes [22] to end-user satisfaction. The end-user computing instrument (EUCI) comprises five measures of user satisfaction: end-user trust in the system, presenting accurate information, using a clear presentation format, ensuring timeliness of information, and perceived ease of use.

Recognition of the dominance of user satisfaction in the success of an EC application [23] has led to an increased effort on the part of the research community to explore how to measure and model the satisfaction of users and their preferences [51]. Muylle et al. [63] empirically validated a standard instrument for measuring the Web site user satisfaction construct (WUS). Their instrument consisted of three components: information (relevance, accuracy, comprehensibility, and comprehensiveness), connection (ease-of-use, entry guidance, structure, hyperlink connotation, and speed), and layout. Trepper [89] found that convenient site design and financial security had a significant effect on user assessment of EC applications, but that, while an EC application can be technically successful and meet its financial objectives, it can still be a failure if the customers are unhappy with the result. McKinney et al. [59] presented evidence that a user's satisfaction with an EC Web site can be modeled as a perceived disconfirmation, resulting from a gap between user expectations and the actual performance of the EC Web site with respect to information and software quality. Khalifa and Liu [49] argued and empirically demonstrated the need to consider the evolutionary nature of satisfaction with Internet-based services.

3.2. Usability

According to ISO 9241 [42,43], usability is "the extent to which intended users of a product achieve specified goals in an effective, efficient and satisfactory manner within a specified context of use." Researchers have adopted different approaches in specifying usability measures. One approach posits that usability is promoted if its design method meets a hierarchical set of criteria in learnability, flexibility, and robustness [29]. Measuring usability is then based on evaluating the experience of the user interacting with the system, which involves a focus on the interface.

Other researchers have viewed usability as dependent on product characteristics such as consistency, user control, appropriate presentation, error handling, etc. [58,83]. A different approach adopts clusters of such factors as speed, errors, time to learn, retention, flexibility, and attitude [80]; learnability, efficiency, retention, errors, and pleasing ability [64]; or accuracy, completeness, temporal, human and financial efficiency, comfort, and acceptance. Several questionnaires have been developed (www.usabilitynet.org/tools/r_questionnaire.htm).

It is important to note that, while the usability engineering approach of deriving appropriate design targets is useful, usability does not fully determine actual system use. Thus, designers may produce a well-engineered artifact that meets set criteria but still fails to gain the acceptance of users. In other words, usability is a necessary but insufficient determinant of use [28]. To address this problem, the Technology Acceptance Model (TAM) [25] was tailored to model user acceptance of IS, in order to explain the behavioral intention of using the system. Perceived usefulness (PU) and perceived ease of use (PEU) are important in explaining the behavioral intention to use IS [3]. Thus, users may express a preference for a system based on personal judgment, previous experience, aesthetics, cost, etc., and the final driver must be the user's perception of or attitude toward the technology.

Our study adopted the system usability scale (SUS) questionnaire [18], developed at Digital Equipment Corporation. Mature, robust, extensively used, and adapted, it is the most strongly recommended of all public-domain questionnaires. It is a simple 10-item scale giving a global view for quick assessment of the usability of a system in comparison to its competitors or predecessors.

The technique used in constructing the SUS was as follows. A pool of 50 potential questionnaire items was first assembled. Two examples of software systems were then selected (a linguistic tool aimed at end-users and a tool for systems programmers) where there was general agreement that one was "really easy to use" and the other was almost impossible to use, even for highly technically skilled users. Twenty people from the office systems engineering group, with occupations ranging from secretary to systems programmer, then rated both systems against all 50 items using a 5-point Likert scale ranging from "strongly agree" to "strongly disagree". The items leading to the most extreme responses were then selected (the intercorrelations between all selected items were close: ±0.7 to ±0.9). In addition, items were selected so that the common response to half of them was strong agreement and to the other half strong disagreement (to prevent biases caused by respondents not having to think about each statement) (www.usability.serco.com/trump/documents/Suschapt.doc).
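As background on how such an instrument is typically scored, the following is a minimal sketch of the conventional 10-item SUS computation (alternating items are reverse-scored and the sum is rescaled to 0–100). The function and sample responses are illustrative only; the study itself used a reduced seven-item variant (see Section 5.2).

```python
# A sketch of conventional SUS scoring (Brooke's 10-item scale); the
# study used a reduced 7-item variant, so this is background only.

def sus_score(responses):
    """responses: ten 1-5 Likert ratings, in questionnaire order."""
    if len(responses) != 10:
        raise ValueError("SUS expects exactly 10 responses")
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:           # odd-numbered items are positively worded
            total += r - 1       # each item contributes 0..4
        else:                    # even-numbered items are negatively worded
            total += 5 - r       # reverse-scored, also 0..4
    return total * 2.5           # rescale 0..40 to the familiar 0..100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```

The alternating wording is what the item-selection procedure above was designed to support: half the items invite agreement, half disagreement, so respondents must read each statement.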

3.3. User-based (user-centered) design

In contrast to the usability approach, the user-based design paradigm has a broader scope. It involves the user throughout the whole life cycle of the system: information gathering, development, evaluation, and implementation [1]. User input is gathered at three different times:

(1) early in the project, to determine the evaluative criteria users apply to the Web sites they use;
(2) after a preliminary design, to elicit feedback and comments and/or to evaluate aspects of the site;
(3) when the Web site is operational, to elicit continual feedback and suggestions for additions and/or modifications to the site.

Fig. 2 depicts the six criteria used for constructing Web sites with the user in the focus. These criteria are operationalized into Web site features that can be measured in order to evaluate Web sites.

The rationale of user-based design is that users who are consulted at early stages have less antagonism towards the new system [36,37,79]. The cultural variation of the Web underscores the need for a tailored design [61], with the initial questions during the design process being "who is the user?" and "what are his or her goals?", though design guidelines are not available at this stage [82]. In another approach, the visitor and the site manager serve as focal points for activating the development process [27].

Fig. 2. User-based design criteria and their relationship (Abels [1]).

3.4. Classification of Web sites

The Internet houses Web sites of diverse types with different target populations, making it difficult to classify them. In studying the evolution of functional characteristics of 98 Hong Kong-based commercial sites, Yeung and Lu [91] showed that though the content of the sampled Web sites grew larger, their functions were only marginally enhanced. This is in contrast to the general impression of fast-growing e-commerce activities. Hoffman et al. [40] proposed a classification of commercial Web sites into six categories: online storefront, Internet presence, content, mall, incentive, and search agent. Cappel and Myerscough [19] classified the business use of the Web into marketplace awareness, customer support, sales, advertising, and electronic information services. Practitioner classifications included, among others: inner-directed, information-oriented, transaction-driven, and relationship-oriented sites (www.businesstown.com/internet/basic-types.asp), and promotional, content, portal, and e-commerce sites (www.home-basedbusinessopportunities.com/library/webdesign101-types.shtml).

In our study, we adopted the compact IBM classification of Web sites according to volume of traffic [41]. Based on criteria such as pages retrieved; the number, complexity, and type of transactions; the number of searches; information stability; and security concerns, this classification proposed five types of high-volume Web sites: publish/subscribe, online shopping, customer self-service, trading, and B2B (see Appendix A for details). Of these, we excluded the last because of its overlap with the others, due to the nature of the procurement activities of businesses.

4. Research model and hypotheses development

The goal of our effort was to test empirically user satisfaction in different types of Web sites as a function of two attributes: usability and user-based design. The independent construct, Web site usability, mainly referred to the subjective feeling of the user towards the Web site that served as a revenue channel for the merchant [33]. It was expected that the better the Web site's interface fit the user's preferences, the higher would be the value and satisfaction attributed to the Web site. This should result in loyalty and repeat customers, with potentially increased revenues, particularly when entering a competitive environment with well-established brands, where standards are stringent. Thus, our first hypothesis was:

H1. Web sites exhibiting a higher degree of usability will be associated with greater perceived user satisfaction.

The importance of the user-based design construct stems from the growing emphasis on design approaches with the intention of promoting usability [44]. A designer should adhere to the following principles: knowing the user, minimizing memorization, optimizing operations, and engineering for error [39]. The expectation is that the better the design fits the user's perception, the higher the value and satisfaction attributed by the user to the Web site. Thus, our second hypothesis is:

H2. Web sites adhering to user-based design principles will result in greater perceived user satisfaction.

The amount and heterogeneity of Web sites make it difficult to provide a uniform classification of Web sites. We believed that Web sites belonging to different types or domains would possess different characteristics that differentially affected the relationship between usability, user-based design, and user satisfaction (the dependent variable). For example, online shopping sites are usually based on visual catalogues with a relatively low frequency of updates and a high volume of transactions and searches, while publish/subscribe sites (like newspapers) have content that is modified frequently but lower volumes of transactions and search operations. This leads to our third and fourth hypotheses:

H3. The type of a Web site influences the relationship between the Web site's usability and perceived user satisfaction.

H4. The type of a Web site influences the relationship between the Web site's user-based design capabilities and perceived user satisfaction.

The research model is presented in Fig. 3.

5. Methodology

5.1. Instrument

The questionnaire used to collect the data was constructed from several instruments used in previous research.

The user satisfaction construct used the well-known questionnaire developed by Doll et al., which consists of a 12-item measure of the users' reactions to a specific computer interface. All items had large (>0.72) and significant loadings on their corresponding factors, indicating good construct validity. R-square values ranged from 0.52 to 0.79, indicating acceptable reliability for all items.

Usability was tested using the SUS instrument developed at Digital Equipment Corporation. It has been extensively used and adapted. For proprietary reasons, measures of its validity and reliability have not been published; however, in an independent study, Lucey [56] demonstrated that this short 10-item scale has a reliability of 0.85.

User-based design has not been used in previous studies on user satisfaction; for its questions, we merged three questionnaires that address Web site failures, Web searching challenges, and the design of transactive content [32], trimming out redundant items.

The composite preliminary questionnaire then consisted of 45 questions; four of these collected demographic details of the respondents. The questionnaire was pre-tested in a pilot study and further refined and calibrated with the aid of experts, particularly with respect to the user-based design constructs. The final questionnaire had 39 questions, including five demographic items and one question designed to verify internal consistency. Table 1 depicts the sources and categories of questions used in the final questionnaire, which may be obtained from the authors.

5.2. Instrument refinement

Fig. 3. The research model.

Exploratory Factor Analysis (EFA) was employed as a data reduction method on the composite questionnaire. For the user satisfaction items, principal component analysis with Varimax rotation using Kaiser's normalization (Table 2) revealed five factors: content, accuracy, format, ease of use, and timeliness. These explain 81.4% of the user satisfaction variance. In order to test whether a mean of questions Q1 through Q12 can be used to estimate user satisfaction, a second-order analysis was conducted. The first factor, "content" (mean of Q1 through Q4), is able to explain 61.6% of the variance in user satisfaction (Cronbach α = 1 for the 12-item questionnaire); this is higher than the 0.7 (or lower) threshold found in the literature [71].

For the Web site usability construct, principal component analysis using eigenvalues revealed that three items in the original SUS questionnaire (Q13, Q17, Q21) overlapped with items from the other constructs, and these were thus omitted. The Cronbach α reliability score for the seven-item questionnaire was 0.83.
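For reference, Cronbach's α for such an item set can be computed directly from an items-by-respondents response matrix. The sketch below uses randomly generated placeholder data, not the study's responses.

```python
import numpy as np

def cronbach_alpha(items):
    """items: array of shape (n_respondents, n_items) with Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(359, 7))         # placeholder: 359 x 7 items
print(round(cronbach_alpha(demo), 2))            # ~0 for random, unrelated items
```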

For the user-based design construct, principal component analysis with Varimax rotation using Kaiser's normalization produced four factors: content, navigation, search, and performance (see Table 3). These factors explain 52.5% of the variance in the user-based design construct and are congruent with the factors for promoting user-based design of Web-based systems reported by Abels et al.: content, linkage, search capability, and use. A second-order factor analysis yielded one factor that explained 48.4% of the variance.
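A compact sketch of this kind of principal component extraction with varimax rotation, written with numpy only, follows. The data matrix is a random placeholder, and the Kaiser eigenvalue-greater-than-one rule stands in for the paper's exact extraction settings.

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-6):
    """Classic varimax rotation of an items-by-factors loading matrix."""
    p, k = loadings.shape
    rotation, last = np.eye(k), 0.0
    for _ in range(n_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(loadings.T @ (
            rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p))
        rotation = u @ vt
        if s.sum() - last < tol:   # rotation criterion stopped improving
            break
        last = s.sum()
    return loadings @ rotation

rng = np.random.default_rng(1)
X = rng.normal(size=(359, 15))                     # placeholder item responses
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize the items
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]
keep = order[eigvals[order] > 1]                   # Kaiser criterion
loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
rotated = varimax(loadings)                        # rotated loadings to interpret
```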

Finally, the correlations among the user satisfaction, usability, and user-based design constructs are shown in Table 4. All correlations were significant at p = 0.01, except performance with navigation (p = 0.059) and usability with performance (p = 0.929).

5.3. Data collection

The questionnaire was Web-based. This allowed us easier control and quicker processing of data for statistical analysis. The Web site presented each respondent with a list of commercial Web sites that fell under the heading of one or other of the four types of Web sites from IBM's classification (Appendix A): publish/subscribe (90 respondents), online shopping (90), customer self-service (90), and trading (89). The respondents were presented with a quota of Web sites designating the number of required exposures for each of the types. Upon logging on, the respondent was first given an introductory screen with explanations about the research procedure and then presented with a list of sites. After selection of a specific one, the system presented the Web site and questionnaire in two adjacent windows. Upon completing the questionnaire, the respondent received an acknowledgement from the system.

Table 1
Constructs, items and sources

Construct/source                        Item                  Comments                                  Questions
User satisfaction, Doll et al. [30]     Content               User trust in site-provided content       1-4
                                        Accuracy              Precision of site-provided information    5-6
                                        Format                Clarity of information presentation       7-8
                                        Ease of use           Subjective impression of user             9-10
                                        Timeliness            Temporal relevance of information         11-12
Web-site usability, Brooke (SUS) [18]   Usability                                                       13-22
User-oriented design, Abels et al. [1]  Personalization                                                 23-25
                                        Structure             Organization of information in the site   26-27
                                        Navigation                                                      28-29
                                        Layout                                                          30-33
                                        Search                                                          34-36
                                        Performance           Quality of user-site dialogue             37-39
                                        Internal consistency                                            40
Demographic characteristics             Gender, marital status, education, average                      41-45
                                        weekly Web surfing time, age

Most of the 359 respondents were students at a major business school (58% men and 42% women); 47.4% of the respondents were undergraduates, 42.9% were graduate students, and the rest were faculty members. A t-test revealed no significant difference between the groups. Most respondents (81%) were in the 20–30 age group and seemed to have had significant exposure to the Web. For example, 43% said that they browsed the Web for more than 8 h a week and 53% for over 6 h a week.

Table 2
Rotated component matrix for user satisfaction

Question  Content  Accuracy  Format  Ease of use  Timeliness
Q1        .807     .257      .120    .142         .168
Q2        .792     .230      .222    .116         .264
Q3        .819     .074      .116    .218         .166
Q4        .695     .300      .331    .049         .116
Q5        .242     .875      .131    .072         .212
Q6        .324     .783      .136    .209         .257
Q7        .231     .165      .800    .280         .126
Q8        .240     .103      .766    .336         .166
Q9        .233     .094      .434    .735         .129
Q10       .136     .142      .244    .879         .114
Q11       .292     .178      .074    .334         .775
Q12       .228     .323      .233    .023         .789

Extraction method: Principal Component Analysis. Rotation method: Varimax with Kaiser normalization.

One dilemma in setting up the survey was selecting an optimal number of respondents for detecting usability problems. The recommended number is 3–5 [66], and a single user making the same number of repetitions as a group of users is likely to be biased. Querying more users makes it easier to account for the variance due to individual differences among users.

It should be noted that our goal was not to document the usability problems of a given site, but rather to investigate the relationships across various Web site types between usability, user-based design, and user satisfaction. Accordingly, the sample size was selected to provide approximately 15 responses per site. This size enabled detection of practically all usability problems. The Web sites reviewed by the respondents were almost evenly distributed across all of the four types investigated.

6. Hypotheses testing

The hypotheses of this study were investigated using stepwise regression (see Tables 5 and 6). Since the number of observations is sufficiently large relative to the number of independent variables, there is no need to use partial least squares regression. Considering the number of observations in each group of sites, normality can be assumed.

Table 3
First-order factor analysis on user-based design

             Initial eigenvalues       Extraction sums of squared loadings  Rotation sums of squared loadings
Component    Total  % var.  Cum. (%)   Total  % var.  Cum. (%)              Total  % var.  Cum. (%)
Content      4.02   26.8    26.8       4.02   26.8    26.82                 2.56   17.0    17.0
Navigation   1.59   10.6    37.4       1.59   10.6    37.42                 1.95   13.0    30.1
Search       1.18   7.92    45.3       1.18   7.92    45.35                 1.91   12.7    42.9
Performance  1.06   7.12    52.4       1.06   7.12    52.47                 1.43   9.56    52.4
5            .97    6.46    58.9
6            .86    5.78    64.7
7            .81    5.43    70.1
8            .73    4.86    75.0
9            .68    4.54    79.5
10           .62    4.13    83.7
11           .53    3.59    87.2
12           .53    3.53    90.8
13           .48    3.21    94.0
14           .46    3.06    97.1
15           .43    2.89    100

Extraction method: Principal Component Analysis.

As noted in the instrument validation section, the factors driving user-based design were identified as "content" (Q24 through Q28), "navigation" (Q29 through Q31), "search" (Q32 through Q35), and "performance" (Q23, Q38, and Q39).

Table 4
Correlation summary for constructs (N = 359)

                Satisfaction  Usability  Content  Navigation  Search  Performance
Satisfaction  r 1.00          .565       .690     .364        .464    .155
              p               .000       .000     .000        .000    .003
Usability     r               1.00       .413     .201        .222    .005
              p                          .000     .000        .000    .929
Content       r                          1.00     .419        .515    .170
              p                                   .000        .000    .001
Navigation    r                                   1.00        .316    .100
              p                                               .000    .059
Search        r                                               1.00    .241
              p                                                       .000
Performance   r                                                       1.00

Since usability was measured as a single numeric value based on the reduced seven-item SUS scale, the initial regression model was stated as follows:

satisfaction = a + b0(usability) + b1(content) + b2(search) + b3(navigation) + b4(performance)

The final model was:

satisfaction = 0.218 + 0.368(usability) + 0.485(content) + 0.139(search)

The results (see also Table 4) indicated that both H1 and H2 are supported. The amount of variance in user satisfaction explained by these three constructs is 58.6%. An F-test on the final regression equation confirmed that all constructs contributed to explaining the variance in user satisfaction at a significance level of p < 5%.
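The backward-elimination procedure reported in Table 5 can be sketched along the following lines with statsmodels. The data frame df and its column names (SAT, USAB, CONTENT, SEARCH, NAV, PERF) are assumed here, mirroring the construct labels in the tables; this is an illustrative reconstruction, not the authors' code.

```python
import statsmodels.api as sm

def backward_eliminate(df, target, predictors, alpha=0.05):
    """Refit OLS, dropping the least significant predictor each round."""
    kept = list(predictors)
    while kept:
        model = sm.OLS(df[target], sm.add_constant(df[kept])).fit()
        pvals = model.pvalues.drop("const")
        if pvals.max() < alpha:
            return model             # all remaining terms significant
        kept.remove(pvals.idxmax())  # e.g. PERF goes first, then NAV (Table 5)
    raise ValueError("no predictor survived elimination")

# Assuming df holds one row per response with the construct scores:
# final = backward_eliminate(df, "SAT",
#                            ["USAB", "CONTENT", "SEARCH", "NAV", "PERF"])
# print(final.params)                # coefficients comparable to Table 5, model 3
```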

Table 5
Backward regression on user satisfaction (without site type)

Model            B      S.E.   Beta   t      Sig.
1  (Constant)    .025   .204          .123   .902
   USAB          .369   .041   .337   8.98   .000
   CONTENT       .460   .045   .453   10.1   .000
   SEARCH        .123   .040   .127   3.10   .002
   NAV           .053   .033   .062   1.63   .103
   PERF          .045   .040   .040   1.12   .260
2  (Constant)    .139   .178          .781   .435
   USAB          .365   .041   .334   8.92   .000
   CONTENT       .464   .045   .457   10.2   .000
   SEARCH        .131   .039   .135   3.36   .001
   NAV           .054   .033   .062   1.64   .101
3  (Constant)    .218   .172          1.27   .205
   USAB          .368   .041   .336   8.96   .000
   CONTENT       .485   .043   .478   11.1   .000
   SEARCH        .139   .039   .143   3.59   .000

Model summary

Model  R     R2    Adj. R2  S.E. of estimate  R2 change  F change  d.f.1  d.f.2  Sig. F change
1      .769  .591  .585     .447              .591       101.      5      353    .000
2      .768  .589  .585     .447              -.001      1.27      1      355    .260
3      .766  .586  .583     .448              -.003      2.71      1      356    .101

Predictors: (1) (Constant), PERF, USAB, NAV, SEARCH, CONTENT; (2) (Constant), USAB, NAV, SEARCH, CONTENT; (3) (Constant), USAB, SEARCH, CONTENT. Dependent variable: User satisfaction (SAT).

In order to test H3 and H4, three dummy variables were used to denote the type of a Web site:

SITE2 = 1 if the site is of type online shopping; 0 otherwise.
SITE3 = 1 if the site is of type customer self-service; 0 otherwise.
SITE4 = 1 if the site is of type trading; 0 otherwise.

Therefore, if all the SITE variables are 0, the Web site is of type publish/subscribe.
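A short sketch of the equivalent dummy coding in pandas, with publish/subscribe as the omitted baseline, is shown below; the rows are placeholders, not the study's data.

```python
import pandas as pd

# Placeholder rows; in the study each of the 359 responses carried its
# construct scores plus the IBM site type.
df = pd.DataFrame({
    "SAT":  [3.8, 4.1, 2.9, 3.5],
    "type": ["online shopping", "customer self-service",
             "trading", "publish/subscribe"],
})

# One indicator per type, dropping publish/subscribe as the baseline,
# which mirrors the SITE2-SITE4 coding above.
site = pd.get_dummies(df["type"], prefix="SITE", dtype=float)
site = site.drop(columns=["SITE_publish/subscribe"])
print(site)
# These dummies are appended to the predictors, and the backward
# elimination from the previous sketch is rerun.
```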


Table 6
Backward regression on user satisfaction (with site type)

Model              B      S.E.   Beta   t      Sig.
1  (Constant)      .199   .210          .949   .343
   USAB            .363   .041   .332   8.92   .000
   CONTENT         .436   .045   .430   9.65   .000
   NAV             .045   .033   .053   1.40   .161
   SEARCH          .136   .039   .140   3.49   .001
   PERF            .026   .040   .023   .672   .502
   SHOPPING        .051   .066   .032   .783   .434
   SELF-SERVICE    .022   .066   .014   .340   .734
   TRADING         -.188  .067   -.117  -2.81  .005
2  (Constant)      .209   .207          1.01   .313
   USAB            .362   .041   .331   8.93   .000
   CONTENT         .439   .045   .432   9.80   .000
   NAV             .044   .032   .052   1.38   .167
   SEARCH          .136   .039   .140   3.49   .001
   PERF            .026   .040   .024   .679   .498
   SHOPPING        .040   .057   .025   .709   .479
   TRADING         -.199  .058   -.124  -3.40  .001
3  (Constant)      .280   .179          1.56   .119
   USAB            .360   .040   .329   8.91   .000
   CONTENT         .440   .045   .434   9.86   .000
   NAV             .044   .032   .052   1.38   .166
   SEARCH          .141   .038   .145   3.694  .000
   SHOPPING        .040   .057   .026   .717   .474
   TRADING         -.203  .058   -.127  -3.50  .001
4  (Constant)      .290   .178          1.62   .105
   USAB            .363   .040   .331   9.03   .000
   CONTENT         .440   .045   .433   9.86   .000
   NAV             .043   .032   .050   1.35   .177
   SEARCH          .141   .038   .145   3.68   .000
   TRADING         -.217  .055   -.135  -3.96  .000
5  (Constant)      .358   .172          2.08   .038
   USAB            .364   .040   .333   9.06   .000
   CONTENT         .456   .043   .449   10.61  .000
   SEARCH          .148   .038   .152   3.89   .000
   TRADING         -.223  .055   -.139  -4.08  .000

Model summary

Model  R     R2    Adj. R2  S.E. of estimate  R2 change  F change  d.f.1  d.f.2  Sig. F change
1      .780  .608  .599     .439              .608       67.8      8      350    .000
2      .780  .608  .600     .439              .000       .115      1      352    .734
3      .779  .607  .601     .439              -.001      .461      1      353    .498
4      .779  .607  .601     .438              -.001      .514      1      354    .474
5      .778  .605  .600     .439              -.002      1.82      1      355    .177

Predictors: (1) (Constant), TRADING, SEARCH, USAB, PERF, SELF-SERVICE, NAV, SHOPPING, CONTENT; (2) (Constant), TRADING, SEARCH, USAB, PERF, NAV, SHOPPING, CONTENT; (3) (Constant), TRADING, SEARCH, USAB, NAV, SHOPPING, CONTENT; (4) (Constant), TRADING, SEARCH, USAB, NAV, CONTENT; (5) (Constant), TRADING, SEARCH, USAB, CONTENT.

The initial regression model was similar to the one used for testing H1 and H2, except for the dummy variables:

satisfaction = a + b0(usability) + b1(content) + b2(search) + b3(navigation) + b4(performance) + b5(SITE2) + b6(SITE3) + b7(SITE4)

The final model was:

satisfaction = 0.358 + 0.364(usability) + 0.456(content) + 0.148(search) - 0.223(SITE4)

The results (see Table 6) indicated that both H3 and H4 were supported.

In the case of trading sites, user satisfaction was significantly lower than that for all other sites, all coefficients being highly significant. The amount of variance in user satisfaction explained by the site's usability, content, search capability, and being of type trading was 60.5%. An F-test on the final regression equation verified that they all contribute to explaining the variance in user satisfaction at a significance level of p < 5%.

Finally, we tested the data to exclude the possibility of multicollinearity between the independent variables. In the first test, the condition number [15] was calculated for the matrix of coefficients of the sample observations. Applications with experimental and actual datasets have suggested that condition numbers higher than 20 indicate serious collinearity problems. Two other tests were used to examine the stability of the regression equations after omitting several observations or several variables [47,57]. The relatively low condition numbers (varying from 5.32 to 7.43) and the low variance in the regression coefficients when the two omission tests were performed (less than 7%) suggested that a high degree of multicollinearity did not exist. Hence, the final regression equations were judged to be stable (Table 6).
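The condition-number check can be reproduced along these lines; X is assumed to be the matrix of predictor scores and is a random placeholder here.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(359, 4))                  # placeholder predictor matrix
Z = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize the columns
print(np.linalg.cond(Z))                       # largest / smallest singular value
# Values above roughly 20 are taken to signal serious collinearity;
# the paper's observed condition numbers ranged from 5.32 to 7.43.
```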

7. Visualization of Web site attributes

Perceptual maps presented by multidimensional scaling (MDS) can be considered an alternative to factor analysis. In factor analysis, the similarities between objects (e.g., variables) are those expressed in the correlation matrix, but with MDS one can analyze any kind of similarity or dissimilarity matrix, in addition to correlation matrices. In general, the goal of the MDS analysis is to detect meaningful underlying dimensions that allow the researcher to explain observed similarities or dissimilarities (distances) between the investigated objects. Both factor analysis and MDS reduce the observed complexity of nature, because the distance matrix explains the observations in terms of fewer underlying dimensions (www.statsoftinc.com/textbook/stathome.html).

In our study, an MDS procedure was performed using dimensions and distances based on:

(1) scores of the factor analysis procedure;
(2) a discriminant analysis procedure, which yielded the most powerful discriminant functions across the sample.

The procedures were then repeated for observations at the site level and the site type level. The non-attribute-based version of the MDS method was used here because it facilitated the naming of dimensions, made clustering them into groups with similar characteristics easier, and was more easily connected to other computer programs [24]. Tables 7 and 8 depict the arrangement of the data used for the factor and discriminant analysis procedures.

Table 7
Data scheme for factor analysis of Web sites

Site number   Questions Vs1, ..., Vsp   Rotated factor scores F1, ..., Fx   Discriminant scores D1, ..., Dy
Site 1        R11, R12, ..., R1p        FS11, FS12, ..., FS1x               DS11, DS12, ..., DS1y
              R21, R22, ..., R2p        FS21, FS22, ..., FS2x               DS21, DS22, ..., DS2y
              ...                       ...                                 ...
              Rs1, Rs2, ..., Rsp        FSs1, FSs2, ..., FSsx               DSs1, DSs2, ..., DSsy
Site 1 mean                             F̄1(S1), ..., F̄x(S1)               D̄1(S1), ..., D̄y(S1)
(the same block is repeated for Site 2 through Site n)

F: factor analysis; x: number of factor scores; FS: factor score; D: discriminant analysis; y: number of discriminant scores; DS: discriminant score; n: number of sites; V: question (variable); R: response; p: question number; s: number of observations per site.

Table 8
Data scheme for discriminant analysis of Web sites

Site type     Site number   Questions Vs1, ..., Vsp   Rotated factor scores F1, ..., Fx   Discriminant scores D1, ..., Dy
Site type 1   Site 1        R11, R12, ..., R1p        FS11, FS12, ..., FS1x               DS11, DS12, ..., DS1y
              Site 2        R11, R12, ..., R1p        FS11, FS12, ..., FS1x               DS11, DS12, ..., DS1y
              ...           ...                       ...                                 ...
              Site nk       R11, R12, ..., R1p        FS11, FS12, ..., FS1x               DS11, DS12, ..., DS1y
Site type 1 mean            (means of the per-site factor and discriminant scores)
(the same block is repeated for site type 2 through site type k)

F: factor analysis; x: number of factor scores; FS: factor score; D: discriminant analysis; y: number of discriminant scores; DS: discriminant score; k: number of site types; n: number of observations per site; V: question (variable); R: response; p: question number; s: number of observations per site.

As an example, a perceptual map based on factor analysis at both the Web site and Web site type level for performance (Y-axis) versus content (X-axis) is shown in Fig. 4.

Fig. 4. Perceptual maps using factor analysis (site and site type).
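A minimal sketch of producing such a two-dimensional perceptual map with metric MDS in scikit-learn follows; the site-level scores and labels are invented for illustration and are not the study's data.

```python
import numpy as np
from sklearn.manifold import MDS

# Invented mean factor scores per site type: (content, performance).
scores = np.array([[0.9, 0.8],    # online shopping
                   [0.4, 0.1],    # customer self-service
                   [-0.6, 0.3],   # trading
                   [0.2, 0.5]])   # publish/subscribe
labels = ["shopping", "self-service", "trading", "publish/subscribe"]

# Metric MDS embeds the pairwise Euclidean distances in two dimensions.
coords = MDS(n_components=2, dissimilarity="euclidean",
             random_state=0).fit_transform(scores)
for name, (x, y) in zip(labels, coords):
    print(f"{name:>18}: ({x:+.2f}, {y:+.2f})")
```

Plotting the coordinates (and the per-site points around each type's mean) yields maps of the kind shown in Figs. 4 and 5.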


Table 9
Discriminant factors and questions at the Web site level

Information and presentation:
  Q27. Is multimedia/graphics used strictly to support the site purpose? (graphics presentation)
  Q25. Do you think you have received complete information both on basic facts and on full product details? (information presentation)
  Q26. To what degree is categorization of the content logical? (content presentation)
Search:
  Q34. To what degree does the search engine deal with misspellings and synonyms? (search actions)
Information completeness:
  Q24. Is content exposed in logical increments so that people are not overwhelmed? (information actions)
Personalization:
  Q23. Can you personalize the site in order to speed up use? (personalization actions)
Error handling:
  Q29. Does error handling offer the ability to move forward and not hit dead ends? (error handling)

Fig. 5. Perceptual maps using discriminant functions (by site type).


Table 10
Discriminant factors and questions at the Web site type level

Presentation:
  Q27. Is multimedia/graphics used strictly to support the site purpose? (graphics presentation)
  Q25. Do you think you have received complete information both on basic facts and on full product details? (information presentation)
  Q26. To what degree is categorization of the content logical? (content presentation)
  Q28. Do navigation aids serve as a logical road map? (navigation presentation)
  Q32. Are there update clues (colors, URL or category trail, etc.) to ensure that you know your location on the site? (navigation presentation)
  Q35. To what degree are the results listed in relevant order? (search presentation)
User and administrative tasks:
  Q31. Do multiple navigation bars serve completely separate purposes and not overlap each other? (navigation actions)
  Q24. Is content exposed in logical increments so that people are not overwhelmed? (content/information actions)
  Q39. Does the site contain errors, such as JavaScript crashes? (error actions)
  Q38. Does the site inform you of browser-specific design requirements? (system operation actions)
  Q33. Does the site balance scrolling the page with screen layout density (the page arrangement)? (presentation actions)
  Q23. Can you personalize the site in order to speed up use? (personalization actions)
  Q34. To what degree does the search engine deal with misspellings and synonyms? (search actions)
Robustness:
  Q29. Does error handling offer the ability to move forward and not hit dead ends? (error handling/browsability*)
  Q30. Are navigation bars consistent? (observability*)

* Mentioned as a feature of robustness.

From the map it is evident that, on average, online shopping sites provided higher content and performance capabilities than all other site types. Trading sites were relatively low on content capabilities, and customer self-service sites were relatively low on performance capabilities. The variance of trading sites was high on both the content and performance dimensions; this may indicate that these sites were developed in a highly dynamic and uncertain environment. Customer self-service sites had a small variance on both dimensions but in general seemed mediocre compared to the other types. Possibly some companies focus their efforts on developing online shopping sites because they generate substantial revenue, whereas customer-service sites are perceived as a burden.

Table 11
Factor analysis in cognitive mapping: findings

Feature      Highest site type        Lowest site type  Comments
Navigation   Publish/subscribe
Performance  Online shopping          Self-service
Content      Online shopping          Trading
Search       Equal across site types                    High variance for customer self-service

At the individual site level, Barnes and Noble (site number 24) seemed to be a leader with regard to the combination of content and performance, whereas the Virtual Shopping Center (site number 29) lagged significantly behind.

Discriminant analysis was performed at the site and site type level. At the site level it produced 15 possible discriminatory functions. Using the SCREE method [88], five functions which explained more than 6.66% (1/15) of the variance were selected. The dimensions based on these functions were named "information and presentation", "search", "information completeness", "personalization", and "error handling" (see Table 9). Perceptual maps drawn using these dimensions provide additional evidence of the relative weakness of trading sites (see Fig. 5).

Customer self-service and online shopping, on the other hand, were quite consistently better on all dimensions. This finding could be explained by their strong customer orientation and the fact that the customer was usually the main source of revenue for most firms.

A similar analysis performed at the Web site type level elicited the following discriminant dimensions: "robustness" (the level of support provided for successful attainment of the user's goals), "presentation", and "user and administrative tasks" (see Table 10). Perceptual maps based on these dimensions depicted publish/subscribe sites as the most robust. The best presentation and user-and-administrative-tasks capabilities were exhibited by online shopping and customer self-service sites. The weakest were again trading sites. All findings based on the discriminant analysis methods, including best and worst performers on each dimension, are summarized in Tables 11–13.

Table 12
Discriminant dimensions: findings

Analysis by Web site type       Analysis by Web site
Presentation                    Level of information and presentation
User and administrative tasks   Search capabilities
Robustness                      Information completeness
                                Personalization
                                Error handling

Table 13
Discriminant analysis: findings

Scope of analysis  Factor                                  Highest site type                             Lowest site type
Web site name      Level of information and presentation   Equal across sites
                   Search capabilities                     Online shopping and customer self-service    Trading
                   Information completeness                Customer self-service and publish/subscribe  Trading
                   Personalization                         Online shopping and customer self-service    Trading
                   Error handling                          No significant finding across all sites
Web site type      Presentation                            Publish/subscribe
                   User and administrative tasks           Online shopping and customer self-service
                   Robustness                              Online shopping and customer self-service    Trading

8. Conclusions

Our study empirically investigated the effect of user-based design and Web site usability on user satisfaction across four types of commercial Web sites: online shopping, customer self-service, trading, and publish/subscribe. By investigating the typology of IBM, this study addressed the increasing differentiation of Web sites according to type and purpose, an issue that has received little attention. We also refined recent studies showing that Web site success is related to usability measures, and incorporated the user-based design construct, which had not been investigated previously in IS user satisfaction research.

The significance of the findings was enhanced by the dual validation design of the study, combining hypothesis testing and perceptual mapping supported by MDS visualization capabilities. These two methods have not yet been used in combination in the context of user satisfaction.

Our findings indicated that Web sites have different, hidden, and subjective factors that stem from the process of user and system interaction and affect overall user satisfaction, and that these factors can serve the development and maintenance phases of Web site creation.

The items of the questionnaire can be used as a checklist in the development process, especially for trading sites, which have consistently been found to be a problem. Online shopping and customer self-service sites exhibited good capabilities and may therefore serve as a model. The observation that both online shopping and customer self-service possessed better capabilities is not surprising in view of the fact that the user is the focus of commercial ventures and must be satisfied if profitability is to be attained.

The study had several limitations. First, it focused on user satisfaction as the dependent variable; however, as indicated by ISO 13407, there is a relationship between usability and user-centered design, which suggests that alternative models should be evaluated. Second, the IBM framework, while proving useful, has not been validated to verify that it is comprehensive and that its categories are mutually exclusive. Third, Web site users are random Web surfers who do not participate in the design process and, therefore, the user-based design instrument might need to be tested and adapted. Fourth, the administration of the experiment asked each respondent to evaluate only two types of sites; previous experience of respondents with certain sites and the time of evaluation were not measured; the classification of Web sites into different categories was done by a single author and not an expert panel; Web sites are dynamic and might have changed during the evaluation sessions, causing measurement biases; the measurement of usability also could be misleading; and finally, demographic limitations of the study are the relatively small size of the sample and the fact that almost half of the participants were students. The latter point is somewhat mitigated by the t-test performed across the four Web site groups, which showed no significant differences among them.

Acknowledgement

The authors would like to thank the Editor-in-Chief and three anonymous reviewers for their valuable and thorough comments throughout the review process.

Appendix A. Summary of high-volume Web site classifications

Publish/subscribe Web sites provide visitors with information. Some examples include search engines, media sites such as newspapers and magazines, and event sites such as those for the Olympics and for the tennis championships at Wimbledon. Site content changes frequently, driving changes to page layouts. While search traffic is low volume, the number of unique items sought is high, resulting in the largest number of page views of all site types. As an example, the Wimbledon site successfully handled a peak volume of 430,000 hits per minute using IBM WebSphere Performance Pack. Security considerations are minor compared to other site types. Data volatility is low. This site type processes the fewest transactions and has little or no connection to any legacy systems.

Online shopping sites let visitors browse and buy. Examples are typical retail sites where visitors buy books, clothes, and even cars. Site content can be relatively static, such as a parts catalog, or dynamic, with items frequently added and deleted as, for example, promotions and special discounts come and go. Search traffic is heavier than on publish/subscribe sites, though the number of unique items sought is not as large. Data volatility is low. Transaction traffic is moderate to high, and almost always growing. The typical daily volumes for many large retail customers running on IBM Net.Commerce range from under one million to over three million hits per day, with 100,000 to 700,000 transactions per day at the top end; of the total transactions, typically between 1% and 5% are buy transactions. When visitors buy, security requirements become significant and include privacy, nonrepudiation, integrity, authentication, and regulations. Shopping sites have more connections to legacy systems, such as fulfillment systems, than publish/subscribe sites, but generally fewer than the other site types.
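
To make the buy-transaction figures concrete, the short sketch below computes the implied range of daily purchases from the volumes quoted above (the code itself is only illustrative):

    # Implied daily purchases: 1-5% of 100,000-700,000 transactions per day.
    for total in (100_000, 700_000):
        low, high = int(total * 0.01), int(total * 0.05)
        print(f"{total:,} transactions/day -> {low:,} to {high:,} purchases")
    # 100,000 transactions/day -> 1,000 to 5,000 purchases
    # 700,000 transactions/day -> 7,000 to 35,000 purchases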

Customer self-service sites let visitors help themselves. Sample sites include banking from home, tracking packages, and making travel arrangements. Data comes largely from legacy applications, often from multiple sources, thereby exposing data consistency issues. Security considerations are significant for home banking and purchasing travel services, less so for other uses. Search traffic is low volume; transaction traffic is low to moderate, but growing.

Trading sites let visitors buy and sell. Of all site types, trading sites have the most volatile content, the highest transaction volumes (with significant swings), the most complex transactions, and the greatest time sensitivity. Products like IBM's CICS high-volume transaction processing system play a key role at these sites. Trading sites are tightly connected to legacy systems, for example, using IBM MQSeries for connectivity. Nearly all transactions interact with the back-end servers. Security considerations are high, equivalent to online shopping, with an even larger number of secure pages. Search traffic is low volume.

Business-to-business sites let businesses buy from and sell to each other. Many businesses are implementing a Web site for their purchasing applications. Such purchasing activity may also be characteristic of other site types, such as publish/subscribe and self-service sites. Data comes largely from legacy applications, often from multiple sources, thereby exposing data consistency issues. Security requirements are equivalent to online shopping. Transaction volume is low to moderate, but growing; transactions are typically complex, connecting multiple suppliers and distributors.
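
The qualitative profiles above can be summarized compactly in a small data structure. The sketch below is our own illustrative encoding of this appendix, not IBM's data; the ordinal levels are approximate readings of the descriptions, and the nearest-profile lookup merely suggests how such profiles could support the kind of systematic site classification noted as a limitation earlier.

    # Illustrative encoding of the appendix profiles (1 = low ... 3 = high).
    SITE_PROFILES = {
        "publish/subscribe":     {"volatility": 1, "security": 1, "transactions": 1, "legacy": 1},
        "online shopping":       {"volatility": 1, "security": 3, "transactions": 2, "legacy": 2},
        "customer self-service": {"volatility": 2, "security": 3, "transactions": 2, "legacy": 3},
        "trading":               {"volatility": 3, "security": 3, "transactions": 3, "legacy": 3},
        "business-to-business":  {"volatility": 2, "security": 3, "transactions": 2, "legacy": 3},
    }

    def closest_type(profile):
        """Return the site type whose attribute profile best matches `profile`."""
        def dist(p):
            return sum(abs(p[k] - profile[k]) for k in profile)
        return min(SITE_PROFILES, key=lambda t: dist(SITE_PROFILES[t]))

    # Example: volatile data, high security, heavy transactions, some legacy use.
    print(closest_type({"volatility": 3, "security": 3,
                        "transactions": 3, "legacy": 2}))   # -> trading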

References

[1] E. Abels, M.D. White, K. Hahn, A user-based design process for Web sites, Internet Research: Electronic Networking Applications and Policy 8(1), 1998, pp. 39–48.
[2] A.M. Aladwani, P.C. Palvia, Developing and validating an instrument for measuring user-perceived web quality, Information & Management 39, 2002, pp. 467–476.
[3] K. Amoako-Gyampah, A.F. Salam, An extension of the technology acceptance model in an ERP implementation environment, Information & Management 41, 2004, pp. 731–745.
[4] J.E. Bailey, S.W. Pearson, Development of a tool for measuring and analyzing computer user satisfaction, Management Science 29(5), 1983, pp. 530–545.
[5] H. Barki, J. Hardwick, Measuring user participation, user involvement, and user attitude, MIS Quarterly 18(1), 1994, pp. 59–79.
[6] H. Barki, S. Huff, Change, attitude to change, and decision support success, Information & Management 9(5), 1985, pp. 261–268.
[7] J. Baroudi, M.H. Olson, B. Ives, An empirical study of the impact of user involvement on system usage and information satisfaction, Communications of the ACM 29(3), 1986, pp. 232–238.
[8] S.J. Barnes, R.T. Vidgen, WebQual: an exploration of Web site quality, in: Proceedings of the Eighth European Conference on Information Systems, vol. 1, Vienna, 2000, pp. 298–305.
[9] S.J. Barnes, R.T. Vidgen, An evaluation of cyber-bookshops: the WebQual method, International Journal of Electronic Commerce 6, 2001, pp. 6–25.
[10] S.J. Barnes, R.T. Vidgen, Assessing the effect of a web site redesign initiative: an SME case study, International Journal of Management Literature 1, 2001, pp. 113–126.
[11] S.J. Barnes, R.T. Vidgen, Assessing the quality of auction Web sites, in: Proceedings of the Hawaii International Conference on Systems Sciences, CD-ROM, Maui, Hawaii, 2001.
[12] S.J. Barnes, R.T. Vidgen, An integrative approach to the assessment of e-commerce quality, Journal of Electronic Commerce Research 3(3), 2002.
[13] S. Barnes, R.T. Vidgen, Measuring Web site quality improvements: a case study of the forum on strategic management knowledge exchange, Industrial Management & Data Systems 103(5), 2003, pp. 297–309.
[14] C.M. Beise, Assessment of Information Systems Effectiveness through Examination of IS/Organizational Interfaces, unpublished paper, Georgia State University, 1988.
[15] D.A. Belsley, E. Kuh, R.E. Welsch, Regression Diagnostics: Identifying Influential Data Sources of Collinearity, Wiley, New York, NY, 1980.
[16] R. Benbunan-Fich, Using protocol analysis to evaluate the usability of a commercial website, Information & Management 39, 2001, pp. 151–163.
[17] M. Bensaou, M. Earl, The right mind-set for managing information technology, Harvard Business Review, 1998, pp. 118–129.
[18] J. Brooke, SUS: a quick and dirty usability scale, in: P.W. Jordan, B. Thomas, B.A. Weerdmeester, I.L. McClelland (Eds.), Usability Evaluation in Industry, Taylor & Francis, London, UK, 1996, pp. 189–194. Available from www.cee.hw.ac.uk/~ph/sus.html.
[19] J.J. Cappel, M.A. Myerscough, World Wide Web uses for electronic commerce: towards a classification scheme, in: Proceedings of the 1996 Second AIS Conference, Phoenix, Arizona, 1996.
[20] J.F. Chang, G. Torkzadeh, G. Dhillon, Re-examining the measurement models of success for Internet commerce, Information & Management 41, 2004, pp. 577–584.
[21] M.K. Chang, W. Cheung, V.S. Lai, Literature derived reference models for the adoption of online shopping, Information & Management 42(4), 2005, pp. 543–559.
[22] P.M. Cheney, G.W. Dickson, Organizational characteristics and information systems: an exploratory investigation, Academy of Management Journal 25(1), 1982, pp. 170–184.
[23] S. Cho, Customer-focused Internet commerce at Cisco Systems, IEEE Communications Magazine 37(9), 1999, pp. 61–63.
[24] G.A. Churchill, Marketing Research: Methodological Foundations, 8th ed., South-Western College Pub., Cincinnati, Ohio, 2001.
[25] F. Davis, Perceived usefulness, perceived ease of use, and user acceptance of information technology, MIS Quarterly 13(3), 1989, pp. 318–339.
[26] W.H. DeLone, E.R. McLean, Information systems success: the quest for the dependent variable, Information Systems Research 3(1), 1992, pp. 60–95.
[27] O. De Troyer, C. Leune, WSDM: a user-centered design method for Web sites, in: Proceedings of the Seventh International World Wide Web Conference on Computer Networks and ISDN Systems, Elsevier, 1998, pp. 85–94.
[28] A. Dillon, M. Morris, Power, perception and performance: from usability engineering to technology acceptance with the P3 model of user response, in: Proceedings of the 43rd Annual Conference of the Human Factors and Ergonomics Society, Santa Monica, CA, HFES, 1999, pp. 1017–1021.
[29] A. Dix, J. Finlay, G. Abowd, R. Beale, Human–Computer Interaction, Prentice-Hall, 1993.
[30] W.J. Doll, W. Xia, G. Torkzadeh, A confirmatory factor analysis of the end-user computing satisfaction instrument, MIS Quarterly 18(4), 1994, pp. 453–461.
[31] P. Ein-Dor, E. Segev, Organizational context and success of management information systems, Management Science 24(10), 1978, pp. 1064–1077.
[32] Forrester Research, Must search stink, Report, by P.R. Hagen, 2000. http://www.forrester.com.
[33] Forrester Research, Web sites continue to fail the usability test, IT View and Business View Brief, by B.D. Temkin, 2003. http://www.forrester.com.
[34] R.Y.K. Fung, A.C. Pereira, W.H.R. Yeung, Performance evaluation of a Web-based information system for laboratories and service centres, Logistics Information Management 13(4), 2000, pp. 218–227.
[35] D.F. Galletta, A.L. Lederer, Some cautions on the measurement of user information satisfaction, Decision Sciences 20, 1989, pp. 419–438.
[36] J.D. Gould, C. Lewis, Design for usability: key principles and what designers think, Communications of the ACM 28(3), 1985, pp. 360–411.
[37] J. Greenbaum, M. Kyng, Design at Work: Cooperative Design of Computer Systems, Lawrence Erlbaum Associates, Hillsdale, NJ, 1991.
[38] G. Hamel, J. Sampler, E-corporation: more than just Web-based, it's building a new industry order, Fortune, 1998, pp. 52–63.
[39] W.J. Hansen, User engineering principles for interactive systems, in: Proceedings of the Fall Joint Computer Conference, Montvale, NJ, AFIPS Press, 1981, pp. 523–532.
[40] D.L. Hoffman, T.P. Novak, P. Chatterjee, Commercial scenarios for the Web: opportunities and challenges, Journal of Computer-Mediated Communication, Electronic Commerce 1(3), 1995.
[41] IBM, Summary of high-volume Web site classifications, 1999. www7b.software.ibm.com/wsdd/library/techarticles/hvws/personalize.html#appendixb.
[42] ISO 9241-11 DIS, Ergonomic requirements for office work with visual display terminals (VDTs). Part 11: Guidance on usability, draft international standard, 1994.
[43] ISO 9241-10, Ergonomic requirements for office work with visual display terminals (VDTs). Part 10: Dialogue principles, 1996.
[44] ISO 13407, Human-centered design processes for interactive systems, International Organization for Standardization, Geneva, Switzerland, 1999.
[45] B. Ives, M.H. Olson, User involvement and MIS success: a review of research, Management Science 30(5), 1984, pp. 586–603.
[46] M.Y. Ivory, M.A. Hearst, Improving web site design, IEEE Internet Computing, Special Issue on Usability and the World Wide Web 6(2), 2002, pp. 56–63.
[47] J. Johnston, Econometric Methods, 3rd ed., McGraw-Hill, Japan, 1984.
[48] N. Kano, N. Seraku, F. Takahashi, S. Tsuji, Attractive and normal quality, Quality 14(2), 1984.
[49] M. Khalifa, V. Liu, Determinants of satisfaction at different adoption stages of Internet-based services, Journal of AIS 4(5), 2003, pp. 206–232.
[50] K. Klenke, Construct measurement in management information systems: a review and critique of user satisfaction and user involvement instruments, Information Systems and Operations Research 30(4), 1992, pp. 325–348.
[51] S. Kurniawan, Modeling online retailer customer preference and stickiness: a mediated structural equation model, in: Proceedings of the Fourth Pacific Asia Conference on Information Systems, 2000, pp. 238–252.
[52] Y. Lee, J. Kim, From design features to financial performance: a comprehensive model of design principles for online stock trading sites, Journal of Electronic Commerce Research 3(3), 2002, pp. 128–143.
[53] C. Liu, K.P. Arnett, Exploring the factors associated with Web site success in the context of electronic commerce, Information & Management 38, 2000, pp. 23–33.
[54] J. Lu, Assessing web-based electronic commerce applications with customer satisfaction: an exploratory study, International Telecommunication Society's Asia-Indian Ocean Regional Conference, Telecommunications and E-Commerce, 2001, pp. 132–144.
[55] J. Lu, A model for evaluating e-commerce based on cost/benefit and customer satisfaction, Information Systems Frontiers 5(3), 2003, pp. 265–277.
[56] N.M. Lucey, More than Meets the I: User-Satisfaction of Computer Systems, unpublished thesis for Diploma in Applied Psychology, University College Cork, Ireland, 1991.
[57] G.S. Maddala, Econometrics, McGraw-Hill, Japan, 1977.
[58] D.J. Mayhew, Principles and Guidelines in Software User Interface Design, Prentice-Hall, Englewood Cliffs, NJ, 1992.
[59] V. McKinney, K. Yoon, F. Zahedi, The measurement of Web-customer satisfaction: an expectation and disconfirmation approach, Information Systems Research 13(3), 2002, pp. 296–315.
[60] N.P. Melone, A theoretical assessment of the user satisfaction construct in information systems research, Management Science 36(1), 1990, pp. 76–91.
[61] M.J. Muller, Defining and designing the Internet: participation by Internet stakeholder constituencies, Social Science Computer Review 14(1), 1996, pp. 30–33.
[62] M. Mulvenna, S. Anand, A. Buchner, Personalization on the Net using Web mining, Communications of the ACM 43(8), 2000, pp. 123–125.
[63] S. Muylle, R. Moenaert, M. Despontin, The conceptualization and empirical validation of web site user satisfaction, Information & Management 41, 2004, pp. 543–560.
[64] J. Nielsen, Usability Engineering, Academic Press Inc., Boston, MA, 1993.
[65] J. Nielsen, Usability as barrier to entry, Jakob Nielsen's Alertbox, 28 November 1999. http://www.useit.com/alertbox/9911028.html.
[66] J. Nielsen, Why you only need to test with 5 users, Jakob Nielsen's Alertbox, 19 March 2000. http://www.useit.com/alertbox/20000319.html.
[67] J. Nielsen, Novice vs. expert users, Jakob Nielsen's Alertbox, 6 February 2000. http://www.useit.com/alertbox/20000206.html.
[68] J. Nielsen, Designing Web Usability, New Riders Publishing, Indianapolis, IN, 2000.
[69] J. Nielsen, J. Levy, Measuring usability: preference vs. performance, Communications of the ACM 37(4), 1994, pp. 66–75.
[70] R.L. Nolan, H.H. Seward, Measuring user satisfaction to evaluate information systems, in: R.L. Nolan (Ed.), Managing the Data Resource Function, West Pub., St. Paul, MN, 1974, pp. 253–275.
[71] J. Nunnally, Psychometric Theory, McGraw-Hill, New York, 1978.
[72] J.W. Palmer, Web site usability, design, and performance metrics, Information Systems Research 13(2), 2002, pp. 151–167.
[73] P.G. Patterson, R.A. Spreng, Modeling the relationship between perceived value, satisfaction and repurchase intentions in a business-to-business, services context: an empirical examination, International Journal of Service Industry Management 8(5), 1997, pp. 414–434.
[74] J. Preece, Y. Rogers, H. Sharp, D. Benyon, S. Holland, T. Carey, Human–Computer Interaction, Addison-Wesley, Wokingham, UK, 1994.
[75] J.A. Quelch, L.R. Klein, The Internet and international marketing, Sloan Management Review 3, 1996, pp. 60–75.
[76] C. Ranganathan, S. Ganapathy, Key dimensions of business-to-consumer web sites, Information & Management 39, 2002, pp. 457–465.
[77] L. Raymond, Validating and applying user satisfaction as a measure of MIS success in small organizations, Information & Management 12(3), 1987, pp. 173–179.
[78] D. Robins, S. Kelsey, Analysis of Web-based information architecture in a university library: navigating for known items, Information Technology and Libraries 21(4), 2002, pp. 158–169.
[79] D. Schuler, A. Namioka (Eds.), Participatory Design: Principles and Practices, Lawrence Erlbaum Associates, Hillsdale, NJ, 1993.
[80] B. Shackel, Usability – context, framework, design and evaluation, in: B. Shackel, S. Richardson (Eds.), Human Factors for Informatics Usability, Cambridge University Press, Cambridge, 1991, pp. 21–38.
[81] H. Shih, An empirical study on predicting user acceptance of e-shopping on the Web, Information & Management 41, 2004, pp. 351–368.
[82] B. Shneiderman, Designing Information Abundant Websites, 1996. ftp://ftp.cs.umd.edu/pub/hcil/Reports-abstracts-Bibliography/3634.txt.
[83] S.L. Smith, J.N. Mosier, Design guidelines for user-system interface software, Technical Report ESD-TR-84-190, The Mitre Corporation, Bedford, MA, 1984.
[84] M. Spiliopoulou, Web usage mining for Web site evaluation: making a site better fit its users, Communications of the ACM 43(8), 2000, pp. 127–134.
[85] A. Srinivasan, Alternative measures of system effectiveness, MIS Quarterly 9(3), 1985, pp. 243–253.
[86] D.W. Straub, D. Hoffman, B. Weber, C. Steinfield, Measuring e-commerce in Net-enabled organizations, Information Systems Research 13(2), 2002, pp. 115–124.
[87] E.B. Swanson, Management information systems: appreciation and involvement, Management Science 21(2), 1974, pp. 178–188.
[88] B.G. Tabachnick, L.S. Fidell, Using Multivariate Statistics, 3rd ed., Harper Collins, New York, 1996.
[89] C. Trepper, E-commerce Strategies, Microsoft Press, Washington, DC, 2000.
[90] B.W. Tan, T.W. Lo, Validation of a user satisfaction instrument for office automation success, Information & Management 18(4), 1990, pp. 203–208.
[91] W.L. Yeung, M. Lu, Functional characteristics of commercial web sites: a longitudinal study in Hong Kong, Information & Management 41(4), 2004, pp. 483–495.
[92] P. Zhang, G.M. Von Dran, User expectations and rankings of quality factors in different Web site domains, International Journal of Electronic Commerce 6(2), 2002, pp. 9–33.
[93] M. Zviran, Z. Erlich, Measuring IS user satisfaction: review and implications, Communications of the AIS 12(5), 2003, pp. 81–104.

Moshe Zviran is associate professor of Information Systems in the Faculty of Management, The Leon Recanati Graduate School of Business Administration, Tel Aviv University. He received his B.Sc. degree in mathematics and computer science and his M.Sc. and Ph.D. degrees in information systems from Tel Aviv University, Israel, in 1979, 1982 and 1988, respectively. He has held academic positions at Claremont Graduate University, California, the Naval Postgraduate School, California, and Ben-Gurion University, Israel. His research interests include information systems planning, measurement of IS success and user satisfaction, and information systems security. He is also a consultant in these areas for a number of leading organizations. Prof. Zviran's research has been published in MIS Quarterly, Communications of the ACM, Journal of Management Information Systems, IEEE Transactions on Engineering Management, Information & Management, Omega, Data and Knowledge Engineering, The Computer Journal and other journals. He is also co-author (with N. Ahituv and S. Neumann) of Information Systems for Management (Tel Aviv, Dyonon, 1996) and Information Systems – from Theory to Practice (Tel Aviv, Dyonon, 2001).

Chanan Glezer is a lecturer in the Department of Information Systems Engineering, Ben-Gurion University of the Negev, Israel. He holds a Ph.D. degree in Information Systems from Texas Tech University. His main areas of interest are electronic commerce, organizational computing and Internet security. His research has been published in journals such as Communications of the ACM, Journal of Organizational Computing and Electronic Commerce, Journal of Strategic Information Systems, Data and Knowledge Engineering, Journal of Information Warfare, International Journal of Electronic Business, and the Journal of Medical Systems.

Itay Avni is a graduate of the M.Sc. program in Information Systems at the Faculty of Management, The Leon Recanati Graduate School of Business Administration, Tel Aviv University.