
User-Experience from an Inference Perspective

PAUL VAN SCHAIK, Teesside University
MARC HASSENZAHL, Folkwang University
JONATHAN LING, University of Sunderland

In many situations, people make judgments on the basis of incomplete information, inferring unavailable attributes from available ones. These inference processes may also well operate when judgments about a product’s user-experience are made. To examine this, an inference model of user-experience, based on Hassenzahl and Monk’s [2010], was explored in three studies using Web sites. All studies supported the model’s predictions and its stability, with hands-on experience, different products, and different usage modes (action mode versus goal mode). Within a unified framework of judgment as inference [Kruglanski et al. 2007], our approach allows for the integration of the effects of a wide range of information sources on judgments of user-experience.

Categories and Subject Descriptors: H.1.2 [Models and Principles]: User/Machine Systems—Human information processing; H.5.2 [Information Interfaces and Presentation]: User Interfaces—Theory and methods; H.5.4 [Information Interfaces and Presentation]: Hypertext/Hypermedia—Theory; I.6.5 [Simulation and Modeling]: Model Development

General Terms: Experimentation, Human Factors, Theory

Additional Key Words and Phrases: User-experience, model, inference perspective, beauty, aesthetics

ACM Reference Format:
van Schaik, P., Hassenzahl, M., and Ling, J. 2012. User-experience from an inference perspective. ACM Trans. Comput.-Hum. Interact. 19, 2, Article 11 (July 2012), 25 pages.
DOI = 10.1145/2240156.2240159 http://doi.acm.org/10.1145/2240156.2240159

1. INTRODUCTION

Imagine you want to enhance your voice-over-IP-calls with a high-definition image. By coincidence, a local shop makes an exceptional offer (in terms of “value for money”) of a multifunctional (“all-singing-all-dancing”) webcam. Will you accept? The problem is to predict whether or to what extent the product would meet your needs. As you have no hands-on experience, you visit the shop to see for yourself what the product looks like in reality and to get further information from the helpful staff. However, you are not allowed to open the attractive transparent box in which the seductive product patiently awaits your expenditure. You simply cannot try the product before buying it. Therefore, in effect, you try to “guess”—or infer—the product’s reliability, usefulness and ease of use from the specific pieces of information that you find relevant.

This type of inference is a ubiquitous process, which underlies many phenomena [Kardes et al. 2004b; Loken 2006]; some even argue that it is the very essence of

Authors’ addresses: P. van Schaik, School of Social Sciences and Law, Teesside University, United Kingdom; email: [email protected]; M. Hassenzahl, Ergonomics in Industrial Design, Folkwang University, Germany; J. Ling, Faculty of Applied Sciences, University of Sunderland.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works requires prior specific permission and/or a fee. Permissions may be requested from the Publications Dept., ACM, Inc., 2 Penn Plaza, Suite 701, New York, NY 10121-0701, USA, fax +1 (212) 869-0481, or [email protected].
© 2012 ACM 1073-0516/2012/07-ART11 $15.00

DOI 10.1145/2240156.2240159 http://doi.acm.org/10.1145/2240156.2240159

ACM Transactions on Computer-Human Interaction, Vol. 19, No. 2, Article 11, Publication date: July 2012.


11:2 P. van Schaik et al.

human judgment [Kruglanski et al. 2007]. The purpose of the current studies was to investigate how people infer specific attributes of an interactive product from other attributes or broader evaluations, such as beauty or an overall evaluation (e.g., goodness).

1.1 Inference in Judgments about Interactive Products

While inference is ubiquitous, available models of the perceived quality of interactive products, from Technology Acceptance [Venkatesh et al. 2003] to User-Experience (UX) [Hartmann et al. 2008; Hassenzahl 2003; Lavie and Tractinsky 2004], predominantly assume only one particular pattern of inference: induction or a specific-to-general inference [Kardes et al. 2004b]. This approach suggests that overall assessments or attitudes are “built” from the careful consideration, weighting and integration of specific attributes, such as usability, functionality, expressive aesthetics, hedonic quality, and/or engagement. The approach is reminiscent of computational, multi-attribute theories of decision-making [Keeney and Raiffa 1976]. These assume that people construct their overall assessment from single, distinct and specific attributes, which are assessed separately (e.g., “How usable is the product?”), weighted (e.g., “How important is usability to me?”) and combined into an overall evaluation.

While studies seem to provide some support for the induction of general value from specific attributes, it should not be taken as the major or even the only process that operates. Current approaches to judgment and decision making take a rather non-computational approach [Chater and Brown 2008; Chater et al. 2003; Gigerenzer and Gaissmaier 2011; Kruglanski and Gigerenzer 2011]. Supported by a wealth of empirical evidence [Gigerenzer and Gaissmaier 2011], these approaches suggest that, rather than doing complex (weighted and summated) calculations to induce, people use relatively simple cognitive strategies (e.g., “simple rules” [Chater and Brown 2008]; “heuristics” [Gigerenzer and Gaissmaier 2011]) to make judgments.

This type of processing is due to the way the world is structured in terms of available information. Induction assumes, for example, some knowledge about each attribute, which is only rarely available [Gigerenzer and Gaissmaier 2011]. People may simply lack hands-on experience of a product, which makes it difficult to assess its usability. Nevertheless, people make global value judgments and more specific attribute judgments, even when information is absent or limited. They do so by inferring unobservable, momentarily hard-to-assess product attributes from their global valuation of the product (i.e., general-to-specific) or by inferring them from other, more accessible attributes (i.e., specific-to-specific).

At the heart of this inference process are rules, which tie together available and unavailable information. These rules are based on lay theories and knowledge about their applicability in a particular situation. A well-known inference rule, for example, is that the more expensive a product, the higher the quality (price-quality correlation, e.g., Kardes et al. [2004a]). People, for example, guess the taste (quality) of wine based on its price. Note that the application of rules is context-dependent. People do not apply the price-quality rule when a product is on special offer [Chernev and Carpenter 2001]. In addition, the application of inference rules is not necessarily conscious or deliberate; it can also be automatic, unconscious and hardly accessible to a particular individual [Kardes et al. 2004b; Kruglanski et al. 2007].

1.2 Inference in User Experience

The variety of potential rules to infer attributes from other attributes has interesting implications for the study of UX, preferences, and acceptance. In most studies,




data collection is concurrent, that is, all constructs are assessed simultaneously. Researchers then evaluate their specified model, which is almost exclusively inductive. However, disregarding potential deduction (e.g., general-to-specific inference) can easily lead to false conclusions. Consider the following hypothetical example. Participants are asked to assess the (perceived) usability and innovativeness of a given product and its general appeal. The researcher regresses appeal on usability and innovativeness to determine the relative importance of usability and innovativeness in explaining appeal. Now assume that people have only limited or even no hands-on experience with the product. This makes their assessment of usability difficult: without hands-on experience, the question “Do I find the product predictable?” is hard to answer. On the other hand, from the description of the product concept and what they see, they may tell right away whether they find it innovative or not. People who appreciate innovativeness will provide high, but others low, appeal ratings. Innovativeness becomes predictive of appeal. Usability, however, cannot be assessed as easily. People nevertheless provide an assessment when prompted to do so. But they do not infer usability from actual specific information; rather, they may deduce it from their general appeal rating, following the simple rule of “I like it, it must be good on all attributes” (which is simply a version of the ubiquitous “halo-effect” [Thorndike 1920]). The consequence is that people who find the product appealing (mainly due to its innovativeness) provide higher ratings of usability as well. However, the researcher applying an inductive model to the data will reach quite a different conclusion. Usability becomes highly predictive of appeal. It becomes “important” for any judgment of appeal. However, this is not a consequence of usability’s role in forming an overall judgment (of appeal in this case), but a consequence of people deducing a hard-to-assess attribute (e.g., usability) from general value (e.g., appeal). As long as people strive for consistency in their judgments, these “hypothetical” effects are likely to be responsible for many findings and potentially false beliefs in UX and information systems (IS) research.
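The hypothetical example above is easy to simulate. In the sketch below (illustrative only; variable names and effect sizes are our invention, not data from any study), appeal is generated from innovativeness alone, while reported usability is deduced from appeal via the halo rule; an inductive regression nevertheless assigns usability a sizeable weight:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Ground truth of the simulation: appeal is driven only by innovativeness.
innovativeness = rng.normal(size=n)
appeal = 0.8 * innovativeness + rng.normal(scale=0.6, size=n)

# Halo rule: with no hands-on experience, reported usability is deduced
# from appeal ("I like it, it must be good on all attributes").
usability = 0.7 * appeal + rng.normal(scale=0.5, size=n)

# Inductive analysis: regress appeal on usability and innovativeness.
X = np.column_stack([np.ones(n), usability, innovativeness])
betas, *_ = np.linalg.lstsq(X, appeal, rcond=None)
print(f"usability beta: {betas[1]:.2f}, innovativeness beta: {betas[2]:.2f}")
# Usability receives a substantial coefficient even though, by construction,
# it never caused appeal -- the pattern described in the text.
```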

The potential different interpretations of people’s judgment of attributes and general values alone justify the quest for a better understanding of the structure of UX models. Take the study of Cyr et al. [2006] as one of many examples. In their study, 60 participants used a single mobile Web site for five to fifteen minutes to perform some given information retrieval tasks. Subsequently, design aesthetics, perceived usefulness, perceived ease-of-use, perceived enjoyment, and loyalty were assessed concurrently with a questionnaire. Their suggested model assumes that design aesthetics is used to infer three aspects: enjoyment, ease-of-use, and usefulness. Ease-of-use is then in turn used to infer enjoyment and usefulness. From the latter two, loyalty is inferred.

Given the mechanisms of human judgment, this particular model is hard to justify. For example, one could easily argue that participants got an impression of design aesthetics from looking at the site and ease of use from using the site. From these two pieces of specific information, usefulness and enjoyment are then inferred separately and combined into loyalty. In this interpretation, the effect of design aesthetics on ease of use is spurious, perhaps due to the correlation of both these variables with usefulness and enjoyment. Similarly, the effects of ease of use on enjoyment and of design aesthetics on enjoyment would be spurious.

The point here is that without the notion of inference and a careful consideration of how assessments are potentially made in different situations—boundary conditions—the theoretical justification of a model is almost impossible. However, this is exactly what is needed, because whatever sophisticated statistical modeling techniques are employed, the results and their credibility depend on the specification of the model, which is outside of statistical considerations.




Fig. 1. Basic inference model.

1.3 Taking an Inference Perspective in User-Experience Research

Despite the potential implications (in terms of better-specifying UX models) of considering various types of situation-dependent inference rules beyond the specific-to-general that is predominantly used, neither the field of IS nor human-computer interaction (HCI) seems to explicitly consider an inference perspective. A recent exception is Hassenzahl and Monk’s work [2010]. They argue that “beauty” should be thought of as an affect-driven, evaluative response to the visual Gestalt of an interactive product [Hassenzahl 2008]. This has two implications: first, judgments of beauty are always based on information; they require only a visual input, which is almost always available. Second, its predominantly affective nature makes it very quick [Tractinsky et al. 2006]. Both characteristics point to beauty as an important starting point for inferring other attributes, which are at least initially hard to access, due to, for example, a lack of hands-on experience or other missing information. Therefore, Hassenzahl and Monk [2010] argued inference from beauty to be extremely likely, especially in the absence of any further experience, but this inference may also remain a dominant mode of judgment even after later hands-on experience. They further suggested specific rules that govern inference for interactive products.

Due to its evaluative nature, beauty is an important input to the general evaluation of the product (goodness) (the direct link from Beauty to Goodness in Figure 1). This is reminiscent of Dion et al.’s [1972] classic “what is beautiful is good”: a stereotype judgment in person perception.

Two further constructs are of interest in the IS and HCI literature: perceived pragmatic quality (broadly related to perceived usability, perceived ease-of-use) and perceived hedonic quality (broadly related to perceived enjoyment, novelty, stimulation) [Hassenzahl 2010]. Taking an inference perspective, Hassenzahl and Monk [2010] proposed two distinct rules to account for the inference of pragmatic and hedonic quality from beauty. The link between pragmatic quality and beauty is indirect. It is a consequence of evaluative consistency [Lingle and Ostrom 1979], where individuals infer unavailable attributes from general value (goodness) to keep their judgments consistent (the path from Beauty to Goodness and from Goodness to Pragmatic quality). In contrast to pragmatic quality, hedonic quality is directly inferred from beauty (the direct link from Beauty to Hedonic quality in Figure 1). According to probabilistic consistency [Ford and Smith 1987], individuals infer unavailable attributes directly from a specific available attribute believed to be conceptually or even causally linked to the unavailable attribute. In other words, while people may hold specific beliefs about




Fig. 2. Hassenzahl and Monk’s [2007] test results of an inference model. Figures are (standardized) path coefficients (with bivariate correlations in brackets). Bold signifies statistical significance (p < .05).

how beauty and hedonic quality are related, any observable link between beauty and pragmatic quality is just the consequence of people inferring overall quality (goodness) from beauty and then inferring conceptually different specific product attributes from overall quality.

Hassenzahl and Monk [2007, 2010] put these rules to an initial test. Figure 2 shows the results of LISREL analysis from a sample of 430 assessments of 21 different interactive products.

The sample was recruited using the online questionnaire tool AttrakDiff2¹. The data were fully anonymized; thus, nothing can be said about the specific interactive products constituting this sample. The analysis reported was published in Hassenzahl and Monk [2007], but the data are identical with Dataset 4 of Hassenzahl [2010]. However, the latter paper used a different analysis strategy and rather focused on the inference of pragmatic quality (i.e., perceived usability) from beauty. For the current paper, the LISREL analysis from 2007 is more illustrative.

As expected, the relation between beauty and goodness was substantial (.71) and stronger than the relation between goodness and pragmatic quality (.65) and hedonic quality (.48). Moreover, the link between beauty and pragmatic quality, which was substantial as a bivariate correlation (.53), completely disappeared when goodness was included in the analysis (path coefficient: −.07), emphasizing the fully mediated nature of the link between beauty and pragmatic quality (i.e., an example of evaluative consistency). In contrast, the direct link between beauty and hedonic quality remained intact (path coefficient: .42), hinting at a partial mediation, where some of the hedonic quality is directly inferred from beauty (i.e., an example of probabilistic consistency) and some indirectly through goodness (i.e., an example of evaluative consistency). The results provide strong support for an inference-based model, which was substantiated with three further studies [Hassenzahl and Monk 2010]. In sum, Hassenzahl and Monk [2007, 2010] suggest that (1) beauty is a direct determinant of goodness, (2) beauty is an indirect determinant of pragmatic quality, operating through goodness, and (3) beauty is a direct determinant of hedonic quality.
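As a quick plausibility check, the mediation pattern can be reproduced with simple path-tracing arithmetic on the standardized coefficients from Figure 2 (a back-of-the-envelope sketch that ignores measurement error and latent-variable estimation; it is not a substitute for the LISREL analysis):

```python
# Standardized path coefficients reported in Figure 2
# (Hassenzahl and Monk [2007]).
beauty_to_goodness = 0.71
goodness_to_pragmatic = 0.65
goodness_to_hedonic = 0.48
beauty_to_pragmatic_direct = -0.07   # near zero: full mediation
beauty_to_hedonic_direct = 0.42      # substantial: partial mediation

# Path-tracing: an indirect effect is the product of the coefficients
# along the route through the mediator (goodness).
indirect_pragmatic = beauty_to_goodness * goodness_to_pragmatic  # ~0.46
indirect_hedonic = beauty_to_goodness * goodness_to_hedonic      # ~0.34

# Total effect = direct + indirect.
total_pragmatic = beauty_to_pragmatic_direct + indirect_pragmatic  # ~0.39
total_hedonic = beauty_to_hedonic_direct + indirect_hedonic        # ~0.76
```

For pragmatic quality, virtually the entire effect of beauty travels through goodness; for hedonic quality, a substantial direct component remains, matching the evaluative- versus probabilistic-consistency distinction above.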

¹ http://www.attrakdiff.de




1.4 Aims of the Current Study

While representing a first step towards more adequately addressing the variety of human judgment processes, the studies of Hassenzahl and Monk [2010] left a number of questions unanswered. We aim to explore some of these in the present paper. We report three independent, complementing studies, which focus on the following aspects.

Replicability of the suggested inference for single products. Monk [2004] strongly urged us to avoid a “fixed-effect fallacy” when studying the relationship between attribute judgments about products. In brief, the argument was that typically in these studies participants as well as products contribute variance. Accordingly, we need to carefully sample products as well and need to make sure that all models hold for people (i.e., subjects analysis) and products (i.e., materials analysis) alike. Hassenzahl and Monk [2010] tested the inference model (as described in Figure 1) on four independent datasets, in a subjects as well as a materials analysis, and found the model was stable. However, while methodologically sound, the requirement of having a sample of products does not fit well with practices of HCI or the domain of UX. Typically, practitioners and researchers alike evaluate a single product by handing out questionnaires to a sample of people. While the results from Hassenzahl and Monk [2010] suggest that the inference model should also hold for a single product (i.e., when the product variance is held constant), this had not yet been tested. Therefore, we used only a single product, a different one in each of three studies. Our first aim was simply to replicate the inference model.

Effects of hands-on experience. In Hassenzahl and Monk [2010], all participants were instructed to have brief hands-on experience with a particular product. Even when assuming that inference is a stable process, which easily overrules brief hands-on experience, this is not the most straightforward way of testing the model. Accordingly, the present studies contrasted two different measurements, one based on the presentation of an interactive product only and another after verifiable hands-on experience. The inference model we have outlined should hold for the “presentation-only” condition (in the sense of a control condition, where an interactive product is presented, but users do not interact with the product). In addition, this study design allows testing of the stability of the model, given additional hands-on experience. Our second aim was to explore potential effects of hands-on experience on the model.

Types of experience. In most studies [though, see van Schaik and Ling 2011], hands-on experience is either task-oriented (i.e., people are asked to complete given tasks, such as the information retrieval tasks provided by Cyr et al. [2006]) or left to the participants [e.g., Hassenzahl and Monk 2010]. Inspired by Apter [1989], Hassenzahl [2003] conceptualized the psychological consequences of the different situations while interacting with a product in goal mode or action mode. In goal mode, the fulfillment of a given goal is to the fore. The goal has clear relevance and determines all actions. The product is therefore just a means to an end. While interacting with a product, people in goal mode try to be effective and efficient. In action mode, the action itself is to the fore. It determines rather volatile goals during use. Using the product is an end in itself. Several studies [Hassenzahl and Ullrich 2007; van Schaik and Ling 2009, 2011] revealed a profound effect of mode of use on how products are judged, which merits its inclusion. In the present paper, we studied experience in action mode in Study 1, by not specifying any particular task and asking participants to just explore the artifact. In Study 2, we introduced specific information-retrieval tasks (experience in goal mode), which enabled us to examine potential differences. In Study 3, we additionally varied task complexity to introduce more or less demand




in goal mode [van Oostendorp et al. 2009; van Schaik and Ling (to appear)]. Task complexity (path length, defined as the number of steps involved in finding the information) has a negative effect on task performance, due to the increased probability of selecting a wrong link on longer paths or of misjudging the relevance of presented information [Gwizdka and Spence 2006]. For Study 3, we assumed that the manipulation of task complexity would influence perceptions of pragmatic quality but not of hedonic quality. Hassenzahl [2001], for example, found pragmatic and hedonic quality to be independent after hands-on experience in a goal-oriented mode. In this study, subjective mental effort—a consequence of task demands and/or usability problems—negatively impacted pragmatic quality, but not hedonic quality. We assume a similar asymmetry here. Thus, in addition to studying the inference model under two different usage modes (Study 1 and Study 2), in Study 3 we set out to deliberately manipulate experience by making the task more or less complex and to explore its effects on the predictions of the inference model. Note that, within the framework of the person-artifact-task model [Finneran and Zhang 2003; van Schaik and Ling, to appear], task complexity is only one of the possible variables that affect judgment. Existing models of judgment in HCI [Hartmann et al. 2008; Hassenzahl 2003] address external variables (such as person, artifact and task), but can be classified uniformly as specific-to-general. This limits their potential when applied to the general-to-specific type of inference investigated in the current research. In other words, these models are useful in highlighting the effects of external variables, but do not illuminate how these variables would affect the specific inference rules suggested in the present paper. Our third aim was to explore how well the inference model works across different types of experience.

In the following, we report three independent studies. For each we expect the basic inference model described previously to hold (Figure 1). However, we set out to clarify whether the model replicates when different single products are used (Aim 1, all studies). To do so, we employed three different Web sites that were not homogeneous and varied in familiarity to participants, in order to establish the generality of the results over a range of artifacts. In addition, we studied the effect of additional hands-on experience (Aim 2, all studies) by deliberately comparing judgments before and after actual experience. This comparison provides information about the persistence of an inference model when users gain additional information in the process of product use. To further broaden the scope and potential generality of the inference model (Aim 3), we employed two different types of experience (activity versus goal-oriented, Study 1 versus Study 2 and Study 3) and even deliberately varied experience through an external factor (in Study 3).

2. STUDY 1: ACTION-MODE

2.1 Method

2.1.1 Participants. Ninety-four undergraduate psychology students (73 females and 21 males), with a mean age of 24 years (SD = 9), took part in the experiment as a course requirement. All participants had used the World Wide Web and all but two had used the target Web site (Wikipedia). Mean expertise using the Web was 8 years (SD = 3), mean time per week spent using the Web and Wikipedia was 17 h and 3 h respectively (SD = 12/7) and mean frequency of Web/Wikipedia use per week was 17/2 times (SD = 15/4).

2.1.2 Materials and Equipment. Participants gave responses to a 10-item short version of the AttrakDiff2 questionnaire [Hassenzahl and Monk 2010], consisting of 7-point semantic-differential items (see Appendix A). The following constructs were measured:




Fig. 3. Sample Web page (Experiment 1). Wikipedia.org, Creative Commons License 3.0.

pragmatic quality (four items), hedonic quality (four items), beauty (one item) and goodness (one item).²
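Construct scores for such scales are typically computed as the mean of the items belonging to each construct. A minimal sketch (the item labels and response values below are invented for illustration; they are not the actual AttrakDiff2 items, and any reverse-keying of item poles is ignored):

```python
# One participant's responses on 7-point scales (1-7); labels are hypothetical.
responses = {
    "PQ1": 6, "PQ2": 5, "PQ3": 6, "PQ4": 5,  # pragmatic-quality items
    "HQ1": 4, "HQ2": 5, "HQ3": 3, "HQ4": 4,  # hedonic-quality items
    "beauty": 6,                              # single-item measure
    "goodness": 5,                            # single-item measure
}

def scale_score(data, prefix):
    """Average all items whose label starts with the given prefix."""
    items = [v for k, v in data.items() if k.startswith(prefix)]
    return sum(items) / len(items)

pragmatic = scale_score(responses, "PQ")  # mean of the four PQ items
hedonic = scale_score(responses, "HQ")    # mean of the four HQ items
```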

Participants used Wikipedia’s Web site as it existed in January 2010 (see Figure 3 for a sample Web page). The experiment in this study and the following were programmed in Visual Basic 6 and ran on personal computers (Intel Pentium, 1.86 GHz, 2 GB RAM, Microsoft Windows XP operating system, 17-inch monitors); the screen dimensions were 1280 × 1024; contrast (50%) and brightness (75%) were set to optimal levels.

2.1.3 Procedure. The study consisted of two phases and ran in a computer laboratory with groups of 15–20 participants working independently. In Phase 1, each participant was introduced to the Web site through self-paced presentation of 10 noninteractive screenshots of different pages from Wikipedia's Web site. Participants then completed the short version of the AttrakDiff2 questionnaire. In Phase 2, participants were free to use the same Web site to explore their own interests for 20 minutes (i.e., action mode). The median number of pages visited was 25, with a semi-interquartile range of 22. After this, participants again completed the AttrakDiff2 and answered demographic questions. The study took about 35 minutes to complete. The procedure used in this and the following study represents an extension of that used by Hassenzahl and Monk [2010] in that there were two separate phases for presentation only and additional hands-on experience, with measures taken at the end of each

2Refer to Appendix B, Tables II and III for a summary of the scales’ psychometric properties.


User-Experience from an Inference Perspective 11:9

phase, and having a fixed set of screenshots for presentation and a predefined task for interaction.

2.1.4 Data Analysis with PLS. Partial-least-squares (PLS) path modeling [Vinzi et al. 2010] was used for data analysis in all three studies for the following reasons. PLS allows the analysis of both single-stage and multi-stage models with latent variables, allowing the integrated analysis of a measurement model and a structural model. Each latent variable (usually a psychological construct) is measured using one or more manifest variables (usually items). PLS is compatible with multiple regression analysis, analysis of variance and unrelated t-tests: the results of these techniques are special cases of the results of PLS, but these techniques do not account for measurement error, whereas PLS does. PLS does not require some of the assumptions imposed by covariance-based structural equation modeling, including those of large sample sizes and of univariate and multivariate normality. All PLS analyses reported here satisfied the following minimum sample-size requirement for robust estimation in PLS path modeling [Henseler et al. 2009]: the larger of (a) ten times the number of indicators of the scale with the largest number of formative indicators and (b) ten times the largest number of structural paths directed at a particular construct in the inner path model. Recent simulation studies have demonstrated that PLS path modeling performs at least as well as and, under various conditions, is superior to covariance-based structural equation modeling in terms of bias, root mean square error and mean absolute deviation [Hulland et al. 2010; Vilares et al. 2010]. For a consistent approach, the data analyses for all studies were conducted with PLS by way of the SmartPLS software [Ringle et al. 2005], unless stated otherwise. A bootstrapping procedure (N = 5000), as Henseler et al. [2009] recommend, was used to test the significance of model parameters.
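As a worked illustration of the ten-times rule above, with hypothetical counts (four indicators on the largest scale, three structural paths directed at one construct; these are illustrative values, not the authors' actual model tallies):

```python
def min_sample_size(max_indicators, max_inbound_paths):
    """Ten-times rule of thumb for PLS path modeling [Henseler et al. 2009]:
    ten times the larger of (a) the largest number of (formative)
    indicators on any scale and (b) the largest number of structural
    paths directed at any one construct."""
    return 10 * max(max_indicators, max_inbound_paths)

print(min_sample_size(4, 3))  # → 40
```

Under these illustrative counts the minimum is 40, which the samples in all three studies (N = 94, 66, and 127) comfortably exceed.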
Each indirect (mediated) effect (e.g., the effect of beauty on pragmatic quality mediated by goodness) was calculated as the product of the two constituent direct effects comprising it (e.g., of beauty on goodness and of goodness on pragmatic quality). Bootstrapping was then used to test the indirect effect. In particular, each bootstrap sample produced parameter estimates for the constituent direct effects, from which the indirect effect was calculated. The mean and standard error of this calculated estimate over the bootstrap samples were computed and, from these, a t-statistic was then calculated to test the significance of the indirect effect. The total effect (e.g., of beauty on pragmatic quality) was broken down into the indirect effect (e.g., of beauty on pragmatic quality mediated by goodness) and the direct effect (e.g., of beauty on pragmatic quality with goodness held constant). In each experiment, tests of the difference between regression coefficients before and after interaction were conducted to test their stability. In a single analysis, each bootstrap sample produced all coefficients (for presentation-only and hands-on experience). The mean difference and standard error of the difference of each pair of coefficients (for presentation-only and hands-on experience) were computed and, from these, a t-statistic was calculated to test the significance of the difference.
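The bootstrap test of an indirect effect described above can be sketched as follows. This is a minimal illustration on synthetic, standardized data, using simple standardized slopes in place of the full PLS path estimates; all variable names and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic standardized data with a mediated structure:
# beauty -> goodness -> pragmatic quality.
n = 200
beauty = rng.standard_normal(n)
goodness = 0.6 * beauty + 0.8 * rng.standard_normal(n)
pragmatic = 0.7 * goodness + 0.7 * rng.standard_normal(n)

def slope(x, y):
    """Standardized simple-regression slope (mean product of z-scores)."""
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    return float(np.mean(zx * zy))

# Each bootstrap sample yields both constituent direct effects;
# their product is that sample's estimate of the indirect effect.
B = 5000
indirect = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    indirect[b] = (slope(beauty[idx], goodness[idx])       # beauty -> goodness
                   * slope(goodness[idx], pragmatic[idx]))  # goodness -> pragmatic

# Mean and standard error over the bootstrap samples give a t-statistic
# for the indirect effect.
est_mean = indirect.mean()
se = indirect.std(ddof=1)
t = est_mean / se
print(round(est_mean, 2), round(t, 1))
```

A large t (relative to critical values of the t distribution) indicates a significant indirect effect; the same bootstrap machinery supplies the standard errors for the coefficient-difference tests.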

2.2 Results and Discussion

Figure 4 shows the results of the proposed inference model (see Figure 1) for (a) presentation-only and (b) additional hands-on experience. Presented are (a) standardized path coefficients, with figures in brackets representing indirect effects, and (b) the variance in each endogenous latent variable that is explained by the direct effects on it. For example, in Figure 4(a), 39% (R2 = .39) of the variance in pragmatic quality was explained by the direct effects of beauty and goodness, while 35% (R2 = .35) of the variance in hedonic quality was explained by beauty, goodness and pragmatic quality.


Fig. 4. Structural model (Study 1) – (a) presentation-only, (b) with hands-on experience.

In the presentation-only condition the link between beauty and pragmatic quality was fully mediated, with a nonsignificant direct effect of beauty on pragmatic quality, β = −.05, p > .05, but a significant indirect effect of beauty on pragmatic quality via goodness, β = .34, p < .001. The opposite was true for the effect of beauty on hedonic quality. The direct effect was significant, β = .42, p < .01, while the indirect effect was not, β = .06, p > .05. After additional hands-on experience, the results changed little. The indirect effect of beauty on hedonic quality became significant, β = .13, p < .05, but was still much smaller than the direct effect, β = .62, p < .001.

Test results of the difference between each of the path coefficients in presentation-only and with additional hands-on experience were not significant, all |t| ≤ 1.22, p > .05, demonstrating the stability of the inference model. This is the case despite differences in the standardized means between the four measures (with a small change for goodness, but a very small change for hedonic quality and a negligible change for pragmatic quality and beauty; see Table II in Appendix B for details). Overall, the inference model was replicated and proved stable, even with hands-on experience.


Fig. 5. Sample Web page (Study 2). Reproduced with permission of Manchester City Council.

3. STUDY 2: GOAL MODE

3.1 Method

3.1.1 Participants. Sixty-six undergraduate psychology students (49 females and 17 males), with a mean age of 24 years (SD = 8), took part in the study as a course requirement. All participants had used the World Wide Web, but only one had ever used the target site that was employed in the experiment (Manchester City Council's Web site). Mean expertise using the Web was 10 years (SD = 3), mean time per week spent using the Web was 19 hr (SD = 16) and mean frequency of Web use per week was 16 times (SD = 10).

3.1.2 Materials and Equipment. As in Study 1, participants gave responses to the AttrakDiff2.3 Participants used Manchester City Council's Web site as it existed in October 2009 (see Figure 5 for a sample Web page).

3.1.3 Procedure. The study was run in a computer laboratory with groups of 15–20 participants who worked independently. This study, like the previous study, consisted of two phases. In Phase 1, each participant was introduced to the Web site through the self-paced presentation of nine noninteractive screenshots of different pages from the site. Participants then completed the short version of the AttrakDiff2 questionnaire. In Phase 2, a series of information retrieval tasks were presented, which reflected the various types of information that were available on the Web site. The target

3Refer to Appendix C, Tables IV and V for a summary of the scales’ psychometric properties.


information was factual, could be found by following links in the Web site starting from the homepage, and the location of the target information within the information area of the Web pages was not predictable from one task to the next. In each trial, a question appeared at the top of the screen, for instance "What is the cost of a new application for a Street Trading License in Manchester?" Once participants had read the question, they clicked on a button labeled "Show Web site." The home page of the site appeared on the screen and they had to find the answer to the question, which remained visible while participants used the site to search for the appropriate information. Participants were told to take the most direct route possible to locate the answer. Having found it, they clicked on a button labeled "Your answer," which opened a dialogue box at the bottom of the screen. Participants typed their answers into the box, clicked on OK and moved on to the next question. After 3 practice questions, the main set of 10 information retrieval tasks followed, with a maximum duration of 20 minutes. After the information retrieval task, participants again completed the AttrakDiff2 before answering demographic questions. The study took about 35 minutes to complete. Data analysis was identical to that conducted in Study 1.

3.2 Results and Discussion

Figure 6 shows the results of the PLS analysis of the inference model for (a) presentation-only and (b) with additional hands-on experience. In the presentation-only condition the link between beauty and pragmatic quality was fully mediated, with a significant indirect effect, β = .48, p < .001. Again, the opposite was true for the effect of beauty on hedonic quality. The direct effect was significant, β = .55, p < .001, while the indirect effect was not, β = .11, p > .05. The same pattern was apparent after hands-on experience. The link between beauty and pragmatic quality was fully mediated, with a significant indirect effect, β = .45, p < .001. However, the direct effect of beauty on hedonic quality was significant, β = .43, p < .001, while the indirect effect was not, β = .19, p > .05.

Test results of the difference between each of the regression coefficients in the presentation-only condition and the hands-on experience condition were not significant, all |t| < 1, further demonstrating the stability of the basic inference model. This was the case despite differences in the standardized means between the four measures (with large changes for goodness and hedonic quality, but medium for beauty and small for pragmatic quality; see Table V(c) in Appendix C for details).

Overall, the inference model was replicated and proved stable even with hands-on experience. It did so although the usage mode (action versus goal) and the product (Wikipedia versus council Web site) differed between Study 1 and Study 2.

4. STUDY 3: GOAL MODE WITH VARIED COMPLEXITY

4.1 Method

4.1.1 Participants. One hundred and twenty-seven undergraduate psychology students (102 females and 25 males), with a mean age of 23 years (SD = 8), took part in the experiment as a course requirement. All participants had used the World Wide Web, but had not used the target Web site that was employed ([fictional] Whitmore University's psychology intranet site). Mean expertise using the Web was 11 years (SD = 3), mean time per week spent using the Web was 17 hr (SD = 14) and mean frequency of Web use per week was 15 times (SD = 10).

4.1.2 Study Design. Study 3 advanced Study 2 by introducing two additional factors varying the hands-on experience: task complexity (simple or complex; see Section 4.1.4) and site complexity (simple or complex; see Section 4.1.3). Site complexity was


Fig. 6. Structural model (Study 2) – (a) presentation-only, (b) with hands-on experience.

primarily included to establish the generality of the effect of task complexity across different levels of site complexity.

4.1.3 Materials and Equipment. As in the previous two experiments, participants gave responses to the short version of the AttrakDiff2 questionnaire.4 Two versions of a Web site were modeled as a typical psychology site for university students, and especially designed and programmed for the experiment. In addition to the homepage, the main pages of the high-complexity site (see Figure 7(a)) were Teaching, Research, Fees and Funding, Hall of Fame, Library, Staff, Sports and Leisure, Careers, and About, with 9990 further Web pages. In addition to the home page, the main pages of the low-complexity site (see Figure 7(b)) were Teaching, Research, Fees and Funding, and Hall of Fame, with 620 further Web pages. All links and content of the low-complexity site were also included in the high-complexity site. The latter had more pages than the

4Refer to Appendix D, Tables VI and VII for a summary of the scales’ psychometric properties.


Fig. 7. Low (a) and high (b) complexity site with a sample Web page (Study 3).

former, but both sites had an equal depth of four levels (from the home page), and therefore both site versions allowed simple and complex tasks to be completed.

4.1.4 Procedure. The procedure of Study 3 was essentially the same as that of Study 2. In Phase 1, five noninteractive screenshots of different pages from either the low- or high-complexity site were presented. In Phase 2, 3 practice questions and a main set of 37 information retrieval tasks followed, with a maximum of 20 minutes available to complete the tasks. From the home page, complex tasks required users to select four links in succession to access the Web page with the necessary information. Simple tasks required users to select two successive links.

4.1.5 Data Analysis with PLS. In Study 3, we used the inference model in the presentation-only condition as in Study 1 and Study 2. Theoretically, site complexity could affect perception of the site in the presentation-only condition, but this seemed unlikely given its operationalization (see the Materials and Equipment and Procedure sections). Site complexity was varied by the number of pages in the Web site (620 versus 9990) and the number of sections offered (4 versus 9). In the presentation-only condition, however, each participant was presented with the same number of sample pages as screenshots. Therefore, it was essentially not possible to experience site complexity in terms of number of pages. The number of sections, though, was visible through the sites' main menus (see Figure 7). However, from visual inspection this difference seemed negligible when one does not interact with the site.

For the hands-on experience condition, we extended the model to account for potential effects of task complexity. As in Studies 1 and 2, PLS was used for data analysis. It was important to verify that the relations in the basic inference model were not influenced (moderated) by task complexity, in order to establish that the effects in this model held true irrespective of task complexity. For this purpose, Henseler et al.'s [2009] nonparametric analysis for two-group comparisons of PLS model parameters was used. Significance tests were conducted of the difference between each of the regression coefficients from the low-task-complexity conditions and the high-task-complexity conditions in the basic inference model after hands-on experience.
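The logic of such a two-group comparison can be illustrated with a simplified bootstrap t-type test on synthetic data. This is an independent-groups approximation for illustration only, not Henseler et al.'s exact procedure or SmartPLS output; all data and names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def std_slope(x, y):
    """Standardized simple-regression slope (mean product of z-scores)."""
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    return float(np.mean(zx * zy))

def bootstrap_slopes(x, y, B, rng):
    """Bootstrap distribution of the standardized slope."""
    n = len(x)
    out = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, n)
        out[b] = std_slope(x[idx], y[idx])
    return out

# Hypothetical ratings for the low- and high-task-complexity groups,
# generated with the same underlying slope (no true moderation).
n = 60
x_lo = rng.standard_normal(n)
y_lo = 0.6 * x_lo + 0.8 * rng.standard_normal(n)
x_hi = rng.standard_normal(n)
y_hi = 0.6 * x_hi + 0.8 * rng.standard_normal(n)

B = 2000
s_lo = bootstrap_slopes(x_lo, y_lo, B, rng)
s_hi = bootstrap_slopes(x_hi, y_hi, B, rng)

# t-type statistic: difference of the point estimates over the combined
# bootstrap standard error of the two groups' coefficients.
diff = std_slope(x_lo, y_lo) - std_slope(x_hi, y_hi)
se = np.sqrt(s_lo.var(ddof=1) + s_hi.var(ddof=1))
t = diff / se
print(round(diff, 2), round(t, 2))
```

With no true group difference, |t| should typically stay well below conventional critical values, mirroring the nonsignificant moderation results reported below.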

4.2 Results and Discussion

Figure 8 shows the results of the PLS analysis of the inference model for (a) presentation only, (b) additional hands-on experience, and (c) additional hands-on experience,


including task complexity and site complexity. In the presentation-only condition (see Figure 8(a)) the link between beauty and pragmatic quality was fully mediated, with a significant indirect effect, β = .33, p < .001. The effect of beauty on hedonic quality was partially mediated, with a large significant direct effect, β = .57, p < .001, and a small significant indirect effect, β = .16, p < .001. In the hands-on experience condition (see Figure 8(b)), the link between beauty and pragmatic quality was only partially mediated, with a significant indirect effect, β = .36, p < .001. However, while significant, the direct effect of beauty on pragmatic quality, β = −.25, p < .001, was negative. The link between beauty and hedonic quality was partially mediated, with a large significant direct effect, β = .66, p < .001, and a small significant indirect effect, β = .10, p < .05.

Tests of the difference between each of the regression coefficients in the presentation-only condition and the hands-on experience condition were not significant, all |t| < 1, further demonstrating the stability of the inference model. This is the case despite differences in the standardized means between the four measures (with a large change for pragmatic quality, but a very small change for goodness and a negligible change for hedonic quality and beauty; see Table VII(c) in Appendix D for details).

Figure 8(c) shows the basic inference model, including task and site complexity. As expected, task complexity influenced only the perception of pragmatic quality, not that of hedonic quality. The more complex the task, the less pragmatic the Web site appeared. This lends further support to the conceptual distinction (i.e., construct validity) between pragmatic and hedonic quality. While the former is related to goals and their achievement, the latter is not. This is reflected by the differential relationships. Besides this, the relationships in the model did not change due to the manipulation of pragmatic quality through task complexity. In fact, none of the coefficients differed between participants who were exposed to low task complexity (n = 64) and those who were exposed to high task complexity (n = 63), with p > .05 for all comparisons. Thus, these effects were independent of the manipulation of task complexity; task complexity was not a moderator of the coefficients.

In conclusion, the results of Study 3 further support the inference model. The results also provide support for the negative effect of task complexity on pragmatic quality and the lack of such an effect on hedonic quality, while the basic relations according to general-to-specific inference remained intact.

Table I summarizes the results from all three studies to facilitate comparison. The rightmost column further presents the average coefficient across all studies. As expected, the direct links between beauty and goodness, and between beauty and hedonic quality, were substantial (.56 and .55, respectively). The direct link between beauty and pragmatic quality was not (−.05). In contrast, the indirect effect of beauty on pragmatic quality was substantial (.38) and larger than the indirect effect of beauty on hedonic quality (.13).
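The averaging described in Table I's note (coefficients Fisher-Z transformed, averaged, and retransformed) amounts to the following; as input we use the six direct beauty-to-goodness coefficients reported in Table I:

```python
import math

def fisher_z_mean(rs):
    """Average correlations/standardized coefficients on the Fisher-Z
    scale: z = atanh(r), take the arithmetic mean, back-transform
    with tanh."""
    zs = [math.atanh(r) for r in rs]
    return math.tanh(sum(zs) / len(zs))

# Direct beauty -> goodness coefficients from the six models (Table I).
b_to_g = [0.52, 0.61, 0.60, 0.59, 0.56, 0.48]
print(round(fisher_z_mean(b_to_g), 2))  # → 0.56, matching the Mean column
```

The transform stabilizes the sampling distribution of correlation-type coefficients, so averaging on the z scale is less biased than averaging the raw values.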

5. GENERAL DISCUSSION

5.1 The Inference Model

In relation to Aim 1, three studies supported our specific inference model tying together beauty, pragmatic quality (i.e., perceived usability) and hedonic quality (i.e., stimulation, identification). Beauty and overall evaluation were highly correlated, confirming the longstanding inference rule of "What is beautiful is good" [Dion et al. 1972], borrowed from person perception. As further expected, the effect of beauty on hedonic quality was primarily direct (probabilistic consistency as an inference rule), but the


Fig. 8. Structural model (Study 3) – (a) presentation-only, (b) with hands-on experience, (c) with hands-on experience, including task and site complexity.


Table I. Standardized Regression Coefficients for the Inference Model Across Studies

                Study 1                     Study 2                     Study 3
Effect          Presentation  With hands-on Presentation  With hands-on Presentation  With hands-on  Mean^a
                only          experience    only          experience    only          experience

Direct
B→G             ***.52        ***.61        ***.60        ***.59        ***.56        ***.48         .56
B→HQ            **.42         ***.62        ***.55        ***.43        ***.57        ***.66         .55
B→PQ            −.05          −.04          .03           .13           −.09          ***−.25        −.05
G→HQ            .11           *.21          .18           .33           ***.29        *.21           .22
G→PQ            ***.65        ***.54        ***.79        ***.76        ***.59        ***.75         .69
PQ→HQ           .19           ***.20        .21           −.01          −.07          −.07           .08

Indirect
B→G→HQ          .06           .13           .11           .19           .16           .10            .13
B→G→PQ          ***.34        ***.33        ***.48        ***.45        ***.33        ***.36         .38

Total
B→HQ            .48           .75           .65           .62           .74           .76            .68
B→PQ            .29           .29           .51           .58           .24           .11            .35

Note: To facilitate comparison between studies, the results of Study 3 are those from the model without the experimental manipulations (i.e., task complexity and site complexity).
^a Coefficients were Fisher-Z transformed, averaged, and retransformed.
* p < .05. ** p < .01. *** p < .001.

effect of beauty on pragmatic quality was primarily indirect (evaluative consistency as an inference rule), in other words, mediated by goodness.

The present findings replicate Hassenzahl and Monk's [2007, 2010] previous work, but with the following advances. First, in relation to Aim 2, evidence for inference rules was found when hands-on experience was experimentally controlled. Specifically, the same pattern of results was found after only presentation of a product and with subsequent hands-on experience. This increases the generalizability of the model. Admittedly, the hands-on experience was rather brief and may not have been intense enough to overrule well-used inference rules. The results from Study 1, however, suggest that this may not be the case. This is because participants were reportedly regular (in frequency) and substantial (in time spent) users of Wikipedia (i.e., the Web site used in the study). However, expertise over time (for example, years of experience) was not recorded, so the extent of this experience could not be analyzed in relation to inference of usability from beauty. Thus, future studies may extend the hands-on experience even further to explore when acquired specific information about attributes may overrule simple inference rules. But for now, our finding of a stable model adds to its applicability.

Second, in relation to Aim 3, evidence was found for the suggested inference rules across two types of tasks (goal mode and action mode), with different products (Wikipedia, council Web site, hypothetical university Web site) and even when task complexity and artifact complexity were systematically varied. Our findings thus increase external validity in terms of generalizability across tasks and products as well as levels of task and product complexity.

Despite the achievements of the research reported here, there are some potential limitations, in particular in relation to the choice of Web sites and the range of participants. Although the Web sites used here were representative of different content classes of Web sites as artifacts (knowledge repository, government service, and study in formal education), were substantially large, and predominantly presented information through text with some use of graphics, they do not exhaustively represent the full range of Web sites or artifacts more generally. Therefore, replication using a wider range of content (e.g., online sales, content sharing and social networking) and use of media (e.g., more extensive use of graphics, sound and video) would further increase confidence in the generalizability of the findings.


The use of psychology students as participants is typically justified in research, such as that reported here, where the goal is to examine psychological processes that operate generally across the population to which the tasks apply, and where there is no credible reason to assume that a different type of processing would occur in other members of the same population. In our research there is no apparent reason why different psychological processing would occur in the formation of UX judgments by other members of the populations of artifact users in each of our experiments than in the samples that took part. Still, further confidence would be gained in the generalizability of the results reported here if they were replicated with samples from nonstudent populations of Web users.

Three findings require some further discussion: the negative direct link between beauty and pragmatic quality (Study 3), the combination of evaluative and probabilistic consistency for beauty and hedonic quality, and the relationship between hedonic and pragmatic quality.

Beauty and Pragmatic Quality. In Study 3, a significant direct effect of beauty on pragmatic quality was observed. However, this effect was negative, hinting at a "What is beautiful cannot be usable"-rule. Indeed, Hassenzahl and Monk [2010] reported two out of seven direct effects of beauty on pragmatic quality to be negative and significant (but not a single significant positive one). It may be that another particular type of inference is operating here, namely compensatory inference [Chernev and Carpenter 2001]. Imagine you are faced with two laptops of the same price. While laptop A has mediocre processing power but a large hard disk, laptop B has better processing power, but the capacity of its hard disk is unknown. In this situation many people infer that the hard disk capacity of laptop B must be inferior to that of A. Kardes and colleagues [2004b] call this negative-correlation-based inference, and the present example draws upon people's lay theories of competition in the market. Given the same price and some outstanding attributes, there must be some other, poorer attributes that are compensated for. While Chernev and Carpenter [2001] demonstrated the relevance of compensatory rules only in the specific domain of prices and product details, they might be much more pervasive. Who would not start to feel suspicious if a friend excitedly tells about a new acquaintance who is not only "educated", "good-looking" and "sexy", but also "unearthly rich"? In an unpublished study on people's strategies for inferring usability from a number of other attributes, we found only 3% of all participants to use beauty as an indicator of a lack of pragmatic quality (i.e., "the dark side of beauty" [Hassenzahl 2008]). In the present studies, only one of six models revealed a negative correlation. In other words, while potentially interesting and worth further exploration, we believe this negative stereotyping to be rather marginal.

Evaluative and Probabilistic Inference Combined. In three out of six models (Study 1, Study 3) the effect of beauty on hedonic quality was partially mediated; that is, a larger direct effect (.55 on average, Table I) was complemented by a smaller indirect effect (.13 on average, Table I). We do not see this as a contradiction, but rather believe it to be a good illustration of how different inference rules intertwine. Evaluative consistency is the consequence of applying the "What is beautiful is good"-rule and the "I like it, it must be good on all attributes"-rule (i.e., the "halo" effect). This results in a highly beauty-driven overall evaluation, which in turn impacts all other attribute judgments. In other words, the more beautiful, the better on all other attributes. It is easy to envision this as a ubiquitous, almost automatic process: the predominant mode of handling momentarily unavailable information. In addition, more specific rules may modify the outcome of this process, such as the rule that beauty is a direct indicator of hedonic quality. (Another example would be the "What is beautiful cannot be


usable"-rule discussed before.) Therefore, stronger direct effects occur (due to probabilistic consistency), in addition to indirect effects (due to evaluative consistency).

Furthermore, in the present studies goodness is more related to pragmatic quality (average .69; see Table I) than to hedonic quality (average .22; see Table I). In general, it is assumed that both qualities contribute equally to a product's appeal [Hassenzahl et al. 2000; Hassenzahl 2001]; however, situational aspects may override this. In fact, people only use available information to infer the unavailable if they find the available information relevant [Kruglanski et al. 2007]. For example, people rely more on their feelings towards an object when the judgment calls for the affective. In other words, people may find it more appropriate to rely on their feelings when asked whether they believe a shoe to be comfortable (i.e., experiential) than when asked whether they believe it to be a good bargain (i.e., factual) [Yeung and Wyer 2004]. If they find a particular piece of information inappropriate, they will tend to discount it. The same may happen here. As all products used in the present studies are rather utilitarian in nature (even the product used in action mode in Study 1), people may find it inappropriate to infer hedonic quality from their global judgments, resulting in a weaker link between goodness and hedonic quality than between goodness and pragmatic quality.

Pragmatic and Hedonic Quality. In general, we expected perceptions of pragmatic quality and hedonic quality to be independent. On average, this was true for the present studies (the average regression coefficient was .07; see Table I). In addition, task complexity had a significant effect only on pragmatic quality, not on hedonic quality, further corroborating the conceptual distinction between the two constructs in the sense of construct validity. Nevertheless, Study 1 revealed a small but significant relationship between pragmatic and hedonic quality. One explanation may be that the independence between pragmatic and hedonic quality is weaker in a situation where the focus is on the action itself [action mode; Hassenzahl 2003] rather than on achieving goals, because in such a situation the interaction itself could to some extent be a source of pleasure. Our results demonstrate independence between pragmatic quality and hedonic quality after task-oriented hands-on experience in Study 2 and Study 3, where standardized regression coefficients were (extremely) small and not positive. However, consistent with our account, in Study 1 the small positive coefficient remained stable after exploration-based hands-on experience. Overall, the small correlations between pragmatic and hedonic quality found in Study 1 and Study 2 do not throw doubt on the general findings presented here. They are nevertheless an interesting topic for future research.

5.2 Inference of User-Experience from a Wider Perspective

Two broad classes of models accounting for processes of judgment and decision-making can be distinguished. One class considers these processes as computational, whereas the other does not. The approach taken here, based on Kruglanski et al.’s [2007] unified framework for conceptualizing and studying judgment as inference, fits with the noncomputational theories. In contrast to computational theories (such as normative theories [e.g., subjective utility theory] and psychological descriptive theories [e.g., prospect theory]), noncomputational theories assume that people do not make complex (weighted and summed) calculations using probability and utility. Instead, they assume that people use relatively simple cognitive strategies (simple rules [Chater and Brown 2008]; heuristics [Gigerenzer and Gaissmaier 2011; Kruglanski and Gigerenzer 2011]) in making judgments and decisions. Kruglanski et al. [2007] provide a unified framework for conceptualizing and studying human judgment as inference. Specifically, they present cogent evidence
that, perhaps surprisingly, inference is ubiquitous in the formation of judgments across a wide range of judgment types, including conditioning, perception, pattern recognition and social judgment, which were previously believed to be ruled by disparate mechanisms. Important questions are then what sources are used for inference and how these are used.
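The contrast between the two model classes can be made concrete with a small sketch. The attribute values, weights, and choice of salient cue below are illustrative assumptions, not data from the studies.

```python
# Sketch contrasting the two classes of judgment models discussed above.
attributes = {"usability": 0.6, "novelty": 0.8, "reliability": 0.5}
weights = {"usability": 0.5, "novelty": 0.2, "reliability": 0.3}

def weighted_sum(attrs, w):
    """Computational account: weight and sum every attribute."""
    return sum(attrs[k] * w[k] for k in attrs)

def single_rule(attrs):
    """Noncomputational account: one simple 'If X then Y' rule --
    here, 'judge the product by its most salient cue alone'."""
    return attrs["novelty"]  # assumed to be the salient cue

print(round(weighted_sum(attributes, weights), 2))  # 0.61
print(single_rule(attributes))                      # 0.8
```

The point of the sketch is only that the noncomputational judge never integrates across attributes; one rule, applied to one cue, produces the judgment.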

Information Sources in Judgment. Our participants had two or three sources of information available to make their judgments: (1) their impression from the presentation of a product, (2) hands-on experience from subsequent interaction with the product, and (3) memory of previous product experience (in Study 1). A systematic study of the effects of the different information sources in UX seems important. Recent research in cognitive psychology, for example, has demonstrated that the results of judgments from description and those from experience can differ dramatically [Barron and Leidner 2010; Hertwig et al. 2004].

Perhaps as important as the distinction between the different sources of judgment is that the effect of one source on judgment may be moderated by the effects of others, for example, where product attributes in a product description contradict those that a user encounters in the presentation of the same product. Therefore, the choice of the combination of sources that are studied could have a profound effect on the results and the conclusions drawn from them regarding the way people form judgments. Indeed, research has demonstrated that a product description can affect product evaluations if people consider the description before they consider their own product experience [Wooten and Reed 1998], and sometimes people cannot even distinguish whether they actually experienced a product or were only presented with a description of an experience [Rajagopal and Montgomery 2011].

Judgment Parameters in Inference-Based Judgment. After making a cogent case for the ubiquity of inference in human judgment, Kruglanski et al. [2007] account for the use of information sources in the formation of judgments through the application of (simple) inference rules of the form “If X then Y.” A large store of such rules, if judiciously applied in appropriate circumstances, can be a powerful tool in a wide range of human judgments. In their inference framework, four largely independent parameters influence judgment. First, informational relevance is the extent to which a particular inference rule is seen as valid and can therefore affect the likelihood of its application (such as whether goodness is relevant to infer the hedonic quality of a predominantly utilitarian product). Second, task demands can affect the likelihood of the application of an inference rule. Third, cognitive resources, an individual’s capabilities in a particular situation, also influence the application of inference. Fourth, motivation controls both the amount of effort expended to process information (nondirectional motivation) and the weight attached to each element of information, with a higher weight for items that are seen as more desirable (directional motivation).
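One way to picture how the four parameters gate rule application is as a simple data structure. The encoding, thresholds, and gating logic below are hypothetical illustrations; the framework itself specifies no such formula.

```python
from dataclasses import dataclass

@dataclass
class InferenceRule:
    """Hypothetical encoding of an 'If X then Y' inference rule."""
    condition: str   # X, e.g. "beautiful"
    conclusion: str  # Y, e.g. "good"
    relevance: float # perceived validity of the rule (0..1)

def rule_applies(rule: InferenceRule, task_demand: float,
                 cognitive_resources: float, motivation: float) -> bool:
    # A rule fires only if it is seen as relevant and the judge can
    # meet the task's demands with the available resources and
    # motivation (thresholds are arbitrary illustrative choices).
    return (rule.relevance > 0.5
            and cognitive_resources >= task_demand
            and motivation > 0.3)

beauty_rule = InferenceRule("beautiful", "good", relevance=0.8)
print(rule_applies(beauty_rule, task_demand=0.4,
                   cognitive_resources=0.7, motivation=0.6))  # True
```

On this sketch, raising task demand beyond the available cognitive resources suppresses the rule, which mirrors the parameter account in the text.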

Kruglanski et al. [2007] maintain that in a particular judgment situation, the values on the identified parameters of judgment can account for people’s judgment, irrespective of the process(es) that produced these values. Theirs is therefore a unified framework for studying judgment that allows for the systematic investigation of the effects of the parameters on people’s judgment of UX quality. When placed in the context of this framework, the current study found evidence that the application of inference rules for beauty, goodness, pragmatic quality and hedonic quality was not influenced by task type (goal mode or action mode, although task type was confounded with the artifact that was used). Moreover, when task demands increased, evidence for the application of the basic inference rules remained essentially unchanged. An attractive
feature of Kruglanski et al.’s framework is that it can be seen as compatible with existing frameworks for studying human-computer interaction, in particular Finneran and Zhang’s [2003] person-artifact-task model. In this context, cognitive resources and motivation would be examples of person characteristics, information relevance would be an example of an artifact characteristic, and task demand would be an instance of a task characteristic.

One potential limitation, in principle, of Kruglanski et al.’s framework in the context of UX is that they seem to focus on sources from description, presentation and memory, but (whether intentionally or not) not on direct interaction or actual task performance with a product, which is ultimately important in human-computer interaction. However, research on the experience-description gap demonstrates that first-hand experience may produce different outcomes for judgment and decision-making. In the present study, the relationships between the constructs in the inference model remained essentially unchanged with experience of interacting with a particular artifact. Perhaps this was because participants were familiar with the type of task they were performing (i.e., navigating a Web site), so that judgment processes were not radically altered by hands-on experience. Another apparent limitation of Kruglanski et al.’s work is the use of simple inference rules of the form “If X then Y.” However, we found evidence for a more complex rule in the form of an extension of evaluative consistency (a relationship between beauty and pragmatic quality mediated by goodness), and other work has revealed the operation of a still more complex type [moderated mediation; Hassenzahl et al. 2010]. Nevertheless, by allowing more complex rules, Kruglanski et al.’s framework could accommodate these findings.

Collections of Inference Rules Rather than Models. The conceptualization of judgment as inference with different rules for different types of judgment has consequences for UX models. Given the focus on such rules, it follows that the evidence needed to establish whether particular rules have been applied in judgment is the predictive value of antecedents on consequents of each rule [Kruglanski et al. 2007]. We therefore suggest abandoning the notion of a model in favor of a network of (interconnected) inference rules or even a set of rules (with unspecified connections), where the predictive value of antecedent constructs for consequent constructs is analyzed, but covariances between indicators in different rules are not the focus of interest. This proposition has some interesting implications. For example, we need to find ways to disentangle the combination of rules manifest in data. As noted previously, it is likely that multiple rules operate simultaneously. Therefore, patterns in a particular data set are the result of the combined effect of rules that operated on variable values. In addition, we may broaden our methods to study individual judgment processes, that is, how a single person makes inferences under different conditions [see Jacobsen 2004 for an example in the domain of aesthetic judgments].

5.3 Implications of the Inference Perspective and Future Work

Given the consistent findings reported here, one might ask whether there is any way that the suggested inference model could be “falsified.” In the light of the previous discussion, the answer would be that, from an inference perspective, a comprehensive model is not of interest. Rather, the task is to demonstrate evidence for particular inference rules that people may apply in making their judgments of UX, and some of these rules may be applied more frequently than others. For example, in the case of product purchase, price may moderate the influence of beauty on other variables. If price and beauty do not match (for instance, beauty is very high, but price is very low) then people may conclude, through compensatory inference, that overall quality and pragmatic quality must be low and perhaps even that hedonic quality in interaction
with the product must be low. The more rules we know, the better we are able to understand the faceted nature of judging interactive products.
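The price/beauty example can be written as a compensatory-inference rule in miniature. The function and its thresholds (0.8, 0.2) are arbitrary assumptions for illustration only.

```python
# Illustrative compensatory-inference rule for the price/beauty
# mismatch discussed above (all thresholds are assumed).
def compensatory_inference(beauty: float, price: float) -> str:
    """If beauty is very high but price is very low, infer low quality."""
    if beauty > 0.8 and price < 0.2:
        return "infer low overall and pragmatic quality"
    return "no compensatory inference triggered"

print(compensatory_inference(beauty=0.9, price=0.1))
```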

Consistent with the inference rules of evaluative consistency for pragmatic quality (and hedonic quality) and of probabilistic consistency for hedonic quality, the results of the current study indicate that beauty and goodness are important predictors of pragmatic quality and hedonic quality, both after presentation of a product and after subsequent hands-on experience. Therefore, from the perspective of design, characteristics that contribute positively to people’s evaluations of beauty and overall quality before and after actual use of a product would contribute to their perceptions of pragmatic and hedonic quality, in particular when hands-on experience is limited or absent.

From an inference perspective, future work would further test the effect of the putatively generally applicable judgment parameters on people’s judgments in human-computer interaction, while taking into account the different sources of information identified here. For example, given that the current study was mainly concerned with the inference of judgments of product attributes, in particular pragmatic quality in contrast to hedonic quality, we manipulated task complexity to establish evidence for the application of a rule that links task complexity to pragmatic quality. However, other experimental manipulations could further elucidate the operation of inference rules in judgment. For instance, the evaluation of beauty would be a mediator of the effect of user-interface design decisions (e.g., fonts, colors, grids) on goodness and hedonic quality, and goodness would be a further mediator of the mediated effect of aesthetics; but, perhaps as perceptions of beauty diminish over time, mediation may no longer occur. As another example, the inherent design characteristics of a product that contribute to its “objective” usability would also contribute to judgments of pragmatic quality, in addition to inferences from product evaluations and task complexity. Furthermore, Bloch et al.’s [2003] research on consumer products suggests that some people may be more influenced in their judgments by beauty than others. If this were the case for interactive computer-based products, it could have implications for judgments from inference: people would then hold different rules in terms of the predictive validity of beauty. Obviously, experimental research can control the situations in which people gain experience, through the presentation of and interaction with products, and the scenarios in which they imagine product experience through descriptions.

According to Kruglanski et al. [2007], people only apply inference rules (not necessarily consciously) that they deem valid (again, not necessarily consciously) to make judgments. While not explicitly taking an inference perspective, Hassenzahl [2010] found evidence to support the idea that attribution (i.e., that the product is held responsible for the product experience) is necessary for need fulfillment to have an effect on perceptions of hedonic quality. The more general question then becomes to what extent people need to attribute their experience during their encounter with a particular product (presentation, interaction, description, memory) to the product for inference rules to operate. This is particularly important because, when people report to what extent (or whether) they attribute their experience to the product, the aspects of UX they consider may differ from, or may not even include, the aspects that correspond to the UX variables that the research investigates.

Given that with repeated experience of interaction with a product people have increasingly more directly relevant information to make judgments about product attributes, a further research question is whether general-to-specific inference rules still operate on a longer time-scale (months or years). Within Kruglanski et al.’s [2007] framework, this will depend on the extent to which a particular inference rule has been activated in the past. Here the consistent previous successful application of
the same rule (across products of the same type) would be important for its application in the future. For example, if probabilistic consistency for hedonic quality (“What is beautiful is pleasurable”) has been applied repeatedly (over a long period of time) and every time a beautiful product (condition met) was found to be pleasurable (outcome achieved), then it would be more likely that this rule will be applied the next time one encounters a new product, but less likely if the outcome was not consistently achieved when the condition was met.
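The reinforcement idea above can be sketched as a simple tally of how often the rule’s outcome followed its condition. The success-rate update is a hypothetical scheme; the framework specifies no particular formula.

```python
# Sketch: tracking a rule's past success, as in the
# "What is beautiful is pleasurable" example above.
class RuleValidity:
    def __init__(self):
        self.condition_met = 0  # times a product was beautiful
        self.confirmed = 0      # times it was also pleasurable

    def observe(self, outcome_achieved: bool) -> None:
        """Record one encounter in which the condition was met."""
        self.condition_met += 1
        if outcome_achieved:
            self.confirmed += 1

    def validity(self) -> float:
        """Estimated validity: fraction of confirmations so far."""
        if self.condition_met == 0:
            return 0.0
        return self.confirmed / self.condition_met

history = RuleValidity()
for outcome in [True, True, True, False]:
    history.observe(outcome)
print(history.validity())  # 0.75
```

A high estimated validity would make future application of the rule more likely; inconsistent outcomes would lower it.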

6. CONCLUSION

In conclusion, we found consistent evidence for the operation of a basic set of inference rules that accounts for people’s perceptions of product attributes across different artifacts and usage modes. Our approach and findings fit into a more general theoretical framework of judgment as inference that leads to interesting propositions for future research. We look forward to the wider application of this approach, in particular experimental work that investigates the effects of person characteristics and of manipulating artifact and task characteristics on model parameters.

APPENDIX A

Pragmatic Quality – I judge the Web pages to be
PQ1 Confusing – Structured
PQ2 Impractical – Practical
PQ3 Unpredictable – Predictable
PQ4 Complicated – Simple

Hedonic Quality – I judge the Web pages to be
HQ1 Dull – Captivating
HQ2 Tacky – Stylish
HQ3 Cheap – Premium
HQ4 Unimaginative – Creative

Beauty – I judge the Web pages overall to be
Beauty1 Ugly – Beautiful

Goodness – I judge the Web pages overall to be
Goodness1 Bad – Good
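Scoring such semantic-differential scales is typically done by averaging a scale’s items. The sketch below assumes (though the appendix does not state it) 7-point items scored 1 for the left anchor (e.g., Confusing) to 7 for the right anchor (e.g., Structured); the ratings shown are invented.

```python
# Sketch of scoring the semantic-differential scales above.
def scale_score(item_ratings):
    """Scale score = mean of the scale's item ratings."""
    return sum(item_ratings) / len(item_ratings)

# One respondent's hypothetical ratings on PQ1..PQ4:
pragmatic_quality = scale_score([6, 5, 6, 7])
print(pragmatic_quality)  # 6.0
```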

ELECTRONIC APPENDIX

The electronic appendix for this article can be accessed in the ACM Digital Library.

REFERENCES
APTER, M. J. 1989. Reversal Theory: Motivation, Emotion and Personality. Taylor & Francis/Routledge, Florence, KY.
BARRON, G. AND LEIDNER, B. 2010. The role of experience in the Gambler’s Fallacy. J. Behav. Decis. Making 23, 117–129.
BLOCH, P. H., BRUNEL, F. F., AND ARNOLD, T. J. 2003. Individual differences in the centrality of visual product aesthetics: Concept and measurement. J. Consum. Resear. 29, 551–565.
CHATER, N. AND BROWN, G. D. A. 2008. From universal laws of cognition to specific cognitive models. Cogn. Sci. Multidisc. J. 32, 36–67.
CHATER, N., OAKSFORD, M., NAKISA, R., AND REDINGTON, M. 2003. Fast, frugal, and rational: How rational norms explain behavior. Organiz. Behav. Hum. Decis. Proc. 90, 63–86.
CHERNEV, A. AND CARPENTER, G. S. 2001. The role of market efficiency intuitions in consumer choice: A case of compensatory inferences. J. Market. Resear. 38, 349–361.
CHIN, W. W. 2010. How to write up and report PLS analyses. In Handbook of Partial Least Squares: Concepts, Methods and Applications in Marketing and Related Fields, V. E. Vinzi, W. W. Chin, J. Henseler, and H. Wang Eds., Springer, Berlin, 655–690.
CYR, D., HEAD, M., AND IVANOV, A. 2006. Design aesthetics leading to m-loyalty in mobile commerce. Inf. Manage. 43, 950–963.
DION, K., BERSCHEID, E., AND WALSTER, E. 1972. What is beautiful is good. J. Person. Soc. Psych. 24, 285–290.
FINNERAN, C. M. AND ZHANG, P. 2003. A person-artefact-task (PAT) model of flow antecedents in computer-mediated environments. Int. J. Hum.-Comput. Stud. 59, 475–496.
FORD, G. T. AND SMITH, R. A. 1987. Inferential beliefs in consumer evaluations: An assessment of alternative processing strategies. J. Consum. Resear. 14, 363–371.
GIGERENZER, G. AND GAISSMAIER, W. 2011. Heuristic decision making. Ann. Rev. Psych. 62, 451–482.
GWIZDKA, J. AND SPENCE, I. 2006. What can searching behavior tell us about the difficulty of information tasks? A study of web navigation. In Proceedings of the 69th Annual Meeting of the American Society for Information Science and Technology.
HARTMANN, J., SUTCLIFFE, A., AND DE ANGELI, A. 2008. Towards a theory of user judgment of aesthetics and user interface quality. ACM Trans. Comput.-Hum. Interact. 15, 15.
HASSENZAHL, M. 2001. The effect of perceived hedonic quality on product appealingness. Int. J. Hum.-Comput. Interact. 13, 481–499.
HASSENZAHL, M. 2003. The thing and I: Understanding the relationship between user and product. In Funology: From Usability to Enjoyment, M. Blythe, C. Overbeeke, A. Monk, and P. Wright Eds., Kluwer, Dordrecht, 31–42.
HASSENZAHL, M. 2008. Aesthetics in interactive products: Correlates and consequences of beauty. In Product Experience, R. Schifferstein and P. Hekkert Eds., Elsevier, 287–302.
HASSENZAHL, M. 2010. Experience Design: Technology for All the Right Reasons. Morgan and Claypool, San Rafael, CA.
HASSENZAHL, M. AND MONK, A. F. 2007. Was uns Schönheit signalisiert. Zum Zusammenhang von Schönheit, wahrgenommener Gebrauchstauglichkeit und hedonischen Qualitäten [What beauty signals to us: About the association between beauty, perceived usability and hedonic quality]. In Prospektive Gestaltung von Mensch-Technik-Interaktion. 7. Berliner Werkstatt Mensch-Maschine-Systeme, M. Rötting, G. Wozny, A. Klostermann, and J. Hus Eds., VDI, Düsseldorf, 227–232.
HASSENZAHL, M. AND MONK, A. 2010. The inference of perceived usability from beauty. Hum.-Comput. Interact. 25, 235–260.
HASSENZAHL, M. AND ULLRICH, D. 2007. To do or not to do: Differences in user experience and retrospective judgments depending on the presence or absence of instrumental goals. Interact. Comput. 19, 429–437.
HASSENZAHL, M., PLATZ, A., BURMESTER, M., AND LEHNER, K. 2000. Hedonic and ergonomic quality aspects determine a software’s appeal. In Proceedings of the Conference on Human Factors in Computing Systems. ACM, New York, NY, 201–208.
HASSENZAHL, M., DIEFENBACH, S., AND GÖRITZ, A. 2010. Needs, affect, and interactive products—Facets of user experience. Interact. Comput. 22, 353–362.
HENSELER, J., RINGLE, C., AND SINKOVICS, R. 2009. The use of partial least squares modeling in international marketing. In New Challenges in International Marketing (Advances in International Marketing), T. Cavusgil, R. Sinkovics, and P. Ghauri Eds., Emerald, London, 277–319.
HERTWIG, R., BARRON, G., WEBER, E. U., AND EREV, I. 2004. Decisions from experience and the effect of rare events in risky choice. Psych. Sci. 15, 534–539.
HULLAND, J., RYAN, M., AND RAYNER, R. 2010. Modeling customer satisfaction: A comparative evaluation of covariance structure analysis versus partial least squares. In Handbook of Partial Least Squares: Concepts, Methods and Applications in Marketing and Related Fields, V. E. Vinzi, W. W. Chin, J. Henseler, and H. Wang Eds., Springer, 307–325.
JACOBSEN, T. 2004. Individual and group modelling of aesthetic judgment strategies. Brit. J. Psych. 95, 41–56.
KARDES, F. R., CRONLEY, M. L., KELLARIS, J. J., AND POSAVAC, S. S. 2004a. The role of selective information processing in price-quality inference. J. Consum. Resear. 31, 368–374.
KARDES, F. R., POSAVAC, S. S., AND CRONLEY, M. L. 2004b. Consumer inference: A review of processes, bases, and judgment contexts. J. Consum. Psych. 14, 230–256.
KEENEY, R. L. AND RAIFFA, H. 1976. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Cambridge University Press, New York, NY.
KRUGLANSKI, A. W. AND GIGERENZER, G. 2011. Intuitive and deliberate judgments are based on common principles. Psych. Rev. 118, 97–109.
KRUGLANSKI, A. W., PIERRO, A., MANNETTI, L., ERB, H., AND CHUN, W. Y. 2007. On the parameters of human judgment. In Advances in Experimental Social Psychology, Vol. 39, M. P. Zanna Ed., Elsevier Academic Press, San Diego, CA, 255–303.
LAVIE, T. AND TRACTINSKY, N. 2004. Assessing dimensions of perceived visual aesthetics of web sites. Int. J. Hum.-Comput. Stud. 60, 269–298.
LINGLE, J. H. AND OSTROM, T. M. 1979. Retrieval selectivity in memory-based impression judgments. J. Person. Soc. Psych. 37, 180–194.
LOKEN, B. 2006. Consumer psychology: Categorization, inferences, affect, and persuasion. In Annual Review of Psychology, Vol. 57, D. L. Schacter Ed., Annual Reviews, Palo Alto, CA, 453–485.
MONK, A. F. 2004. The product as a fixed-effect fallacy. Hum.-Comput. Interact. 19, 371–375.
OOSTENDORP, H. V., MADRID, R. I., AND PUERTA MELGUIZO, M. C. 2009. The effect of menu type and task complexity on information retrieval performance. The Ergon. Open J. 2, 64–71.
RAJAGOPAL, P. AND MONTGOMERY, N. V. 2011. I imagine, I experience, I like. J. Consum. Resear. 38, 578–594.
RINGLE, C., WENDE, S., AND WILL, A. 2005. SmartPLS 2.0. Institute of Operations Management and Organizations, University of Hamburg, Germany.
THORNDIKE, E. L. 1920. A constant error in psychological ratings. J. Appl. Psych. 4, 25–29.
TRACTINSKY, N., COKHAVI, A., KIRSCHENBAUM, M., AND SHARFI, T. 2006. Evaluating the consistency of immediate aesthetic perceptions of web pages. Int. J. Hum.-Comput. Stud. 64, 1071–1083.
VAN SCHAIK, P. AND LING, J. 2009. The role of context in perceptions of the aesthetics of web pages over time. Int. J. Hum.-Comput. Stud. 67, 79–89.
VAN SCHAIK, P. AND LING, J. 2011. An integrated model of interaction experience for information retrieval in a Web-based encyclopaedia. Interact. Comput. 23, 18–32.
VAN SCHAIK, P. AND LING, J. (To appear). An experimental analysis of experiential and cognitive variables in web navigation. Hum.-Comput. Interact. 26.
VENKATESH, V., MORRIS, M. G., DAVIS, G. B., AND DAVIS, F. D. 2003. User acceptance of information technology: Toward a unified view. MIS Q. 27, 425–478.
VILARES, M., ALMEIDA, M., AND COELHO, P. 2010. Comparison of likelihood and PLS estimators for structural equation modeling: A simulation with customer satisfaction data. In Handbook of Partial Least Squares: Concepts, Methods and Applications in Marketing and Related Fields, V. E. Vinzi, W. W. Chin, J. Henseler, and H. Wang Eds., Springer, 289–305.
VINZI, V. E., CHIN, W. W., HENSELER, J., AND WANG, H. Eds. 2010. Handbook of Partial Least Squares: Concepts, Methods and Applications in Marketing and Related Fields. Springer.
WOOTEN, D. B. AND REED, A. 1998. Informational influence and the ambiguity of product experience: Order effects on the weighting of evidence. J. Consum. Psych. 7, 79–99.
YEUNG, C. W. M. AND WYER, J. 2004. Affect, appraisal, and consumer judgment. J. Consum. Resear. 31, 412–424.

Received April 2011; revised October 2011, January 2012; accepted January 2012