

Acta Psychologica 139 (2012) 532–542


Decisions beyond boundaries: When more information is processed faster than less

Andreas Glöckner a,⁎, Tilmann Betsch b

a Max Planck Institute for Research on Collective Goods, Bonn, Germany
b University of Erfurt, Germany

⁎ Corresponding author at: Max Planck Institute for Research on Collective Goods, Kurt-Schumacher-Str. 10, D-53113 Bonn, Germany. Tel.: +49 2 28 9 14 16 857; fax: +49 2 28 9 14 16 858.

E-mail address: [email protected] (A. Glöckner).

0001-6918/$ – see front matter © 2012 Elsevier B.V. All rights reserved.
doi:10.1016/j.actpsy.2012.01.009

Article info

Article history:
Received 19 September 2011
Received in revised form 20 January 2012
Accepted 24 January 2012
Available online 28 February 2012

PsycINFO codes:
2340 Cognitive Processes
4160 Neural Networks

Keywords:
Bounded rationality
Parallel constraint satisfaction
Holistic processing
Adaptive decision making
Heuristics
Response time

Abstract

Bounded rationality models usually converge in claiming that decision time and the amount of computational steps needed to come to a decision are positively correlated. The empirical evidence for this claim is, however, equivocal. We conducted a study that tests this claim by adding and omitting information. We demonstrate that even an increase in information amount can yield a decrease in decision time if the added information increases coherence in the information set. Rather than being influenced by amount of information, decision time systematically increased with decreasing coherence. The results are discussed with reference to a parallel constraint satisfaction approach to decision making, which assumes that information integration is operated in an automatic, holistic manner.

© 2012 Elsevier B.V. All rights reserved.

1. Introduction

In western religion and philosophy, decision making is generally considered the supreme discipline of conscious thought. Free will is exclusively attributed to human beings, manifesting itself in the capability of making choices upon anticipating and weighing the consequences of the alternatives. Common language reflects this denotation by defining a decision as a "determination arrived at after consideration" (Merriam-Webster, online). This notion is maintained in the maximization principle of expected utility theory. Accordingly, a rational decision maker should identify the entire set of eligible options, calculate each option's expected utility (EU) and select the one with the highest EU (Savage, 1954; von Neumann & Morgenstern, 1944). Expected utility models do not claim that people indeed calculate weighted sums, but only that their choices can be predicted by such a model (Luce, 2000; Luce & Raiffa, 1957). Simon (1955; see also Veblen, 1898) drew attention to the decision process. He questioned the assumption that people rely on deliberate calculations of weighted sums in decisions, because the limitations of cognitive capacity and the multitude of decision options do not allow them to do so.

Essentially two alternative process model approaches have been suggested. The first approach is based on the idea of adaptive strategy selection. People might use effortful weighted sum calculations only in some situations (Beach & Mitchell, 1978; Gigerenzer, Todd, P. M., and the ABC Research Group, 1999; Payne, Bettman, & Johnson, 1988, 1993). In other situations, for instance under time pressure, they might rely on short-cut strategies, which consist of stepwise cognitive operations that are (usually) carried out deliberately (cf. Payne et al., 1988; although they also consider a possible implementation as production rules). The considered strategies are usually well specified on a process level; their cognitive costs are measured as the number of necessary calculations for applying the respective strategy, called elementary information processes (EIPs; Newell & Simon, 1972; Payne et al., 1988). Consequently, all adaptive strategy selection approaches converge in assuming that the cognitive effort and the necessary time for a decision increase with the number of processing steps (i.e., EIPs) required by the decision strategy (see also Brandstätter, Gigerenzer, & Hertwig, 2006; Bröder & Gaissmaier, 2007, for examples related to the adaptive toolbox approach).1 Hence, response times should increase with an increasing amount of information to be processed.

The second approach suggests that people utilize strategies that partially rely on automatic processes (for overviews see Evans, 2008; Gilovich, Griffin, & Kahneman, 2002; Glöckner & Witteman, 2010), thereby using the huge computational and storage power of the brain to overcome the obvious limitations of conscious cognitive capacity. Automatic information structuring processes, for instance, are activated in visual perception (McClelland & Rumelhart, 1981) and social perception (Bruner & Goodman, 1947; Read & Miller, 1998) to quickly form reasonable interpretations (i.e., Gestalten; Wertheimer, 1938), which can constitute a basis for judgments and decisions (Betsch & Glöckner, 2010; Glöckner & Betsch, 2008b). Findings indicate that people seem to rely at least partially on such automatic processes in probabilistic inference decisions (e.g., Glöckner & Betsch, 2008c; Glöckner, Betsch, & Schindler, 2010; Hilbig, Scholl, & Pohl, 2010; Horstmann, Ahlgrimm, & Glöckner, 2009; Simon, Pham, Le, & Holyoak, 2001; Simon, Snow, & Read, 2004) and risky choices (e.g., DeKay, Patino-Echeverri, & Fischbeck, 2009a, 2009b; Glöckner & Betsch, 2008a; Glöckner & Herbold, 2011; Hilbig & Glöckner, 2011).

1 Note that the approach by Payne et al. (1988) highlights the EIP perspective more strongly than both the contingency model (Beach & Mitchell, 1978) and the adaptive toolbox model (Gigerenzer et al., 1999), but the prediction concerning time that we investigate in this paper can still be derived from all of them.

Table 2
Example of a binary decision task.

Cue  Validity  Option A  Option B
w    0.55      −         +
x    0.70      +         −
y    0.80      +         −
z    0.60      −         +

Several models exist that aim to describe the underlying cognitive processes. According to Glöckner and Witteman (2010), these processes can be categorized into mainly reflex-like associative mechanisms (e.g., Betsch, Haberstroh, Molter, & Glöckner, 2004; Finucane, Alhakami, Slovic, & Johnson, 2000), more complex pattern matching mechanisms involving memory prompting (e.g., Dougherty, Gettys, & Ogden, 1999; Fiedler, 1996; Juslin & Persson, 2002; Thomas, Dougherty, Sprenger, & Harbison, 2008), automaticity based evidence-accumulation mechanisms (e.g., Busemeyer & Johnson, 2004; Busemeyer & Townsend, 1993; Diederich, 2003), and constructivist mechanisms based on holistic evaluations of the evidence (e.g., Glöckner & Betsch, 2008b; Holyoak & Simon, 1999; Monroe & Read, 2008; Read, Vanman, & Miller, 1997; Thagard & Millgram, 1995). In the current work, we mainly focus on models for constructivist mechanisms. The other mechanisms are, however, briefly discussed in the final part of this paper. As we will explain in more detail below, these constructivist-automatic processes operate in a holistic fashion and can profit from more information. In contrast to the adaptive strategy selection approaches, it can be predicted that, under certain conditions, more information can be processed more quickly than less. In the study reported in this paper, we test this claim empirically.

In the remainder of the introduction, we will first explain the EIP-based perspective underlying the adaptive strategy selection approach in more detail. Then we will discuss the parallel constraint satisfaction models that can be used to computationally implement holistic processes, and we will conclude with reviewing previous evidence concerning the relation between decision time and processing steps.

1.1. Elementary Information Processes (EIPs) perspective: more information = slower decision

According to the adaptive strategy selection approach, cognitive effort is measured to quantify the costs of thinking. In this way, decision making processes are decomposed into elementary information processes (EIP; Newell & Simon, 1972). Table 1 shows some EIPs involved in solving decisions from description (Bettman, Johnson, & Payne, 1990).

Table 1
EIPs used in decision strategies. Source: Bettman et al. (1990, p. 114).

READ        Read an alternative's value into STM [short-term memory]
COMPARE     Compare two alternatives on an attribute
DIFFERENCE  Calculate the size of the difference of two alternatives for an attribute
ADD         Add the values of an attribute in STM
PRODUCT     Weight one value by another (multiply)
ELIMINATE   Remove an alternative or attribute from consideration
MOVE        Go to next element of external environment
CHOOSE      Announce preferred alternative and stop process

For example, consider the following simple decision problem consisting of a choice between two options based on the four cues presented in Table 2. For illustration purposes, let the options be consumer products and let the cues be consumer testing institutes. The entries in the matrix are evaluations of the product on a relevant criterion dimension, say, its predicted durability, for more (+) or less (−) than 2 years. The cues differ with respect to their cue validity v, each representing the probability that a tester's prediction is correct. The decision task is to select the product with the higher durability.

Adaptive strategy selection models assume that the cognitive effort spent on a decision depends primarily on the chosen strategy. A person applying a lexicographic strategy (LEX; Fishburn, 1974; see also Gigerenzer & Goldstein, 1999), for instance, will look up cues in the order of their validity and choose the option that is better on the first differentiating cue. In the example task (Table 2), LEX would require at least 5 EIPs (i.e., 2 READ, 1 MOVE, 1 COMPARE, 1 CHOOSE) to reach a decision.

Now consider a compensatory strategy that processes and integrates all information. The weighted additive (WADD) strategy, underlying utility theory, requires the individual to read all information given, weigh (PRODUCT) the outcome values ([+] = +1; [−] = −1) with validities and sum up the products for each option (ADD). Finally, the individual must COMPARE the two aggregate values of the two options in order to choose the dominant one. Thus, application of a WADD strategy to the example task requires at least 47 EIPs (16 READ [i.e., 2×(4 cues and 4 cue validities)], 15 MOVE, 8 PRODUCT, 6 ADD, 1 COMPARE, 1 CHOOSE).
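To make the EIP bookkeeping concrete, the following minimal Python sketch (ours, not from the paper) reproduces the counts above for the example task in Table 2; the arithmetic inside the two functions is simply chosen to mirror the counts given in the text.

```python
# Minimal sketch of EIP counting for the 2-option, 4-cue task in Table 2,
# following the counts reported in the text (5 EIPs for LEX, 47 for WADD).

def eip_count_lex(n_options: int = 2) -> dict:
    """LEX/TTB when the most valid cue already discriminates: read that
    cue's value for each option, move once, compare once, choose."""
    return {"READ": n_options, "MOVE": 1, "COMPARE": 1, "CHOOSE": 1}

def eip_count_wadd(n_cues: int = 4, n_options: int = 2) -> dict:
    """WADD: read every cue value and every validity (per option), weight
    each value by its validity, sum per option, compare and choose."""
    reads = n_options * 2 * n_cues               # 2 x (4 cue values + 4 validities) = 16 READ
    return {"READ": reads,
            "MOVE": reads - 1,                   # 15 MOVEs (one fewer than the pieces read)
            "PRODUCT": n_options * n_cues,       # 8 weightings
            "ADD": n_options * (n_cues - 1),     # 6 additions
            "COMPARE": 1, "CHOOSE": 1}

lex, wadd = eip_count_lex(), eip_count_wadd()
print(sum(lex.values()), sum(wadd.values()))     # -> 5 47, matching the text
```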

Hence, LEX and WADD differ with regard to the number of EIPs; the cognitive effort involved in application should be smaller for LEX compared to WADD. Of course, there are some confinements. Research on the metric of cognitive effort has shown that: (i) different EIPs consume different amounts of cognitive effort (Lohse & Johnson, 1996); (ii) cost differences between strategies are less pronounced in binary decision tasks compared to those involving more options; and (iii) learning reduces the relative costs of a strategy (e.g., Abelson & Levi, 1985, for an overview).2 However, all other things being equal, one can surely assume that an increase in the number of EIPs used by a strategy (i.e., adding further processing steps) should never result in a decrease of cognitive effort.3 One measurable correlate of cognitive effort is decision time (cf. Lohse & Johnson, 1996). Accordingly, processing time should be equal or greater as EIPs increase. The more information we have to consider, compare and integrate, the longer it should take us to arrive at a decision.

At first glance, this claim might be considered a truism. In this paper, however, we question its general validity. Whereas the claim is most likely valid with respect to deliberative processes, it is less likely that it applies to all kinds of automatic processes. Deliberation involves slow, step-by-step consideration of information. It requires conscious control and substantially consumes cognitive resources. Automatic processes, in contrast, operate rapidly and can include a huge amount of information (e.g., Glöckner & Betsch, 2008c; Hammond, Hamm, Grassia, & Pearson, 1987; Hilbig et al., 2010; Hogarth, 2001). They can work without explicit control and require only a minimum of cognitive resources. Some of these automatic processes implement holistic mechanisms in that information is automatically structured to form Gestalten in a perception-like process (i.e., constructivist mechanisms; see above and Glöckner & Witteman, 2010), which we will henceforth refer to as holistic processing.

2 Recently, it has furthermore been argued that cognitive effort might be reduced by using internal short-cuts within compensatory strategies in that information integration for the second option might be aborted if the remaining cues cannot compensate for the advantage of the first option (Bergert & Nosofsky, 2007).

3 Note that the adaptive toolbox (cf. also the comment on production rules used by the adaptive decision maker above) could potentially contain tools that are based on automatic-intuitive processes (Gigerenzer et al., 1999). Additional tools have, however, not been clearly specified yet, except for the assumption that automatic-intuitive tools might rely on the same stepwise processes assumed for deliberate heuristics (Gigerenzer, 2007). It is important to note that under this seriality assumption the core argument of this paper equally applies, since for any kind of serial processing (including automatic processing) decision time should not decrease with increasing amount of information to be processed.

1.2. Parallel constraint satisfaction as computational implementation of holistic processing: decision time depends on coherence

Holistic processing has been modeled using parallel constraint satisfaction (PCS) networks (for overviews see Holyoak & Spellman, 1993; Read et al., 1997). PCS networks consist of nodes that represent hypotheses or elements as well as bidirectional links between these representing their relations (Thagard, 1989). Through spreading activation in the network, the best possible interpretation under parallel consideration of all constraints (i.e., links) is constructed. In this process, the activations of nodes change, which means that the hypothesis represented by a node is perceived as more or less likely (high vs. low activation).

PCS can be applied to a decision task as described in Table 2 (Glöckner & Betsch, 2008b). Cues and options form the elements (nodes) in a working network. Connections between the elements represent their relations (e.g., cues speaking for or against an option). Relevant information encoded from the environment and related information in (long-term) memory is automatically activated and fed into the working network. Information gained by active search can also be added. The working network represents a subset of knowledge from long-term memory and the environment. It is possible but not necessary that parts of the working network enter conscious awareness.

PCS operates on a subconscious level and is assumed to capitalize on the high computational capacity of automatic processing. According to PCS, decision time should mainly depend on initial coherence in the network (Glöckner & Betsch, 2008b). Coherence is high if all pieces of information in the network fit together well. Coherence is low if there is conflict or inconsistency between elements in the network (cf. Festinger, 1957; Heider, 1958). Consider a network containing two options (cf. Table 2). The more strongly and frequently one option is positively linked to cues compared to the competitor, the clearer the evidence is in favor of this option and the less inconsistency has to be resolved. In such cases coherence is high from the beginning. In contrast, a high degree of conflict in the network (i.e., if cues are equally strong for both options) makes it more difficult to find a coherent solution and, therefore, leads to an increase in decision time. PCS mechanisms are implemented as iterative updating of nodes. Decision time is predicted by the number of iterations until node activations reach an asymptotic level (Freeman & Ambady, 2011; Glöckner, 2009; Glöckner & Betsch, 2008b). Furthermore, according to PCS the option with the higher activation after settling is chosen, and the difference in activation between option nodes predicts confidence.

In summary, the PCS approach predicts that decision time will be a function of coherence in the network. In contrast to the EIP perspective, decision time should be rather independent of the amount of encoded information. Specifically, we predict that (i) decision time will increase if information is removed so that coherence decreases; and (ii) decision time will decrease if information is removed so that coherence increases.

1.3. Previous evidence on the relation between decision time and processing steps

Several results support the hypothesis derived from the EIP perspective. Payne et al. (1988) could show that individuals under time pressure tend to switch to less effortful strategies in situations in which information has to be actively searched using the mouse. Bröder and Gaissmaier (2007) showed that response time increases with the number of computational steps necessary to implement a lexicographic strategy for memory based probabilistic inference tasks. Similarly, Bergert and Nosofsky (2007) found response times which were in line with lexicographic strategies. Finally, in the domain of risky choices, Brandstätter et al. (2006) report that decision times increase with the steps necessary to differentiate between gambles using a (semi-)lexicographic strategy (i.e., the priority heuristic).

However, there is also evidence showing that this assumed positive relation does not always hold. For probabilistic inference tasks in which information is openly displayed, Glöckner and Betsch (2008c) found a decrease of decision times when comparing tasks for which a lexicographic strategy predicted the opposite. Glöckner and Hodges (2011) qualified the findings by Bröder and Gaissmaier (2007) on memory based decisions by showing that decision times for a substantial portion of participants can be better explained by PCS than by serial heuristics. Ayal and Hochman (2009) attempted to replicate the decision time findings for risky choices by Brandstätter et al. (2006) and found a significant effect in the opposite direction. Similarly, further investigations of risky choices provided more support for the decision time predictions of PCS than for the predictions of the suggested semi-lexicographic strategy (Glöckner & Betsch, 2008a; Glöckner & Herbold, 2011; Glöckner & Pachur, 2012; see also Hilbig, 2008). Investigations of decision times in probabilistic inferences involving recognition information have also shown data more in line with PCS than with strategies assuming stepwise processing, such as the recognition heuristic (Glöckner & Bröder, 2011; Hilbig & Pohl, 2009; Hochman, Ayal, & Glöckner, 2010).

Hence, overall, the evidence is equivocal and calls for further investigation. A closer look at papers challenging the EIP perspective reveals one potential weakness in their argument. Specifically, it could be argued that persons might have used another strategy for which predictions were not considered in the analysis (cf. Bröder & Schiffer, 2003a). Since the number of heuristics is huge and still growing, it is often hard or even impossible to include all of them in a single model comparison test. We use an improved design to rule out this argument. The basic idea is to manipulate tasks so that all established EIP-based strategies predict a reduction in decision time or equal decision time, whereas PCS predicts an increase in half of them and a decrease in the other half. Note, however, that we of course cannot rule out that an EIP-based strategy might be developed in the future that can account for our findings. As we will discuss in more detail in Section 4.2, our investigation by necessity has to be limited to the specified parts of adaptive-decision-making approaches.

In the current study we rely on the standard paradigm for investigating probabilistic inference tasks, in which persons make decisions based on probabilistic cues. The new idea is to reduce the complexity of tasks by selectively dropping less valid information. For all strategies considering information from all cues (e.g., WADD) this should lead to a reduction in decision time because less information has to be processed. For all lexicographic or elimination strategies (e.g., take-the-best; elimination by aspects; minimalist), dropping cues with low validity should have no influence on decision times if all cues make differentiating predictions.4 The same should be the case for guessing strategies. Hence, for all strategies that we are aware of, dropping less valid cues should lead to a reduction of decision time (or should have no influence). As will be discussed below in more detail, dropping can be done so that coherence is increased or decreased, which allows realizing the intended contrasting predictions of PCS. A set of prototypical strategies, which were also used in the model comparison reported later, is described in Appendix A.

4 As can be seen below, the requirement of all differentiating cues held only in three out of four cue patterns. All results, however, also hold when considering these three cue patterns only.

One important factor influencing decision time is constraints in information acquisition (Glöckner & Betsch, 2008c). If information acquisition is very time-consuming (e.g., each piece of information has to be looked up for 1 min), the prediction of increasing decision time with increasing amount of information would be trivial. We are, however, interested in the time needed for information integration and therefore use an open matrix paradigm to minimize constraints to information search.

2. Method

In repeated decision trials, participants were instructed to select the better of two products (options). They were given information from four testers (cues) with different predictive validity (cue validity), which provided dichotomous quality ratings (good vs. bad) for each product. Following the procedure used in previous studies (e.g., Glöckner & Betsch, 2008c; Exp. 3), information was presented in an "open" matrix (no covered information). The order of cues and options was randomized to avoid effects of pattern learning and recognition.

The amount of information was manipulated by omitting information from one of the less valid testers. We either removed information that supported the dominating alternative (decrease in coherence) or information that conflicted with the dominating alternative (increase in coherence). Information on the most valid cue was always available and always discriminated between options. Therefore, application of simple, lexicographic strategies should be unaffected by our manipulation.
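As an illustration only (the data structure and function names are ours, not from the paper), the following Python sketch shows the manipulation for cue pattern 1 of Table 3: a task maps cue validities to the ratings for options A and B, and a reduced version omits one of the less valid cues, depending on whether it supports or conflicts with the dominating option A.

```python
# Cue pattern 1 of Table 3: validity -> (rating for option A, rating for option B)
complete_task = {0.80: ("+", "-"), 0.70: ("+", "-"),
                 0.60: ("+", "-"), 0.55: ("-", "+")}

def remove_cue(task, validity_to_drop):
    """Return the task with one cue's information omitted.
    The most valid cue is never removed in the experiment."""
    assert validity_to_drop != max(task)
    return {v: ratings for v, ratings in task.items() if v != validity_to_drop}

# Dropping a cue that supports option A decreases coherence;
# dropping a cue that conflicts with option A increases coherence (cf. Table 3).
decreased_coherence = remove_cue(complete_task, 0.60)
increased_coherence = remove_cue(complete_task, 0.55)
print(decreased_coherence, increased_coherence, sep="\n")
```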

Besides our main dependent variables, response time and choice, we also assessed confidence ratings after choice, which provide a useful additional measure to investigate individuals' decision strategies (Glöckner, 2009, 2010; see also Jekel, Fiedler, & Glöckner, 2011; Jekel, Nicklisch, & Glöckner, 2010).

2.1. Participants and design

There were 112 participants from the MPI Decision Lab subject pool.5 They were mainly students from the University of Bonn (mean age: 22.9 years; 60 female). The experiment lasted approximately 20 min and was part of a 1 hour experimental battery. Participants were compensated with 12 Euro (approx. USD 16.80 at that time). Decision tasks varied as a within-participants factor, resulting in a 4 (CUE PATTERN) × 3 (VERSION: Complete, Decreased Coherence, Increased Coherence) design. The factor CUE PATTERN represents four different basic decision tasks. Participants worked on these tasks in a regular (i.e., complete) version and in two variants in which information of one cue was removed, respectively (Table 3). Our central manipulation was contained in the factor VERSION. For all cue patterns, information was removed that increased vs. decreased coherence to test our hypotheses. The cue validities v (here: probabilities of correct predictions) were 0.80, 0.70, 0.60, and 0.55. A and B represent the eligible options. Cue values are represented by the symbols "+" (good) and "−" (bad). Each of the twelve decision tasks was presented five times. For each of these decision tasks, PCS predictions for decision time and confidence were calculated using standard parameters as described in Appendix B. Thirty additional tasks were used as distracters, resulting in a total of 90 decisions.

Table 3
Decision tasks used in the experiment.

                          Cue pattern 1   Cue pattern 2   Cue pattern 3   Cue pattern 4
v                         A    B          A    B          A    B          A    B

Complete decision tasks
0.80                      +    −          +    −          +    −          +    −
0.70                      +    −          +    −          −    +          −    −
0.60                      +    −          −    +          +    −          +    −
0.55                      −    +          −    +          −    +          −    +

Decreased coherence
0.80                      +    −          +    −          +    −          +    −
0.70                      +    −                          −    +          −    −
0.60                                      −    +
0.55                      −    +          −    +          −    +          −    +

Increased coherence
0.80                      +    −          +    −          +    −          +    −
0.70                      +    −          +    −                          −    −
0.60                      +    −                          +    −          +    −
0.55                                      −    +          −    +

Note. v indicates the validity of the respective cue (0.5 = chance; 1 = perfect prediction). Empty cells indicate cue values that were removed in the respective version.

2.2. Materials and procedure

A computer program written in Visual Basic 6.0 was used to run the experiment. Participants were instructed to repeatedly select the better of two options. They were informed about the testers' cue validities. To facilitate participants' understanding of the provided cue validity information, they were informed that a validity of 0.50 represents chance and a validity of 1 represents a cue with perfect predictions. Moreover, participants were asked to make good decisions and to be as fast as possible in deciding (Fazio, 1990). All pieces of cue value information (6 to 8) were presented simultaneously in an information matrix with cues displayed in rows and options in columns (see format used in Table 2). The presentation order of cues and options in the matrix was randomized. Participants chose one option by mouse click. Choices and decision times were recorded.6 Afterwards, they rated their confidence in choice on a scale from very uncertain (−100) to very certain (100) using a horizontal scroll bar. Participants clicked a button centered on an empty screen to start the next trial.

5 Participants signed up online using the subject-pool management software ORSEE (Greiner, 2004).

6 Using the computer mouse (instead of hitting keys) might increase error variance in response time, which could work against our hypothesis. Nevertheless, we decided to use the mouse because otherwise no fine-grained confidence measurement would have been possible.

A warm-up decision trial was used to familiarize participants with the material and the procedure. It was followed by 90 trials including targets and distracters presented in randomized order. A 1-minute break was embedded after half of the tasks to minimize the effects of decreasing concentration.

3. Results

3.1. Choices

All participants were able to complete the tasks. No missing values were encountered. The observed proportions of choices for option A are summarized in Table 4. Participants' choices were highly consistent. In all twelve tasks the majority of choices were in favor of option A. In 97% of the repeated choices, participants chose the same option in all five repetitions of the respective variant of a cue pattern, indicating a high choice reliability.

Table 4
Choices for option A in percent.

Version               Cue pattern 1   Cue pattern 2   Cue pattern 3   Cue pattern 4
Complete              1               0.98            0.98            0.99
Decreased coherence   0.99            0.82a           0.65a           0.99
Increased coherence   0.99            0.99            1               0.99
Total                 0.99            0.93            0.87            0.99

Note. N = 112 for each combination of cue pattern and version.
a Significantly different from the version Complete at p < 0.001.

3.2. Decision times

Participants followed the instruction to make quick decisions and showed a mean decision time of M = 4128 ms (SD = 2922 ms; Skew = 3.53; Kurt = 23.5; MD = 3295 ms) over all 90 trials. To reduce skewness and the influence of outliers, all decision time analyses were computed for ln-transformed data. We predict that decision time will be a function of coherence instead of merely depending on the amount of information. Specifically, we hypothesize that, in comparison to the Complete condition with 8 cue values, (i) decision time will increase if information is removed so that coherence decreases (Decreased Coherence), and (ii) decision time will decrease if information is removed so that coherence increases (Increased Coherence). Descriptively, decision times are mainly in line with both hypotheses in that decision time was high for the Decreased Coherence tasks, intermediate for the Complete tasks, and low for the Increased Coherence tasks (Fig. 1).

To test the hypotheses statistically, we regressed decision time on our factor VERSION, controlling for differences in cue patterns (both dummy coded) and the order of trial presentation (Table 5, column 1). The dummy variables Decreased and Increased Coherence directly test our hypotheses in that they express the difference of each version from the complete cue patterns (i.e., control). Both variables turned out to be significant, supporting our hypotheses. Decision time significantly increased when information was removed that decreased coherence. Decision times significantly decreased, however, if the reduction of information resulted in an increase in coherence. Estimated means (standard errors of predictions in parentheses) are M_Comp = 8.06 (0.024), M_DecCo = 8.12 (0.025), and M_IncCo = 7.92 (0.026).7 Decision times also differed significantly between cue patterns and decreased over trials, indicating learning effects.
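As an illustration of this analysis, the following sketch fits the Table 5, column 1 regression in Python with statsmodels (the original analysis was apparently run in Stata, cf. the clustering references in the note to Table 5); the data frame and its column names are hypothetical, and random placeholder data are generated only so that the snippet runs.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 6720  # 112 participants x 60 target trials, as in Table 5
df = pd.DataFrame({          # placeholder data; the real data are trial-level observations
    "time_ms": rng.lognormal(8.1, 0.5, n),
    "version": rng.choice(["complete", "decCo", "incCo"], n),
    "cue_pattern": rng.choice([1, 2, 3, 4], n),
    "order": rng.integers(1, 91, n),
    "participant": rng.integers(1, 113, n),
})
df["ln_time"] = np.log(df["time_ms"])

# ln(decision time) on version dummies (reference: complete), cue pattern dummies
# (reference: pattern 1) and trial order, with standard errors clustered by participant.
model = smf.ols(
    "ln_time ~ C(version, Treatment(reference='complete'))"
    " + C(cue_pattern, Treatment(reference=1)) + order",
    data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["participant"]})
print(result.summary())
```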

To further explore the effects of our coherence manipulation on decision time, we ran regressions separately for each cue pattern. Both effects were also found for individual cue patterns (all p < 0.01) with the exception of cue pattern 4. In cue pattern 4, decision time in the decreased coherence tasks did not differ from decision time in the complete tasks (b = −0.007, t = −0.31, p = 0.76). Note, however, that this null effect does not contradict our general findings and might be due to random fluctuation.

3.3. Confidence

Participants were rather confident in their choices. Ratings show the inverse pattern compared to that observed for decision times (Fig. 2). We regressed confidence on the same variables used in the decision time regression (Table 5, column 2). All coefficients, except for order, were significant in the opposite direction to that observed for decision times. Confidence was decreased for the decreased coherence tasks, in which information consistent with the favored option was removed, and increased when contrary information was removed (i.e., increased coherence tasks).

The above analyses support the qualitative predictions of PCS. We furthermore investigated whether the data were also in line with quantitative predictions of the model. Specifically, we investigated the fit between PCS predictions for decision times and confidence and the observed data. Predictions for decision time are derived from the number of iterations needed to stabilize, and confidence predictions are calculated as the advantage in activation of the chosen option over the non-chosen option. Fig. 3 shows that PCS predicts time and confidence aggregated for the 12 decision tasks very well (both p < 0.001). The same holds in regression analyses that take into account individual-level data. We regressed ln-decision time on PCS time predictions and order and found that PCS time predictions had a significant effect, b = 0.0066, t = 17.16, p < 0.0001 (R² = 0.15). In an equivalent regression with confidence as criterion we found that PCS confidence predictions had a significant effect as well, b = 478.09, t = 19.19, p < 0.0001 (R² = 0.25). Note that for both dependent variables the explained variance for the model with PCS predictions was essentially the same as for the full models including cue pattern dummies and dummies for our reduction manipulation (cf. Table 5). Hence, PCS can account for the systematic variance in the data.

Fig. 3. Fit between PCS predictions for decision times and confidence with data collapsed for 4 cue patterns with 3 versions each (12 observations).

7 Lexicographic models (searching cues in order of validity) predict equal decision time for all tasks since the most valid cue always differentiates between options. All tallying models predict reduced decision time for the two reduced versions compared to the complete versions. Our findings allow rejecting both hypotheses.

In a final step we jointly analyzed choices, time, and confidence using Multiple Measure Maximum Likelihood estimation (Glöckner, 2009, 2010; Jekel et al., 2010), which allows investigating individual differences in decision strategies. The analysis shows that implementations of PCS account best for the overall behavior of the large majority of participants (i.e., 74%; for details see Appendix C). This provides further evidence for the importance of PCS mechanisms in probabilistic inference tasks.
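For orientation only, the following heavily simplified Python sketch shows the general logic of such a Multiple Measure Maximum Likelihood classification as we understand it from Glöckner (2009) and Jekel et al. (2010): for each participant and strategy, a binomial likelihood of the choices (with an estimated application-error rate) is combined with Gaussian likelihoods of ln-times and confidences around the strategy's task contrasts, and the participant is assigned to the strategy with the best (e.g., BIC-corrected) joint likelihood. All names and the example data are ours, and several details of the published method are omitted.

```python
import numpy as np
from scipy import stats

def choice_loglik(n_consistent, n_total):
    """Binomial log-likelihood of strategy-consistent choices with the
    ML estimate of a constant application-error rate (capped at 0.5)."""
    eps = min(0.5, 1 - n_consistent / n_total)
    return stats.binom.logpmf(n_consistent, n_total, 1 - eps)

def contrast_loglik(values, contrast):
    """Gaussian log-likelihood of ln-times (or confidences) around a linear
    transformation of the strategy's task contrasts (intercept, slope and
    residual SD estimated from the data)."""
    X = np.column_stack([np.ones_like(contrast), contrast])
    beta, *_ = np.linalg.lstsq(X, values, rcond=None)
    resid = values - X @ beta
    sigma = resid.std(ddof=0) or 1e-6
    return stats.norm.logpdf(resid, scale=sigma).sum()

def joint_loglik(participant, strategy):
    """Joint log-likelihood of one participant's choices, decision times and
    confidences under one strategy (independence of the measures assumed)."""
    return (choice_loglik(participant["n_consistent"][strategy["name"]],
                          participant["n_total"])
            + contrast_loglik(participant["ln_times"], strategy["time_contrast"])
            + contrast_loglik(participant["confidences"], strategy["conf_contrast"]))

# Tiny fake example: 60 target trials (12 tasks x 5 repetitions) for one participant.
rng = np.random.default_rng(0)
contrast = np.repeat(np.arange(12) - 5.5, 5)          # hypothetical time contrast per task
participant = {"n_total": 60, "n_consistent": {"PCS": 57},
               "ln_times": 8.0 + 0.02 * contrast + rng.normal(0, 0.3, 60),
               "confidences": 80 - 2 * contrast + rng.normal(0, 10, 60)}
strategy = {"name": "PCS", "time_contrast": contrast, "conf_contrast": -contrast}
print(round(joint_loglik(participant, strategy), 1))
# Each participant is classified as a user of the strategy with the highest
# joint log-likelihood (after correcting for the number of free parameters).
```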

4. Discussion

One cornerstone assumption of the bounded rationality approach states that cognitive capacity is constrained. Building on this assumption, adaptive-decision-making models converge in assuming that humans use a variety of simple decision strategies that, in certain situations, allow them to reduce cognitive costs (Beach & Mitchell, 1978; Gigerenzer et al., 1999; Payne et al., 1988). According to these models, cognitive costs and decision time are predicted to increase, ceteris paribus, the more elementary information processes (EIPs) are necessary to make a decision. Consequently, less information should be processed faster than more. We argued that – at least in environments allowing for quick information acquisition – this notion is valid for serial processing but not for the holistic processes that are involved in decision making as well. Based on a Parallel Constraint Satisfaction (PCS) approach to decision making, we predicted that the reduction of information in a decision task can yield either an increase or a decrease in decision time, depending on whether it decreases or increases coherence in the entire set of information. These PCS predictions were strongly corroborated in the current study. Decision time was not generally reduced by removing information from decision tasks. It was, however, systematically affected by changes in coherence. When information was removed, participants needed more (vs. less) time to arrive at a decision if information reduction resulted in a decrease (vs. increase) in overall coherence. Hence, we have shown that – under certain circumstances – providing more information leads to quicker decisions than providing less information. The major new contribution of the current study is that tasks were constructed in such a way as to allow testing a critical prediction that holds for all EIP-based strategies suggested in the literature, in contrast to previous tests, which focused on one or a few strategies only.

The findings challenge the general validity of the cognitive-cost assumption underlying the bounded rationality approach and important adaptive-decision-making models (see also Hilbig, 2010; Newell & Bröder, 2008) and corroborate the holistic processing perspective. The fact that coherence drives decision times supports PCS models but is also in line with many other models (see below). From a more general perspective, the findings support theoretical approaches assuming a) that automatic processes play a crucial role in decision making and b) that these processes have very specific properties that are markedly different from stepwise computations (e.g., Beach & Mitchell, 1996; Betsch, 2005; Busemeyer & Townsend, 1993; Dougherty et al., 1999; Glöckner & Betsch, 2008b; Hogarth, 2001; Kahneman & Frederick, 2002).

4.1. Related findings

Our findings concerning decision time are in line with the classic distance effect in choices, which states that decision time increases with decreasing distance of the options on the criterion value (for an overview see Birnbaum & Jou, 1990; for a recent investigation see also Brown & Tan, 2011). Several paramorphic models contain specific assumptions to account for the distance effect without providing well specified process-based explanations for its emergence (e.g., Birnbaum & Jou, 1990; Cartwright & Festinger, 1943). In PCS models, the distance effect follows from the general spreading activation mechanisms.

Fig. 1. Decision time by cue pattern and version. 'comp' refers to the complete regular cue pattern. 'decCo' and 'incCo' refer to the cue pattern in which coherence was decreased vs. increased by removing two pieces of information from the regular cue pattern. On a millisecond (ms) scale the y-axis ranges from 2441 ms (e^7.8) to 4447 ms (e^8.4).

The current findings elaborate previous work on probabilistic inference tasks, which shows that decision time increases with decreasing coherence while holding the amount of information in the task constant (Glöckner & Betsch, 2008c). As mentioned above, similar findings were observed for probabilistic inferences including recognition information (Glöckner & Bröder, 2011; Hilbig & Pohl, 2009; Hochman et al., 2010) and for risky choices (Glöckner & Betsch, 2008a; Glöckner & Herbold, 2011; Hilbig, 2008). Decision times more in line with an adaptive strategy selection perspective were observed in tasks with more effortful information acquisition, which we explicitly did not address in our study. A generally higher prevalence of lexicographic strategies (Bröder & Schiffer, 2003b) and decision times more in line with their predictions were particularly observed in memory-based probabilistic inferences (Bröder & Gaissmaier, 2007) as well as in tasks in which cue validities had to be acquired in a previous learning phase (Bergert & Nosofsky, 2007).

Table 5
Regression analysis of decision time.

                              (1)             (2)
                              ln(time)        Confidence
Decreased coherence (1=yes)   0.0625⁎⁎⁎       −21.27⁎⁎⁎
                              (4.91)          (−15.52)
Increased coherence (1=yes)   −0.145⁎⁎⁎       11.36⁎⁎⁎
                              (−10.89)        (11.05)
Cue pattern 2 (1=yes)         0.211⁎⁎⁎        −30.12⁎⁎⁎
                              (13.60)         (−17.96)
Cue pattern 3 (1=yes)         0.301⁎⁎⁎        −41.17⁎⁎⁎
                              (17.75)         (−19.15)
Cue pattern 4 (1=yes)         0.142⁎⁎⁎        −24.25⁎⁎⁎
                              (7.67)          (−11.13)
Order                         −0.00544⁎⁎⁎     −0.0171
                              (−14.75)        (−0.73)
Constant                      8.201⁎⁎⁎        84.25⁎⁎⁎
                              (291.00)        (44.61)
Observations                  6720            6720
R²                            0.163           0.263

Note. Coefficients for decreased and increased coherence are comparisons against complete cue patterns (i.e., control); likewise coefficients for cue patterns 2 to 4 are comparisons against cue pattern 1. t statistics in parentheses; standard errors were adjusted for 112 clusters in observations due to repeated measurement (Gould, Pitblado, & Sribney, 2006; Rogers, 1993).
⁎⁎⁎ p < 0.001.

4.2. Potential caveats and alternative accounts

4.2.1. Testing specified parts of adaptive-decision-making models

The most prominent models for adaptive decision making – the Adaptive Decision Maker and the Adaptive Toolbox – are formulated as general frameworks. They basically do not limit the possible number of heuristics and strategies. From the vantage point of hindsight, these frameworks can be tuned to "account" for any type of finding by postulating a new strategy (for a critical discussion see also Glöckner & Betsch, 2010, 2011). Our results can only rule out the parts of these frameworks that have been specified so far. For proponents of adaptive decision making models, however, our findings might provide guidance for efficiently extending their models.8

8 Please note that many further possible heuristics can be directly ruled out by our data. Consider, for example, a set of strategies one could call lexicographic race models (or take-two heuristic, take-three heuristic, etc.), which go through cues (ordered by validity) and choose the option for which they first find two (three, four, …) positive cue values (we thank an anonymous reviewer for suggesting this alternative). Considering cue pattern 1, a take-two heuristic would not predict any time differences between versions and can directly be ruled out; a take-three heuristic could account for the increased decision time in the decCo version but it cannot explain the reduced decision time for the incCo version (both compared to the complete version). This example should illustrate that it is quite hard to think of simple serial heuristics that could account for the current findings.

4.2.2. Ignoring information and lexicographic strategies

Of course, we cannot rule out that some individuals may have ignored some pieces of information sometimes. Our Multiple Measure Maximum Likelihood analysis provides convergent evidence, however, that the majority of individuals systematically processed all pieces of the given information in a holistic fashion. Removing less valid information influenced choices, decision times and confidence ratings considerably, indicating that cue information even on these less valid cues was taken into account. Furthermore, note that we never removed information on the high validity cues. Hence, if participants had employed, for example, a lexicographic strategy, our manipulation would have had no effect on decision times and confidence, because the options always differed on the most valid cue and in all but one task each cue differentiated.

Fig. 2. Confidence by cue pattern and version. 'comp' refers to the complete regular cue pattern. 'decCo' and 'incCo' refer to the cue pattern in which coherence was decreased vs. increased by removing two pieces of information from the complete cue pattern.

4.2.3. Matching processes

One might argue that participants encoded constellations of information and compared them with prototypes or exemplars using automaticity-based matching processes (cf. Dougherty et al., 1999; Fiedler, 1996; Juslin & Persson, 2002). Recall, however, that the arrangement of cue patterns changed over trials and within each condition, as the order of cues was randomized. Furthermore, increasing decision times after removing information could not easily be explained by these approaches.

4.2.4. Tallying

It is also not possible to account for our findings by postulating that participants simply counted pluses and minuses and selected the alternative with the higher sum (i.e., an equal-weight or tallying strategy). An application of this strategy should have resulted in a decrease in decision time when information was removed. We found evidence for the very opposite, and in the Multiple Measure Maximum Likelihood analysis no participant was classified as a user of EQW (see Appendix C).

4.2.5. Accumulative processes

Evidence accumulation models (Busemeyer & Townsend, 1993) can well account for our decision time findings, because the likelihood of reaching the decision threshold more quickly increases with increasing superiority of one option over the other. Due to the fact that evidence is always accumulated until a certain evidence strength is reached, these kinds of models cannot, however, easily account for our confidence findings.9 It would be a premature conclusion to interpret this finding as evidence against evidence accumulation models in general. Further assumptions would, however, have to be incorporated in these models to account for the very systematic findings on confidence ratings. It remains for further research to investigate whether recently developed two-stage models (Pleskac & Busemeyer, 2010) can account for the current findings.

9 This is the case under the assumption that confidence ratings are based on the total amount of accumulated evidence for the favored option. Given a constant threshold, there should be no differences in accumulated evidence between tasks. Consequently, no differences in confidence should be observed either.

4.2.6. Short-cut weighted compensatory strategy

Bergert and Nosofsky (2007) argued that people might use internal short-cuts within compensatory strategies (see footnote 2). Specifically, the strategy predicts that information integration for the second option is aborted as soon as the remaining cues cannot compensate for the advantage of the first option. Our data speak against this explanation. First, the observed decision time of about 4 s makes it hard to take into account 5 pieces of information in a weighted compensatory manner. Second, for cue pattern 1 in the decreased and increased coherence versions, calculation should be aborted after considering the first cue of option 2. Hence, the strategy would predict equal decision time for both versions and cannot account for the observed significant differences. Third, for the complete versions of cue patterns 3 and 4 the strategy would predict that integration is aborted after considering the first cue of the second option. Note that in both cases exactly the same information would be processed. Therefore, the significant differences between both decision tasks concerning time and confidence cannot be explained.

4.2.7. Confusion

Removing information might have surprised or confused participants. Consequently, they might have begun to ponder longer on the decision, thus yielding an increase in decision time. Such an alternative explanation, however, only accounts for parts of the result. Recall that removing information led to a decrease in decision time if coherence increased. Therefore, variations in decision time cannot be attributed to surprise or confusion.

4.2.8. Modeling information acquisition

One of the differences between the implementation of PCS used in this paper and heuristics is that PCS does not model information acquisition. Consequently, possible differences in decision time between cue patterns resulting from information acquisition are not taken into account. It is, however, possible to extend PCS in this respect. Under the assumptions that a) individuals look up all pieces of cue information and b) need a fixed amount of time for each cue, information acquisition should be quicker by a fixed amount of time for both reduced versions of cue patterns compared to the complete version. Note that this main effect of acquisition time works against the more-information-is-processed-faster-than-less hypothesis. That is, in the decreased coherence tasks the increase in processing time must overcome the main effect of decreased time for information uptake. Inspection of Table 5 shows that there is support for this additional main effect of information acquisition: the effect of increased coherence on time is twice as large as the effect of decreased coherence (although the effect on confidence is larger for the decreased coherence case). Therefore, the reported analysis of decision time is a conservative test of the more-information-is-processed-faster-than-less hypothesis. The effect of coherence on time for information integration might even be somewhat underestimated because differences in information acquisition are not taken into account.

4.3. The efficient interaction of deliberate and automatic processes

As in several previous studies (Glöckner & Betsch, 2008c; Glöckner & Bröder, 2011), we observed that participants are able to make decisions rather quickly while taking into account many cues and their validities. Considering the very quick responses in the current study (i.e., MD = 3.2 s), automatic processes of information integration seem to play an important role. However, it is also clear that parts of the process are under deliberate control. Hence, automatic and deliberate processes seem to jointly drive decisions. Recently, we postulated that deliberate and automatic processes serve different functions (Betsch & Glöckner, 2010; Glöckner & Betsch, 2008b). Deliberate processes are necessarily involved if the decision maker actively searches information in the environment, as was the case in our study. Moreover, deliberation is essential to modify mental representations of the decision problem by changing relations among elements in the working network or by generating new information via inferential processes. These processes are performed step-by-step, require conscious control and consume cognitive resources. We assume that they are supplemented by automatic processes that work in a holistic manner below the level of consciousness and that consume only a minimum of cognitive resources. We proposed that these (PCS) processes are responsible for integrating information in many decision situations.

Such a notion is largely in line with recent default-interventionist (e.g., Evans, 2006; Kahneman & Frederick, 2002) and parallel-activation (Sloman, 2002) dual-process models (for an overview see Evans, 2008). We claim that automatic processes are always activated and perform automatically in the mental background irrespective of the amount of deliberation. These processes can be nicely accounted for by the PCS approach. Different strategies come into play at the level of information search and construction of the problem space. The two scissors, described by Herbert Simon (1955), bound strategies of search and construction but not the process of information integration itself.

Appendix A. Description of strategies


Appendix B. Specification of PCS

PCS was simulated using the network model proposed by Glöckner and Betsch (2008b) with two layers. The first layer consisted of cue nodes which were activated by a general validity node. The second layer consisted of option nodes with mutual inhibition. Both layers are connected by bidirectional links representing cue predictions. Connections between the general validity node and the cue nodes represent the weight given to each cue. Spreading activation in the network is simulated by an iterative updating algorithm which uses a sigmoid activation function proposed by McClelland and Rumelhart (1981; see also Read and Miller, 1998):

a_i(t+1) = a_i(t)\,(1 - \mathrm{decay}) +
\begin{cases}
\mathrm{input}_i(t)\,\bigl(a_i(t) - \mathrm{floor}\bigr) & \text{if } \mathrm{input}_i(t) < 0\\
\mathrm{input}_i(t)\,\bigl(\mathrm{ceiling} - a_i(t)\bigr) & \text{if } \mathrm{input}_i(t) \geq 0
\end{cases}

Table B1
Model parameters for PCS simulations.

Decay = 0.10: decay parameter for node activation; influences the overall activation level of the nodes; the higher the value, the lower the final activation level.

w_o1–o2 = −0.20: mutual inhibitory connection between options.

w_c–o = 0.01 / −0.01: connection between cues and options representing positive or negative cue predictions.

w_v = (v − 0.5)^p: links between the general validity node and the cues representing a priori cue validity; v is the objective cue validity, which is corrected for chance level (by subtracting 0.5). For the standard implementation of PCS, p was set to 1 (but see Appendix C).

ceiling / floor = 1 / −1: upper and lower limit for cue activations.

Stability criterion = 10^−6: the network was considered to have reached a stable solution if there was no energy change in the network exceeding 10^−6 for 10 iterations.

Fig. B1. PCS predictions for decision times and confidence.

with

\mathrm{input}_i(t) = \sum_{j=1}^{n} w_{ij}\, a_j(t).

a_i(t) represents the activation of node i at iteration t. The parameters floor and ceiling stand for the minimum and maximum possible activation. input_i(t) is the activation that node i receives at iteration t, which is computed by summing up all products of activations and connection weights w_ij for node i. Decay is a constant decay parameter.

The model was applied without formal parameter fitting. We used the parameters presented in Table B1, which are based on parameters used in previous simulations (Glöckner & Bröder, 2011; Glöckner & Hodges, 2011; Glöckner et al., 2010). We thereby had to adapt the function for determining cue weights w_v because we used objective cue validities instead of subjective cue usage ratings as input.

Cue values for options A and B (i.e., w_c–o) were transformed into weights of −0.01 (negative prediction) or 0.01 (positive prediction). Cue weights (i.e., w_v) were computed from objective validities by correcting for chance validity.
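For illustration with a hypothetical objective validity of .80, the standard implementation assigns the network weight w_v = (.80 − .5)^1 = .30; a hypothetical validity of .60 yields w_v = .10.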

The option with the highest final activation is predicted to be chosen. The number of iterations needed to find the solution is used as the predictor for decision time, and the absolute difference in activation between the two options is used as the predictor for confidence (Glöckner, 2010; Glöckner & Betsch, 2008b). Predictions are shown in Fig. B1.
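To illustrate how the verbal specification translates into a simulation, the following minimal Python sketch implements the two-layer network with the parameters from Table B1. It is not the code used for the reported simulations: the weight-matrix layout, the clamping of the general validity node at 1, the clipping safeguard, and the exact form of the energy-based stability check are simplifying assumptions.

```python
import numpy as np

# Minimal sketch of the Appendix B network (illustration only, not the authors' code).
# Parameter values follow Table B1; matrix layout, clamping of the validity node,
# and the energy-based stopping check are simplifying assumptions.
DECAY, FLOOR, CEILING = 0.10, -1.0, 1.0
W_INHIBIT, W_CUE_OPTION = -0.20, 0.01
ENERGY_TOL, STABLE_ITERS = 1e-6, 10


def pcs_simulate(cue_pattern, validities, p=1.0, max_iter=10_000):
    """cue_pattern[i, j] = +1/-1: cue i predicts option j positively/negatively.
    validities: objective cue validities; two options are assumed."""
    cue_pattern = np.asarray(cue_pattern, dtype=float)
    n_cues, n_opts = cue_pattern.shape
    n = 1 + n_cues + n_opts                                    # validity node + cues + options
    w = np.zeros((n, n))
    w[0, 1:1 + n_cues] = (np.asarray(validities) - 0.5) ** p   # w_v = (v - 0.5)^p
    w[1:1 + n_cues, 1 + n_cues:] = W_CUE_OPTION * cue_pattern  # +/-0.01 cue-option links
    w[1 + n_cues:, 1 + n_cues:] = W_INHIBIT * np.triu(np.ones((n_opts, n_opts)), 1)
    w = w + w.T                                                # bidirectional (symmetric) links

    a = np.zeros(n)
    a[0] = 1.0                                                 # general validity node as source
    prev_energy, stable, t = np.inf, 0, 0
    for t in range(1, max_iter + 1):
        inp = w @ a
        gain = np.where(inp < 0, a - FLOOR, CEILING - a)       # sigmoid-like update rule
        a = np.clip(a * (1 - DECAY) + inp * gain, FLOOR, CEILING)
        a[0] = 1.0                                             # keep the source node clamped
        energy = -a @ w @ a                                    # consistency (energy) of the net
        stable = stable + 1 if abs(energy - prev_energy) < ENERGY_TOL else 0
        prev_energy = energy
        if stable >= STABLE_ITERS:                             # stable for 10 iterations
            break
    opt_a, opt_b = a[1 + n_cues:]
    return t, abs(opt_a - opt_b)       # iterations -> time; activation gap -> confidence
```

Calling, for instance, pcs_simulate([[1, -1], [1, -1], [1, -1], [-1, 1]], [.80, .70, .60, .55]) with a purely hypothetical cue pattern returns the two predictors used above: the number of iterations to converge and the absolute activation difference between the options.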

Appendix C. Strategy Classification

For a more in-depth investigation of interindividual differences, choices, decision time and confidence were simultaneously analyzed by conducting a Multiple-Measure Maximum Likelihood strategy classification (Glöckner, 2009; Jekel et al., 2010). We thereby included the already described strategies PCS (with parametrization from Table B1), LEX/TTB, and WADD (with chance-corrected validities) in the analysis. As further competitors the comparison included an equal weight strategy (EQW), assuming that persons add up cue values without weighting them by validity; a random choice strategy (RAND); and a second implementation of PCS (PCS2). PCS2 uses the parameters listed in Table B1 except for applying a different transformation function for cue validities, which has been introduced in


previous research (Glöckner & Bröder, 2011). Specifically, it uses w_v = (v − 0.5)^2 to transform cue validities into network weights (note the change in the exponent p). PCS2 has been shown to approximate a rational Bayes solution for probabilistic inferences best among all the competitors considered here (Jekel et al., under review). Following recent suggestions (Moshagen & Hilbig, 2011), the analysis was complemented by an additional global fit test for choices with p < .05 to avoid misclassifying participants whose true strategy was not included in the set of competitors. Time and confidence predictions for TTB, WADD, EQW and RAND were derived using standard conventions (for a detailed description see Glöckner, 2010).
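For illustration with the same hypothetical validities of .80 and .60, the squared transformation yields weights of (.30)^2 = .09 and (.10)^2 = .01; more valid cues therefore gain relatively more influence on the network solution than under the standard implementation.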

This analysis revealed that 83 of the participants (74%) were best described by one of the two versions of PCS (PCS: 38; PCS2: 45); 23 participants (21%) were classified as users of WADD; 5 participants (5%) were classified as users of LEX/TTB; and one person was not classified because he or she failed the global fit test for choices. The average prediction errors in choices of the different strategies (i.e., deviations of behavior from model predictions), taking into account all participants, were PCS: ε = 0.08; PCS2: ε = 0.05; TTB: ε = 0.05; EQW: ε = 0.14; WADD: ε = 0.05. The average individual correlations of model predictions and observed choice time were PCS: r = 0.33; PCS2: r = 0.34; EQW: r = 0.05; WADD: r = 0.05 (times were ln-transformed and order effects were partialled out before correlating; correlations were computed per participant over 60 observations and averaged across all participants). The respective average correlations for confidence were PCS: r = 0.59; PCS2: r = 0.57; EQW: r = 0.33; WADD: r = 0.61 (TTB predicted no between-tasks variation in time and confidence, so correlations could not be computed).
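To make the logic of such a classification concrete, the following heavily simplified Python sketch covers the choice component only (cf. Glöckner, 2009): it estimates a constant error rate per strategy from the observed deviations and compares strategies by BIC. The full Multiple-Measure method additionally models decision times and confidence ratings; the function and variable names here are illustrative and not part of the published toolbox.

```python
from math import log

# Simplified sketch of the choice component of maximum-likelihood strategy
# classification (illustration only; the published Multiple-Measure method
# also models decision times and confidence ratings).

def classify_by_choices(observed, predictions):
    """observed / predictions: chosen option (0 or 1) per task; lower BIC wins."""
    n = len(observed)
    scores = {}
    for strategy, pred in predictions.items():
        errors = sum(o != p for o, p in zip(observed, pred))
        eps = min(max(errors / n, 1e-6), 0.5 - 1e-6)   # estimated error rate epsilon
        loglik = errors * log(eps) + (n - errors) * log(1 - eps)
        scores[strategy] = -2 * loglik + log(n)        # BIC with one free parameter (eps)
    return scores


# Hypothetical example: six tasks, observed choices vs. two strategies' predictions.
scores = classify_by_choices([0, 1, 1, 0, 1, 1],
                             {"TTB": [0, 1, 1, 1, 1, 1], "WADD": [0, 1, 1, 0, 1, 1]})
best = min(scores, key=scores.get)                     # strategy with the lowest BIC
```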

References

Abelson, R. P., & Levi, A. (1985). Decision making and decision theory. In G. Lindzey, & E. Aronson (Eds.), Handbook of social psychology (3rd ed.), Vol. 1 (pp. 231–309). New York: Random House.

Ayal, S., & Hochman, G. (2009). Ignorance or integration: The cognitive processes underlying choice behavior. Journal of Behavioral Decision Making, 22, 455–474.

Beach, L. R., & Mitchell, T. R. (1978). A contingency model for the selection of decision strategies. Academy of Management Review, 3, 439–449.

Beach, L. R., & Mitchell, T. R. (1996). Image theory, the unifying perspective. In L. R. Beach (Ed.), Decision making in the workplace: A unified perspective (pp. 1–20). Hillsdale, NJ: Lawrence Erlbaum.

Bergert, F. B., & Nosofsky, R. M. (2007). A response-time approach to comparing generalized rational and take-the-best models of decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33, 107–129.

Betsch, T. (2005). Preference theory: An affect-based approach to recurrent decision making. In T. Betsch, & S. Haberstroh (Eds.), The routines of decision making (pp. 39–65). Mahwah, NJ: Lawrence Erlbaum Associates Publishers.

Betsch, T., & Glöckner, A. (2010). Intuition in judgment and decision making: Extensive thinking without effort. Psychological Inquiry, 21, 279–294.

Betsch, T., Haberstroh, S., Molter, B., & Glöckner, A. (2004). Oops, I did it again—Relapse errors in routinized decision making. Organizational Behavior and Human Decision Processes, 93, 62–74.

Bettman, J., Johnson, E., & Payne, J. (1990). A componential analysis of cognitive effort in choice. Organizational Behavior and Human Decision Processes, 45, 111–139.

Birnbaum, M. H., & Jou, J.-w. (1990). A theory of comparative response times and “difference” judgments. Cognitive Psychology, 22, 184–210.

Brandstätter, E., Gigerenzer, G., & Hertwig, R. (2006). The priority heuristic: Making choices without trade-offs. Psychological Review, 113, 409–432.

Bröder, A., & Gaissmaier, W. (2007). Sequential processing of cues in memory-based multiattribute decisions. Psychonomic Bulletin & Review, 14, 895–900.

Bröder, A., & Schiffer, S. (2003). Bayesian strategy assessment in multi-attribute decision making. Journal of Behavioral Decision Making, 16, 193–213.

Bröder, A., & Schiffer, S. (2003). Take The Best versus simultaneous feature matching: Probabilistic inferences from memory and effects of representation format. Journal of Experimental Psychology: General, 132, 277–293.

Brown, N. R., & Tan, S. (2011). Magnitude comparison revisited: An alternative approach to binary choice under uncertainty. Psychonomic Bulletin & Review, 18, 392–398.

Bruner, J. S., & Goodman, C. C. (1947). Value and need as organizing factors in perception. Journal of Abnormal and Social Psychology, 42, 33–44.

Busemeyer, J. R., & Johnson, J. G. (2004). Computational models of decision making. In D. J. Koehler, & N. Harvey (Eds.), Blackwell handbook of judgment and decision making (pp. 133–154). Malden, MA: Blackwell Publishing.

Busemeyer, J. R., & Townsend, J. T. (1993). Decision field theory: A dynamic-cognitive approach to decision making in an uncertain environment. Psychological Review, 100, 432–459.

Cartwright, D., & Festinger, L. (1943). A quantitative theory of decision. Psychological Review, 50, 595–621.

DeKay, M. L., Patino-Echeverri, D., & Fischbeck, P. S. (2009). Better safe than sorry: Precautionary reasoning and implied dominance in risky decisions. Journal of Behavioral Decision Making, 22, 338–361.

DeKay, M. L., Patino-Echeverri, D., & Fischbeck, P. S. (2009). Distortion of probability and outcome information in risky decisions. Organizational Behavior and Human Decision Processes, 109, 79–92.

Diederich, A. (2003). Decision making under conflict: Decision time as a measure of conflict strength. Psychonomic Bulletin & Review, 10, 167–176.

Dougherty, M. R. P., Gettys, C. F., & Ogden, E. E. (1999). MINERVA-DM: A memory processes model for judgments of likelihood. Psychological Review, 106, 180–209.

Evans, J. S. B. T. (2006). The heuristic–analytic theory of reasoning: Extension and evaluation. Psychonomic Bulletin & Review, 13, 378–395.

Evans, J. S. B. T. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology, 59, 255–278.

Fazio, R. H. (1990). A practical guide to the use of response latency in social psychological research. In C. Hendrick, & M. S. Clark (Eds.), Research methods in personality and social psychology (pp. 74–97). Thousand Oaks, CA: Sage Publications, Inc.

Festinger, L. (1957). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.

Fiedler, K. (1996). Explaining and simulating judgment biases as an aggregation phenomenon in probabilistic, multiple-cue environments. Psychological Review, 103, 193–214.

Finucane, M. L., Alhakami, A., Slovic, P., & Johnson, S. M. (2000). The affect heuristic in judgments of risks and benefits. Journal of Behavioral Decision Making, 13, 1–17.

Fishburn, P. C. (1974). Lexicographic orders, utilities, and decision rules: A survey. Management Science, 20, 1442–1472.

Freeman, J. B., & Ambady, N. (2011). A dynamic interactive theory of person construal. Psychological Review, 118, 247–279.

Gigerenzer, G. (2007). Gut feelings: The intelligence of the unconscious. New York: Viking Press.

Gigerenzer, G., & Goldstein, D. G. (1999). Betting on one good reason: The take the best heuristic. Simple heuristics that make us smart (pp. 75–95). New York, NY: Oxford University Press.

Gigerenzer, G., Todd, P. M., and the ABC Research Group (1999). Simple heuristics that make us smart. Evolution and cognition. New York, NY: Oxford University Press.

Gilovich, T., Griffin, D., & Kahneman, D. (2002). Heuristics and biases: The psychology of intuitive judgment. New York, NY: Cambridge University Press.

Glöckner, A. (2009). Investigating intuitive and deliberate processes statistically: The Multiple-Measure Maximum Likelihood strategy classification method. Judgment and Decision Making, 4, 186–199.

Glöckner, A. (2010). Multiple measure strategy classification: Outcomes, decision times and confidence ratings. In A. Glöckner, & C. L. M. Witteman (Eds.), Foundations for tracing intuition: Challenges and methods (pp. 83–105). London: Psychology Press & Routledge.

Glöckner, A., & Betsch, T. (2008a). Do people make decisions under risk based on ignorance? An empirical test of the Priority Heuristic against Cumulative Prospect Theory. Organizational Behavior and Human Decision Processes, 107, 75–95.

Glöckner, A., & Betsch, T. (2008b). Modeling option and strategy choices with connectionist networks: Towards an integrative model of automatic and deliberate decision making. Judgment and Decision Making, 3, 215–228.

Glöckner, A., & Betsch, T. (2008c). Multiple-reason decision making based on automatic processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 1055–1075.

Glöckner, A., & Betsch, T. (2010). Accounting for critical evidence while being precise and avoiding the strategy selection problem in a parallel constraint satisfaction approach — A reply to Marewski. Journal of Behavioral Decision Making, 23, 468–472.

Glöckner, A., & Betsch, T. (2011). The empirical content of theories in judgment and decision making: Shortcomings and remedies. Judgment and Decision Making, 6, 711–721.

Glöckner, A., Betsch, T., & Schindler, N. (2010). Coherence shifts in probabilistic inference tasks. Journal of Behavioral Decision Making, 23, 439–462.

Glöckner, A., & Bröder, A. (2011). Processing of recognition information and additional cues: A model-based analysis of choice, confidence, and response time. Judgment and Decision Making, 6, 23–42.

Glöckner, A., & Herbold, A.-K. (2011). An eye-tracking study on information processing in risky decisions: Evidence for compensatory strategies based on automatic processes. Journal of Behavioral Decision Making, 24, 71–98.

Glöckner, A., & Hodges, S. D. (2011). Parallel constraint satisfaction in memory-based decisions. Experimental Psychology, 58, 180–195.

Glöckner, A., & Pachur, T. (2012). Cognitive models of risky choice: Parameter stability and predictive accuracy of Prospect Theory. Cognition, 123, 21–32.

Glöckner, A., & Witteman, C. L. M. (2010). Beyond dual-process models: A categorization of processes underlying intuitive judgment and decision making. Thinking and Reasoning, 16, 1–25.

Gould, W., Pitblado, J., & Sribney, W. (2006). Maximum likelihood estimation with Stata (3rd ed.). College Station, TX: Stata Press.

Greiner, B. (2004). An Online Recruitment System for Economic Experiments. In K. Kremer, & V. Macho (Eds.), Forschung und wissenschaftliches Rechnen 2003. GWDG Bericht 63 (pp. 79–93). Göttingen: Ges. für Wiss. Datenverarbeitung.

Hammond, K. R., Hamm, R. M., Grassia, J., & Pearson, T. (1987). Direct comparison of the efficacy of intuitive and analytical cognition in expert judgment. IEEE Transactions on Systems, Man, and Cybernetics, 17, 753–770.

Heider, F. (1958). The psychology of interpersonal relations. New York: Wiley.


Hilbig, B. E. (2008). One-reason decision making in risky choice? A closer look at the priority heuristic. Judgment and Decision Making, 3, 457–462.

Hilbig, B. E. (2010). Reconsidering ‘evidence’ for fast and frugal heuristics. Psychonomic Bulletin & Review, 17, 923–930.

Hilbig, B. E., & Glöckner, A. (2011). Yes, they can! Appropriate weighting of small probabilities as a function of information acquisition. Acta Psychologica, 138, 390–396.

Hilbig, B. E., & Pohl, R. F. (2009). Ignorance- versus evidence-based decision making: A decision time analysis of the recognition heuristic. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 1296–1305.

Hilbig, B. E., Scholl, S. G., & Pohl, R. F. (2010). Think or blink—Is the recognition heuristic an “intuitive” strategy? Judgment and Decision Making, 5, 300–309.

Hochman, G., Ayal, S., & Glöckner, A. (2010). Physiological arousal in processing recognition information: Ignoring or integrating cognitive cues? Judgment and Decision Making, 5, 285–299.

Hogarth, R. M. (2001). Educating intuition. Chicago, IL: University of Chicago Press.

Holyoak, K. J., & Simon, D. (1999). Bidirectional reasoning in decision making by constraint satisfaction. Journal of Experimental Psychology: General, 128, 3–31.

Holyoak, K. J., & Spellman, B. A. (1993). Thinking. Annual Review of Psychology, 44, 265–315.

Horstmann, N., Ahlgrimm, A., & Glöckner, A. (2009). How distinct are intuition and deliberation? An eye-tracking analysis of instruction-induced decision modes. Judgment and Decision Making, 4, 335–354.

Jekel, M., Fiedler, S., & Glöckner, A. (2011). Diagnostic task selection for strategy classification in judgment and decision making. Judgment and Decision Making, 6, 782–799.

Jekel, M., Nicklisch, A., & Glöckner, A. (2010). Implementation of the Multiple-Measure Maximum Likelihood strategy classification method in R: Addendum to Glöckner (2009) and practical guide for application. Judgment and Decision Making, 5, 54–63.

Jekel, M., Glöckner, A., Fiedler, S., & Bröder, A. (under review). The rationality of different kinds of intuitive processes.

Juslin, P., & Persson, M. (2002). PROBabilities from EXemplars (PROBEX): A “lazy” algorithm for probabilistic inference from generic knowledge. Cognitive Science: A Multidisciplinary Journal, 26, 563–607.

Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 49–81). New York, NY: Cambridge University Press.

Lohse, G. L., & Johnson, E. J. (1996). A comparison of two process tracing methods for choice tasks. Organizational Behavior and Human Decision Processes, 68, 28–43.

Luce, R. D. (2000). Utility of gains and losses: Measurement-theoretical and experimental approaches. Mahwah, NJ: Erlbaum.

Luce, R. D., & Raiffa, H. (1957). Games and decisions: Introduction and critical survey. New York: Wiley.

McClelland, J. L., & Rumelhart, D. E. (1981). An interactive activation model of context effects in letter perception: I. An account of basic findings. Psychological Review, 88, 375–407.

Monroe, B. M., & Read, S. J. (2008). A general connectionist model of attitude structure and change: The ACS (Attitudes as Constraint Satisfaction) model. Psychological Review, 115, 733–759.

Moshagen, M., & Hilbig, B. E. (2011). Methodological notes on model comparisons and strategy classification: A falsificationist proposition. Judgment and Decision Making, 6, 814–820.

Newell, B. R., & Bröder, A. (2008). Cognitive processes, models and metaphors in decision research. Judgment and Decision Making, 3, 195–204.

Newell, A., & Simon, H. A. (1972). Human problem solving. Oxford, England: Prentice-Hall Print.

Payne, J. W., Bettman, J. R., & Johnson, E. J. (1988). Adaptive strategy selection in decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 534–552.

Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The adaptive decision maker. New York, NY: Cambridge University Press.

Pleskac, T. J., & Busemeyer, J. R. (2010). Two-stage dynamic signal detection: A theory of choice, decision time, and confidence. Psychological Review, 117, 864–901.

Read, S. J., & Miller, L. C. (1998). On the dynamic construction of meaning: An interactive activation and competition model of social perception. In S. J. Read, & L. C. Miller (Eds.), Connectionist models of social reasoning and social behavior (pp. 27–68). Mahwah, NJ: Lawrence Erlbaum Associates Publishers.

Read, S. J., Vanman, E. J., & Miller, L. C. (1997). Connectionism, parallel constraint satisfaction processes, and Gestalt principles: (Re)introducing cognitive dynamics to social psychology. Personality and Social Psychology Review, 1, 26–53.

Rogers, W. H. (1993). Regression standard errors in clustered samples. Stata Technical Bulletin, 13, 19–23.

Savage, L. J. (1954). The foundations of statistics (2nd ed.). New York: Dover.

Simon, H. A. (1955). A behavioural model of rational choice. Quarterly Journal of Economics, 69, 99–118.

Simon, D., Pham, L. B., Le, Q. A., & Holyoak, K. J. (2001). The emergence of coherence over the course of decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27, 1250–1260.

Simon, D., Snow, C. J., & Read, S. J. (2004). The redux of cognitive consistency theories: Evidence judgments by constraint satisfaction. Journal of Personality and Social Psychology, 86, 814–837.

Sloman, S. A. (2002). Two systems of reasoning. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 379–396). New York: Cambridge University Press.

Thagard, P. (1989). Explanatory coherence. The Behavioral and Brain Sciences, 12, 435–502.

Thagard, P., & Millgram, E. (1995). Inference to the best plan: A coherence theory of decision. In A. Ram, & D. B. Leake (Eds.), Goal-driven learning (pp. 439–454). Cambridge, MA: MIT Press.

Thomas, R. P., Dougherty, M. R., Sprenger, A. M., & Harbison, J. I. (2008). Diagnostic hypothesis generation and human judgment. Psychological Review, 115, 155–185.

Veblen, T. (1898). Why is economics not an evolutionary science. Quarterly Journal of Economics, 12, 373–397.

von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior (1st ed.). Princeton, NJ: Princeton University Press.

Wertheimer, M. (1938). Gestalt theory. In W. D. Ellis (Ed.), A source book of Gestalt psychology (pp. 1–11). London, England: Kegan Paul, Trench, Trubner & Company.