

DEVELOPING THEORY THROUGH SIMULATION METHODS

JASON P. DAVIS
KATHLEEN M. EISENHARDT

Stanford University

CHRISTOPHER B. BINGHAM
University of Maryland

We describe when and how to use simulation methods in theory development. We develop a roadmap that describes theory development using simulation and position simulation in the “sweet spot” between theory-creating methods, such as multiple case inductive studies and formal modeling, and theory-testing methods. Simulation strengths include internal validity and facility with longitudinal, nonlinear, and process phenomena. Simulation’s primary value occurs in creative experimentation to produce novel theory. We conclude with evaluation guidelines.

Simulation is an increasingly significant methodological approach to theory development in the literature focused on strategy and organizations (e.g., Adner, 2002; Lant & Mezias, 1990; Repenning, 2002; Rivkin & Siggelkow, 2003; Zott, 2003). Indeed, several influential research efforts (e.g., Cohen, March, & Olsen, 1972; March, 1991) have used simulation as their primary method. Yet while simulation has become an important methodology, its value for theory development remains clouded and even controversial.

On the one hand, some argue that simulation methods contribute effectively to theory development. For example, simulation can provide superior insight into complex theoretical relationships among constructs, especially when challenging empirical data limitations exist (Zott, 2003). It can also provide an analytically precise means of specifying the assumptions and theoretical logic that lie at the heart of verbal theories (Carroll & Harrison, 1998; Kreps, 1990). In addition, simulation can clearly reveal the outcomes of the interactions among multiple underlying organizational and strategic processes, especially as they unfold over time (Repenning, 2002). From these perspectives, simulation can be a powerful method for sharply specifying and extending extant theory in useful ways.

On the other hand, some researchers maintain that simulation methods often yield very little in terms of actual theory development. They suggest that simulations are simply “toy models” of actual phenomena, in that they either replicate the obvious or strip away so much realism that they are simply too inaccurate to yield valid theoretical insights (Chattoe, 1998; Fine & Elsbach, 2000). For example, simulation research is usually based on at least some clearly unrealistic assumptions, such as zero search costs (Rivkin, 2000) and uniformly effective strategic rules (Davis, Eisenhardt, & Bingham, 2007). In addition, simulation constructs are often “measured” by empirically distant approaches, such as “0” and “1” bit strings as representations of organizations (Bruderer & Singh, 1996) and strategies (Rivkin, 2001). The results of research using simulation methods can also be dynamically indeterminate and overly complex (Fichman, 1999). From these perspectives, the value of simulation methods for theoretical development is tenuous.

The controversy surrounding the value of simulation methods for theory development partially arises, in our view, from a lack of clarity about the method and its related link to theory development. There appears to be limited understanding within the broad research community about (1) when simulation is a useful methodological choice for theory development, (2) how to select among the various simulation approaches (e.g., system dynamics versus genetic algorithms), (3) the appropriate steps for performing simulation research, and (4) the relevant criteria for evaluating simulation research. Most significant, there seems to be limited recognition within the research community of how simulation methods fit into the broader scheme of relating methodological choices to theoretical development.

We appreciate helpful conversations with Ron Adner, Peter Glynn, Ashish Goel, Riitta Katila, Bruce Kogut, Nelson Repenning, Jan Rivkin, and Jesper Sorenson; the suggestions of our anonymous reviewers; and the support of the Stanford Technology Ventures Program and the National Science Foundation (IOC Award #0323176).

Academy of Management Review, 2007, Vol. 32, No. 2, 480–499.

Copyright of the Academy of Management, all rights reserved. Contents may not be copied, emailed, posted to a listserv, or otherwise transmitted without the copyright holder’s express written permission. Users may print, download, or email articles for individual use only.

Our purpose is to address these issues by clarifying when and how to use simulation methods in theory development. Although scholars writing about theory development (e.g., Dubin, 1976; Pfeffer, 1982; Priem & Butler, 2001; Sutton & Staw, 1995; Whetten, 1989) may have different emphases, most agree that theory has four elements: constructs, propositions that link those constructs together, logical arguments that explain the underlying theoretical rationale for the propositions, and assumptions that define the scope or boundary conditions of the theory. Consistent with these views, we define theory as consisting of constructs linked together by propositions that have an underlying, coherent logic and related assumptions.

Broadly, we attempt to make two contributions. First, we offer a roadmap for how to use simulation methods to develop theory. This roadmap synthesizes prior work on the design of simulation research (e.g., Sterman, 2000) and extends that work into specific areas, such as identifying appropriate research questions, choosing among simulation approaches, developing computational representations, elaborating the fundamental role of verification versus experimentation, and evaluating simulation research. We rely on exemplars from the extant literature to ground our observations. The intended result is a more complete roadmap for executing and evaluating theory development research using simulation methods than has previously existed. This roadmap is summarized in Table 1.

Second, we position simulation methodology within the broad context of theoretical development in the organizations and strategy literature. Relying on the related theory development literature (e.g., Priem & Butler, 2001; Sutton & Staw, 1995; Weick, 1989), we explore the rationale for using simulation, its relationship to other methods such as laboratory experiments, simulation’s strengths and weaknesses, and guidelines for evaluating theoretical development using simulation. We argue that simulation is especially useful in the “sweet spot” between theory-creating research using such methods as inductive multiple case studies (Eisenhardt, 1989) and formal modeling (Freese, 1980), and theory-testing research using multivariate, statistical analysis (Pfeffer, 1993). That is, simulation enables the elaboration of rough, basic (or what we term simple) theory that is often derived from inductive cases or formal modeling into logically precise and comprehensive theory. This theory can then be effectively examined further using deductive logic and empirical evidence. Simulation is particularly useful when the theoretical focus is longitudinal, nonlinear, or processual, or when empirical data are challenging to obtain.

We begin by discussing the background of simulation methods. We then turn to the basic steps of conducting simulation research, including selecting research questions, choosing a computational approach, and developing experiments. We conclude with a discussion of the strengths and weaknesses of simulation research and guidelines for its evaluation.

BACKGROUND

We define simulation as a method for using computer software to model the operation of “real-world” processes, systems, or events (Law & Kelton, 1991). This definition is consistent with other definitions that describe simulation models as virtual experiments (Carley, 2001) or as simplified pictures of the world having some, but not all, of the characteristics of that world (Lave & March, 1975). In particular, simulation involves creating a computational representation of the underlying theoretical logic that links constructs together within these simplified worlds. These representations are then coded into software that is run repeatedly under varying experimental conditions (e.g., alternative assumptions, varied construct values) in order to obtain results.
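To make this definition concrete, the loop below sketches, in Python, how a computational representation is coded as software and then run repeatedly under varying experimental conditions. The constructs and parameter values here are entirely hypothetical and are not drawn from any of the studies cited:

```python
import random
import statistics

def model(imitation_rate, n_periods=100, seed=0):
    """Toy computational representation: an organization's performance
    evolves through imitation of a high-performing target plus noisy
    experimentation. All constructs and values are illustrative."""
    rng = random.Random(seed)
    performance = 0.0
    for _ in range(n_periods):
        performance += imitation_rate * (1.0 - performance)  # move toward target
        performance += rng.gauss(0.0, 0.05)                  # random experimentation
    return performance

# "Run repeatedly under varying experimental conditions": vary a construct
# value (imitation_rate) and average over many simulated runs (seeds).
results = {
    rate: statistics.mean(model(rate, seed=s) for s in range(50))
    for rate in (0.0, 0.1, 0.5)
}
```

Comparing `results` across conditions is the simulated analog of an experiment: one construct value is manipulated while everything else is held constant.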

While simulation can be used purely for description or exploration,1 our focus is on using simulation for theory development when simple theory exists (e.g., Rivkin, 2000; Rudolph & Repenning, 2002). By simple theory, we mean undeveloped theory that has only a few constructs and related propositions with modest empirical or analytic grounding such that the propositions are in all likelihood correct but are currently limited by weak conceptualization of constructs, few propositions linking these constructs together, and/or rough underlying theoretical logic. Simple theory also includes basic processes that may be known (e.g., competition, imitation) but that have interactions that are only vaguely understood, if at all. Thus, simple theory contrasts with well-developed theory, such as institutional and transaction cost theories that have multiple and clearly defined theoretical constructs (e.g., normative structures, mimetic diffusion, asset specificity, uncertainty), well-established theoretical propositions that have received extensive empirical grounding, and well-elaborated theoretical logic. Simple theory also contrasts with situations where there is no real theoretical understanding of the phenomena.

1 There are a number of uses of simulation, including description of complicated systems, such as telecommunication networks, and purely exploratory efforts, such as observing what emerges from simulating complex relationships. In contrast, our focus is specifically on using simulation for theory development when some simple theory exists. Its application ranges from incremental extensions of that theory to exploratory work that may rely on only a rough understanding of theoretical mechanisms. We appreciate the comments of an anonymous reviewer in clarifying this scope condition of our efforts and in suggesting language.

TABLE 1
Roadmap for Developing Theory Using Simulation Methods

Step: Begin with a research question
Activities:
• Determine a theoretically intriguing research question
• Look for a basic tension like structure versus chaos, long versus short run
Rationale: Focuses efforts on a theoretically relevant issue for which simulation is especially effective

Step: Identify simple theory
Activities:
• Select simple theory that addresses the research question
• Look for intertwined processes (e.g., competition and legitimation), nonlinearities, and longitudinal effects
• Look for theory that requires data that are challenging to obtain
Rationale:
• Forms basis of computational representation by giving shape to theoretical logic, propositions, constructs, and assumptions
• Focuses efforts on theoretical development for which simulation is especially effective

Step: Choose a simulation approach
Activities:
• Choose simulation approach that fits with research question, assumptions, and theoretical logic
• If the research does not fit an approach or if the approach requires extensive modification, choose stochastic processes
Rationale: Ensures that the research uses an appropriate simulation approach given the research at hand

Step: Create computational representation
Activities:
• Operationalize theoretical constructs
• Build computational algorithm that mirrors theoretical logic
• Specify assumptions
• Ensure that computational representation allows theoretically valuable experimentation
Rationale:
• Embodies theory in software
• Provides construct validity
• Improves internal validity by requiring precise constructs, logic, and assumptions
• Sets the stage for theoretical contributions

Step: Verify computational representation
Activities:
• Replicate propositions of simple theory with simulation results
• Conduct robustness checks of computational representation
• If verification fails, correct theory and/or software coding
Rationale:
• Confirms accuracy and robustness of computational representation
• Confirms internal validity of the theory

Step: Experiment to build novel theory
Activities: Create experimental design (e.g., vary construct values, unpack constructs, alter assumptions, add new features) based on likely theoretical contribution and realism
Rationale:
• Focuses experimentation on theory development
• Builds new theory through exploration, elaboration, and extension of simple theory

Step: Validate with empirical data
Activities: Compare simulation results with empirical data
Rationale: Strengthens external validity of the theory

Simulation is particularly suited to the development of simple theory because of its strengths in enhancing theoretical precision and related internal validity and in enabling theoretical elaboration and exploration through computational experimentation. In particular, simulation relies on some theoretical understanding of the focal phenomena in order to construct a computational representation. Yet simulation also depends on an incomplete theoretical understanding such that fresh theoretical insights are possible from the precision that simulation enforces and the experimentation that simulation enables.

In contrast, while it is possible to simulate well-developed theories, such as institutional and transaction cost theories, they offer fewer opportunities for new theoretical insights using the logical precision and experimentation of simulation. These theories are already well elaborated. Thus, there is likely to be less payoff from simulation because it is likely just to replicate known theoretical ideas.

At the other end of the theory development spectrum are “clean-slate” theoretical situations. Here there is not much to simulate, and other methods such as multiple case induction and formal modeling are typically better methodological choices. In contrast with these extremes, simple theory has enough theoretical development to construct a computational representation, but also sufficient room to improve internal validity and to develop novel theoretical insights. In addition, simulation is especially useful for theory development when the focal phenomena involve multiple and interacting processes, time delays, or other nonlinear effects such as feedback loops and thresholds. In these situations, simulation is likely to reveal nonintuitive elaborations of simple theory that are difficult to uncover using other methods, including armchair thought processes.

Overall, a central value of simulation for theory development lies in the exploration, elaboration, and extension of simple theories. Theory development begins with one or several theoretical ideas (what we term simple theory) that are then represented in computer software, verified, and explored through computational experimentation. These experiments can range from incremental (e.g., adding a moderating relationship) to elaborate, involving numerous, wide-ranging experiments that push the theory well beyond its immediate application—for example, examining alternative theoretical logics (Lee, Mitchell, & Sablynski, 1999).2 Several exemplar studies using simulation in theory development are listed in Table 2.

2 We appreciate the comments of one of our anonymous reviewers in assisting us in highlighting this continuum and language.

ROADMAP FOR DEVELOPING THEORIES WITH SIMULATIONS

Begin with an Intriguing Research Question and Simple Theory

Like all good research, studies that develop theory through simulation should begin with an intriguing research question that reflects deep understanding of the extant literature and relates to a substantial theoretical issue (Weick, 1989). Without such a question, simulation research simply becomes a “fishing expedition” in which the researcher lacks focus and theoretical relevance and risks becoming overwhelmed by computational complexity.

Research questions can originate from many sources. Sometimes they come from conundrums within basic science. March (1991), for example, relied on complexity theory from the biological and computer sciences to conceptualize a research question that examined the trade-off between the exploration of new possibilities and the exploitation of old certainties. Sometimes research questions are motivated by intriguing observations from inductive case study research. For example, Rudolph and Repenning (2002) used the counterintuitive observations from a case study describing the 1977 Tenerife airport disaster (Weick, 1993) as the inspiration for their research question asking how small problems can become major catastrophes. Sometimes research questions emerge from combining process theories, such as when Gavetti and Levinthal (2000) asked how cognition influences experiential learning. Research questions may also come from the classic tradition extending formal analytic models (Adner, 2002).

While simulation shares an emphasis on intriguing and theoretically relevant research questions with other methods, simulation is particularly suited to the theoretical development of simple theory. As described earlier, simple theory is undeveloped theory that involves a few constructs and related propositions with some empirical or analytic grounding but that is limited by weak conceptualization, few propositions, and/or rough underlying theoretical logic.

TABLE 2
Recent Examples of Simulation Research

Study: Rudolph & Repenning (2002)
Research question: When do small interruptions create major catastrophes?
Key processes: Adaptation and selection
Approach: System dynamics
Representative findings: Variance in the rate of interruptions affects emergence of tipping points

Study: Sastry (1997)
Research question: How do organizations undergo fundamental change?
Key processes: Change and inertia
Approach: System dynamics
Representative findings: An additional negative feedback loop corrects theoretical logic of punctuated equilibrium theory

Study: Rivkin (2000)
Research question: What is the optimal strategic complexity?
Key processes: Replication and imitation
Approach: NK fitness landscape
Representative findings: Moderate strategic complexity is optimal

Study: Gavetti & Levinthal (2000)
Research question: How does cognition improve experiential learning?
Key processes: Experiential learning and cognition
Approach: NK fitness landscape
Representative findings: Cognition is most useful in improving experiential learning at moderate levels of K interactions

Study: Zott (2002)
Research question: How does adaptive learning occur within bargaining?
Key processes: Adaptive learning
Approach: Genetic algorithm
Representative findings: Adaptive failures may occur even with complete information

Study: Bruderer & Singh (1996)
Research question: How does organizational learning affect the evolution of a population of organizations?
Key processes: Variation, adaptation, and selection
Approach: Genetic algorithm
Representative findings: Environment influences when organizational learning is most useful for the population’s convergence to an optimal form

Study: Lomi & Larsen (1996)
Research question: How do competition and legitimation affect density dependence?
Key processes: Competition and legitimation
Approach: Cellular automata
Representative findings: Neighborhood size moderates the relationship of density with founding and failure rates

Study: March (1991)
Research question: What is the relationship between exploration and exploitation?
Key processes: Exploitation and exploration
Approach: Stochastic processes
Representative findings: Exploitation processes are effective in the short run but destructive in the long run

Study: Davis, Eisenhardt, & Bingham (2007)
Research question: What is the optimal degree of structure?
Key processes: Improvisation
Approach: Stochastic processes
Representative findings: Unpredictability (not ambiguity, complexity, or velocity) is the driver of the tension between structure and chaos


Simple theory may also include concepts and basic processes from well-known theories (e.g., competition, imitation), especially when the research focus is on their vaguely (if at all) understood interactions. Propositions may be formally stated (Davis et al., 2007; Rivkin, 2001) or implicit (Rudolph & Repenning, 2002). The fundamental idea is that theory development using simulation should begin with a simple theory, rather than either an extensive theoretical base or a clean theoretical slate. This simple theory should address a theoretically intriguing research question that focuses on a fundamental phenomenon. Such theory can then be a platform from which powerful theory can be developed through the verification and experimentation enabled by simulation (Lave & March, 1975; Stinchcombe, 1968).

Rivkin (2001), for example, focused on a research question that asked, “What is the optimal level of strategic complexity?” He relied on a single proposition linking strategic complexity with performance that was based on case studies and prior theoretical work. This proposition stated that a moderate level of strategic complexity leads to the highest performance. Similarly, Rudolph and Repenning (2002) used previous case study research as the basis of a simple theory describing how minor events could create major catastrophes. The theory consisted of several propositions that linked quantity of interruptions, stress, and performance.

Simulation is particularly effective when the simple theory involves several basic processes, such as competition and legitimation (Lomi & Larsen, 1996) or imitation and experimentation (Zott, 2003), with only vaguely (if at all) understood longitudinal interactions. These interactions are often difficult to study with traditional statistical methods or to anticipate with thought processes. In contrast, these processes usually can be computationally represented, verified, and then explored (separately and in interaction) using simulation. For example, Sastry (1997) investigated the longitudinal interaction between inertial and change processes. Her results included unexpected insights about time-pacing and dynamic markets that were not anticipated in previous theory and in previous empirical evidence (Tushman & Romanelli, 1985).

Simulation is also particularly effective for theory development when the research question involves a fundamental tension or trade-off. The tension may be temporal, such as short- versus long-run implications (March, 1991; Sterman, Repenning, & Kofman, 1997); structural, such as too much structure versus too little (Davis et al., 2007; Rudolph & Repenning, 2002); or spatial, such as near versus far away (Lomi & Larsen, 1996; Schelling, 1971). These tensions often result in nonlinear relationships, such as tipping point transitions and steep thresholds. These and other nonlinear relationships are difficult to discover using inductive case methods and difficult to explore with traditional statistical techniques. Yet they often offer surprising, nonintuitive results.
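As a hedged illustration of such a nonlinear relationship, the few lines below sketch a hypothetical model (not a reproduction of any cited study) that produces a steep threshold: a backlog of interruptions stays bounded below a critical arrival rate and explodes above it, because resolution capacity erodes as the backlog accumulates:

```python
def pending_interruptions(arrival_rate, n_periods=200):
    """Hypothetical tipping-point model: interruptions arrive at a constant
    rate, and the capacity to resolve them falls as the backlog grows,
    creating a self-reinforcing loop. All values are illustrative."""
    pending = 0.0
    for _ in range(n_periods):
        resolution = 2.0 / (1.0 + 0.5 * pending)  # capacity eroded by backlog
        pending = max(0.0, pending + arrival_rate - resolution)
    return pending

# Below the threshold (arrival_rate < 2.0) the backlog stays at zero;
# slightly above it, the backlog explodes rather than growing smoothly.
stable, collapsed = pending_interruptions(1.5), pending_interruptions(2.5)
```

The abrupt shift between the two regimes, rather than a smooth dose-response curve, is exactly the kind of result that is easy to see in simulation output but hard to anticipate from verbal theory alone.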

Choose a Simulation Approach

Assuming that simulation is the best way toproceed (e.g., intriguing research question andsimple theory), the next step is to select a simu-lation approach. This choice should depend onthe fit of the research question, assumptions,and the theoretical logic of the simple theorywith those of the simulation approach. Thischoice is crucial because the simulation ap-proach can impose a theoretical logic, type ofresearch question, and related assumptions.Much as the choice of a statistical technique(e.g., OLS regression versus event history) canshape theory-testing empirical research, thechoice of simulation approach shapes the sub-sequent results. In fact, the choice of simulationapproach may be closer to choosing a theoreti-cal framework (e.g., resource dependence versustransaction cost economics) because of its fram-ing of research questions, key assumptions, andtheoretical logic.

Several well-known simulation approaches have been used for theory development in the organizations and strategy literature.3 Some are common, such as system dynamics (Repenning, 2002; Rudolph & Repenning, 2002) and NK fitness landscapes (Levinthal, 1997; Rivkin & Siggelkow, 2003). Others are less frequently used, such as genetic algorithms (Bruderer & Singh, 1996) and cellular automata (Lomi & Larsen, 1996). Each of these is a structured approach that constrains the theoretical logic, assumptions, and research questions that can be explored. In contrast, the stochastic process approach (Davis et al., 2007; March, 1991; Zott, 2003) is a customized approach that is useful when the structured approaches do not fit with the research at hand (see Table 3 for a comparison of these approaches).

3 Other researchers (e.g., Dooley, 2002) have used simulation typologies. We chose our typology because it is a fine-grained mapping that accurately relates to the major approaches that are used in the organizations and strategy literature. That is, it reflects the most frequently used categorization in the relevant extant research. Given the scope of our paper, our description of these approaches is necessarily limited. Interested readers should turn to technical treatments if they wish to use these approaches.

System dynamics. The system dynamics approach focuses on how causal relationships among constructs can influence the behavior of a system (Forrester, 1961; Sastry, 1997; Sterman et al., 1997). The approach typically models a system (e.g., organization) as a series of simple processes with circular causality (e.g., variable A influences variable B, which influences variable A). These processes have some common constructs and so intersect in a set of circular causal loops. These causal loops can be positive such that feedback is self-reinforcing and amplifying, or negative such that feedback is dampening (Sterman, 2000). While each process may be well understood, their interactions are often difficult to predict. The system typically includes stocks, acting as buffers (i.e., constructs with values that accumulate and dissipate over time, and so introduce time delays), and flows (i.e., constructs specifying temporal rates in the system).
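The ingredients just described can be sketched in a few lines of Python: one stock, an inflow driven by a reinforcing loop, and an outflow driven by a balancing loop. This is a generic illustration with assumed parameter values, not a model taken from the cited papers:

```python
def simulate_stock(n_periods=50, dt=1.0):
    """Minimal stock-and-flow sketch. The inflow is a reinforcing
    (positive) loop: growth proportional to the stock. The outflow is a
    balancing (negative) loop that strengthens as the stock accumulates."""
    stock = 10.0
    history = [stock]
    for _ in range(n_periods):
        inflow = 0.10 * stock           # reinforcing loop
        outflow = 0.002 * stock ** 2    # balancing loop
        stock += (inflow - outflow) * dt
        history.append(stock)
    return history

trajectory = simulate_stock()
# Neither loop alone predicts the outcome: their interaction yields
# S-shaped growth that levels off near 0.10 / 0.002 = 50.
```

Even in this toy system, the trajectory is a joint product of the two intersecting loops, which is why each loop can be well understood while the system's behavior remains difficult to predict.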

Rudolph and Repenning (2002), for example, used system dynamics to examine why minor interruptions sometimes trigger sudden catastrophes within organizations. The authors used intersecting causal loops to model two different processes (one positive loop, one negative loop) by which stress, quantity of interruptions, and performance were causally related (Yerkes &

TABLE 3
Comparison of Simulation Approaches

System dynamics (Sastry, 1997; Sterman, Repenning, & Kofman, 1997; Repenning, 2002; Rudolph & Repenning, 2002)
Focus: Behavior of a system with complex causality and timing
Common research question(s): What conditions create system instability?
Key assumptions:
● System of intersecting, circular causal loops
● Stocks that accumulate and dissipate over time
● Flows that specify rates within system
Theoretical logic: Description. Inputs to a system of interconnected causal loops, stocks, and flows produce system outcomes
Common experiments:
● Add causal loops
● Change mean of flow rates
● Change variance of flow rates

NK fitness landscapes (Levinthal, 1997; Gavetti & Levinthal, 2000; Rivkin, 2000; Rivkin & Siggelkow, 2003)
Focus: Speed and effectiveness of adaptation of modular systems with tight versus loose coupling to an optimal point
Common research question(s):
● How long does it take to find an optimal point (e.g., high-performing strategy)?
● What is the performance of the optimal point?
Key assumptions:
● System of N nodes, K coupling between nodes
● Fitness landscape that maps performance of all combinations
● Adaptation via incremental moves and long jumps
Theoretical logic: Optimization. Adaptation of a modular system using search strategies (i.e., long jumps, incremental moves) to find an optimal point on a fitness landscape
Common experiments:
● Vary N and K
● Change adaptation moves
● Add a “map” of the landscape
● Create an environmental jolt

Genetic algorithms (Bruderer & Singh, 1996; Zott, 2002)
Focus: Adaptation of a population of agents (e.g., organizations) via simple learning to an optimal agent form
Common research question(s):
● What affects the rate of adaptation (or learning or change)?
● Does an optimal form emerge, and when?
Key assumptions:
● Population of agents with genes
● Evolutionary adaptation (variation-selection-retention)
● Variation via mutation (mistakes) and crossover (recombinations)
● Selection via fitness (performance)
● Retention via copying selected agents
Theoretical logic: Optimization. Adaptation of a population of agents using an evolutionary process toward an optimal agent form
Common experiments:
● Vary mutation probability
● Vary crossover probability
● Vary length of time of evolution
● Create an environmental jolt

Cellular automata (Lomi & Larsen, 1996)
Focus: Emergence of macro patterns from micro interactions via spatial processes (e.g., competition, diffusion) in a population of agents
Common research question(s):
● How does the pattern emerge and change?
● How fast does a pattern emerge?
Key assumptions:
● Population of spatially arrayed and semi-intelligent agents
● Agents use rules (local and global) for interaction, some based on spatial processes
● Neighborhood of agents where local rules apply
Theoretical logic: Description. Interactions among agents following rules produce macrolevel patterns
Common experiments:
● Change the rules
● Change the neighborhood size

Stochastic processes (March, 1991; Carroll & Harrison, 1998; Zott, 2003; Davis, Eisenhardt, & Bingham, 2007)
Focus: Flexible approach to a wide variety of research questions, assumptions, and theoretical logics
Common research question(s): No specific research questions beyond asking what the effects of varying the stochastic sources are
Key assumptions:
● One or more processes by which system operates
● One or more stochastic sources (e.g., process elements)
● Probabilistic distributions for each stochastic source
Theoretical logic: No specific theoretical logic
Common experiments:
● Change stochastic sources
● Vary levels of stochasticity
● Unpack constructs
● Change pieces of theoretical logic

486 April Academy of Management Review


Dodson, 1908). The number of accumulated interruptions was included as a stock, and the rate of interruptions and the rate of their resolution were flows. By varying these rates, the authors were able to develop theory about when the system would be stable, when it would hit tipping points, and whether (and, if so, how fast) those tipping points would lead to catastrophe.

System dynamics is particularly applicable for understanding the behavior of systems with complex causality and timing. Research questions are often framed in terms of how specific initial conditions affect the stability of the system. That is, researchers are usually interested in finding the initial conditions that lead to abrupt, nonlinear changes, such as tipping points, catastrophes, and the emergence of vicious or virtuous cycles. Researchers often begin with two causal loops and then add successive ones in order to build intuition, understanding, and realism in a structured way. Although the approach can accommodate some stochasticity, it relies on deterministic differential equations and so is not as useful when there are many or complicated sources of stochasticity.
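To make the stock-and-flow logic concrete, the sketch below is a minimal, hypothetical model in the spirit of (but far simpler than) the interruption models discussed above: one stock of unresolved interruptions, a constant arrival flow, and a resolution flow whose assumed functional form weakens as the stock grows. The function names, parameter values, and the resolution-rate formula are all invented for illustration.

```python
def simulate(arrival_rate, steps=200, dt=1.0, capacity=10.0):
    """Euler-integrate one stock (unresolved interruptions) with two flows."""
    stock = 0.0
    history = []
    for _ in range(steps):
        # Resolution capacity erodes as unresolved interruptions pile up:
        # a hypothetical stand-in for the vicious loop that can tip the system.
        resolution_rate = capacity / (1.0 + stock)
        inflow = arrival_rate
        outflow = min(stock / dt, resolution_rate)  # cannot resolve more than exists
        stock += (inflow - outflow) * dt
        history.append(stock)
    return history

low = simulate(arrival_rate=2.0)   # settles at a stable level
high = simulate(arrival_rate=4.0)  # tips into runaway accumulation
```

Varying `arrival_rate` reproduces the qualitative behaviors of interest: below the tipping point the stock settles to a stable level, while above it resolution can no longer keep pace and the stock grows without bound.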

NK fitness landscapes. The NK approach was developed in evolutionary biology to study genetic systems (Kauffman, 1993). This approach focuses on how rapidly and effectively a modular system adapts to reach an optimal point, especially when interactions among the system components are crucial. Specifically, the system (e.g., organization, product, strategy) is conceptualized as a set of N nodes and K interactions among the nodes. The system is assumed to use adaptation (sometimes termed search) strategies, such as incremental moves and long jumps, to find the optimal point (e.g., best organization, best product). To illustrate, Rivkin (2001) focused on strategy. He defined N as the elements of a strategy (e.g., manufacturing and advertising choices) and K as the degree of interaction among the elements. The optimal point was the highest-performing strategy.

A key concept is the fitness landscape (Wright, 1931), which is created by assigning performance values (termed fitness) to every combination of values. The shape of the landscape depends on the interaction (K) among the nodes (Kauffman, 1989). When there is little interaction (low K), there is one optimal combination (or perhaps a few). The corresponding fitness landscape is “smooth,” with only one or a few hills, such that it is easy to find the optimal point. As interaction (K) increases, more combinations become locally optimal. The corresponding fitness landscape is “rugged,” with many hills of varying heights that correspond to varying performance levels. This landscape type is hard to traverse to find the optimal point (Kauffman, 1989). The overall theoretical logic of the NK approach emphasizes the adaptation of a modular system using specific search strategies to find an optimal point on a fitness landscape.

For example, Gavetti and Levinthal (2000) examined how experiential learning (alone and with cognition) affected the time needed to find an optimal policy for an organization. They represented organizational policy as N policy elements and K interactions among them. They then compared the time to find an optimal policy (and its performance) using only experiential learning (i.e., incremental moves) with using both experiential learning and cognition (i.e., a basic “map” of the landscape that enabled long jumps) under varying levels of coupling (K) among elements.

The NK approach is particularly applicable for understanding how the speed and effectiveness of adaptation to an optimum within a modular system are influenced by tight versus loose coupling among the system’s components. Research questions are often framed as “problem solving” or “searching” to find an optimal point. Researchers often ask how long it takes to find an optimal point (e.g., high-performing strategy, high-performing product) or what the effectiveness or performance of the optimal point is (e.g., local versus global optimum). They are typically interested in how the number of nodes (e.g., number of product features), the interaction among nodes (e.g., tight versus loose coupling), types of adaptation (i.e., long jumps versus incremental moves), or the use of landscape “maps” (e.g., cognition, science) influences the time to find an optimal solution and/or the performance of that solution. Although the approach can be modified to accommodate some environmental dynamism (e.g., environmental jolts) or node intelligence, it is less useful when the effects of market dynamism, the intelligence and/or uniqueness of nodes in a modular system (e.g., differences among business units in an organization), or varied types of interaction (e.g., strong versus weak ties) are a central interest.
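A minimal sketch of this logic appears below. It assumes one common convention for the K interactions (each node's fitness contribution depends on itself and the next K nodes, wrapping around); it illustrates Kauffman's general scheme rather than replicating any of the cited studies, and all parameter values are arbitrary.

```python
import itertools
import random

def make_landscape(n, k, rng):
    """Each node's fitness contribution depends on its own state plus the
    states of its K neighbors (here, simply the next K nodes, wrapping)."""
    tables = [{} for _ in range(n)]
    def contribution(i, config):
        key = tuple(config[(i + j) % n] for j in range(k + 1))
        if key not in tables[i]:
            tables[i][key] = rng.random()  # random contribution, memoized
        return tables[i][key]
    def fitness(config):
        return sum(contribution(i, config) for i in range(n)) / n
    return fitness

def incremental_search(fitness, n, steps, rng):
    """Adaptation by incremental moves: flip one node, keep it if no worse."""
    config = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        trial = config.copy()
        trial[rng.randrange(n)] ^= 1
        if fitness(trial) >= fitness(config):
            config = trial
    return fitness(config)

rng = random.Random(0)
fitness = make_landscape(n=8, k=2, rng=rng)
found = incremental_search(fitness, n=8, steps=500, rng=rng)
# The global optimum, by exhaustive enumeration (feasible only for small N).
best = max(fitness(list(c)) for c in itertools.product([0, 1], repeat=8))
```

Raising `k` makes the landscape more rugged, so incremental search is more likely to stall on a local peak below `best`; a long jump would simply redraw `config` at random.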

2007 487 Davis, Eisenhardt, and Bingham


Genetic algorithms. Like the NK approach, genetic algorithms are an optimization approach with roots in biology. But unlike the NK approach, genetic algorithms focus on how rapidly and effectively a population of heterogeneous agents composed of genes (e.g., population of organizations composed of routines, population of consumers with preferences) adaptively learns (Goldberg, 1989; Holland, 1975).4 Adaptation occurs through a stochastic evolutionary process (i.e., variation-selection-retention) that favors gradual improvement through accumulated experience (Aldrich, 1999; Holland, 1975). Variations occur in two ways: mutation (i.e., random change of one or a few genes that corresponds to mistakes) and crossover (i.e., random switching of sets of genes among agents that corresponds to recombination of existing genes). Selection of agents (also termed genes) occurs according to their performance (also termed fitness) with respect to some metric. Retention (also termed reproduction) is the copying of the selected agents from one generation to the next. Over time, successful variations are more likely to be retained and form the basis of future variations (Andreoni & Miller, 1995; Arifovic et al., 1997). Eventually, only high-performing agents remain in the population, and, ultimately, often only a single agent form survives (e.g., dominant design). The theoretical logic emphasizes the evolutionary adaptation of a population toward an optimal form that is path dependent and combinatorial.

For example, Bruderer and Singh (1996) used a genetic algorithm to examine organizational evolution within a population of organizations. In their model, each of the 250 organizations in the population was composed of 20 routines that could change as a result of both mutation and crossover variation processes. The authors were particularly interested in the effect of learning on organizational evolution. They found that learning accelerated the discovery of an effective organizational form. Genetic algorithms can also be used to examine the evolution of specific types of strategies within an agent. For example, Zott (2002) used a genetic algorithm approach to model the bargaining behavior of a negotiator with private information. The genes were the negotiator’s bargaining rules that determined when to offer a contract and when to accept or reject another’s offer. Among other insights, the study clarified the effects of complete and incomplete information in producing inefficient (i.e., nonoptimal) outcomes, such as delays and failure to agree.

Genetic algorithms are particularly applicable for describing how heterogeneous agents (e.g., organizations, consumers) learn improved solutions (e.g., better organizational form, better strategy) through experimentation. As such, this approach is consistent with some important aspects of human learning, such as experience-based action, importance of recent experience, creative synthesis of ideas, and occurrence of mistakes (Zott, 2002). Research questions are typically framed in terms of what affects the rate of adaptation (or sometimes learning or change) and whether a dominant form emerges. Researchers often experiment with rates and processes of mutation and crossover, as well as performance metrics. Although the approach can be modified, it is less useful when the value of the performance metrics varies (e.g., as in dynamic markets), the agents can engage in sophisticated cognitive processes that are not well captured by experiential learning (e.g., foresight), or crossover is unlikely (e.g., technical standards limit reuse of product components across products in a population, internal labor markets limit mobility across organizations in a population).
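The variation-selection-retention loop described above can be sketched as follows. This is a generic illustration with arbitrary parameters, not the actual model of Bruderer and Singh or Zott; for simplicity, fitness is assumed to be the fraction of 1-valued genes.

```python
import random

def evolve(pop_size=30, genes=20, generations=40,
           p_mut=0.01, p_cross=0.6, seed=0):
    """Minimal variation-selection-retention loop over bit-string agents."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    fitness = lambda agent: sum(agent) / genes  # assumed fitness metric

    for _ in range(generations):
        # Selection + retention: copy agents with probability
        # proportional to fitness into the next generation.
        pop = rng.choices(pop, weights=[fitness(a) + 1e-9 for a in pop],
                          k=pop_size)
        pop = [a.copy() for a in pop]
        # Variation via crossover: swap gene tails between paired agents...
        for i in range(0, pop_size - 1, 2):
            if rng.random() < p_cross:
                cut = rng.randrange(1, genes)
                pop[i][cut:], pop[i + 1][cut:] = pop[i + 1][cut:], pop[i][cut:]
        # ...and via mutation: random gene flips (mistakes).
        for agent in pop:
            for j in range(genes):
                if rng.random() < p_mut:
                    agent[j] ^= 1
    return max(fitness(a) for a in pop)

best = evolve()
```

The experiments listed in Table 3 correspond to varying `p_mut`, `p_cross`, and `generations`, or to redefining `fitness` midway through the run (an environmental jolt).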

Cellular automata. Although not widely used in the organizations and strategy literature, cellular automata have gained traction in the physical sciences as an approach that focuses on the emergence of macrolevel system patterns from microlevel interactions among spatially related and semi-intelligent agents (Wolfram, 2002). Cellular automata assume a system of agents that are spatially related (e.g., competitors in an industry, cities in a country). Spatial relatedness implies that the degree to which agents influence each other depends on the distance between them. The

4 Although there has been only limited use of genetic algorithms in the organizations and strategy literature we are focusing on, there is an emerging tradition in the economics literature. Applications include economic growth (e.g., Arifovic, Bullard, & Duffy, 1997), auctions (e.g., Andreoni & Miller, 1995), and forecasting (e.g., Bullard & Duffy, 2001). This work is often framed in juxtaposition with highly rational models of agent behavior, such as game theoretic formulations. Readers with particular interest in genetic algorithms should consult these sources, as well as more general sources (e.g., Goldberg, 1989; Holland, 1975).



agents behave according to a few simple rules (i.e., semi-intelligent agents), some of which relate to how the agents influence each other (Langton, 1984). Typically, the rules relate to spatial processes such that nearby agents are influenced more than distant ones—for example, diffusion, propagation, and competition processes (Schelling, 1971). Although not required, the rules are usually uniform (i.e., all agents have the same rules) and deterministic (i.e., all rules are fixed). An important concept is the neighborhood, which defines which agents are local (and so neighbors) and which are not. At least some rules designate behaviors taken in interaction with neighbors, but not with more distant agents.

For example, Lomi and Larsen (1996) used cellular automata to study density-dependence theory and the tension between competition and legitimation processes (Hannan & Freeman, 1989). The organizations were spatially arrayed relative to one another in a two-dimensional grid. The organizations possessed rules for competition (affecting only those in the neighborhood) and legitimation (affecting all organizations). The authors then observed how the competitive and legitimating interactions among organizations (i.e., microlevel interactions) affected population density, founding rates, and failure rates (i.e., macrolevel patterns) over time.

Cellular automata are particularly useful for examining how macrolevel patterns emerge from spatial processes (e.g., diffusion, competition, propagation, and segregation) that operate at a micro level. In these processes, the behaviors of agents affect each other, but their influence diminishes with distance. Research questions usually ask how macrolevel patterns emerge and change. Researchers typically vary the size of the neighborhood, the relevant processes (e.g., diffusion, segregation) and their related rules, and the spatial array of agents (e.g., density of agents) in order to describe the emergence and change of macrolevel phenomena from microlevel interactions among agents. While cellular automata can be modified, the approach is less useful when intelligence resides at the system level (e.g., CEO in an organization of business units) or the interactions among agents are not spatially dependent.
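The neighborhood logic can be illustrated with a deliberately simple diffusion rule (an agent adopts once any agent in its neighborhood has adopted). This is a toy process, not Lomi and Larsen's density-dependence model; the `radius` parameter is the knob a researcher would turn to change the neighborhood size.

```python
def step(grid, radius=1):
    """One synchronous update of a diffusion-style cellular automaton:
    an agent adopts (state 1) if anyone in its neighborhood has adopted."""
    size = len(grid)
    new = [row.copy() for row in grid]
    for i in range(size):
        for j in range(size):
            neighborhood = [
                grid[(i + di) % size][(j + dj) % size]  # wrapping edges
                for di in range(-radius, radius + 1)
                for dj in range(-radius, radius + 1)
            ]
            if any(neighborhood):
                new[i][j] = 1
    return new

# Seed a single adopter at the center of a 9x9 grid of nonadopters.
size = 9
grid = [[0] * size for _ in range(size)]
grid[4][4] = 1
for t in range(4):
    grid = step(grid)
adopters = sum(map(sum, grid))
```

With `radius=1`, adoption spreads one cell per step, so after four steps the macrolevel pattern (full adoption of the 9x9 grid) has emerged from purely local interactions; a larger `radius` makes the same pattern emerge faster.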

Stochastic processes. Stochastic processes are a flexible approach that enables researchers to custom design their simulations (Gallager, 1996).5 The approach makes no particular assumptions about the system, research question, or theoretical logic. It is especially useful for exploring theories that do not fit with the theoretical logic and assumptions of the structured simulation approaches. For example, the environment may be dynamic (Davis et al., 2007; March, 1991), or the theoretical logic may involve temporal transmission processes (Carroll & Harrison, 1998). Researchers typically piece together distinct processes that mirror the theoretical logic. They also build in several sources of stochasticity (e.g., environmental input, timing, elements of the process) and endow them with stochastic distributions that can be simple (e.g., 50/50 draw) or complex (e.g., normal and Poisson distributions). A key point is that while the simulation is custom designed and may have some original processes or constructs, researchers often use existing building blocks, such as known processes—for instance, Markov chains—and familiar sources of stochasticity and their related distributions—for instance, Poisson distribution for input arrival times (Law & Kelton, 1991).

An example is research by Carroll and Harrison (1998). Building on their previous work (Harrison & Carroll, 1991), these authors developed several underlying theoretical logics for the simple, well-known proposition positing that heterogeneity of tenure is related to heterogeneity of culture. These logics related to temporal transmission of culture and so did not fit with the theoretical logics of structured simulation approaches, such as system dynamics (circular causality) and NK (search for the optimal point). The authors custom designed their simulation by combining several processes, including turnover and socialization, and designating sources of stochasticity, such as the hiring and firing processes, and their related distributions.

Stochastic processes are particularly applicable when the research question, assumptions, or theoretical logic does not fit with a structured approach. Although it is sometimes possible to

5 By stochastic processes, we mean a broad class of simulations that are unified by the use of custom-designed algorithms or combinations of algorithms. By contrast, although the structured simulation approaches may also often involve stochasticity, they have a much more specific structure.



modify a structured approach to fit the research at hand, stochastic processes are the preferred approach when these modifications are extensive. Extensive modifications can become unwieldy, yielding a poor computational representation. When using stochastic processes, researchers typically ask how alternative theoretical logics, different assumptions, or varying sources of stochasticity affect system outcomes. They often hold some sources of stochasticity constant while varying others. The result is a flexible approach to theory development, albeit at the price of a lack of standardization and an increased need for modeling ingenuity.
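As an illustration of piecing together stochastic sources, the hypothetical sketch below combines two independent sources (an arrival process, approximated here by repeated Bernoulli draws, and normally distributed payoff noise), each of which can be varied while the other is held constant. All names, distributions, and parameters are invented for illustration.

```python
import random

def run(arrival_mean=3.0, noise_sd=0.0, periods=200, seed=0):
    """A custom stochastic process: opportunities arrive each period via a
    Poisson-like draw (source 1), and the payoff per captured opportunity
    is perturbed by normally distributed noise (source 2)."""
    rng = random.Random(seed)
    performance = 0.0
    for _ in range(periods):
        # Source 1: how many opportunities arrive this period
        # (binomial approximation of a Poisson arrival process).
        arrivals = sum(1 for _ in range(10) if rng.random() < arrival_mean / 10)
        # Source 2: noisy payoff per opportunity.
        performance += sum(1.0 + rng.gauss(0, noise_sd) for _ in range(arrivals))
    return performance

# Hold the payoff noise constant while varying the arrival source.
baseline = run()
more_arrivals = run(arrival_mean=6.0)
```

Swapping in other building blocks (e.g., a Markov chain for state transitions, or an exponential distribution for interarrival times) changes one source of stochasticity without disturbing the rest of the custom design.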

Create the Computational Representation

In the previous sections we described the importance of a theoretically intriguing research question, a simple theory that is likely to be accurate but incomplete, and an appropriate simulation approach (i.e., one that fits with the research question, assumptions, and theoretical logic). In this section we turn to the computational representation of the theory. This activity is central to theory development using simulation. It involves (1) operationalizing the theoretical constructs, (2) building the algorithms that mirror the theoretical logic of the focal theory, and (3) specifying assumptions that bound the theory and results. Although we describe them separately, researchers usually engage in these activities interactively because constructs, algorithms, and assumptions are highly interdependent. Creating the computational representation is roughly analogous to the activities reported in the methods section of other types of research.

Operationalizing theoretical constructs concerns defining the computational measures for each construct (and, if not already done, creating its verbal definition). This is roughly analogous to creating empirical measures for theoretical constructs in other types of research. As described below, effective operationalization involves choosing an appropriate computational measure and range of values for each construct. It also involves using construct definitions and names that fit with the extant literature where possible (i.e., inventing new theoretical constructs only when none exists) in order to build reader intuition and confidence and to enhance the clarity of the theoretical contributions (Repenning, 2003).

One type of computational measure is a single measure (termed parameter) that has a range of possible values. For example, Davis and colleagues (2007) operationalized their ambiguity construct by defining a single computational measure with values ranging from 0 (no ambiguity) to 1 (complete ambiguity). For constructs with a range of values having no natural bounds (e.g., rates ranging from 0 to infinity), it is appropriate to artificially bound the range and then test these bounds to guard against the possibility that values outside the bounds produce qualitatively different or contradictory results (Law & Kelton, 1991). If so, the range of values should be adjusted.

A second type of computational measure is a multidimensional measure (often termed a bit string). Typically, it is effective to measure the most central constructs as bit strings (not parameters) because bit strings have more precision and dimensionality. These features enable more refined and better experimentation. An example of bit string representation is strategy as a set of ten decisions (Rivkin, 2000). In this type of representation, the dimensions of a construct are measured by specific values (often 0 and 1, and occasionally ? to indicate an absent value). For example, Bruderer and Singh (1996) operationalized organizational form as a bit string of twenty routines, with 1 indicating a correct routine, 0 indicating an incorrect one, and ? indicating an absent routine that could be learned.
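A bit-string operationalization along these lines might look as follows. This is a hypothetical sketch in the spirit of the Bruderer and Singh example, with Python's `None` standing in for the "?" marker for an absent routine; the construct and helper names are invented.

```python
import random

# An organizational form as twenty routines:
# 1 = correct routine, 0 = incorrect routine, None = absent (learnable).
rng = random.Random(0)
org_form = [rng.choice([0, 1, None]) for _ in range(20)]

def fitness(form):
    """Fraction of routines that are correct; absent routines score zero."""
    return sum(1 for routine in form if routine == 1) / len(form)

score = fitness(org_form)
```

Because each dimension is addressable, experiments can flip, learn, or recombine individual routines, which is the extra precision a single parameter cannot provide.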

A strength of simulation research is construct validity—that is, accurate specification and measurement of constructs (Cook & Campbell, 1979). Simulation requires precise specification of constructs and their measures, and so avoids “noisy” measurement that affects construct validity in empirical research (Rosenthal & Rosnow, 1991). In addition, as required by its rigorous, step-by-step logic, simulation involves precise specification of units of analysis (e.g., product, strategy) and intervening constructs that are often poorly conceptualized and unmeasured in empirical research. Since simulation eliminates the measurement errors associated with empirical data, convergent and discriminant validity are not germane (Campbell & Fiske, 1959). Finally, simulation also has the advantage of quick, flexible adjustment of construct measures (and even the constructs themselves) by changing the computer code. Such adjustment is usually challenging in empirical research, particularly after the data are collected.

The computational representation also involves building algorithms in software that capture the step-by-step theoretical logic underlying the simple theory. In other words, the software code should embody the theoretical logic. Specifically, the algorithms should consist of a series of steps for modifying construct values in accordance with the underlying theoretical logic of the simple theory. As noted earlier, structured simulation approaches (e.g., NK, genetic algorithms) offer a standardized approach for encoding theoretical logic that is useful when the research fits the simulation approach (Rivkin, 2001). In contrast, an unstructured approach such as stochastic processes offers flexibility when no structured approach fits the research.

Algorithmic representations vary in complexity. Some algorithms consist of two or three steps involving a few constructs (e.g., March, 1991; Rudolph & Repenning, 2002). Others are multistep, involving several or more constructs (e.g., Carroll & Harrison, 1998; Sterman et al., 1997). Algorithmic complexity should depend on the complexity of the underlying theoretical logic being represented and should relate to the well-known theoretical trade-off between parsimony and accuracy (Pfeffer, 1982). Like other methods, simulation research attempts to balance parsimony and accuracy by capturing the central logic while stripping away the nonessential. This balance is ultimately a judgment call. But, unlike other methods, it is often effective to err on the side of simplicity in simulation research in order to enhance the intuition and confidence of readers in the simulation results (Repenning, 2003). Complexity (and, thus, greater accuracy and realism) can then be added through a series of structured experiments.

Finally, computational representation involves specifying assumptions. Some of these assumptions relate to boundary or scope conditions of the theory. But other assumptions are simplifications of the simulation itself that enable the researcher to strip out complexity in order to focus on the central logic and constructs. Thus, these assumptions are intimately related to the complexity of the algorithmic representation. For example, both Rivkin’s (2001) assumption of zero search costs and Adner’s (2002) assumption that all markets are the same size enable less complicated algorithmic representations. As above, the key judgment is the balance between parsimony and accuracy. This is, of course, a familiar judgment in other research methods, where choices about sampling in specific contexts (e.g., single industry studies) or about using the controlled environment of the laboratory, in effect, enable a less complicated theoretical logic (Fine & Elsbach, 2000). This judgment, however, is more readily changed in simulation research because of the relative ease of shifting computer code.

An important strength of theory development using simulation methods is internal validity (Campbell & Stanley, 1966). Creating a computational representation involves the precise specification of the theoretical logic that is enforced through the discipline of algorithmic representation in software (Abelson, Sussman, & Sussman, 1996). Coupled with the precise specification of constructs, measures, and assumptions that is also enforced by the software, the discipline of algorithmic representation sharpens loose theoretical arguments about the definition of constructs, relationships among constructs, and underlying logic (Carroll & Harrison, 1998; Sastry, 1997). The result is theory that is more likely to exhibit strong internal validity.

Verify the Computational Representation

A critical step in developing theory using simulation methods is verification of the computational representation. Roughly analogous to manipulation checks in laboratory experiments and examination of correlation matrices in multivariate analysis, verification helps to ensure that the computational representation accurately embodies the theoretical logic, the theory is internally valid, and the simulation results can be interpreted with confidence.

Researchers should verify their computational representation in several ways. The most important is comparing simulation results with the (implicit or explicit) propositions of the simple theory. If the simulation confirms the propositions, then the theoretical logic and its computational representation are likely to be correct.6

Rivkin (2001), for example, verified his computational representation by confirming his proposition that the advantage of a firm’s replication of its own strategy over another firm’s imitation of the same strategy was greatest at moderate levels of strategic complexity. He did so by running the simulation at various values of strategic complexity and then comparing the simulation results with his inverted U-shaped prediction. In simulations where the simple theory centers on several well-known processes, each process should be verified.

Simulation researchers should further verify their computational representation with robustness checks (sometimes termed sensitivity analysis7) that increase confidence that the computational representation is stable. For example, Zott (2003) used alternative starting values of the decision variables to confirm that his computational representation was robust to alternative initial conditions. Davis and colleagues (2007) used two alternative operationalizations of their organizational structure construct to verify that their simulation results were not dependent upon a specific representation of their central construct. Researchers should also use other techniques to verify that their software coding is correct, such as tracking variables at intermediate steps in the simulation and running the simulation with extreme values of constructs, and those techniques specific to particular simulation approaches (e.g., pulse tests in system dynamics). These tests often have no theoretical implications but, rather, are a means of verifying the accuracy of the software code.

When researchers find a mismatch between the propositions of their simple theory and the simulation results, it is often due to software coding errors that are subsequently fixed. But, occasionally, the mismatch reveals shortcomings in the theoretical logic. In these situations, researchers often have an opportunity to develop fresh theoretical insights. For example, Davis and colleagues (2007) unexpectedly observed that the anticipated inverted U-shaped relationship between the amount of organizational structure and performance was asymmetric (i.e., steep on one side and skewed on the other). This led the authors to alter their simple theory to account for this skew. While Davis and colleagues (2007) modestly improved their initial theory, Sastry (1997) made a significant improvement. In her attempt to verify her computational representation of punctuated equilibrium theory, she found shortcomings in the theoretical logic. Sastry returned to the original theory and discovered that Tushman and Romanelli (1985) had failed to account for some aspects of reorientation. This unexpected logical flaw led Sastry to reformulate the theory by adding assumptions and a negative feedback process.

Overall, the key point of verification is to ensure that the computational representation accurately represents the underlying theoretical logic. Thus, given that the researcher attempts to encode the theoretical logic in the software, the simulation results should replicate the simple theory, bolster internal validity, and thereby increase confidence in the results of the simulation.

Experiment to Build Novel Theory

Experimentation is at the heart of the value of simulation methods for developing theory.

6 Both verification and experimentation involve running the simulation, and thus making several specific decisions about those runs. One is the number of time steps. The appropriate number depends on researcher objectives (e.g., observing the process unfold, assessing the time to reach an optimal point, and examining steady-state processes), and so researchers should justify their choice in light of their objectives. A second decision is the number of runs per set of specific values of independent constructs. This is roughly analogous to sample size in hypothesis-testing research—specifically, the sample size in a cell of an ANOVA laboratory study. Here the decision depends on statistical power. More runs raise the statistical power, so the choice depends on the power necessary to perform adequate statistical analysis (i.e., creating confidence intervals for dependent variable values). The third is the number of sets, roughly analogous to the number of different conditions in a laboratory study—that is, the number of cells in an ANOVA laboratory study. However, unlike laboratory researchers, simulation researchers are able to make many runs, and so often only report those that are the most significant vis-à-vis conveying their results. Researchers should also justify these latter decisions in light of their research. We refer interested readers to texts on simulation (e.g., Law & Kelton, 1991) for more information.

7 Sensitivity analysis is a widely used term within simulation with several meanings. We use more precise language to distinguish three kinds of analysis that are often termed sensitivity analysis: verification of propositions of simple theory, robustness checks of the computational representation (roughly analogous to the meaning of robustness using empirical methods), and experimentation to create new theory.
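The run-design decisions described in footnote 6 can be sketched in code: for each set of construct values (a "cell"), make many runs of a stochastic simulation and summarize the dependent variable with a confidence interval. Everything below (the `simulate` payoff, the parameter values, the run counts) is a hypothetical illustration, not any particular published model.

```python
import random
import statistics

def simulate(structure: float, steps: int = 100, seed: int = 0) -> float:
    """Hypothetical toy simulation: performance as a noisy function of
    organizational structure, unfolding over a number of time steps."""
    rng = random.Random(seed)
    performance = 0.0
    for _ in range(steps):
        # Inverted U-shaped payoff plus noise (illustrative only).
        performance += structure * (1 - structure) + rng.gauss(0, 0.05)
    return performance / steps

def cell_summary(structure: float, runs: int = 200) -> tuple:
    """Run one 'cell' many times; return the mean with a 95% confidence interval."""
    outcomes = [simulate(structure, seed=i) for i in range(runs)]
    mean = statistics.mean(outcomes)
    half_width = 1.96 * statistics.stdev(outcomes) / runs ** 0.5
    return mean, mean - half_width, mean + half_width

# Three 'sets' (cells), each with many runs, as in an ANOVA design.
for s in (0.2, 0.5, 0.8):
    mean, lo, hi = cell_summary(s)
    print(f"structure={s}: mean={mean:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```

Raising `runs` narrows the interval, which is the statistical-power tradeoff the footnote describes.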



While verification is necessary and useful for demonstrating the accuracy of the computational representation, establishing internal validity, and enhancing reader confidence, it involves confirmation of theory that is already known. In contrast, effective experimentation builds new theory by revealing fresh theoretical relationships and novel theoretical logic. Moreover, experimentation is a particular strength of simulation methods. Empirical experimentation is constrained by data limitations, and formal modeling experimentation is constrained by mathematical tractability. In contrast, simulation methods enable experimentation across a wide range of conditions, simply by changing the software code (Bruderer & Singh, 1996; Zott, 2003).

There are several approaches to effective experimentation. A common one is varying the value of constructs that were held constant in the initial simple theory. This approach is particularly useful for uncovering moderating and interaction relationships, new main effects, and intervening constructs. Typically, researchers experiment with a wide range of values of a previously fixed construct in order to fully explore the effects of the construct on outcomes, but they then only report the most intriguing results. For example, after verifying their initial proposition that experiential learning and cognition would lead to a higher performance than experiential learning alone, Gavetti and Levinthal (2000) varied the coupling among policy elements that was previously held constant. This enabled the authors to investigate how coupling moderated the relationship between cognition and performance and to extend their simple theory with this interaction effect. Similarly, Rudolph and Repenning (2002) examined the effects of minor interruptions on the emergence of major organizational catastrophes. After verifying the anticipated tipping points, the authors experimented by varying the previously constant variance in the interruption rate. This enabled the authors to extend their original simple theory to include the interaction effects of interruption variance on performance.

A second approach to experimentation is unpacking key theoretical constructs. By unpacking, we mean breaking a single construct into constituent component constructs. Unpacking a construct is particularly useful when the original construct is imprecise, abstract, or multidimensional such that the different dimensions may have distinct effects. Experiments then focus on uncovering these possible effects. For example, after verifying their simple theory, Davis and colleagues (2007) unpacked their market dynamism construct into four granular constructs (i.e., velocity, complexity, ambiguity, and unpredictability) that captured distinct dimensions of market dynamism. They then experimented with each construct by running the simulation holding three constructs constant and varying the fourth. This enabled the authors to extend their theory to the performance implications of different kinds of market dynamism and to isolate the specific implications of each construct for their theory.
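The hold-three-vary-one design described above can be sketched as a simple parameter sweep. The four construct names follow Davis and colleagues (2007), but the `simulate` body below is a made-up toy, not their model.

```python
import random

# Hypothetical dimensions of "market dynamism"; the names follow Davis et al.
# (2007), but the payoff logic is purely illustrative.
BASELINE = {"velocity": 0.5, "complexity": 0.5,
            "ambiguity": 0.5, "unpredictability": 0.5}

def simulate(params: dict, runs: int = 50) -> float:
    """Toy performance model: each dimension of dynamism erodes performance."""
    total = 0.0
    for seed in range(runs):
        rng = random.Random(seed)
        perf = 1.0
        for value in params.values():
            perf -= 0.2 * value + rng.gauss(0, 0.01)
        total += perf
    return total / runs

def sweep(construct: str, values: list) -> dict:
    """Hold three constructs at baseline and vary the fourth."""
    results = {}
    for v in values:
        params = dict(BASELINE, **{construct: v})
        results[v] = simulate(params)
    return results

for construct in BASELINE:
    print(construct, sweep(construct, [0.0, 0.5, 1.0]))
```

The same loop structure isolates each construct's effect in turn, which is what lets the researcher attribute performance differences to one dimension at a time.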

A third approach to experimentation is varying assumptions. This approach is particularly useful when fundamentally different processes may reasonably exist. Experiments focus on revealing their possible distinct effects. For example, Rivkin (2000) verified his theory relating the speed with which executives find high-performing strategies to the coupling among elements of strategy. In doing so, he assumed an “incremental” search process in which executives change their current strategy only if altering specific decisions within that strategy leads to increased performance. He then experimented with two alternative search processes: “follow-the-leader” and “hybrid.” These experiments enabled him to elaborate his original theory to include the implications of alternative search processes.
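Rivkin's "incremental" search assumption lends itself to a minimal sketch: represent a strategy as a bit string and accept a change to one decision only when it raises payoff. The random payoff table below stands in for Rivkin's actual NK landscape, so the sketch illustrates only the search rule, not his model.

```python
import random

rng = random.Random(42)
N = 8  # number of decisions in a strategy

payoff_table = {}

def payoff(strategy: tuple) -> float:
    """Assign each full strategy a random payoff, drawn lazily and cached.
    (Rivkin's model instead uses an NK landscape with tunable coupling.)"""
    if strategy not in payoff_table:
        payoff_table[strategy] = rng.random()
    return payoff_table[strategy]

def incremental_search(strategy: tuple, max_sweeps: int = 1000) -> tuple:
    """Incremental search: alter one decision at a time, accepting a change
    only if it increases performance."""
    for _ in range(max_sweeps):
        improved = False
        for i in range(N):
            candidate = strategy[:i] + (1 - strategy[i],) + strategy[i + 1:]
            if payoff(candidate) > payoff(strategy):
                strategy = candidate
                improved = True
        if not improved:
            return strategy  # local peak: no single-decision change helps
    return strategy

start = tuple(rng.randint(0, 1) for _ in range(N))
end = incremental_search(start)
print(f"start payoff={payoff(start):.3f}, end payoff={payoff(end):.3f}")
```

Experimenting with an alternative assumption, in this framing, means replacing `incremental_search` with a different rule (e.g., copying a leader's strategy) while leaving the landscape fixed.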

A fourth approach to experimentation is adding new features to the computational representation. By adding this complexity in successive computational representations, researchers can build theoretical understanding of the phenomenon in a structured way that enhances the understanding of both researchers and readers. Additional complexity (e.g., processes and constructs) is particularly useful when researchers want to explore the interaction of multiple processes that are well-known alone but not in combination and, more broadly, when greater realism is desired. For example, Repenning’s (2002) simple theory related persistence of managerial commitment to the success of new innovations. After verifying this theory with a single organizational group, he experimented by adding an additional feature (i.e., second organizational group) that made the simulation more complex but also more realistic. In revealing that a second organizational group is unlikely to be successful in adopting an innovation, Repenning extended his simple theory to include multiple groups, thereby enhancing its theoretical relevance for real organizations.

Several criteria should shape experimentation design. The most important criterion is theoretical contribution. Understanding where theoretical contributions are likely to be usually stems from knowledge of the literature. Although serendipitous discoveries can occur, knowing the literature often enables the researcher to know where theoretical discrepancies are and so where to experiment to improve the likelihood of intriguing theoretical insights. Such insights are the raison d’être of the research. A related criterion for experimentation design is realism. As noted earlier, it is often effective to begin with a simple computational representation in order to build reader intuition and confidence in the simulation. Experimentation can then add realism that enhances the generalizability of the resulting theory.

Experimentation is closely associated with building theory using “disciplined imagination” (Weick, 1989). That is, researchers using simulation can readily engage in an evolutionary process of speculative experiments to create alternative versions of theory (imagination) and then select the best among them (discipline). Although researchers can engage in disciplined imagination using other methods, such as systematic thought experiments, these methods often fail when the theory involves intertwined and longitudinal processes, timing effects, and nonlinearities that are the province of simulation. In effect, the simple theory of a simulation becomes a platform for elaborating and extending theory through creative and systematic experimentation (Weick, 1989). The result may be fundamental (i.e., widely applicable and nonobvious) theory (Lave & March, 1975).

Overall, experimentation is at the heart of the value of using simulation methods for theory development. While verification confirms existing theory and bolsters internal validity, experimentation adds the creative and even surprising theoretical contributions that build new theory. Sometimes this new theory is an incremental advance. But sometimes this theory is a fundamental improvement in which frame-breaking insights generate qualitatively better theory.

Validate with Empirical Data

A final step in theory development using simulation methods is validation. Validation involves comparison of simulation results with empirical data. If the results of the simulation match the empirical evidence, then the simulation is validated for that empirical context. This strengthens the external validity of the theory, that is, its generalizability and predictability (Campbell & Stanley, 1966).

There is, however, some debate over the value of validation. For example, some theorists argue that the central task of theory development is creating interesting theory, and so they diminish the importance of validation (e.g., Van Maanen, 1995; Weick, 1989). Others disagree. Our view is contingent: the importance of validation should depend on the source of the simple theory that is the basis of the simulation. If this theory is based primarily on empirical evidence (e.g., field-based case studies and empirically grounded processes), then validation is less important, because the theory already has some external validity. In contrast, if the theory is based primarily on nonempirical argument (e.g., formal analytic modeling) or on evidence from distant scientific disciplines (e.g., physics), then validation is more important.

There are several approaches to effective validation. One is to compare the results of the simulation to statistical results derived from large-scale empirical data. This approach enables a broad-brush validation of the simulation results (Adner & Levinthal, 2001). A standard technique in economics, for instance, is to use theories developed with simulation to “predict” the results of some well-known data set. For example, Nelson and Winter (1982) created a simulation of their theory of evolutionary economic change and then used that simulation to reproduce historical productivity data. Another approach is to compare the simulation results and theoretical logic with case study data. This approach seeks to demonstrate that the simulation is consistent with the specific details of one or a few examples. This enables granular validation. For example, Sterman and colleagues (1997) used interview data from one organization to show that their theory of organizational change was plausible in at least this one organization. The choice between these two approaches usually depends on data availability.

DISCUSSION

While simulation is an increasingly significant methodological approach in the organizations and strategy literature (e.g., Lant & Mezias, 1990; Repenning, 2002; Rivkin & Siggelkow, 2003; Zott, 2003), its link to theory development remains unclear and even controversial. Our purpose here has been to clarify when and how to use simulation methods to develop theory.

Our first contribution was a roadmap for conducting effective theory development using simulation. Like all effective research, developing theory from simulation methods depends on beginning with an intriguing research question that relates to fundamental theoretical issues. Simulation is especially effective when that research question relates to competing tensions (e.g., long versus short run, structure versus chaos) and intertwined processes (e.g., inertia and change, competition and legitimation) that are especially well-addressed by simulation. The central activity is developing an accurate computational representation of simple (i.e., undeveloped) theory through appropriate selection of a simulation approach (e.g., genetic algorithm versus stochastic processes), construct operationalization, and algorithmic representation. The benefits of simulation emerge in verifying the computational representation, which strengthens the internal validity of the simple theory, and, more significant, in creatively experimenting to produce novel theoretical insights. Indeed, creative experimentation is at the heart of the value of simulation methods for theory development.

Our second contribution was positioning simulation methods within the broad context of theoretical development in the organizations and strategy literature. Simulation is particularly useful for developing theory in the “sweet spot” between theory creating using such methods as multiple case inductive research (Eisenhardt, 1989) and formal modeling (Freese, 1980) and theory testing using empirical evidence and multivariate statistical techniques (Lattin, 2003; Pfeffer, 1993). On the one hand, theory-creating methods can reveal simple theory but are limited in their ability to elaborate that theory. The inductive case method is constrained by limited data, and formal modeling is constrained by mathematical tractability. Simulation can mitigate these weaknesses by exploring, elaborating, and extending simple theory that is produced by these theory-creating methods. On the other hand, as Sutton and Staw (1995) argue, many theory-testing studies lack conceptual precision and logical theoretical argument. Simulation can mitigate these weaknesses in internal validity by tightening the rigor of the underlying theoretical logic, sharpening constructs, and elaborating theoretical propositions prior to empirical test.

Strengths and Weaknesses

These observations suggest several important strengths for developing theory with simulation. One is internal validity (Cook & Campbell, 1979). The computational rigor of simulation forces precise specification of constructs, assumptions, and theoretical logic, which creates strong internal validity. This emphasis on internal validity is particularly valuable because it addresses a common weakness of empirical research: often poor theoretical logic regarding “why” propositions might be true (Sutton & Staw, 1995; Whetten, 1989). This emphasis also mitigates another common weakness of empirical research: weak specification of boundary conditions. By requiring precise specification of assumptions, simulation typically bounds the scope of the theory and so clarifies boundary conditions. Thus, while other methods emphasize constructs and propositions, simulation puts their underlying theoretical logic and assumptions center stage.

Another strength is experimentation. Simulation creates a computational laboratory in which researchers can systematically experiment (e.g., unpack constructs, relax assumptions, vary construct values, add new features) in a controlled setting to produce new theoretical insights. This experimentation is particularly valuable when the theory seeks to explain longitudinal and processual phenomena that are challenging to study using empirical methods because of their time and data demands. Simulation is also well-suited to theory development related to nonlinear phenomena, such as tipping points, feedback loops, thresholds and catastrophes, and asymmetries. These are difficult to uncover using inductive case methods and to examine using standard statistical techniques. Yet, significantly, these phenomena are becoming central as theory development moves from cross-sectional and equilibrium perspectives to longitudinal and dynamic ones.

Yet, like all methods, theory development using simulation has weaknesses. A primary one is external validity. Simulation eliminates complexity in order to focus on the core aspects of phenomena and so uses computational representations that are often stark, such as representing an organization by a 0/1 bit string (Bruderer & Singh, 1996), or making clearly false assumptions, such as no disadvantages to being a second mover (Rivkin, 2000). The result can be an overly simplistic and distant model that fails to capture critical aspects of reality.

Toward Better Theory Development Using Simulation Research

Like all research, theory development using simulation methods should be evaluated according to two fundamental criteria: theoretical contribution and strength of method. Theoretical contribution is partially determined by the quality of the research question and its related grounding in the literature. That is, the research question should center on a significant issue within the related literature. In contrast, research with no grounding in the relevant literature often has a research question that does not address a useful theoretical gap, or it has theoretical constructs and theory that do not fit with their conceptualization and terminology within the literature. This lack of grounding can further lead to weak integration of the simulation results with the literature. While awareness of and at least some grounding in the extant literature seem obvious, our experience indicates that simulation researchers are more likely than others to misunderstand their research context (i.e., choose poor research questions, fail to understand and use well-developed concepts), or to simply pass over the contributions of others working with different methods. Yet without situating the research question and theory in the current literature and then relating results to that literature, it is difficult to have a theoretical contribution. Thus, the reader should ask whether the research is situated within the relevant research literature and addresses a substantive research question.

Theoretical contribution is also determined by the quality of the experimentation. As noted earlier, experimentation is a key strength of simulation and the primary source of theoretical insight. That is, systematic experimentation focused on unexplored theoretical constructs (e.g., varying construct values, unpacking constructs into more precise constructs), major assumptions (e.g., relaxing them, varying them), and important elaborations (e.g., adding new features) is integral to theoretical contribution. Therefore, when evaluating theory development using simulation, the reader should focus on the experimentation to assess whether and to what extent theoretical contribution has been made.

Simulation authors should likewise distinguish between the verification of the simple theory that confirms the computational representation and the internal validity, and the theoretical insights that emerge from systematic experimentation (or occasionally from failed verification) that creatively elaborate and extend the theory. Although this may seem obvious, simulation researchers have a tendency to communicate their findings as if simply simulating a phenomenon per se were a theoretical contribution. That is, they confuse building a simulation with theoretical contribution. Some also stop at verification (e.g., various robustness checks), rather than continuing to genuine experimentation. But research that only verifies known theory or jumbles verification and experimentation typically fails to create theoretical contribution. Indeed, like all research, the theoretical contribution of simulation research is independent of method and relates to the addition of new theoretical insight. Thus, the reader should ask whether the results constitute a theoretical contribution.

The second criterion is the strength of method. In the context of the roadmap outlined earlier, a high-quality method includes justification for using simulation for the research question at hand and using a simulation approach (e.g., cellular automata, stochastic processes) that fits the research. For example, as is common practice in empirical research, simulation researchers should clearly justify the simulation approach (as one would justify a statistical approach), define constructs, and indicate the computational measures (as one would describe empirical measures). The methods section of other types of research provides a useful template for conveying this information.

In addition, high-quality simulation methods should have an accurate computational representation of the theoretical logic that is conveyed to the reader in a way that carefully builds understanding of assumptions and the logical flow of the simulation. Given the opacity of computer code, this puts a premium on the effective presentation of the theoretical logic. In addition to logical argument (roughly analogous to the theoretical development of hypotheses in empirical research), pictures and flowcharts are helpful in this regard (see Rudolph & Repenning, 2002, and Zott, 2003, for exemplars). High-quality simulation methods also include explicit verification of the computational representation that confirms its accuracy and internal validity and builds reader confidence in the simulation. It is helpful to explicitly state the propositions of the simple theoretical ideas that form the basis of the simulation and then to verify them clearly by running the simulation. In contrast, when the computational representation is poor, poorly described, or unverified, the simulation results are unreliable and/or not believed.

Finally, a high-quality method involves the appropriate statistical design of the verification and experimentation simulation runs. This design should include an appropriate number of time steps in each simulation run and number of simulation runs per experiment (related to the statistical power). As in empirical research, high-quality simulation methods also include statistical confidence intervals (Law & Kelton, 1991). In contrast, when research does not justify the basic statistical properties of the simulation (e.g., number of time steps, number of runs per experiment) or provide basic statistical analysis (i.e., confidence intervals), confidence in the statistical results is compromised.

Overall, readers should expect that high-quality theory development using simulation methods meets the usual standards of strong theory, such as parsimony, internal validity, accuracy, and interest (Davis, 1971; Eisenhardt, 1989; Pfeffer, 1982; Priem & Butler, 2001). Or, more simply, the result should explain, predict, and delight (Weick, 1989). Given the starting point of simple theory, the result of effective simulation can be fundamental theory that is parsimonious, internally consistent, and applicable to a wide range of situations (Pfeffer, 1982).

CONCLUSION

We began by observing that while simulation is a significant method for theory development, its usefulness is unclear and even controversial. Our purpose was to clarify how and when to use simulation methods for theory development. We provide two contributions. One is a roadmap for developing theory through simulation (Table 1), including the pivotal role of experimentation in creating theoretical contribution. The second is positioning simulation methods in the “sweet spot” between theory creating using inductive case methods and formal modeling, and theory testing using multivariate statistical techniques. We conclude by observing that simulation is likely to become an increasingly prominent method of theory development. As organizations and strategy scholars move to an emphasis on theory explaining dynamic and longitudinal phenomena, simulation methods will be a natural choice.

REFERENCES

Abelson, H., Sussman, G. J., & Sussman, J. 1996. Structure and interpretation of computer programs. Cambridge, MA: MIT Press.

Adner, R. 2002. When are technologies disruptive? A demand-based view of the emergence of competition. Strategic Management Journal, 23: 667–688.

Adner, R., & Levinthal, D. 2001. Demand heterogeneity and technology evolution: Implications for product and process innovation. Management Science, 47: 611–628.

Aldrich, H. 1999. Organizations evolving. Thousand Oaks, CA: Sage.

Andreoni, J., & Miller, J. 1995. Auctions with artificial adaptive agents. Games and Economic Behavior, 10: 39–64.

Arifovic, J., Bullard, J., & Duffy, J. 1997. The transition from stagnation to growth: An adaptive learning approach. Journal of Economic Growth, 2: 185–209.

Bruderer, E., & Singh, J. S. 1996. Organizational evolution, learning, and selection: A genetic-algorithm-based model. Academy of Management Journal, 39: 1322–1349.

Bullard, J., & Duffy, J. 2001. Learning and excess volatility. Macroeconomic Dynamics, 5(2): 272–302.

Campbell, D. T., & Fiske, D. W. 1959. Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56: 81–105.

Campbell, D. T., & Stanley, J. C. 1966. Experimental and quasi-experimental designs for research. Chicago: Rand McNally.

Carley, K. M. 2001. Computational approaches to sociological theorizing. In J. Turner (Ed.), Handbook of sociological theory: 69–84. New York: Kluwer Academic/Plenum.

Carroll, G., & Harrison, J. R. 1998. Organizational demography and culture: Insights from a formal model and simulation. Administrative Science Quarterly, 43: 637–667.

Chattoe, E. 1998. Just how (un)realistic are evolutionary algorithms as representations of social processes? Journal of Artificial Social Science Simulation, 1(3): 2.1–2.36.

Cohen, M. D., March, J., & Olsen, J. P. 1972. A garbage can model of organizational choice. Administrative Science Quarterly, 17: 1–25.

Cook, T. D., & Campbell, D. T. 1979. Quasi-experimentation: Design and analysis issues for field settings. Boston: Houghton Mifflin.

Davis, J., Eisenhardt, K., & Bingham, C. 2007. Complexity theory, market dynamism, and the strategy of simple rules. Working paper, Stanford Technology Ventures Program, Stanford University, Stanford, CA.

Davis, M. S. 1971. That’s interesting! Towards a phenomenology of sociology and a sociology of phenomenology. Philosophy of Social Science, 1: 309–344.

Dooley, K. 2002. Simulation research methods. In J. A. C. Baum (Ed.), Companion to organizations: 829–848. Oxford: Blackwell.

Dubin, R. 1976. Theory building in applied areas. In M. Dunnette (Ed.), Handbook of industrial and organizational psychology: 17–40. Chicago: Rand McNally.

Eisenhardt, K. M. 1989. Building theories from case study research. Academy of Management Review, 14: 532–550.

Fichman, M. 1999. Variance explained: Why size doesn’t (always) matter. Research in Organizational Behavior, 21: 295–331.

Fine, G. A., & Elsbach, K. D. 2000. Ethnography and experiment in social psychological theory building. Journal of Experimental Social Psychology, 36: 51–76.

Forrester, J. 1961. Industrial dynamics. Cambridge, MA: MIT Press.

Freese, L. 1980. Formal theorizing. Annual Review of Sociology, 6: 187–212.

Gallager, R. 1996. Discrete stochastic processes. Boston: Kluwer Academic.

Gavetti, G., & Levinthal, D. 2000. Looking forward and looking backward: Cognitive and experiential search. Administrative Science Quarterly, 45: 113–137.

Goldberg, D. E. 1989. Genetic algorithms: In search of optimization and machine learning. Reading, MA: Addison-Wesley.

Hannan, M. T., & Freeman, J. 1989. Organizational ecology. Cambridge, MA: Harvard University Press.

Harrison, J. R., & Carroll, G. R. 1991. Keeping the faith: A model of cultural transmission in formal organizations. Administrative Science Quarterly, 36: 552–582.

Holland, J. H. 1975. Adaptation in natural and artificial systems. Ann Arbor: University of Michigan Press.

Kauffman, S. 1989. Adaptation on rugged fitness landscapes. In E. Stein (Ed.), Lectures in the science of complexity, vol. 1. Reading, MA: Addison-Wesley.

Kauffman, S. 1993. The origins of order. New York: Oxford University Press.

Kreps, D. M. 1990. Corporate culture and economic theory. In J. E. Alt & K. A. Shepsle (Eds.), Perspectives on positive political economy: 90–143. Cambridge & New York: Cambridge University Press.

Langton, C. G. 1984. Self-reproduction in cellular automata. Physica, 10D: 134–144.

Lant, T., & Mezias, S. 1990. Managing discontinuous change: A simulation study of organizational learning and entrepreneurship. Strategic Management Journal, 11: 147–179.

Lant, T., & Mezias, S. 1992. An organizational learning model of convergence and reorientation. Organization Science, 3: 47–71.

Lattin, J. 2003. Analyzing multivariate data. Toronto: Brooks/Cole, Thomson Learning.

Lave, C., & March, J. G. 1975. An introduction to models in the social sciences. New York: Harper & Row.

Law, A. M., & Kelton, D. W. 1991. Simulation modeling and analysis (2nd ed.). New York: McGraw-Hill.

Lee, T., Mitchell, T., & Sablynski, C. 1999. Qualitative research in organizational and vocational psychology. Journal of Vocational Behavior, 55: 161–187.

Levinthal, D. 1997. Adaptation on rugged landscapes. Management Science, 43: 934–950.

Lomi, A., & Larsen, E. 1996. Interacting locally and evolving globally: A computational approach to the dynamics of organizational populations. Academy of Management Journal, 39: 1287–1321.

March, J. G. 1991. Exploration and exploitation in organizational learning. Organization Science, 2: 71–87.

Nelson, R. R., & Winter, S. G. 1982. An evolutionary theory of economic change. Cambridge, MA: Belknap Press of Harvard University Press.

Pfeffer, J. 1982. Organizations and organization theory. Boston: Pitman.

Pfeffer, J. 1993. Barriers to the advance of organizational science: Paradigm development as a dependent variable. Academy of Management Review, 18: 599–620.

Priem, R. L., & Butler, J. E. 2001. Is the resource-based “view” a useful perspective for strategic management research? Academy of Management Review, 26: 22–41.

Repenning, N. 2002. A simulation-based approach to understanding the dynamics of innovation implementation. Organization Science, 13: 109–127.

Repenning, N. 2003. Selling system dynamics to (other) social scientists. System Dynamics Review, 19: 303–327.

Rivkin, J. W. 2000. Imitation of complex strategies. Management Science, 46: 824–844.



Rivkin, J. W. 2001. Reproducing knowledge: Replication with-out imitation at moderate complexity. Organization Sci-ence, 12: 274–293.

Rivkin, J. W. & Siggelkow, N. 2003. Balancing search andstability: Interdependencies among elements of organi-zational design. Management Science, 49: 290–311.

Rosenthal, R., & Rosenow, R. L. 1991. Essentials of behavioralresearch: Methods and data analysis (2nd ed.). New York:McGraw-Hill.

Rudolph, J., & Repenning, N. 2002. Disaster dynamics: Under-standing the role of quantity in organizational collapse.Administrative Science Quarterly, 47: 1–30.

Sastry, M. A. 1997. Problems and paradoxes in a model ofpunctuated organizational change. Administrative Sci-ence Quarterly, 42: 237–275.

Schelling, T. 1971. Dynamic models of segregation. Journal ofMathematical Sociology, 1: 143–186.

Sterman, J. 2000. Business dynamics: Systems thinking andmodeling for a complex world. New York: IrwinMcGraw-Hill.

Sterman, J., Repenning, N., & Kofman, F. 1997. Unanticipatedside effects of successful quality programs: Exploring aparadox of organizational improvement. ManagementScience, 43: 503–521.

Stinchcombe, A. 1968. Constructing social theories. Chicago:University of Chicago Press.

Sutton, R. I., & Staw, B. M. 1995. What theory is not. Admin-istrative Science Quarterly, 40: 371–384.

Tushman, M., & Romanelli, E. 1985. Organizational evolution: A metamorphosis model of convergence and reorientation. Research in Organizational Behavior, 7: 171–222.

Van Maanen, J. 1995. Style as theory. Organization Science, 6: 133–143.

Weick, K. E. 1989. Theory construction as disciplined imagination. Academy of Management Review, 14: 516–531.

Weick, K. E. 1993. The vulnerable system: An analysis of the Tenerife air disaster. In K. H. Roberts (Ed.), New challenges to understanding organizations: 173–198. New York: Macmillan.

Whetten, D. 1989. What constitutes a theoretical contribution? Academy of Management Review, 14: 490–495.

Wolfram, S. 2002. A new kind of science. Champaign, IL: Wolfram Media.

Wright, S. 1931. Evolution in Mendelian populations. Genetics, 16: 97–159.

Yerkes, R. M., & Dodson, J. D. 1908. The relation of strength of stimulus to rapidity of habit formation. Journal of Comparative Neurological Psychology, 18: 459–482.

Zott, C. 2002. When adaptation fails: An agent-based explanation of inefficient bargaining under private information. Journal of Conflict Resolution, 46: 727–753.

Zott, C. 2003. Dynamic capabilities and the emergence of intra-industry differential firm performance: Insights from a simulation study. Strategic Management Journal, 24: 97–125.

Jason P. Davis ([email protected]) is a doctoral candidate in management science and engineering, Stanford University, where he received his M.A. in sociology. His current research interests include competitive strategy, organizational structure, and collaborative technology innovation in highly dynamic environments. He uses a combination of inductive, multicase, and simulation methods.

Kathleen M. Eisenhardt ([email protected]) is the Stanford W. Ascherman M.D. Professor at Stanford University and codirector of the Stanford Technology Ventures Program. She received her Ph.D. from Stanford University. Her current research interests include competitive interaction and strategic approaches to power by entrepreneurial firms, multibusiness organization, and strategy as simple rules.

Christopher B. Bingham ([email protected]) is assistant professor of strategy and organization at the Robert H. Smith School of Business, University of Maryland. He received his Ph.D. from Stanford University. His current research interests revolve around organizational learning and change, dynamic capabilities, and strategy in unpredictable environments.
