Goertz, Gary, and James Mahoney. "Methodological Rorschach Tests: Contrasting Interpretations in Qualitative and Quantitative Research." Comparative Political Studies 46(2), 2013: 236-251. DOI: 10.1177/0010414012466376. © The Author(s) 2013. Originally published online November 30, 2012; Version of Record January 18, 2013. The online version of this article can be found at http://cps.sagepub.com/content/46/2/236. Published by SAGE Publications (http://www.sagepublications.com).





1University of Notre Dame, Notre Dame, IN, USA
2Northwestern University, Evanston, IL, USA

Corresponding Author: James Mahoney, Departments of Political Science and Sociology, Northwestern University, Scott Hall, Room 402, Evanston, IL 60201, USA. Email: [email protected]

    Methodological Rorschach Tests: Contrasting Interpretations in Qualitative and Quantitative Research

    Gary Goertz1 and James Mahoney2

In A Tale of Two Cultures, we propose that the qualitative and quantitative research traditions are associated with distinctive procedures and practices as well as contrasting values, beliefs, and norms—that is, they are distinct cultures. Like most cultures, they are loosely integrated, internally contested, and ever changing. Nevertheless, they feature many important and readily identifiable differences. In the 14 chapters of the book, we explore these differences through a series of short (i.e., about 10-12 pages) essays that address specific methodological topics.

In this brief essay, we can provide only some appetizers to give a taste of the longer main course, which is the book itself. We focus here on two differences that cut across many of the chapters and that suggest why a unified approach to social science research is difficult to achieve. First, we argue that logic and set theory lie, implicitly or explicitly, at the core of qualitative methods: logic is the mathematics of qualitative methodology. By contrast, probability and statistics are at the core of quantitative methods.

    The second difference concerns the relative importance of individual cases. In the qualitative tradition, individual case analysis is at the heart of the research enterprise, and within-case causal inference is a central goal. By contrast, in the quantitative tradition, cross-case analysis is the basis for causal inference, and individual cases are usually not a focus of attention.



To illustrate these differences, we use what we call methodological Rorschach tests. We show how the "same" hypothesis, symbol, equation, question, test, method, finding, and data set can be interpreted in quite different ways by the two cultures. Each interpretation makes good sense within the norms and practices of a particular culture. But to the other culture, the interpretation may seem odd, inappropriate, or perhaps even incomprehensible. Since most political scientists and sociologists have had statistics classes and are at least somewhat familiar with statistical research practices as they appear in major journals, our focus is on the less known and too often implicit practices of the qualitative culture.

Interpreting Hypotheses, Symbols, and Equations

Our first set of methodological Rorschach tests concerns the ways in which scholars from the qualitative and quantitative traditions interpret the same hypotheses, symbols, and equations. These examples call attention to differences between logic/set theory and probability/statistics.

    ***

Mathematical logic has strong roots in natural language, and verbal hypotheses—whether stated by qualitative or quantitative researchers—often imply relationships of logical necessity and/or sufficiency. It is very easy to collect examples of such hypotheses (for 150 examples of necessary condition hypotheses, see Goertz, 2003). Here are three instances from some well-known works in different subfields of political science:

Theorem 4.1: Communication leads to enlightenment if and only if:

1. the speaker is persuasive,
2. only the speaker initially possesses the knowledge that the principal needs, and
3. neither common interests nor external forces induce the speaker to reveal what he knows. (Lupia & McCubbins, 1998, p. 69)

Balance-of-power politics prevail whenever two, and only two, requirements are met: that the order be anarchic and that it be populated by units wishing to survive. (Waltz, 1979, p. 121)


A vigorous and independent class of town dwellers has been an indispensable element in the growth of parliamentary democracy. No bourgeois, no democracy. (Moore, 1966, p. 418)

Within the qualitative culture, it is natural to interpret these verbal hypotheses as propositions about necessary and/or sufficient conditions. Thus, the Lupia and McCubbins theorem is read as proposing three conditions that are necessary and jointly sufficient for communication to lead to enlightenment, the Waltz quote is interpreted as identifying two conditions (anarchy and self-help) that are jointly sufficient for balance-of-power politics, and the Moore quote is understood to say that the presence of a bourgeois class is a necessary condition for parliamentary democracy.

In the quantitative culture, however, it is not natural to treat verbal statements as literally proposing hypotheses about necessary and/or sufficient conditions. Instead, quantitative scholars draw on the resources of probability and statistics. With these resources, hypotheses normally assume forms such as:

    The probability of Y occurring increases (or decreases) with the level or occurrence of X.

    The level of Y increases (or decreases) on average with the level or occurrence of X.

Given this orientation, it is natural to interpret the Lupia and McCubbins theorem as proposing three conditions that increase the likelihood of communication leading to enlightenment, Waltz as suggesting that the presence of anarchy and self-help are strongly correlated with balance-of-power politics, and Moore as saying that the growth of a bourgeoisie increases the chances of democracy.

Our point is not that one kind of interpretation is right and the other is wrong. Although we selected hypotheses that seem to use the natural language of logic, one can easily find passages in the same works that clearly use the standard language of the quantitative culture. In practice, individual scholars sometimes go back and forth between the two languages when stating their hypotheses.

A real-life illustration of this Rorschach test takes place when scholars from the two traditions summarize the core hypotheses of the same work. For example, when analyzing Skocpol's States and Social Revolutions, it is natural for qualitative scholars to interpret and test her argument in terms of necessary and/or sufficient conditions (e.g., Dion, 1998; Mahoney, 1999). By contrast, quantitative researchers analyze her central hypotheses in light of the statistical tools of their culture, such as regression analysis and conditional probabilities (e.g., Geddes, 1991; Sekhon, 2004).

    ***

Both qualitative and quantitative scholars use symbols as a means of summarizing their causal arguments. Yet even the most elementary causal diagrams require interpretation. We have all seen the causal arrow, that is, →, in figures. In the quantitative tradition, these arrows are central, for example, in Pearl's (2000) work on causality (also see Gerring, 2012; Morgan & Winship, 2007). Writings on qualitative methods also depict scholarly arguments using these arrows (e.g., George & Bennett, 2005; Mahoney, 1999). But how is one to interpret this arrow symbol?

In the quantitative culture, causal graphs with these arrows usually have a fairly accepted counterfactual interpretation. For example, the causal graph X → Y implies that if X varies, Y should vary too (probabilistically). In the dichotomous case, one reads the graph as proposing that, ceteris paribus, the probability of Y given X is greater than the probability of Y given not-X. A central goal of statistical analysis is to summarize the magnitude of difference that a change on X has for Y by calculating the average treatment effect (ATE).
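The dichotomous reading can be sketched in a few lines of code. The data below are invented for illustration, and the difference in conditional frequencies is only a naive stand-in for the ATE (it equals the ATE only under assumptions such as random assignment of X):

```python
# Minimal sketch of the dichotomous reading of X -> Y (hypothetical data).
# Under the counterfactual interpretation, the quantity of interest is
# P(Y | X) - P(Y | not-X), estimated here as a naive difference in frequencies.

cases = [  # (x, y) pairs for a made-up set of observations
    (1, 1), (1, 1), (1, 0), (1, 1),
    (0, 0), (0, 1), (0, 0), (0, 0),
]

p_y_given_x = sum(y for x, y in cases if x == 1) / sum(1 for x, _ in cases if x == 1)
p_y_given_not_x = sum(y for x, y in cases if x == 0) / sum(1 for x, _ in cases if x == 0)

ate_estimate = p_y_given_x - p_y_given_not_x  # 0.75 - 0.25 = 0.5
print(ate_estimate)
```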

Among qualitative researchers, by contrast, → is not understood from this statistical perspective. Instead, the qualitative culture has three basic kinds of causal relationships: (a) X is necessary for Y, (b) X is sufficient for Y, and (c) X is part of a package of causes that are jointly sufficient for Y. Although the third interpretation has some parallels with a quantitative view, the first two are foreign to mainstream statistical interpretations. When a qualitative person sees the causal graph X → M → Y, he or she may read this graph in the following way: "X is sufficient for M (mechanism), which in turn is sufficient for Y." This is certainly not the way that Imai, Keele, Tingley, and Yamamoto (2011) interpret this kind of figure in their recent article in the American Political Science Review.

    ***

    These same issues of interpretation arise with formal equations. Although very few qualitative scholars write out equations, their arguments often can be summarized using Boolean notation. For example, consider the following Boolean equation:


    Y = (A * B * c) + (A * C * D * E) (1)

This model assumes "multiple, conjunctural causation" (Ragin, 1987) in the sense that (a) causal factors work together as packages or combinations (i.e., conjunctural causation) and (b) more than one combination leads to the same outcome (i.e., multiple causation).1 The fact that qualitative arguments often have this format is true regardless of whether the scholar actually uses qualitative comparative analysis (QCA).

We can ask what might happen if this equation were presented to a statistical researcher as a Rorschach test. For example, what are some of the features of the model that might jump out to someone trained in statistical analysis, but not qualitative methodology?

1. High-order interactions. A standard reaction might be to assume that the * symbol stands for multiplication, and thus that the equation is modeling high-order interaction terms.

2. Deterministic causation. The model posits a deterministic relationship, given that no error term is included.

3. Problematic model. Various other problems with the way in which the model is formulated might be noted, such as the absence of an intercept term, the curious inclusion of capitalized and noncapitalized letters representing the presence or absence of an independent variable, the failure to include beta coefficients, and the failure to include all lower order constituents of the interaction terms.

    Although these reactions are understandable given the norms of statistical analysis, they do not make good sense in the field of qualitative methodology. For example, the * symbol does not stand for multiplication, but rather the logical AND. Probabilistic assumptions are permitted with this equation, given that the researcher might find that only a certain percentage of cases are consistent with the equation. And the notation oddities from the standpoint of statistical analysis are not problematic if one interprets the equation using the resources of Boolean logic.
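To make the Boolean reading concrete, Equation (1) can be evaluated directly as a logical expression, with * read as AND, + as OR, and a lowercase letter as the absence of a condition. The sketch below uses hypothetical truth values for the conditions A through E; it illustrates the notation, not an analysis from the book:

```python
# A sketch of how Equation (1) reads under Boolean logic: * is AND, + is OR,
# and a lowercase letter denotes the ABSENCE of that condition.

def equation_1(A, B, C, D, E):
    # Y = (A AND B AND not-C) OR (A AND C AND D AND E)
    return (A and B and not C) or (A and C and D and E)

# Hypothetical cases: each is one combination of present/absent conditions.
cases = {
    "case 1": dict(A=True, B=True, C=False, D=False, E=False),  # first path
    "case 2": dict(A=True, B=False, C=True, D=True, E=True),    # second path
    "case 3": dict(A=True, B=True, C=True, D=False, E=False),   # neither path
}

for name, conds in cases.items():
    print(name, equation_1(**conds))  # True, True, False
```

Note that nothing here is multiplication: the outcome is a set-membership judgment, and "probabilistic" use of the equation would amount to checking what percentage of observed cases are consistent with it.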

Individual Cases and Causal Inference

How scholars think about causation is intimately related to whether they are centrally concerned with individual case analysis. For instance, the approach to causal inference that one carries out in a cross-case, large-N study varies quite a lot from the approach to causal inference that is used in case studies and small-N designs. Many of the contrasts that we make in the book arise in one way or another from differences in the relative importance of individual cases.

    ***

    If one is interested in an individual case, it is natural to ask for an explanation for the occurrence of Y in that specific case. A large qualitative literature focuses on the causes of major events in world history, such as the French Revolution, World War I, and the end of the Cold War. One begins with an outcome, that is, Y, and then works backward to the causes, that is, Xs. This is known in the methodological literature as a causes-of-effects approach.

    In contrast, within the quantitative culture, scholars tend to ask effects-of-causes questions. These questions begin with a specific cause, X, and ask about its effect (if any) on Y among a population of cases. Thus, one asks if some variable has a causal effect on civil wars, democracy, and growth. The methodological challenge in this kind of research is estimating the average causal effect in a context where a randomized, controlled experiment is not feasible.

    At the base of these different approaches are contrasting ways to interpret the very idea of “cause” and related notions such as causality and causation. Interestingly, even the American Heritage Dictionary definition of “cause” suggests these two approaches: “Cause: 1.a. The producer of an effect, result, or consequence. b. The one, such as a person, an event, or a condition, that is responsible for an action or a result.” The first definition is more congruent with the statistical effects-of-causes approach; the second is more consistent with the qualitative causes-of-effects approach.

Because scholars from a given culture tend to see and approach causal inference in a particular way, they often have trouble appreciating the alternative. Although the statistician Holland (1986) recognizes the distinction between the causes-of-effects and effects-of-causes approaches, he suggests that there is an "unbridgeable gulf" between the two. He dismisses causes-of-effects questions as unanswerable and focuses exclusively on estimating the effects of causes. Leading experimental and quantitative methodologists in the social sciences also acknowledge the two approaches but end up focusing almost all of their attention on the effects-of-causes approach (Gerring, 2012; Morgan & Winship, 2007; Morton & Williams, 2010).

In the qualitative tradition, conversely, scholars use methods (e.g., process tracing, counterfactual analysis) designed for causes-of-effects research, and they may be skeptical about effects-of-causes research. When these researchers see statistical findings with observational data, they may be unconvinced that the results reflect causal patterns unless the effects can be located within individual cases. These scholars are reluctant to infer causation without pursuing case study research. Even when presented with findings from a carefully designed experiment, they often almost instinctively want to try to locate the treatment effect within particular observations.

    ***

    In the quantitative culture, scholars ideally rely on a strong research design, that is, randomized assignment to treatment and control groups, to make valid causal inferences. In the absence of true randomization, they employ various methodological tools in the effort to approximate a strong design. For our purposes, the key point is that cross-case analysis is the focus of the research and the basis for the causal inference. In fact, with a strong design, one need not give special attention to any specific case.

In the qualitative culture, by contrast, researchers regard the identification of mechanisms operating within individual cases as crucial to causal inference. They see mechanisms as a nonexperimental way of distinguishing causal relations from spurious correlations:

Mechanisms help in causal inference in two ways. The knowledge that there is a mechanism through which X influences Y supports the inference that X is a cause of Y. In addition, the absence of a plausible mechanism linking X to Y gives us a good reason to be suspicious of the relation being a causal one. . . . Although it may be too strong to say that the specification of mechanisms is always necessary for causal inference, a fully satisfactory social scientific explanation requires that the causal mechanisms be specified. (Hedström & Ylikoski, 2010, p. 54; also see George & Bennett, 2005)

    One might even say that a norm has developed in the qualitative culture that making a strong causal inference requires process tracing within individual cases to see if proposed causal mechanisms are present.

Because process tracing is so central to causal inference in qualitative work, researchers in this tradition may be skeptical of studies that do not identify or empirically examine causal mechanisms. For example, qualitative researchers may reject the idea that large-N findings not validated by supplementary process tracing are causal. It is easy to find examples where the intensive examination of individual cases leads to doubts about hypothesized causal mechanisms from statistical or formal analyses. Several scholars have used this approach as a way of critiquing major quantitative and formal studies (e.g., Haggard & Kaufman, 2012; Kreuzer, 2010; Narang & Nelson, 2009; Sambanis, 2004; Slater & Smith, 2010; Snyder & Borghard, 2011).

Conversely, quantitative scholars are often skeptical about the value of process tracing and within-case causal inference more generally. For instance, King, Keohane, and Verba (1994) suggest that process tracing is "unlikely to yield strong causal inference" and can only "promote descriptive generalizations and prepare the way for causal inference" (pp. 227-228). Other scholars stress that causal mechanisms are not "miracle makers" and cannot resolve fundamental difficulties in causal analysis (e.g., Gerring, 2010; Norkus, 2004).

    ***

When scholars from the two cultures discuss their methods of inference, the result can be misunderstandings and disagreements. This outcome recently occurred in a series of exchanges between Collier, Brady, and Seawright (2010b; Brady, Collier, & Seawright, 2006) and Beck (2006, 2010). One centerpiece of their debate concerns Brady's (2010) analysis of the effect of the early media call that proclaimed an Al Gore victory in the 2000 presidential election in Florida. Brady questions the statistically derived conclusions of Lott (2000), who finds that George W. Bush lost at least 10,000 votes because of the early media call. In contrast, Brady (2010) finds that "the actual vote loss was probably closer to somewhere between 28 and 56 votes. Lott's figure of 10,000 makes no sense at all" (p. 240).

    From a qualitative perspective, Brady’s study derives its inference from causal process observations and process tracing, in particular the use of a series of “hoop tests” (Van Evera, 1997; also see Bennett, 2008; Collier, 2011; Mahoney, 2012). A hoop test proposes that a given piece of evidence must be present within an individual case for a hypothesis about that case to be valid. Although passing a hoop test does not confirm a hypothesis, failing a hoop test eliminates the hypothesis. In this sense, the presence of the evidence posited by the hoop test is a necessary condition for the hypothesis to be valid.

Brady identifies a series of conditions that are necessary for the early media call to have cost Bush the vote of the Florida resident i. These necessary conditions include the following four: (a) the resident lived in the eastern panhandle counties of Florida, (b) the resident had not already voted when the media call was made, (c) the resident heard the media call, and (d) the resident favored Bush. An individual must meet all four of these conditions to be a potentially lost vote for Bush. In turn, to estimate the size of the population that can meet these four conditions, Brady draws on within-case observations and his expert knowledge of studies of the media and voting behavior.
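The logic of these hoop tests can be sketched as a simple conjunction: failing any one condition eliminates a resident from the population of potentially lost votes. The voter records below are hypothetical illustrations, not Brady's data:

```python
# Brady's four necessary conditions act as a logical conjunction: a resident
# is a potentially lost Bush vote only if ALL of them hold.
# The voter records below are hypothetical, for illustration only.

def potentially_lost_vote(voter):
    return (voter["panhandle"]                  # (a) lived in eastern panhandle counties
            and not voter["voted_before_call"]  # (b) had not already voted
            and voter["heard_call"]             # (c) heard the early media call
            and voter["favored_bush"])          # (d) favored Bush

voters = [
    {"panhandle": True,  "voted_before_call": False, "heard_call": True,  "favored_bush": True},
    {"panhandle": True,  "voted_before_call": True,  "heard_call": True,  "favored_bush": True},
    {"panhandle": False, "voted_before_call": False, "heard_call": True,  "favored_bush": True},
]

print(sum(potentially_lost_vote(v) for v in voters))  # only the first qualifies: 1
```

Estimating how many real residents satisfy each condition is where Brady's within-case observations and expert knowledge come in; the conjunction itself is pure logic.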

Reacting to Brady from the perspective of quantitative research, Beck (2010) reaches a perhaps predictable conclusion: statistical analysis has all the tools needed to achieve the best possible inference. Qualitative methodology has no distinctive value added:

Finally, let us turn to Brady's reanalysis of Lott's finding on the impact of the early election call in Florida in 2000. In my earlier comment, I noted that there was nothing in Brady's discussion that was not in the standard quantitative analysts' toolkit . . . One could also have (easily) improved on Lott's difference-in-difference design. (Beck, 2010, p. 503)

    Yet, from a qualitative perspective, the methods that Beck has in mind cannot possibly be faithful to the logic-based hoop tests that Brady used to reach his inference.

Again the debate and misunderstanding are animated by questions of interpretation. For scholars who work on qualitative methods, a natural interpretation of Brady's analysis is via mathematical logic and hoop tests. For scholars who know about statistical methods but not qualitative methods, by contrast, it is natural to think about shortcomings in Lott's analysis and, at the same time, imagine various ways in which the standard statistical toolkit could be used to improve on that estimate and perhaps mimic Brady's procedures.

Interpreting Data

A final set of methodological Rorschach tests involves the ways in which quantitative and qualitative researchers tend to "see" data. We suggest that quantitative scholars tend to notice symmetrical relationships as summarized by standard statistical measures. By contrast, qualitative researchers are drawn to asymmetrical patterns, especially relationships that approximate necessity or sufficiency.

    ***

Table 1 offers a simple example for a 2 × 2 data set concerning the democratic peace theory. In this data set, the cases are dyads (i.e., pairs) of countries. The outcome of interest is peace, which is treated dichotomously, with the opposite of peace being war (an idea that we contest in the book). The independent variable is "democratic dyad," which is present only if both countries are democracies.


    From a qualitative perspective, the standout feature of these data is the empty lower-right cell, which we call the sufficient condition cell of a 2 × 2 table. When this cell is empty and the others are populated, one can formulate the core relationship in terms of a sufficient condition. In this example, one can say that a democratic dyad is sufficient for peace.

Using statistics, one can calculate various measures of association for Table 1. Different 2 × 2 measures of association offer alternative interpretations of the data. Some measures would see the relationship as significant but only modestly strong. An odds ratio measure of association would indicate a very strong and significant relationship.

Consider now Table 2, in which we have simply flipped two of the cells from Table 1. From a qualitative perspective, the obviously key feature of this new table is still the empty cell, but it now corresponds to the necessary condition cell of a 2 × 2 table. Thus, with these hypothetical data, the relationship is summarized as follows: A democratic dyad is necessary for peace. From a qualitative perspective, finding a necessary condition is quite different from finding a sufficient condition.

    In contrast, almost all statistical 2 × 2 measures of association interpret Table 2 in the same way as Table 1. In terms of methodological Rorschach tests, the scholar using a particular statistical measure (e.g., odds ratio) would “see” the same thing, because the same statistical results are generated for both tables.
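This point can be checked directly. With the Haldane-Anscombe continuity correction (adding 0.5 to each cell, a standard device for tables with an empty cell), the odds ratio comes out numerically identical for the two tables, even though the empty cell is the sufficient condition cell in one and the necessary condition cell in the other. A minimal sketch:

```python
# The same-statistic point checked directly: swapping the two diagonal cells
# (as in Table 2) leaves the odds ratio ad/bc unchanged, even though the
# empty cell carries opposite set-theoretic meanings in the two tables.

def odds_ratio(a, b, c, d):
    a, b, c, d = (x + 0.5 for x in (a, b, c, d))  # continuity correction for the empty cell
    return (a * d) / (b * c)

# Cell order: (not-dem & peace, dem & peace, not-dem & war, dem & war)
table_1 = (1045, 169, 36, 0)   # empty sufficient condition cell (Russett's data)
table_2 = (0, 169, 36, 1045)   # empty necessary condition cell (cells flipped)

print(odds_ratio(*table_1) == odds_ratio(*table_2))  # True
```

The qualitative reading, by contrast, asks only which cell is empty, so the two tables tell opposite stories.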

    ***

Table 1. Example of a Sufficient Condition: The Democratic Peace

         Not democratic dyad    Democratic dyad
Peace          1,045                  169
War               36                    0

Source: Russett (1995, p. 174).

Table 2. Example of a Necessary Condition: Inverting the Democratic Peace Data

         Not democratic dyad    Democratic dyad
Peace              0                  169
War               36                1,045


    Figures 1a and 1b present continuously coded scatterplots in which the data are distributed asymmetrically. Given these scatterplots, what are the features that would leap out for quantitative and qualitative researchers?

Figure 1a. Necessary condition. The dashed line is the ordinary least squares (OLS) line.

Figure 1b. Sufficient condition. The dashed line is the ordinary least squares (OLS) line.


    From a quantitative perspective, we think that the following features stand out:

    1. Modest fits. One would draw a line through the data and find that there was a clear but modest relationship between X and Y.

    2. Identical slopes. The ordinary least squares (OLS) slope in Figure 1a is the same as in Figure 1b.

    3. Heteroscedasticity. The variance around the OLS line is clearly not constant.

    By contrast, for qualitative researchers trained in fuzzy-set analysis (see Ragin, 2000, 2008), the following features jump out:

1. Perfect fits. A perfect fit in fuzzy-set analysis is present when there are no cases above (or below) the 45-degree diagonal.

    2. Necessary and sufficient conditions. The relationships in the two figures are quite different: Figure 1a is a necessary condition, whereas Figure 1b is a sufficient condition.

3. Moderate importance. The necessary and sufficient conditions depicted in the figures are neither trivial nor of maximal importance; they are of moderate importance.

We cannot provide here the mathematical underpinnings for the qualitative interpretation of these figures (though we do in the book). Nevertheless, we hope that this brief example might serve to whet one's appetite for exploring further the ways in which set-theoretic methodology differs from statistical methodology. The examples illustrate a basic difference that cuts across many chapters of the book: set-theoretic methods are good at seeing asymmetric patterns in the data, whereas conventional statistical methods are good at noticing symmetric ones.
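As a small taste of those underpinnings, fuzzy-set consistency scores in the style of Ragin (2000, 2008) can be computed in a few lines. The membership scores below are hypothetical: consistency of sufficiency is the sum of min(x, y) divided by the sum of x, and consistency of necessity divides by the sum of y instead.

```python
# Sketch of Ragin-style fuzzy-set consistency scores (hypothetical membership
# data). X is a perfectly consistent sufficient condition when x <= y in every
# case, and a perfectly consistent necessary condition when y <= x in every case.

def consistency_sufficiency(data):
    return sum(min(x, y) for x, y in data) / sum(x for x, _ in data)

def consistency_necessity(data):
    return sum(min(x, y) for x, y in data) / sum(y for _, y in data)

sufficient = [(0.2, 0.6), (0.4, 0.9), (0.7, 0.8)]  # x <= y throughout
print(consistency_sufficiency(sufficient))  # 1.0: a perfect fit

necessary = [(0.6, 0.2), (0.9, 0.4), (0.8, 0.7)]   # y <= x throughout
print(consistency_necessity(necessary))     # 1.0: a perfect fit
```

Note the asymmetry: an OLS line fitted to either data set would report the same kind of symmetric association, whereas the two consistency scores distinguish which side of the diagonal the cases fall on.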

Conclusion

In emphasizing differences in the nature of qualitative and quantitative research, our intention is not to drive a wedge between these research communities. The differences that we describe in A Tale of Two Cultures are not contradictory: Qualitative and quantitative methods are appropriate for alternative research tasks and are designed to achieve contrasting research goals.

In recent years, multimethod (i.e., quantitative-qualitative) research has received increased attention and is becoming de rigueur in some fields. We believe that multimethod research that draws on the norms and practices associated with both traditions is a viable option. However, we do not think that such research can be carried out effectively unless scholars embrace the goals and employ the tools of both cultures. We propose that

    1. Good multimethod research requires researchers to ask about the causes of outcomes in specific cases as well as the average effect of a cause within a population or sample of cases.

    2. Good multimethod research requires researchers to use methods that depend on set theory/logic as well as statistics/probability.

Good multimethod research thus requires analysts to have a strong background in both quantitative and qualitative methodology.

    Declaration of Conflicting Interests

    The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

    Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

    Note

1. In qualitative research, the number of combinations leading to the same outcome is relatively small, for example, 2 to 5. In contrast, the linear, additive model often used in statistical research implies a very large number of configurations that lead to the same outcome, in fact, so many that the individual configurations are not of interest.



    Bios

    Gary Goertz is professor at the Kroc Institute for International Peace Studies at the University of Notre Dame. He is the author or coauthor of nine books and over 50 articles and chapters on issues of international conflict and peace, institutions, and methodology, including War and Peace in International Rivalry (2000), International Norms and Decisionmaking: A Punctuated Equilibrium Model (2004), and Social Science Concepts: A User's Guide (2006). His current activities include the Causes of Peace project, which explores the rise of interstate peace since 1945.

    James Mahoney is Gordon Fulcher Professor in the Departments of Political Science and Sociology at Northwestern University. He is the author of Colonialism and Postcolonial Development: Spanish America in Comparative Perspective (2010) and coeditor of Explaining Institutional Change: Ambiguity, Agency, and Power (2010). With Kathleen Thelen, he is currently working on a new edition of the volume Comparative Historical Analysis in the Social Sciences (2003).
