15
T here are numerous industries in which experts offer opinions about the quality of products and brands. For example, movie critics make suggestions about a soon-to-be released movie’s artistic and entertainment value, BusinessWeek hosts Robert Parker’s column recom- mending wines, Consumer Reports has long compared brands across numerous product categories, and so forth. In addition, consumers are increasingly posting online evalua- tions of products and brands—for example, they review books on Amazon.com, movies on Netflix.com, video games on Gamespot.com, or restaurants on Citysearch.com. Consumers find judgments from both professional crit- ics and amateur communities to be helpful, in part because the sheer number of new products and the frequency of their launches (e.g., weekly releases for movies) can be over- whelming for consumers in the choice process. In addition, many such products appear wholly unique, so a comparison of the movies Terminator Salvation and X-Men Origins: Wolverine or a comparison of the wines Argentinian Malbec and Italian Prosecco is difficult; thus, both critics’ and other ordinary consumers’ evaluations assist in decision making. 108 Journal of Marketing Vol. 74 (January 2010), 108–121 © 2010, American Marketing Association ISSN: 0022-2429 (print), 1547-7185 (electronic) Sangkil Moon, Paul K. Bergey, & Dawn Iacobucci Dynamic Effects Among Movie Ratings, Movie Revenues, and Viewer Satisfaction This research investigates how movie ratings from professional critics, amateur communities, and viewers themselves influence key movie performance measures (i.e., movie revenues and new movie ratings). Using movie- level data, the authors find that high early movie revenues enhance subsequent movie ratings.They also find that high advertising spending on movies supported by high ratings maximizes the movie’s revenues. Furthermore, they empirically show that sequel movies tend to reap more revenues but receive lower ratings than originals. Using individual viewer–level data, this research highlights how viewers’ own viewing and rating histories and movie communities’ collective opinions explain viewer satisfaction. The authors find that various aspects of these ratings explain viewers’ new movie ratings as a measure of viewer satisfaction, after controlling for movie characteristics. Furthermore, they find that viewers’ movie experiences can cause them to become more critical in ratings over time. Finally, they find a U-shaped relationship between viewers’genre preferences and genre-specific movie ratings for heavy viewers. Keywords: movie ratings, professional critics, amateur communities, movie revenues, consumer satisfaction Sangkil Moon is Associate Professor of Marketing (e-mail: smoon2@ ncsu.edu), and Paul K. Bergey is Associate Professor of Information Sys- tem (e-mail: [email protected]), Department of Business Manage- ment, North Carolina State University. Dawn Iacobucci is E. Bronson Ingram Professor of Marketing, Owen Graduate School of Management, Vanderbilt University (e-mail: [email protected]). The authors appreciate assistance from Natasha Zhang Foutz, Deepak Sirdeshmukh, and Glenn Voss. They also thank the participants in the marketing semi- nar at North Carolina State University for their comments on a previous version of this article. Finally, they acknowledge their indebtedness to the three anonymous JM reviewers. Professional critics commonly provide reviews and rat- ings; this information signals unobservable product quality and helps consumers make good choices (Boulding and Kirmani 1993; Kirmani and Rao 2000). Although amateur consumers can obtain useful information from critics, they are sometimes at odds with critics because of some funda- mental differences between the two groups in terms of experiences and preferences (Chakravarty, Liu, and Mazumdar 2008; Holbrook 1999; Wanderer 1970). There- fore, consumers often seek like-minded amateurs’ opinions in various ways. The recent development and proliferation of online con- sumer review forums, in which consumers share opinions on products, has had an enormous impact on the dynamics of word of mouth (WOM) by effectively connecting con- sumers (Chen and Xie 2008; Eliashberg, Elberse, and Leen- ders 2006; Godes and Mayzlin 2004; Godes et al. 2005; Mayzlin 2006; Trusov, Bucklin, and Pauwels 2009). These online forums lower the product information search costs, which motivates consumers to seek such review information (Stigler 1961). After all, consumer communities’ collective opinions can have as much influence on other consumers’ choices as professional critics’ opinions. In addition to these two influence groups, consumers make choices in accor- dance with their own judgments based on past experiences in the given product category, which can be contrary to opinions from either professional critics or amateur com- munities. In this sense, consumers are active information processors rather than passive information receivers (Bettman 1970). To provide a comprehensive evaluation of how product reviews and ratings influence consumers’ choices and satis- faction arising from their experiential consumption, we con-

Sangkil Moon, Paul K. Bergey, & Dawn Iacobucci ... · PDF file... and viewers themselves influence key movie performance measures ... Movie Revenues, andViewer Satisfaction / 109

Embed Size (px)

Citation preview

Page 1: Sangkil Moon, Paul K. Bergey, & Dawn Iacobucci ... · PDF file... and viewers themselves influence key movie performance measures ... Movie Revenues, andViewer Satisfaction / 109

There are numerous industries in which experts offeropinions about the quality of products and brands.For example, movie critics make suggestions about a

soon-to-be released movie’s artistic and entertainmentvalue, BusinessWeek hosts Robert Parker’s column recom-mending wines, Consumer Reports has long comparedbrands across numerous product categories, and so forth. Inaddition, consumers are increasingly posting online evalua-tions of products and brands—for example, they reviewbooks on Amazon.com, movies on Netflix.com, videogames on Gamespot.com, or restaurants on Citysearch.com.

Consumers find judgments from both professional crit-ics and amateur communities to be helpful, in part becausethe sheer number of new products and the frequency of theirlaunches (e.g., weekly releases for movies) can be over-whelming for consumers in the choice process. In addition,many such products appear wholly unique, so a comparisonof the movies Terminator Salvation and X-Men Origins:Wolverine or a comparison of the wines Argentinian Malbecand Italian Prosecco is difficult; thus, both critics’ and otherordinary consumers’ evaluations assist in decision making.

108Journal of MarketingVol. 74 (January 2010), 108–121

© 2010, American Marketing AssociationISSN: 0022-2429 (print), 1547-7185 (electronic)

Sangkil Moon, Paul K. Bergey, & Dawn Iacobucci

Dynamic Effects Among MovieRatings, Movie Revenues, andViewer

SatisfactionThis research investigates how movie ratings from professional critics, amateur communities, and viewersthemselves influence key movie performance measures (i.e., movie revenues and new movie ratings). Using movie-level data, the authors find that high early movie revenues enhance subsequent movie ratings. They also find thathigh advertising spending on movies supported by high ratings maximizes the movie’s revenues. Furthermore, theyempirically show that sequel movies tend to reap more revenues but receive lower ratings than originals. Usingindividual viewer–level data, this research highlights how viewers’ own viewing and rating histories and moviecommunities’ collective opinions explain viewer satisfaction. The authors find that various aspects of these ratingsexplain viewers’ new movie ratings as a measure of viewer satisfaction, after controlling for movie characteristics.Furthermore, they find that viewers’ movie experiences can cause them to become more critical in ratings over time.Finally, they find a U-shaped relationship between viewers’ genre preferences and genre-specific movie ratings forheavy viewers.

Keywords: movie ratings, professional critics, amateur communities, movie revenues, consumer satisfaction

Sangkil Moon is Associate Professor of Marketing (e-mail: [email protected]), and Paul K. Bergey is Associate Professor of Information Sys-tem (e-mail: [email protected]), Department of Business Manage-ment, North Carolina State University. Dawn Iacobucci is E. Bronson IngramProfessor of Marketing, Owen Graduate School of Management, VanderbiltUniversity (e-mail: [email protected]). The authorsappreciate assistance from Natasha Zhang Foutz, Deepak Sirdeshmukh,and Glenn Voss. They also thank the participants in the marketing semi-nar at North Carolina State University for their comments on a previousversion of this article. Finally, they acknowledge their indebtedness to thethree anonymous JM reviewers.

Professional critics commonly provide reviews and rat-ings; this information signals unobservable product qualityand helps consumers make good choices (Boulding andKirmani 1993; Kirmani and Rao 2000). Although amateurconsumers can obtain useful information from critics, theyare sometimes at odds with critics because of some funda-mental differences between the two groups in terms ofexperiences and preferences (Chakravarty, Liu, andMazumdar 2008; Holbrook 1999; Wanderer 1970). There-fore, consumers often seek like-minded amateurs’ opinionsin various ways.

The recent development and proliferation of online con-sumer review forums, in which consumers share opinionson products, has had an enormous impact on the dynamicsof word of mouth (WOM) by effectively connecting con-sumers (Chen and Xie 2008; Eliashberg, Elberse, and Leen-ders 2006; Godes and Mayzlin 2004; Godes et al. 2005;Mayzlin 2006; Trusov, Bucklin, and Pauwels 2009). Theseonline forums lower the product information search costs,which motivates consumers to seek such review information(Stigler 1961). After all, consumer communities’ collectiveopinions can have as much influence on other consumers’choices as professional critics’ opinions. In addition to thesetwo influence groups, consumers make choices in accor-dance with their own judgments based on past experiencesin the given product category, which can be contrary toopinions from either professional critics or amateur com-munities. In this sense, consumers are active informationprocessors rather than passive information receivers(Bettman 1970).

To provide a comprehensive evaluation of how productreviews and ratings influence consumers’ choices and satis-faction arising from their experiential consumption, we con-

Page 2: Sangkil Moon, Paul K. Bergey, & Dawn Iacobucci ... · PDF file... and viewers themselves influence key movie performance measures ... Movie Revenues, andViewer Satisfaction / 109

Movie Ratings, Movie Revenues, and Viewer Satisfaction / 109

to other consumption situations in which consumers contin-ually face new products (e.g., new books, new musicalbums) and determine the expected value of the new prod-ucts according to their own experiences, like-minded ama-teur communities’ general opinions, and critics’ profes-sional reviews.

In the following section, we discuss the theoreticalbackground and develop hypotheses pertaining to the rela-tionships between movie ratings and performances. Ourempirical analyses test the hypotheses using both movie-level data and viewer-level data. Finally, we discuss themanagerial implications of our findings.

Hypotheses DevelopmentMovie-Level Effects: Movie Ratings and Revenues(H1–H3)

Prior research has developed movie revenue evaluationmodels in various contexts, focusing particularly on theaterrevenues (Ainslie, Drèze, and Zufryden 2005; Eliashberg etal. 2000; Jedidi, Krider, and Weinberg 1998) rather than onvideo revenues (Prosser 2002). Some research has alsoexamined the impact of critics’ ratings and reviews ontheater revenues. Specifically, in an empirical examination,Eliashberg and Shugan (1997) find that critical reviews arecorrelated with late and cumulative box office receipts butnot with early box office receipts; thus, they conclude thatcritics are predictors rather than influencers. In contrast,some studies show that critics play a dual role as both pre-dictors and influencers (Basuroy, Chatterjee, and Ravid2003; Boatwright, Basuroy, and Kamakura 2007). Accord-ing to Reinstein and Snyder (2005), when movies receivehigh ratings from either critics or ordinary viewers, reve-nues increase. Early high ratings can generate positiveWOM that can spread to ordinary viewers.

Less obvious, however, is the hypothesis we test regard-ing whether strong revenues can subsequently generatemore positive reviews during the course of the movie. If wecan confirm this reciprocating dynamic interaction betweenreviews and revenues, we would establish that high ratingseffectively sustain high movie revenues and vice versa overthe not-so-long life of the movie. Our reasoning is similar tothat of Duan, Gu, and Whinston (2008), who indicate that aunique aspect of the WOM effect is the presence of a posi-tive feedback mechanism between WOM and sales. Simi-larly, Godes and Mayzlin (2004) theorize that commerciallysuccessful television shows can engender more buzz amongordinary viewers. The enhanced buzz is usually positive forcommercially successful movies because of the generallypositive correlation between movie ratings and movie reve-nues. In other words, movie viewers talk more about suc-cessful movies, which affects revenues and ratings, thanunsuccessful movies. Behavioral learning theory character-izes this dynamic process as vicarious learning becauseconsumers learn from the market and the process positivelyreinforces their satisfaction (Nord and Peter 1980; Roth-schild and Gaidis 1981). The favorable enhanced buzz fromhigh-revenue movies contributes to enhanced movie ratingsin following weeks because there are more viewers who had

sider the opinions from these multiple sources. Such exter-nal and internal information sources are particularly impor-tant in movie choice because viewers constantly face theproblem of choosing satisfying movies among many newand existing ones. Indeed, the development of the Internethas engendered movie rental service Web sites (e.g., Net-flix, Blockbuster), on which members can access a wealthof movie review information with minimal effort. Memberscan also post their own opinions with ease. In such an envi-ronment, the influence of online member communities’ gen-eral opinions on movie choice is maximized (Chevalier andMayzlin 2006; Liu 2006). Although the contributions of thisresearch are intended to encompass multiple industries, wefocus on the movie industry, in part because of its sizablefinancial contribution to the general economy ($10 billionin 2008 U.S. box office ticket revenues according towww.the-numbers.com/market).

This research attempts to highlight the relationshipsbetween product ratings and product financial perfor-mance—more specifically, various sources of movie ratingsand movie performances (i.e., movie revenues and viewersatisfaction)—while considering various movie qualitycharacteristics (e.g., movie costs, original versus sequel). Toaccomplish this objective, we conduct an empirical analysisat two levels: the (aggregate) movie level and the (individ-ual) viewer level. First, we focus on the movie-level analy-sis to examine the two-way dynamic influences betweenmovie ratings and movie revenues. In this analysis, we viewmovie revenues as the collective results of individual view-ers’ choices. In doing so, we focus on collective critics’ andamateur communities’ ratings for each movie but not onindividual viewers’ ratings. Second, to supplement thisaggregate view, we examine how individual viewers’ movieconsumption influences their postconsumption evaluations.To do so, we conduct a viewer-level analysis, in which wetest the influence of both the focal viewer’s viewing and rat-ing history and the movie community’s collective opinionson the focal viewer’s new movie rating, while controllingfor movie quality (i.e., movie characteristics).

This two-level analysis approach enables us to examinethe relationships between movie ratings and movie perfor-mances from complementary angles and to provide impor-tant managerial insights. Importantly, the (macro) movie-level analysis captures moviegoers’ collective choices in themovie industry, whereas the (micro) viewer-level analysistaps into individual consumers’ postconsumption experi-ences. We develop and test five hypotheses based on thistwo-way classification.

From a managerial perspective, on the basis of somekey empirical findings, this research suggests that moviemarketers should persistently promote movies that garnerhigh ratings to sustain movie revenues and should cau-tiously consider sequels despite their originals’ commercialsuccess. For movie rental firms, this study provides insightsinto ways to recommend movies on the basis of the focalmember’s rating history, the member community’s overallmovie rating patterns, and the movie’s characteristics. Net-flix maintains that approximately 60% of its membersselect movies according to movie recommendations tailoredto their tastes. In addition, our results should be applicable

Page 3: Sangkil Moon, Paul K. Bergey, & Dawn Iacobucci ... · PDF file... and viewers themselves influence key movie performance measures ... Movie Revenues, andViewer Satisfaction / 109

1Our empirical analysis confirms this strategy as common prac-tice by movie marketers. The correlation between weekly theaterrevenues (during the previous week) and weekly ad spending (dur-ing the current week) is positive and significant and increases oversubsequent weeks. This evidence indicates that movie marketersallocate their advertising money according to movies’ commercialsuccesses as they adjust distribution intensity (i.e., the number ofscreens) in response to the weekly theater revenues (Krider et al.2005).

2In general, high-cost movies generate high revenues and profits(though not always), and ratings, or consumers’ acceptance of aproduct, matter. Consider the movie Hotel Rwanda. Its costs andrevenues were low ($31 million and $32 million, respectively), butits average ratings were high (9.5/10). In contrast, consider Char-lie’s Angels: Full Throttle. Its costs ($147 million) were more thanits revenues ($102 million), and it received poor ratings (5.7/10).

Note that both factors must be in play; neither is suffi-cient on its own.2 That is, positive ratings alone cannoteffectively increase revenues, because not enough potentialviewers know about the movie. In addition, highly adver-tised movies cannot generate enough revenue without favor-able ratings from ordinary moviegoers, because negativeWOM spreads more quickly for these types of movies thanfor others. Yet we anticipate that neither piece of informa-tion is sufficient, because the effect is interactive and syner-gistic. Our theorizing may be consistent with signalingtheory, if both sets of signals are calibrated to be equallyeffective. Realistically, we acknowledge that one set of sig-nals may seem to be more diagnostic as a cue than anotherset (Lynch 2006; Milgrom and Roberts 1986). Thus, wepredict an interactive effect, but without specifying that onecontributing signal attenuates another:

H2: Positive ratings enhance the effectiveness of advertisingspending to raise movie revenues.

Movie sequels build on the original movies’ commercialsuccess (Basuroy and Chatterjee 2008). That is, moviegoerstend to view the high quality of the original movie as a sig-nal of the quality of a sequel because they tend to associatevarious products of the same brand with product quality(Erdem 1998). With generous production budgets andheavy advertising based on the original movie’s brandpower, a sequel usually achieves box office success, even ifit does not meet the box office levels attained by the parentmovie (Basuroy and Chatterjee 2008; Ravid 1999).

Although sequels can make money, they are often ratedless favorably than original movies. That is, the originalmovie’s success leads to high expectations for the sequel,which are often difficult to meet, thus leading to less satis-faction (Anderson 1973; Oliver 2009). Viewers may be lesssatisfied and less impressed as a result of satiation on expe-riential attributes arising from a sequel’s lack of novelty andsurprise, which results in lower ratings by moviegoers. Fig-uratively speaking, when the punch line is known, thehumor is less effective. Distancing a movie somewhat fromthe expectations of the original pays off; Sood and Drèze(2006) find that dissimilar sequels were rated higher thansimilar ones and that sequels with descriptive titles (e.g.,Pirates of the Caribbean: Dead Man’s Chest) were ratedhigher than those with numbered titles (e.g., Spider-Man 2).Low ratings on a sequel tend to spread in the movie popula-tion, thus limiting its revenues in following weeks, whichdoes not bode well for a sequel in the long run. Such lowratings of sequels may partially explain why subsequentsequels are rarely made.

Therefore, we hypothesize that the effects of bothhigher revenues and lower ratings of sequels are likely real-ized predominately in the early weeks after release becausesequels tend to stimulate their loyal consumer base quickly.

110 / Journal of Marketing, January 2010

positive experiences with the movie. This relationship isstrengthened by the mutual confirmation of the online com-munity environment composed of ordinary viewers.

Movie marketers are known to enhance advertisingspending for movies that were commercially successful inpreceding weeks, which in turn draws more positivereviews from movie viewers.1 In other words, advertisingcan also play a role in confirming viewer satisfaction, whichtranslates into higher movie ratings in subsequent weeks(Spreng, MacKenzie, and Olshavsky 1996). Thus, we testthe following hypothesis:

H1: High movie revenues enhance subsequent movie ratings.

This hypothesis implies that early successful revenuesfrom early adopters of new products serve as an informationcue for late adopters’ purchases and satisfaction. Word ofmouth is known to be a powerful and effective force, assist-ing the diffusion of consumer packaged goods, durables,and services in the market. Early commercial success is alsoa proxy of assurance of high quality from early adopters, asegment often acknowledged as experts in the relevantproduct category; as such, early sales figures lend credibil-ity to the product launch, which in turn enhances the prod-uct’s success.

The movie literature takes somewhat of a signalingtheory’s perspective, in that a consumer who witnesses theearly commercial success of a movie can infer that themovie has qualities that make it popular and might alsoinfer that the movie has artistic or creative merit. The litera-ture maintains that the same quality signal from twosources—one from marketers (advertising) and the otherfrom consumer communities (ratings)—can effectivelyinfluence consumers’ choices in the marketers’ favor bygreatly reducing uncertainties about new products’ unob-servable quality (Kirmani 1997; Nelson 1974).

Some recent research suggests that in the movie indus-try, certain signals may become less useful in the presenceof others; for example, Basuroy, Desai, and Talukdar (2006)find the attenuating role of third-party information sources(e.g., critics’ review consensus and cumulative WOM) onthe strength of the adverting signal. Other research arguesthat advertising and ratings indeed function synergistically,enhancing revenues when well-known movies (those withlarge budgets and heavy advertising) receive positive WOM(high movie ratings) (Basuroy, Chatterjee, and Ravid 2003;Liu 2006). To contribute to this line of inquiry, we also testan interactive hypothesis from analogous sources. We pre-dict an interactive effect that advertising spending upgradesrevenues further when ratings are more positive. Our theo-rizing suggests that if movie ratings are positive, potentialconsumers are more likely to respond to the advertising,thus enhancing the effect of advertising on revenues.

Page 4: Sangkil Moon, Paul K. Bergey, & Dawn Iacobucci ... · PDF file... and viewers themselves influence key movie performance measures ... Movie Revenues, andViewer Satisfaction / 109

3H3 can be potentially extended to other entertainment products,such as books (e.g., the Harry Potter series), video games, andmusic albums. Whether the same logic applies should be empiri-cally tested. The hypothesis can be further extended to incrementalproduct developments in consumer packaged goods (e.g., CokeZero, Pepsi ONE). One quality that makes movies different fromconsumer packaged goods, and even books, is their extremelyshort life cycle.

raters tend to rate new movies higher than strict raters, andaccordingly, we expect a positive association between aver-age rating and new movie rating. Third, the member’s ratingstandard deviation represents the variability of the mem-ber’s ratings across different movies and reflects his or herrisk tolerance in choosing movies. In other words, a widerating variability may indicate that a member’s choices havebeen experimental and risky; in such a case, the membercan end up with more disappointing movies than memberswith a narrow rating variability. Thus, we expect a negativeassociation between rating standard deviation and newmovie rating.

Fourth, the member’s percentage of each movie genre inhis or her rating history measures how often the membersees movies of the same genre. For example, thriller junkiesview most movies in the genre because they like the genrethe most of all the movie genres. This internal dispositiontends to lead them to rate thriller movies high. Thus, weexpect a positive association between percentage of samegenre and new movie rating. Fifth, the member’s historicalaverage movie rating for the same genre also measures hisor her general preference toward movies of the same genre.Logically, we expect that a rating for a new movie of thesame genre is positively associated with a member’s generalpreference toward the same genre. Sixth, we theorize that arecent satisfactory movie experience raises the aspirationlevel, whereas a recent disappointing experience lowers theaspiration level (Anderson 1973). According to prospecttheory (Kahneman and Tversky 1979), the aspiration levelset by the recent movie experience should function as a ref-erence point, yielding a negative association with mostrecent rating and next rating.

Next, we turn to an understanding of the second factorgroup of the movie recommendation system, the membercommunity’s overall rating patterns; to do so, we examinefive factors: (1) number of ratings, (2) average rating, (3)rating standard deviation, (4) percentage of highest rating,and (5) percentage of lowest rating. These factors echocommunity opinions and are comparable to online WOMeffects. First, the community’s accumulated number of rat-ings of a movie indicates how many members have alreadyseen the movie. Because more interested members view themovie before less interested members, we expect that theaccumulated number of ratings is negatively correlated withthe new rating. Second, we expect that the community’s his-torical average rating of the movie of interest is positivelycorrelated with the new rating because members tend to ratethe same movie similarly—that is, most viewers rate goodmovies high and bad movies low. Third, the community’shistorical rating standard deviation of the movie of interestmeasures the degree of evaluation disagreement toward thesame movie. More diversely discussed movies (i.e., thosewith high rating standard deviations) can attract more view-ers by raising their curiosity than less discussed movies,especially in the ubiquitous and freely accessible onlinemovie community environment. However, negative reviewsof a movie tend to disappoint viewers who are attracted byincreased discussion because these viewers are likely tohave high expectations from positive comments about themovie. Thus, we expect a negative association between rat-

Movie Ratings, Movie Revenues, and Viewer Satisfaction / 111

That is, viewers who already liked an original movie willtend to see its sequel earlier than a new, unknown, originalmovie. Accordingly, these sequel movie effects are alsolikely to dissipate quickly and therefore would not be asstrongly pronounced a few weeks after release. Thus, wepredict the following:

H3: Sequels reap higher revenues but lower ratings than origi-nals, predominately in the early weeks after release.3

General Viewer-Level Effects: Own Past Ratingsand Community Opinions

Movie revenues are one indicator of the results of con-sumers’ collective choices. Individual viewers’ ratings areanother significant measure, effectively summarizing con-sumer satisfaction in the movie industry. Higher ratingsmay lead viewers to choose other movies that share pre-ferred characteristics (e.g., sequel, same actor), and the rea-sons for satisfaction can be spread to online consumer com-munities through text reviews. Viewer preferences candevelop into a stable and established preference for viewers,such as favorite genres or favorite stars. This is particularlyimportant for online movie rental firms (e.g., Netflix)because members have unlimited access to other members’ratings and reviews. For these firms, members’ increasedsatisfaction enhances their loyalty to the company.

Movie rental firms invest tremendous time and effortinto developing an effective movie recommendation systembased on (1) the individual member’s rating history, (2) themember community’s overall rating patterns of the movieof interest, and (3) the movie’s characteristics. The firmscan then use the system for customer relationship manage-ment by using these information sources. Beyond such arecommendation system, it is equally important to under-stand how such information sources are associated withmembers’ ratings on new movies for insights into how thenew ratings are determined.

To understand the individual member’s ratings, weexamine six factors: (1) number of movies rated, (2) aver-age rating, (3) rating standard deviation, (4) percentage ofsame genre, (5) genre-specific average rating, and (6) mostrecent rating. Next, we describe the anticipated effect ofeach factor.

First, the member’s number of movies rated indicatesthe frequency of movie viewing, which is used to identifysegments traditionally referred to as “light” and “heavy”segments. Overall movie-viewing frequency indicates gen-eral liking toward movies, with heavy viewers likingmovies more. Therefore, we expect a positive associationbetween number of movies rated and new movie rating.Second, the member’s average rating shows how strict orlenient he or she is when rating movies on average. Lenient

Page 5: Sangkil Moon, Paul K. Bergey, & Dawn Iacobucci ... · PDF file... and viewers themselves influence key movie performance measures ... Movie Revenues, andViewer Satisfaction / 109

4This argument is empirically confirmed by the data used in thisresearch.

riences become stabilized over time. Specifically, on theone hand, it becomes more difficult to satisfy them, andthus they give high ratings less often. On the other hand,their improved expertise and accumulated experienceenable them to avoid movies that are unsuitable to theirtastes, and thus they give low ratings less often. Therefore,amateurs’ movie ratings become stabilized in the form ofless variability with consumption experiences.

H4: Amateur viewers’ movie-viewing experiences generateless favorable ratings with less variability.

In the long run, amateur viewers’ ratings should stabi-lize at a certain level because there will not be any moresubstantial learning experience in critiquing movies. There-fore, this hypothesis is primarily focused on amateur view-ers who are acquiring relative new movie consumptionexperiences as opposed to seasoned and experienced ama-teur viewers.

Next, given the association between movie preferencesand ratings with genre (Liu 2006), such as children beingfans of animation movies, we expect that viewers give theirfavorite genres (more precisely, members’ frequentlyviewed genres) high ratings because they are internally pre-disposed to like that category of movies (upward “preferredgenre effect”). In contrast, as we predict in H4, as viewerschoose more movies beyond their best choices in their non-favorite genres, they may rate those movies lower withouthaving the preferred genre effect as in their favorite genres(downward “viewed set effect”). That is, as viewers choosemore movies, they settle for less attractive movies becausethey have exhausted their top choices in certain genres.Thus, the relationship between genre proportion (i.e., thepercentage of movies seen in the genre compared with allmovies seen for that individual viewer) and average genrerating may be nonmonotonic because of these two conflict-ing effects and warrants further investigation.

Specifically, we expect “genre loyalists” with a highrange of genre proportions to generate high ratings approxi-mately proportional to their genre proportion because oftheir strong internal inclination toward their favorite genres.In most cases, their strong inclination toward their fre-quently viewed genres prevents them from choosing fromother, less favored genres. That is, genre loyalists arestrongly predisposed to specific aspects of their favoritegenres. For example, thriller movie loyalists enjoy how thestory unfolds and entertains their own anticipated scenarios.Thus, such strong preferences for their favorite genres leadthe loyalists to rate most movies in their preferred genresfavorably (upward effect by preferred genres). In contrast,this effect should be weak or nonexistent for viewers whobalance multiple genres, and accordingly, they shouldexhibit a downward effect by viewed set. Finally, we expecta low range of genre proportions to result in a mediumrange of ratings due to viewers’ choosing only the most rec-ognizable movies in a genre that is only a little known tothem (e.g., through intensive advertising exposure, friends’strong recommendation). However, their lack of stronginclination toward a particular genre leads them to haveonly a moderate level of satisfaction, despite the movie’sstrengths.

112 / Journal of Marketing, January 2010

ing standard deviation and a new movie rating. Regardingthe fourth and fifth points, the community’s percentage ofthe highest and lowest ratings of the movie of interest indi-cates two opposite extreme ratings (e.g., 5 and 1 on a five-point scale, respectively) and is a strong indicator of newratings beyond the community’s simple average rating.Thus, the highest rating is positively correlated with a newmovie rating, and the lowest rating is negatively correlatedwith a new movie rating.

These two data perspectives, along with movie charac-teristics, converge to lend a better understanding of howboth consumers’ own consumption experiences and com-munity opinions influence consumers’ postconsumptionevaluations in the movie category. Recently, marketingscholars have emphasized “connected” consumers in theInternet era, in which people can easily exchange consump-tion information (Chen and Xie 2008; Chevalier and May-zlin 2006). However, consumers still value and use theirown experiences (i.e., internal information) in decisionmaking, in addition to community opinions (i.e., externalinformation). In general, consumers are likely to seek vari-ous types of information sources to reduce uncertainty asthe perceived risk associated with a purchase increases(Murray 1991; West and Broniarczyk 1998). In the movieindustry, the uniqueness of each movie makes movie choicechallenging, along with the financial and transaction costs(e.g., ticket price, travel to the theater).

In-Depth Viewer-Level Effects: Rating PatternDevelopments with Experiences (H4–H5)

In the previous subsection, we focused on illuminating twogroups of factors that influence amateur viewers’ new rat-ings—viewers’ own past ratings and movie communities’opinions. Here, we also consider how amateur viewers’ rat-ings develop as these viewers acquire more movie con-sumption experiences (Alba and Hutchinson 1987). There-fore, in this subsection, we present two in-depth hypotheseson individual viewers’ rating pattern developments: onehypothesis on viewers’ rating changes over time (H4) andone hypothesis on how viewers’ movie consumption experi-ences are associated with their genre preferences (H5).

We examine how amateur viewers’ ratings can changeover time. We test and verify that members with more rat-ings experiences rate movies lower, similar to critics’ rat-ings, which are generally lower than amateurs’ ratingsbecause of the critical nature of their reviews.4 By watchingmore movies, members develop a reliable, large referencebase and, accordingly, should be able to analyze moviessimilarly to professional critics. Alba and Hutchinson(1987) indicate that increased familiarity (i.e., the numberof product-related experiences) results in increased con-sumer expertise (i.e., the ability to perform product-relatedtasks successfully). Furthermore, we expect members tochoose preferred movies first and then a set of movies thatdo not include their best choices.

At the same time, members’ ratings become lessvariable with experiences because their consumption expe-

Page 6: Sangkil Moon, Paul K. Bergey, & Dawn Iacobucci ... · PDF file... and viewers themselves influence key movie performance measures ... Movie Revenues, andViewer Satisfaction / 109

ratings in both online and offline domains. Thus, in general,late viewers make better-informed choices, which is empir-ically supported by more positive ratings in the followingweeks. In the process, commercially successful movies cangenerate more satisfactory viewers who then rate themovies higher. That is, these viewers choose the moviesbecause of previous ratings and reviews.

We tested H2 regarding the interaction effects of movieratings (from either critics or amateurs) with ad spending onbox office revenues using weekly data. Table 3 provides thesignificant variables at the .05 level (for the initial indepen-dent variables used in each regression, see the Appendix).In the weekly analysis, we used the previous week’s adspending measures (i.e., weekly ad spending, weeklytheater ad proportion, and weekly movie ad proportion) andthe accumulated movie ratings (i.e., critics’ and amateurs’ratings) up to the previous week to measure their effects onthe following week’s theater revenues. We used the accu-mulated ratings because moviegoers can review all the pastratings to determine which movie to see.

In this weekly analysis, we confirmed that the interac-tion effects of ratings and spending (critics’ ratings × adspending and amateurs’ ratings × ad spending) were signifi-cant in Week 2–Week 7, whereas the main effects of ratingswere not. This implies that movie revenues cannot be maxi-mized without the combination of ratings and ad spendingin these weeks; thus, H2 is empirically supported for thelater weeks. The nonsignificant main effects of both ratings(critics’ and amateurs’ ratings) indicate that ratings alonecannot increase movie revenues without enhanced buzz cre-ated through advertising spending. In contrast, we observe amixture of positive and negative main effects of the moviecost variables (i.e., weekly ad spending, weekly theater adproportion, and weekly movie ad proportion). Although weexpect that movie costs are positively correlated with movierevenues, significantly negative movie cost effects implythat some advertising money was excessively wastedbeyond its proper use, based on its effective combinationwith favorable ratings (measured by both interaction terms).In other words, negative movie cost effects occurred afterthe regression was accounted for by both significantly posi-tive interaction term effects. Notably, our empirical analysisshows that movie marketers tend to allocate more advertis-ing dollars to movies that collect high revenues in precedingweeks. It demonstrates that in many cases, marketers usedadvertising money inefficiently because they did not con-sider both revenues and ratings when allocating their adver-tising resources.

In contrast, in the opening week (Week 0), we foundthat only one main effect (critics’ ratings) and one interac-tion effect (amateurs’ ratings × ad spending) were signifi-cant. The result of Week 0 implies that critics’ ratings havea significant main effect on theater revenues because critics’reviews and ratings are intensively published in various out-lets shortly before opening week (Week 0). In the sameweek, amateurs’ ratings showed no significant main effect,probably because there are only a limited number of ama-teur reviews before a movie’s release. Accordingly, highamateur ratings can only enhance revenues with the help ofsubstantial ad spending in the week. In the following week

Movie Ratings, Movie Revenues, and Viewer Satisfaction / 113

Importantly, we hypothesize that this U-shaped relation-ship is pronounced only for heavy viewers, who gainenough consumption experiences through enhanced analy-sis and elaboration abilities to process product information(Alba and Hutchinson 1987). We do not expect the relation-ship to be strong for light viewers (i.e., novices), becausetheir experiences do not allow them to fully develop eitherthe upward effect (preferred genre effect) or the downwardeffect (viewed set effect).

H5: There is a U-shaped relationship between experiencedviewers’ genre proportion (i.e., genre preference) and theirgenre rating.

Empirical AnalysisWe divide our empirical analyses into two parts accordingto the different nature of the available data. First, we pro-vide the empirical results for H1–H3, using movie-level datafrom various sources, including Rotten Tomatoes for pro-fessional critics’ ratings and Yahoo Movies for amateurs’ratings. These data do not include information on individualcritics or individual amateur viewers. Second, with individualmembers’ data mainly from Netflix, we run a regression ana-lyzing individual viewers’ movie ratings to test H4 and H5.

Movie-Level Data

The data contain specific information regarding 246 moviesthat cover six major genre categories: thriller, romance,action, drama, comedy, and animation. The movies werereleased during the May 2003–October 2005 period intheaters and on video. Table 1 provides a summary on thesedata, such as movie characteristics, costs, and revenues.

We gathered the ratings information on the 246 moviesfrom two popular movie Web sites: Rotten Tomatoes andYahoo Movies. Both sites allow members to post movie rat-ings but differ in terms of membership conditions. The Rot-ten Tomatoes site comprises accredited movie critics exclu-sively. Accordingly, these members are active in eitherselect movie critic societies/associations or print publica-tions. They are regarded as professional movie critics. Incontrast, the Yahoo Movies site is open to the public and,for the most part, comprises amateur movie lovers (seeTable 1).

Movie-Level Analysis: Movie Ratings andRevenues (H1–H3)

In the weekly regression summarized in Table 2, highweekly theater revenues induced more favorable ratingsfrom amateurs in the following week in six of the sevenweeks tested, in support of H1. Week 1 was the only excep-tion (with theater revenues in Week 0 [opening week] andmovie ratings in Week 1), which suggests that viewers inthe opening week (Week 0) tended to have mixed evalua-tions about the movie, perhaps because they saw the moviefor different reasons. For example, heavy advertising frommovie marketers or insufficient and inconsistent informa-tion from like-minded moviegoers can make satisfactorychoices difficult. During the opening week, however,enough people view the movie and spread their reviews and

Page 7: Sangkil Moon, Paul K. Bergey, & Dawn Iacobucci ... · PDF file... and viewers themselves influence key movie performance measures ... Movie Revenues, andViewer Satisfaction / 109

Category Variable Summary Statistics

Movie characteristics Six genres Thriller (35, 14%), romance (25, 10%), action (51, 21%), drama (50,20%), comedy (74, 30%), animation (11, 4%)

Sequel 36 movies (15%)

Eight major studiodistribution

Theater distribution (181, 74%), video distribution (186, 76%)

MPAA rating R (78, 32%), non-R (PG-13, PG, and G) (168, 68%)

7 major holiday release Theater release (34, 14%), video release (36, 15%)

M SD Minimum Maximum

Running time (minutes) 108 19 68Pooh’s Heffalump Movie

201Lord of the Rings: TheReturn of the King

Video release lagging days 143 39 67From Justin to Kelly

431Luther

Movie costs Production budget(thousands of dollars)

47,798 39,098 150Pieces of April

210,000Spider-Man 2

Ad costs(thousands of dollars)

14,132 8,441 271Eulogy

45,981Shrek 2

Total costs(thousands of dollars)

61,930 44,622 2,193Pieces of April

242,077Spider-Man 2

Theater screens 11,325 6,403 44Eulogy

27,354Shrek 2

Movie revenues Box office revenues(thousands of dollars)

42,418 44,897 54Eulogy

301,861Shrek 2

Video rental revenues(thousands of dollars)

24,381 13,121 803She Hate Me

62,068The Day After Tomorrow

Video sales revenues(thousands of dollars)

15,359 24,088 82Northfolk

233,090Finding Nemo

Total revenues(thousands of dollars)

82,158 74,164 1,246She Hate Me

473,118Finding Nemo

Rotten Tomatoes movieratings (critics’ ratings)

Number of ratings 150 43 33I Am David

257The Passion of the Christ

Average rating:ten-point scale

5.50 1.46 1.80Alone in the Dark

8.69Lord of the Rings:TheReturn of the King

Yahoo Movies ratings(amateurs’ ratings)

Number of ratings 1,509 2,585 29Eulogy

34,672The Passion of the Christ

Average rating:ten-point scale

6.89 1.40 2.16House of the Dead

9.50Hotel Rwanda

TABLE 1Summary of the 246 Movie Data Sample

Notes: MPAA = Motion Picture Association of America.

impact of sequels on theater revenues occurred only in thefirst two weeks after the movie’s release and that the impactwas much stronger in the opening week (Week 0) than inthe following week (Week 1). Afterward, the impact weak-ened, probably because the buzz and attention for the sequeldissipated quickly. Our weekly analysis in Table 2 shows anegative impact of a sequel on movie ratings in Weeks 1and 2 as well. In brief, the empirical results show thatsequels can have a positive impact on theater revenuesbased on the originals’ success, but they leave viewersunimpressed and unsatisfied relative to the original movies.Yet these sequel effects are pronounced only in the earlyweeks and become subsequently neutralized because the

114 / Journal of Marketing, January 2010

(Week 1), we observe one significant main effect (amateurs’ratings) and one significant interaction effect (critics’ rat-ings × ad spending). In this particular week, amateurs’ rat-ings create enough information and buzz from early movie-goers, and thus the still-new movies do not need the supportof heavy advertising to enhance revenues. In contrast, thecombination of critics’ ratings and ad spending enhancesmovie revenues effectively beginning this week. After thefirst two weeks, because of reduced voluntary attention andbuzz among ordinary viewers, only a combination of goodratings and heavy ad spending made a substantial influenceon theater revenues.

Next, we tested H3, which compares sequels and theircontemporaneous originals. Table 3 shows that the positive

Page 8: Sangkil Moon, Paul K. Bergey, & Dawn Iacobucci ... · PDF file... and viewers themselves influence key movie performance measures ... Movie Revenues, andViewer Satisfaction / 109

Week 1 2 3 4 5 6 7

Intercept 1.66354 1.02730 1.30307 .69374 1.75857 2.19604 1.69142Genre (thriller) –.85705 n.s. n.s. n.s. n.s. n.s. n.s.Sequel –.28891 –.32098 n.s. n.s. n.s. n.s. n.s.Running time n.s. n.s. n.s. .00766 n.s. n.s. .01127MPAA rating n.s. n.s. n.s. n.s. n.s. n.s. n.s.Theater revenues in

millions of dollars(previous week)

n.s. .007 .018 .017 .098 .090 .047

Amateurs’ ratings(previous week)

.81645 .84198 .80520 .77912 .70521 .65247 .58119

R2 .7609 .8017 .6627 .6496 .5620 .3894 .3785N 246 243 239 236 229 223 210

TABLE 2Determinants of Amateurs’ Movie Ratings: Stepwise Linear Regression

Notes: Dependent variable = weekly amateurs’ movie ratings (current week). Each regression included only significant independent variables atthe .05 level. Each movie showed its first eight weeks (Week 0 [opening week]–Week 7) after its release in the data used. n.s. = notsignificant.

Week 0 1 2 3 4 5 6 7

Intercept –32,138,681 –4,037,404 –440,143 –321,435 –451,267 –295,158 –334,726 –207,801Sequel 9,398,002 1,450,018 n.s. n.s. n.s. n.s. n.s. n.s.Holiday week 5,445,809 4,383,865 976,660 1,919,559 670,210 463,581 n.s. 406,835Major studio release n.s. –1,725,768 n.s. n.s. n.s. n.s. n.s. n.s.

Weekly number ofscreens 12,205 n.s. 615 n.s. 943 540 1617 1206

Running time 91,297 n.s. n.s. n.s. n.s. n.s. n.s. n.s.Weekly ad spending –2,592 n.s. –827 –914 –2,402 –2,715 –3,351 –5,210Weekly theater

ad proportion n.s. n.s. n.s. n.s. –1,885,493 –941,568 –2,207,783 n.s.Weekly movie

ad proportion n.s. n.s. n.s. n.s. 2,576,562 2,196,000 2,420,714 1,329,766Critics’ ratings 4,117,721 n.s. n.s. n.s. n.s. n.s. n.s. n.s.Amateurs’ ratings n.s. 578,971 n.s. n.s. n.s. n.s. n.s. n.s.Critics’ ratings ×

ad spending n.s. 65 117 40 204 40 66 359Amateurs’ ratings ×

ad spending 217 n.s. 67 93 161 176 488 143Box office revenues

(previous week) N.A. .49371 .44448 .65020 .28627 .48853 .11823 .26666R2 .6182 .8480 .8972 .9132 .8969 .9292 .8540 .8827Adjusted R2 .6086 .8442 .8942 .9113 .8932 .9266 .8492 .8787N 246 246 243 239 236 229 223 210

TABLE 3Determinants of Box Office Revenues: Stepwise Linear Regression

Notes: Dependent variable = weekly amateurs’ theater revenues (current week). Each regression included only significant independentvariables at the .05 level. Each movie showed its first eight weeks (Week 0–Week 7) after its release in the data used. N.A. = not applic-able, and n.s. = not significant.

lected the data between October 1998 and December 2005,and they reflect the distribution of all ratings received dur-ing the period. The title and release year of each movie arealso provided.

From the Netflix Prize public data, we selected13,734,151 ratings of the 246 movies used for our previousmovie-level analysis and matched this viewer-level datawith the movie-level data. The data included 456,476 Net-flix members. The ratings selected cover the June

Movie Ratings, Movie Revenues, and Viewer Satisfaction / 115

fan base stemming from the original tends to view thesequel early.

Viewer-Level Data

We used individual members’ movie rental and rating histo-ries data from the Netflix Prize site for the individualviewer–level analysis. The public data contain more than100 million ratings of 17,770 movie titles from 480,000randomly chosen, anonymous Netflix members. We col-

Page 9: Sangkil Moon, Paul K. Bergey, & Dawn Iacobucci ... · PDF file... and viewers themselves influence key movie performance measures ... Movie Revenues, andViewer Satisfaction / 109

Variable Group Variable Estimate SE p-ValueIntercept –1.01621 .09838 <.001

Individual member–basedvariables (X)(by the member)

(1) Number of movies rated (+) .00055 .00011 <.001(2) Average rating (+) .04041 .01444 .003(3) Rating standard deviation (–) –.00132 .01903 .472(4) Percentage of same genre (+) .20626 .04627 <.001(5) Genre-specific average rating (+) .93723 .01141 <.001(6) Most recent rating (–) –.00580 .00369 .058

Community-basedvariables (Y)(based on the focal movie)

(1) Number of ratings (–) –.00022 .00004 <.001(2) Average rating (+) .08662 .02050 <.001(3) Rating standard deviation (–) –.09462 .03030 <.001(4) Percentage of highest rating (+) .01374 .00054 <.001(5) Percentage of lowest rating (–) –.02176 .00099 <.001

Movie characteristicsvariables (Z)

Genre: thriller .54469 .02980 <.001Genre: romance .41895 .02915 <.001Genre: action .29751 .02845 <.001Genre: drama .22314 .02955 <.001Genre: comedy .46865 .02911 <.001Sequel –.05267 .01300 <.001MPAA rating (R = 1, non-R = 0) .02891 .00993 .002Running time .00259 .00030 <.001Production budget (millions of dollars) –.00095 .00015 <.001Theater revenues (millions of dollars) .00021 .00015 .074Holiday video release .03485 .01233 .002Video release (days after theater release) –.00026 .00012 .015Video release by the eight major studios –.00489 .01054 .322Video bonus materials .02858 .01391 .020Video ad spending (millions of dollars) .01240 .00261 <.001

TABLE 4Regression Estimates of the Viewer-Level Rating Regression Model (Netflix Data)

Notes: N = 43,204. The sign next to each variable indicates the sign expected by our theory.

viewing and rating history. Next, the fiveYmt variables mea-sure community opinions, which are comparable to onlineWOM effects. Unlike X, Y varies in the dimensions ofmovie (m) and time (t) but does not vary across members(h) as collective group opinions. Finally, Zm includes 11movie characteristics variables. We use these variables ascontrol variables to measure more accurately how both theX and the Y variables influence the dependent variable; Zdoes not vary across members (h) or time (t) as fixed movie(m) characteristics.

To fit this model, we removed members with fewer than10 ratings over the 246 movies. Then, we randomly selected1 of every 300 members to make the regression more man-ageable. The selected sampling resulted in 43,204 ratingsfrom 1014 members. The average rating across the mem-bers was 3.59. The average number of ratings of the mem-bers was 42.6 of the 246 movies.

Table 4 shows the estimation results of the regressionmodel. All the individual member–based variables (X) andcommunity-based variables (Y) showed the expected signs,and 9 of the 11 X andY variables were significant at the .05level. Between the two insignificant variables (X3 and X6),the most recent rating (X6) was close to the cutoff level.Despite its expected sign, rating standard deviation (X3)was insignificant, perhaps because wide rating variabilityelicits both high ratings and low ratings across members.

116 / Journal of Marketing, January 2010

2001–December 2005 period. The rating average of the 246movies was 3.38 on a five-point scale, with an average of55,830 ratings for a movie.

General Viewer-Level Analysis: Own Past Ratingsand Community Opinions

We developed a regression model comprised of threegroups of factors that we anticipated should influence newmovie ratings (dependent variable): (1) individual member–based variables (X), (2) community-based variables (Y),and (3) movie characteristics variables (Z) (see Table 4). Weanalyzed the impacts of the three groups of factors on newmovie ratings to provide comprehensive empirical findingsregarding consumer satisfaction with movies. Thus, weused the following linear regression, in which the dependentvariable, R, represents member h’s rating for movie m attime t (West and Broniarczyk 1998). Our regression reflectscontinuously updated temporal information at the giventime point to evaluate the new rating R:

(1) Rhmt = α + βXhmt + γYmt + δZm + εhmt,

where the Xhmt variables represent the rating member’sindividual viewing experiences and preferences. The Xterm, which is varied for the three dimensions of member(h), movie (m), and time (t), is composed of the six specificindividual member variables based on the focal member’s

Page 10: Sangkil Moon, Paul K. Bergey, & Dawn Iacobucci ... · PDF file... and viewers themselves influence key movie performance measures ... Movie Revenues, andViewer Satisfaction / 109

Genre InterceptMember Rating

Order R2 N

Overall 3.67597 (<.001) –.00295 (<.001) .57 129Thriller 3.49238 (<.001) –.02320 (<.001) .80 24Romance 3.59049 (<.001) –.01508 (.081) .16 20Action 3.66009 (<.001) –.00323 (.034) .11 42Drama 3.90672 (<.001) –.01695 (<.001) .58 35Comedy 3.41070 (<.001) –.00216 (.155) .05 45Animation 4.14090 (<.001) –.04585 (.009) .65 9

TABLE 5Trend Analysis of Individual Viewers’

Rating Changes Over Time (Netflix Data)

A: Dependent Variable = Rating Mean

Notes: The number in parentheses indicates the p-value for thecorresponding estimate.

Genre InterceptMember Rating

Order R2 N

Overall 1.11418 (<.001) –.00144 (<.001) .21 129Thriller 1.13149 (<.001) –.01257 (.009) .27 24Romance 1.06233 (<.001) –.00725 (.006) .35 20Action 1.12570 (<.001) –.00589 (<.001) .44 42Drama 1.04798 (<.001) –.00280 (.097) .08 35Comedy 1.10261 (<.001) –.00301 (.020) .12 45Animation .95692 (<.001) –.00569 (.486) .07 9

B: Dependent Variable = Rating Standard Deviation

bers indicates that more experiences can make viewerstougher raters, in support of H4 (see also Table 5).

We tested the hypothesized U-shaped relationshipbetween the member’s genre proportion and genre-specificrating average (H5) and confirmed the curvilinear relation-ship (Figure 1). We theorized that genre loyalists, who see amajority of movies in one or two genres, rate movies higherbecause of their strong preference for the favorite genres.Furthermore, perhaps they do a better job at selecting goodmovies in their favorite genres because of their genre-specificexpertise. In contrast, genre switchers, who tend to watchvarious movies across genres, often rate movies lowerbecause they settle for less satisfactory movies after theyexhaust their top choices in chosen genres. This genre-related rating pattern becomes more pronounced for heavyviewers (i.e., experts) than for light viewers (i.e., novices).

Figure 1, Panel B, illustrates our empirical results of therelationship among viewers overall, heavy viewers, andlight viewers according to the regression results in Panel A.In the heavy-viewers segment, which shows the most pro-nounced relationship among the three groups, the genre-specific movie rating becomes lower until the genre-specificproportion reaches 23% (for genre switchers). As men-tioned previously, we expect that these viewers sample lesssatisfactory movies in their nonfavorite genres afterexhausting their favorite movies in the same genres. Afterthey passed the lowest threshold, the genre rating began toincrease quickly and reached the highest possible rating(i.e., five) when the member’s genre proportion hit 77%.

Movie Ratings, Movie Revenues, and Viewer Satisfaction / 117

Notably, all the five community-based variables were sig-nificant. In particular, the results of percentage of highestrating (Y4) and percentage of lowest rating (Y5) imply thatthe two extreme ratings are strong indicators of additionalratings from new viewers of the same movies beyond theaverage rating (Y2). Basuroy, Chatterjee, and Ravid (2003)find that negative reviews hurt performance more than posi-tive reviews help performance (negativity bias). Ourresearch confirms this theory because the absolute estimatevalue of the percentage of the lowest rating (.02176) ismuch larger than that of the percentage of the highest rating(.01376).

Finally, most movie characteristics variables (Z) werealso significant. All five genre dummy variables were sig-nificant, which indicates their differential impact on ratings.The sequel dummy variable has a negative impact on thenew rating, which is consistent with our previous empiricalresults, in support of H3. The Motion Picture Association ofAmerica (MPAA) rating dummy variable has a positivesign, which indicates that R-rated movies are rated higherthan non-R-rated movies. Similarly, a longer movie tends tobe rated higher. After we account for all the movie-relatedfactors in the regression, production budget has a negativeimpact on ratings. This result implies that big-budget movieproducers tend to spend excessively beyond financially jus-tifiable quality improvements. In association with videorelease factors, videos released on holidays and thosereleased shortly after their theater release tend to be ratedhigher. Finally, both video bonus materials and high videoad spending show a positive impact on ratings. In brief, theresults of the Z variables add face validity to our overallregression results.

In-Depth Viewer-Level Analysis: Rating PatternDevelopments with Experiences (H4–H5)

Table 5 shows our temporal trend analysis of individualmembers’ rating changes over time as they view moremovies. For this analysis, we sorted the Netflix data usedfor the regression in Table 4 in the ascending order of therating time for each member. We applied generalized leastsquares estimation to correct for the heteroskedasticity (i.e.,unequal variances) problem in the data (Goldberger 1991;Griffiths, Hill, and Judge 1993). Table 5, Panel A, shows therelationship between members’ viewing experiences (mem-ber rating order) and the rating mean across all members atthe overall level and each of all the six genres, which is con-sistently negative. The negative ratings at the overall levelimply that members tend to rate recently viewed moviesmore strictly as they acquire more consumption experi-ences. The same pattern was confirmed in five of the sixgenres at the .10 significance level. Similarly, Table 5, PanelB, shows that members’ ratings tend to become less variablewith consumption experiences at the overall level. Again,the same pattern is confirmed in five of the six genres at the.10 significance level. In addition, we found that the corre-lation between the movie rating average and the rating fre-quency was positive across movies (.557, p ≤ .01) but nega-tive across members (–.064, p ≤ .01). The positivecorrelation across movies implies that popular movies arerated higher; however, the negative correlation across mem-

Page 11: Sangkil Moon, Paul K. Bergey, & Dawn Iacobucci ... · PDF file... and viewers themselves influence key movie performance measures ... Movie Revenues, andViewer Satisfaction / 109

Variable Overall Heavy Viewers Light Viewers

Intercept 3.67588 (<.001) 3.78521 (<.001) 3.59785 (<.001)Member’s genre proportion

(linear) –.93434 (<.001) –.33033 (<.001) –.21382 (.568)

Member’s genre proportion(quadratic)

1.86631 (<.001) 5.09324 (<.001) .68759 (.295)

FIGURE 1U-Shaped Relationship Between Members’ Genre Proportion and Genre Ratings

A: Regression Resultsa

B: Illustration of Empirical Resultsb

aDependent variable = movie rating. We divided all the members in the first regression (overall) into two groups of approximately the same size:heavy viewers and light viewers. The number in parentheses indicates the p-value for the corresponding estimate.bIn the line graph, the x-axis is members’ genre proportion, and the y-aixs is movie rating on a five-point scale.

OverallHeavy viewersLight viewers

5

4

3

2

1

00 .1 .2 .3 .4 .5 .6 .7 .8 .9 1

and reviews can capture moviegoers’ attention more easilyin the early weeks because they are published in manynewspapers and magazines and spread through Internetmovie communities in time for new releases. In summary,in the early weeks, critics’ early ratings can be an importantquality signal for marketers to allocate advertising moneyamong available movies in theaters (Nelson 1970, 1974). Toobtain more sustained revenues in the later weeks, mar-keters should sustain their ad spending on movies supportedby high ratings. This pattern suggests that marketers shouldallocate their ad dollars to movies that garner high ratingsnot only in early weeks but also in later weeks.

Many movie producers assume that making sequels ofcommercially successful original movies is a safe invest-ment because they can effectively take advantage of theestablished fan base of the originals (H3). Our empiricalfinding (Table 3) that sequels have a positive impact ontheater revenues only in the first two weeks implies that thefan base reacts swiftly to sequels’ releases and becomesexhausted quickly. Despite such commercial success at thebox office, sequels tend to leave their viewers less satisfiedin the early weeks (Table 2), perhaps because sequels areusually a strengthened and intensified version (e.g., moreviolent action, stronger special effects) of the originals’frameworks and story lines (Sood and Drèze 2006). Fur-thermore, high expectations formed by successful originals

118 / Journal of Marketing, January 2010

DiscussionEmpirical Findings and Managerial Implications

Movie-level effects. From a managerial perspective, thisstudy shows that ratings are associated with movie perfor-mance, as measured by both movie revenues and viewersatisfaction. Our movie-level data analysis implies that mar-keters should allocate more ad dollars to movies that garnerearly high ratings by professional critics. These ratings tendto lead to high ratings by amateurs, which in turn can con-tribute to enhanced revenues. Enhanced revenues can alsoraise movie ratings in the subsequent weeks (H1). More-over, the revenues of highly rated movies are enhancedwhen the movies are heavily advertised (H2).

Our weekly data analysis (Table 3) reveals that ad dol-lars can be used efficiently over the course of new movies,which usually last only a couple of months in theaters.Specifically, we found that prerelease critics’ ratings con-tribute significantly to the theater revenues in the openingweek (Week 0) when amateurs’ ratings are virtually non-existent. In contrast, amateurs’ ratings begin to influencetheater revenues in the following week (Week 1). In laterweeks, only high ratings supported by heavy ad spendingcan sustain theater revenues because high ratings alone arenot sufficient to maintain moviegoers’ attention withoutheavy advertising in later weeks. In contrast, movie ratings

Page 12: Sangkil Moon, Paul K. Bergey, & Dawn Iacobucci ... · PDF file... and viewers themselves influence key movie performance measures ... Movie Revenues, andViewer Satisfaction / 109

5For example, three “Number 3” blockbuster movies in 2007fell short of expectations at the box office compared with their firstsequels. Specifically, Spider-Man 3 (2007) reaped $337 millioncompared with $374 million by Spider-Man 2 (2004), Shrek theThird (2007) made $323 million compared with $441 million byShrek 2 (2004), and Pirates of the Caribbean: At World’s End(2007) reached only $309 million compared with $423 million byPirates of the Caribbean: Dead Man’s Chest (2006).

sizing the positive aspects of less viewed DVDs, marketerscan increase the uses of less frequently viewed stocks ofDVDs more efficiently. We point out that the intention ofthis particular research is to provide insight into a viewerrating mechanism rather than to develop a full-fledged rat-ing forecasting model.

Finally, such recommendation systems are essential tothe survival and success of new and existing products as aneffective customer relationship management tool (Ansari,Essegaier, and Kohli 2000; Bodapati 2008; Iacobucci, Ara-bie, and Bodapati 2000; Krasnikov, Jayachandran, andKumar 2009; Moon and Russell 2008). Ubiquitous onlinecommunity forums can be a significant information sourceto improve the performance of such recommendation sys-tems for firms that develop new products (e.g., books,music, video games).

Limitations and Further Research

We indicate some limitations of this research and shed lighton potential avenues for further research. First, this researchwas based primarily on our analysis of the demand side ofthe movie industry and was limited in integrating the supplyside (e.g., production and distribution processes and costs)with the same level of intensity and focus. Because supply-side factors influence movie profits as much as demand-side factors, further research could take a more balancedand integrated approach toward demand and supply factorsof the industry. Second, we used summary ratings, but wedid not analyze the content of the textual reviews (Chevalierand Mayzlin 2006; Wyatt and Badger 1990). Third, wetapped into individual viewers’ postconsumption evalua-tions but not their choices per se. We examined the choiceissue only at the aggregate movie level. Research on indi-vidual choice in movies could be conducted with properdata acquisition. Note that such research would require asophisticated choice model development because the set ofavailable movies is large and changes across both viewersand time.

From a substantive perspective, research on sequels’impact on movie performances, both in the theater and onvideo, in association with viewers’ ratings would provideuseful information to movie production studios. For exam-ple, star power is well known in the movie industry (Ravid1999) and has proved important in the success of sequels.As we noted previously, some sequels do not use the samestars as the originals but still can be successful. It would beworthwhile to investigate the magnitude of star power insequels by comparing sequels with and without the samestars as the originals.

This research could be applied to similar hedonic con-sumption situations in which consumers continually facenew products and thus need to determine the value of thenew products according to their own experiences and thecommunity’s general opinions (e.g., entertainment goods,such as books and music). In these categories, the Internetenables ordinary consumers to share their ratings andreviews based on their own experiences with other like-minded consumers.

Other product categories are not immune to the impactof expert and consumer ratings. Automobiles are rated on

Movie Ratings, Movie Revenues, and Viewer Satisfaction / 119

are likely to leave viewers less satisfied (Anderson 1973).Therefore, a front-loaded advertising strategy should beconsidered when promoting sequels.

Finally, a diminishing fan base may explain why subse-quent sequels can be a risky investment and suggests thatstudios should be cautious about extending sequel moviesinto a long series (Basuroy and Chatterjee 2008).5 Thus,sequels can be a relatively safe investment based on theoriginal movies’ commercial success, but satisfying the fanbase is a critical factor in turning sequels into successfullong-term series, such as the James Bond movies. Theseresults on movie sequels should inform brand extensionsand brand alliances (Rao, Qu, and Ruekert 1999) of otherexperiential hedonic goods because brand extensions arecommon and often effective in such product categories(e.g., Harry Potter book series).

Viewer-level effects. First, for movie rental companies(e.g., Netflix, Blockbuster), we may assume that movie rat-ings are an effective measure of a member’s satisfaction andthat satisfied members stay with the company longer. Thisstudy highlights members’ satisfaction mechanism on thebasis of their viewing and rating histories and movie com-munities’ opinions as internal and external informationsources, respectively (Murray 1991). Indeed, Netflix’semphasis on the importance of providing better recommen-dations to members is clearly indicated by Netflix Prize, acontest in which the firm will give a potential $1 millionprize to a team with the best ideas on how to improve thecompany’s movie recommendation system.

Second, insights from our research findings regardingthe roles of members’ own experiences and communityopinions suggest that these consumer voices can strengthencompanies’ market orientation (Jaworski and Kohli 1993;Kohli and Jaworski 1990). Notably, marketers are espe-cially interested in how to use online consumer forums intheir favor (Dellarocas 2006; Mayzlin 2006). Godes andcolleagues (2005) indicate that at least some of the socialinteraction effects in such forums are partially within thefirm’s control. For example, our movie data show that thenumber of accumulated ratings on the focal movie by themember community has a negative impact on the new rating(Y1 in Table 4) because of the correlation between themember’s interest level and his or her viewing timing. Thatis, the most interested viewers see the new movie first, andthe less interested viewers see it later. Cinematch, Netflix’smovie recommendation system, does not fully considersuch factors indicated in Table 4, which can potentiallyimprove the system’s accuracy.

In addition, when community opinions have a signifi-cant influence on members’ choices, marketers can empha-size community opinions to persuade members. By empha-

Page 13: Sangkil Moon, Paul K. Bergey, & Dawn Iacobucci ... · PDF file... and viewers themselves influence key movie performance measures ... Movie Revenues, andViewer Satisfaction / 109

REFERENCESAinslie, Andrew, Xavier Drèze, and Fred Zufryden (2005), “Mod-

eling Movie Life Cycles and Market Share,” Marketing Sci-ence, 24 (3), 508–517.

Alba, JosephW. and J. Wesley Hutchinson (1987), “Dimensions ofConsumer Expertise,” Journal of Consumer Research, 13(March), 411–54.

Anderson, Rolph E. (1973), “Consumer Dissatisfaction: TheEffect of Disconfirmed Expectancy on Perceived Product Per-formance,” Journal of Marketing Research, 10 (February),38–44.

Ansari, Asim, Skander Essegaier, and Rajeev Kohli (2000), “Inter-net Recommendation Systems,” Journal of MarketingResearch, 37 (August), 363–75.

Basuroy, Suman and Subimal Chatterjee (2008), “Fast and Fre-quent: Investigating Box Office Revenues of Motion PictureSequels,” Journal of Business Research, 61 (July), 798–803.

———, ———, and S. Abraham Ravid (2003), “How Critical AreCritical Reviews? The Box Office Effects of Film Critics, StarPower, and Budgets,” Journal of Marketing, 67 (October),103–117.

———, Kalpesh Kaushik Desai, and Debabrata Talukdar (2006),“An Empirical Investigation of Signaling in the Motion PictureIndustry,” Journal of Marketing Research, 43 (May), 287–95.

Bettman, James R. (1970), “Information Processing Models ofConsumer Behavior,” Journal of Marketing Research, 7(August), 370–76.

Boatwright, Peter, Suman Basuroy, and Wagner Kamakura (2007),“Reviewing the Reviewers: The Impact of Individual Film Crit-ics on Box Office Performance,” Quantitative Marketing andEconomics, 5 (4), 401–425.

Bodapati, Anand (2008), “Recommendation Systems with Pur-chase Data,” Journal of Marketing Research, 45 (February),77–93.

Boulding, William and Amna Kirmani (1993), “A Consumer-SideExperimental Examination of Signaling Theory: Do Con-sumers Perceive Warranties as Signals of Quality?” Journal ofConsumer Research, 20 (June), 111–23.

Chakravarty, Anindita, Yong Liu, and Tridib Mazumdar (2008),“Online User Comments Versus Professional Reviews: Differ-ential Influences on Pre-Release Movie Evaluation,” MSIReports, working paper series, Report No. 08-105, 115–37.

category = animation), sequel, running time, MPAA rating(R versus non-R ratings; reference category = non-R ratings),theater distribution by the eight major studio distribution, andtheater revenues (previous week).•Week indicates the week of amateurs’ movie ratings (depen-dent variable).

Details on Table 3

•The initial set of independent variables is genre (six cate-gories; reference category = animation), sequel, holidayweek, number of theater screens, running time, MPAA rating(R versus non-R; reference category = non-R), theater distri-bution by eight major studio distribution, theater ad propor-tion (theater ad spending for the focal movie/ad spending inall theater movies in the corresponding weeks [%] [previousweek]), movie ad proportion (theater ad spending for thefocal movie/[ad spending in all theater movies in the corre-sponding weeks + ad spending in all video movies for thecorresponding weeks] [%] [previous week]), ad spending(previous week), critics’ ratings, amateurs’ ratings, critics’ratings × ad spending (previous week) (interaction term), andamateurs’ ratings × ad spending (previous week) (interactionterm).

Details on Table 5

•Member rating order indicates the rating’s serial number inorder for each member.•Rating mean indicates the average of all the ratings thatbelong to each rating order across all the members.•Rating standard deviation indicates the standard deviation ofall the ratings that belong to each rating order across all themembers.•The overall case is based on all the rating cases across all thesix genres included.•The overall case covers up to #129 in member rating order toselect only cases with at least 21 members for each ratingorder. Rating mean becomes unstable after #129.•In each of the six genre cases, we included only observationswith more than five members for each genre rating order.

120 / Journal of Marketing, January 2010

design and functionality by experts and on consumptionexperience attributes by ordinary drivers. Services, fromrestaurants to dry cleaners, are endorsed by experts and byconsumers who have tried the services and want to have avoice in encouraging or warning others. Thus, expert ratingshave long existed in many product categories. With the easeof online posting, consumer ratings are exploding in popu-larity, indeed motivating experts to provide ratings of prod-ucts previously unrated (e.g., dietary content in fast-foodmeals). Thus, although this research focused on movies, itsapplicability should be much broader. In many purchasecategories, it is important to begin to tease apart the individ-ual and synergistic effects of WOM induced by ratings andadvertising dollars, the dynamic effects of subsequent prod-uct launches, and the moderating effects of customers’ rela-tive experiences.

Appendix: Variable DescriptionsDetails on Table 1

•Eight major studios: Buena Vista, Fox, MGM/UA, Miramax,Paramount, Sony, Universal, Warner Bros.•Seven major holidays: New Year’s Day, Presidents’ Day,Memorial Day, Independence Day, Labor Day, Thanksgiv-ing, Christmas.•Video release lagging days: the number of days betweentheater release and video release dates.•Total costs = production budget + ad costs. All movie costsand revenues are based on the first eight weeks after eithertheater release or video release except for production budget.•The number of theater screens is the accumulated number ofweekly screens.•Total revenues = box office revenues + video rental revenues +video sales revenues.

Details on Table 2

•The initial set of independent variables is made up of the fol-lowing for both regressions: genre (six categories; reference

Page 14: Sangkil Moon, Paul K. Bergey, & Dawn Iacobucci ... · PDF file... and viewers themselves influence key movie performance measures ... Movie Revenues, andViewer Satisfaction / 109

Graphical Method Applied to Movies,” Marketing Science, 24(4), 635–45.

Liu, Yong (2006), “Word of Mouth for Movies: Its Dynamics andImpact on Box Office Revenue,” Journal of Marketing, 70(July), 74–89.

Lynch, John G., Jr. (2006), “Accessibility-Diagnosticity and theMultiple Pathway Anchoring and Adjustment Model,” Journalof Consumer Research, 33 (June), 25–27.

Mayzlin, Dina (2006), “Promotional Chat on the Internet,” Mar-keting Science, 25 (2), 155–63.

Milgrom, Paul and John Roberts (1986), “Price and AdvertisingSignals of Product Quality,” Journal of Political Economy, 94(4), 796–821.

Moon, Sangkil and Gary J. Russell (2008), “Predicting ProductPurchase from Inferred Customer Similarity: An AutologisticModel Approach,” Management Science, 54 (1), 71–82.

Murray, Keith B. (1991), “A Test of Services Marketing Theory:Consumer Information Acquisition Activities,” Journal of Mar-keting, 55 (January), 10–25.

Nelson, Phillip (1970), “Information and Consumer Behavior,”Journal of Political Economy, 78 (2), 311–29.

——— (1974), “Advertising as Information,” Journal of PoliticalEconomy, 82 (4), 729–54.

Nord, Walter R. and J. Paul Peter (1980), “A Behavior Modifica-tion Perspective on Marketing,” Journal of Marketing, 44(Spring), 36–47.

Oliver, Richard L. (2009), Satisfaction: A Behavioral Perspectiveon the Consumer. NewYork: McGraw-Hill.

Prosser, Elise K. (2002), “How Early Can Video Revenue BeAccurately Predicted,” Journal of Advertising Research, 42(March–April), 47–55.

Rao, Akshay R., Lu Qu, and Robert W. Ruekert (1999), “SignalingUnobservable Product Quality Through a Brand Ally,” Journalof Marketing Research, 36 (May), 258–68.

Ravid, S. Abraham (1999), “Information, Blockbusters, and Stars:A Study of the Film Industry,” Journal of Business, 72 (4),463–92.

Reinstein, David A. and Christopher M. Snyder (2005), “TheInfluence of Expert Reviews on Consumer Demand for Experi-ence Goods: A Case Study of Movie Critics,” Journal of Indus-trial Economics, 53 (1), 27–51.

Rothschild, Michael L. and William C. Gaidis (1981), “BehavioralLearning Theory: Its Relevance to Marketing and Promotions,”Journal of Marketing, 45 (Spring), 70–78.

Sood, Sanjay and Xavier Drèze (2006), “Brand Extensions ofExperiential Goods: Movie Sequel Evaluations,” Journal ofConsumer Research, 33 (December), 352–60.

Spreng, Richard A., Scott B. MacKenzie, and Richard W.Olshavsky (1996), “A Reexamination of the Determinants ofConsumer Satisfaction,” Journal of Marketing, 60 (July),15–32.

Stigler, George J. (1961), “The Economics of Information,” Jour-nal of Political Economy, 69 (3), 213–25.

Trusov, Michael, Randolph E. Bucklin, and Koen Pauwels (2009),“Effects of Word-of-Mouth Versus Traditional Marketing:Findings from an Internet Social Networking Site,” Journal ofMarketing, 73 (September), 90–101.

Wanderer, Jules J. (1970), “In Defense of Popular Taste: Film Rat-ings Among Professionals and Lay Audiences,” AmericanJournal of Sociology, 76 (2), 262–72.

West, Patricia M. and Susan M. Broniarczyk (1998), “IntegratingMultiple Opinions: The Role of Aspiration Level on ConsumerResponse to Critic Consensus,” Journal of ConsumerResearch, 25 (June), 38–51.

Wyatt, Robert O. and David P. Badger (1990), “Effects of Infor-mation and Evaluation in Film Criticism,” Journalism Quar-terly, 67 (2), 359–68.

Movie Ratings, Movie Revenues, and Viewer Satisfaction / 121

Chen, Yubo and Jinhong Xie (2008), “Online Consumer Review:Word-of-Mouth as a New Element of Marketing Communica-tion Mix,” Management Science, 54 (3), 477–91.

Chevalier, Judith A. and Dina Mayzlin (2006), “The Effect ofWord of Mouth on Sales: Online Book Reviews,” Journal ofMarketing Research, 43 (August), 345–54.

Dellarocas, Chrysanthos (2006), “Strategic Manipulation of Inter-net Opinion Forums: Implications for Consumers and Firms,”Management Science, 52 (10), 1577–93.

Duan, Wenjing, Bin Gu, and Andrew B. Whinston (2008), “TheDynamics of Online Word-of-Mouth and Product Sales: AnEmpirical Investigation of the Movie Industry,” Journal ofRetailing, 84 (2), 233–42.

Eliashberg, Jehoshua, Anita Elberse, and Mark A.A.M. Leenders(2006), “The Motion Picture Industry: Critical Issues in Prac-tice, Current Research, and New Research Directions,”Market-ing Science, 25 (6), 638–61.

———, Jedid-Jah Jonker, Mohanbir S. Sawhney, and BerendWierenga (2000), “MOVIEMOD: An Implementable Decision-Support System for Prerelease Market Evaluation of MotionPictures,” Marketing Science, 19 (3), 226–43.

——— and Steven M. Shugan (1997), “Film Critics: Influencersor Predictors?” Journal of Marketing, 61 (April), 68–78.

Erdem, Tulin (1998), “An Empirical Analysis of Umbrella Brand-ing,” Journal of Marketing Research, 35 (August), 339–51.

Godes, David and Dina Mayzlin (2004), “Using Online Conversa-tions to Study Word-of-Mouth Communication,” MarketingScience, 23 (4), 545–60.

———, ———,Yubo Chen, Sanjiv Das, Chrysanthos Dellarocas,Bruce Pfeiffer, et al. (2005), “The Firm’s Management ofSocial Interactions,” Marketing Letters, 16 (3–4), 415–28.

Goldberger, Arthur S. (1991), A Course in Econometrics. Cam-bridge, MA: Harvard University Press.

Griffiths, William E., R. Carter Hill, and George G. Judge (1993),Learning and Practicing Econometrics. NewYork: John Wiley& Sons.

Hirschman, Elizabeth C. and Morris B. Holbrook (1982), “Hedo-nic Consumption: Emerging Concepts, Methods, and Proposi-tions,” Journal of Marketing, 46 (July), 92–101.

Holbrook, Morris B. (1999), “Popular Appeal Versus Expert Judg-ments of Motion Pictures,” Journal of Consumer Research, 26(September), 144–55.

Iacobucci, Dawn, Phipps Arabie, and Anand Bodapati (2000),“Recommendation Agents on the Internet,” Journal of Interac-tive Marketing, 14 (3), 2–11.

Jaworski, Bernard J. and Ajay K. Kohli (1993), “Market Orienta-tion: Antecedents and Consequences,” Journal of Marketing,57 (July), 53–70.

Jedidi, Kamel, Robert E. Krider, and Charles B. Weinberg (1998),“Clustering at the Movies,” Marketing Letters, 9 (4), 393–405.

Kahneman, D. and A. Tversky (1979), “Prospect Theory: AnAnalysis of Decision Under Risk,” Econometrica, 47 (March),263–91.

Kirmani, Amna (1997), “Advertising Repetition as a Signal ofQuality: If It’s Advertised So Much, Something Must BeWrong,” Journal of Advertising, 26 (3), 77–86.

——— and Akshay R. Rao (2000), “No Pain, No Gain: A CriticalReview of the Literature on Signaling Unobservable ProductQuality,” Journal of Marketing, 64 (April), 66–79.

Kohli, Ajay K. and Bernard J. Jaworski (1990), “Market Orienta-tion: The Construct, Research Propositions, and ManagerialImplications,” Journal of Marketing, 54 (April), 1–18.

Krasnikov, Alexander, Satish Jayachandran, and V. Kumar (2009),“The Impact of Customer Relationship Management Imple-mentation on Cost and Profit Efficiencies: Evidence from theU.S. Commercial Banking Industry,” Journal of Marketing, 73(November), 61–76.

Krider, Robert E., Tieshan Li, Yong Liu, and Charles B. Weinberg(2005), “The Lead-Lag Puzzle of Demand and Distribution: A

Page 15: Sangkil Moon, Paul K. Bergey, & Dawn Iacobucci ... · PDF file... and viewers themselves influence key movie performance measures ... Movie Revenues, andViewer Satisfaction / 109

Copyright of Journal of Marketing is the property of American Marketing Association and its content may not

be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written

permission. However, users may print, download, or email articles for individual use.