
Psychology & Marketing, Vol. 19(12): 1009–1023 (December 2002). Published online in Wiley InterScience (www.interscience.wiley.com). © 2002 Wiley Periodicals, Inc. DOI: 10.1002/mar.10050


Elections and Public Polling: Will the Media Get Online Polling Right?

Dennis W. Johnson
The George Washington University

ABSTRACT

Public survey research came in for much criticism during the 2000 elections. The greatest controversy centered on the flawed data coming from the Voter News Service exit polls in Florida. For the long run, however, a more important issue concerned the validity and media reporting of surveys conducted online. Most online surveys are pseudo-polls, whose findings have no merit and should not be reported by the media. Their value is entertainment only. The fundamental problems with online surveys are that the samples drawn are unrepresentative of the population as a whole and the participants are self-selected. Two survey firms are trying to resolve the issue of sampling errors, using fundamentally different strategies, and spending enormous sums of money to create truly representative panels. The 2000 election results showed that online polling, done right, can be even more accurate than traditional telephone surveys. © 2002 John Wiley & Sons, Inc.

Campaign and election polling received widespread interest and much critical attention during the 2000 general elections. The most contentious and controversial issue concerned the flawed data and inaccuracy of exit polls conducted in Florida, whereas a quieter, but in the long run more important, issue concerned the widespread use of online polling.

Online polls, so easy to conduct and so inexpensive to produce, are the tempting alternative to legitimate telephone surveys. Unless done right, however, they will be the survey research industry's fool's gold. Online polls, in order to fulfill the critical requirement of random selection from entire populations, face many obstacles, and surmounting them requires the investment of vast sums of money. But if those funds are invested, and if randomly selected online populations can be identified, then there is enormous potential for online polling.

For the serious business of reporting on elections, online polling poses many temptations for the media. As a cost-cutting measure, the media cut back on traditional exit poll research funds, with disastrous results in Florida. Are the same media willing to invest substantial sums of money to assure that their own online polling efforts are truly scientifically based? If they are not, will the media refrain from reporting the results of unscientific online polling (their own included), or at a minimum label them "for entertainment purposes only"?

FLAWED FLORIDA EXIT POLLS

Coming under the most scrutiny and criticism were the exit polls conducted by Voter News Service (VNS) in the state of Florida. Scrambling to be the first to announce, anxious television networks—CBS, NBC, ABC, CNN, and Fox—first reported Al Gore as winner, then retracted Gore's win and announced Bush as winner, then retracted Bush's victory, and finally declared the race too close to call. This media spectacle led media critic Howard Kurtz (2000) to declare it the "biggest blunder in television history on election night" (p. A1). In rushing ahead, each of the networks relied heavily on one source: exit polls with flawed data from VNS, the polling service created in 1990 by the television networks. Before 1990, each of the networks relied on its own survey research teams; but as a cost-cutting device, the networks pooled their resources, creating VNS, and used its data as their chief source of election-night projections.

In an internal report written several weeks after the election, VNS found that it had greatly underestimated the size of the Florida absentee vote and how it would be cast. VNS assumed that the absentee vote was 7.2% of the total, when the actual figure was 12%; it also projected that absentees would vote 22.4% more for Bush than Election Day voters, while the actual number was 23.7% more. The report also found that the exit-poll model itself was flawed, skewing its projection toward Gore. One of the key measurements used by VNS was to compare 2000 exit poll findings with those in the 1998 Florida governor's race, rather than with exit poll results in the 1996 presidential contest. In this fast-growing state, the 1996 registered voter numbers were quickly becoming outdated, but they better reflected a presidential turnout than did the more recent 1998 numbers. Presidential contests typically bring out more people and more independents, an altogether different voting profile than that found in a statewide governor's race. The report also noted that there was some evidence that African-American votes were overestimated, while Cuban-American votes were underestimated in the VNS exit polls. In the end, of course, no exit polls, no matter how finely calibrated, could have predicted the winner in this contest, rife with charges of ballot fraud, ineffective voting machines, and court fights.

But the damage was done. Nothing in the 2000 election hurt the credibility of the media or the polling industry more than the miscalculations of VNS and the overly eager network television reporting on election night. The exit-poll fiasco is the most memorable feature of the 2000 election survey research findings, and certainly had the most immediate impact on the media coverage on election night.
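The scale of the absentee miscalculation can be seen with a little arithmetic. The sketch below uses the report's absentee figures; the Election Day Bush share (48%) is a hypothetical stand-in for illustration, since the report's internal baseline is not given here.

```python
def project_bush_share(eday_share, absentee_share, absentee_bonus):
    """Blend the Election Day estimate with the absentee estimate.

    eday_share     -- Bush's share among Election Day voters
    absentee_share -- absentee ballots as a fraction of all votes cast
    absentee_bonus -- how much more absentees favor Bush
    """
    return (1 - absentee_share) * eday_share \
        + absentee_share * (eday_share + absentee_bonus)

EDAY = 0.48  # hypothetical Election Day share, for illustration only

# VNS assumptions: 7.2% absentee, voting 22.4 points more for Bush
assumed = project_bush_share(EDAY, 0.072, 0.224)
# Actual figures: 12% absentee, voting 23.7 points more for Bush
actual = project_bush_share(EDAY, 0.120, 0.237)

print(f"projection understates Bush by {100 * (actual - assumed):.1f} points")
```

On these assumptions the model understates Bush by roughly 1.2 percentage points, more than enough to flip a projection in a race ultimately decided by a few hundred votes.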

2000 MEDIA POLLING: ALL DAY, EVERY DAY

However, there were three other notable features of survey research in the 2000 elections. First, public polling was omnipresent. The 2000 elections, fueled by the closest presidential contest in decades, saw polling all day, every day, by more and more polling organizations. Tracking polls, which were first used a generation ago to determine how candidates fared in the last critical weeks of a campaign, began early in the summer, before the national party conventions, and did not let up until Election Day. At any one time, 25–40 polls taken by survey research firms, media sources, universities, and others were available to track the candidates' standings each week.

Second, there was an extraordinary emphasis on instant analysis: polling, focus groups, dial groups, and online surveys were conducted during the party conventions, after speeches or major events, and especially after the presidential debates, with findings reported within hours. Analysis truly was instant: Just 30–35 minutes after the conclusion of the first presidential debate, the first impressions of the public were being reported on television. Networks let their viewing audiences watch as televised focus groups were conducted immediately after political events. Viewers who had never seen focus groups in action were able to see the full range of articulateness and political perception of panels drawn from the American citizenry. Instant analysis came at a price. The Polling Review Board of the National Council on Public Polls (NCPP) warned journalists of the likelihood of increased errors in instant polling because of its much lower response rate (NCPP, 2000b).

Third was the growing use of online polling. By the end of the 1996 election cycle, online polling had become commonplace, much of it done by news organizations and some done by political campaigns (Johnson, 1999, p. 715). One study found 57 website polls operating during the 1996 presidential campaign (Wu & Weaver, 1997). Many of the web election surveys were simply horse-race polls; some were oriented toward policy issues; most were nothing more than junk polls, permitting people to cast preferences more than once and lacking any safeguards that would assure random sampling. The 2000 election saw a far greater use of online polling, both in the number of organizations doing the polling and in the sheer magnitude of such polls. The great majority of online polls, however, were methodologically flawed, and came in for heavy criticism from professional survey research organizations, practitioners, and scholars.

LEGITIMATE AND PSEUDO-POLLS

There are three basic forms of polling conducted during presidential election cycles. Two forms are legitimate, and one is not; two involve online polling, and one has yet to fully embrace it. The first often comes under the guise of a legitimate survey, but is nothing more than pseudo-polling, or junk polling. Pseudo-polling deserves mention here only because it is so prevalent and its results too often are reported by the media without any warning that the results are meaningless. The second category consists of a wide variety of legitimate public polls, usually conducted by telephone, which conform to the scientific standards and protocols of survey research methodology. The third category, which is not a focus of this article, consists of private polls, conducted for the candidates, political parties, or other interested participants, to help devise strategy, craft campaign messages, and track the performance of a candidate. During the 2000 elections, online polling was found in abundance in pseudo-polls, much less so in legitimate public surveys, and very little in private polling.

Pseudo-polls come in the form of straw polls, mail-in ballots, 800- and 900-number phone-in polls, man-on-the-street interviews, and others. They are surveys in name only, because they have none of the scientific safeguards found in legitimate polls (Orton, 1992). The central problem is that respondents in these polls are self-selecting, and in many surveys can vote as many times as they want. Another common problem is that the questions asked often lead to a particular answer. In one example, presidential candidate Ross Perot in 1992 used an old-fashioned version of the pseudo-poll, the write-in paper ballot. Perot distributed what his campaign called the "First National Referendum—Government Reform," a ballot covering a variety of polling issues that had been printed in popular magazines and newspapers. Self-selected participants clipped out the ballot and mailed it to Perot's United We Stand organization for tabulation and results. The National Council on Public Polls completely dismissed the write-in ballot: "Ross Perot's mail-in referendum . . . could be the biggest polling hoax since the Literary Digest declared Alf Landon the winner [in the 1936 presidential contest]" (Morin, 1993).

ONLINE POLLING

What Perot, and many others before him, did with mail-in straw ballots in 1992 is now done with far greater frequency and efficiency with online polling. Ballot stuffing is as old as polling, and with online polling it just becomes much easier to pull off. One simple example of electronic ballot stuffing occurred in July 1999, when baseball fans were encouraged to use the Internet to vote for their favorite All-Star players. One Boston Red Sox enthusiast programmed his computer to circumvent the one-vote-only restrictions and clicked away, over and over again. After two days of automated non-stop balloting, he had "voted" 40,000 times for his hero, shortstop Nomar Garciaparra.

Viewers of NBC's television show Dateline have the chance to vote online during the middle of stories—why wait until both sides have been heard?—featuring domestic battles and court fights. America Online customers had the chance to vote to keep or replace Dennis Miller as resident comic on Monday Night Football. These online polls do no harm, and for the most part are meant as nothing more than mere amusement. They are without scientific merit, should be labeled "for entertainment purposes only," and most importantly, should be reported as such by the media.

During the 2000 election season, online ballot stuffing on behalf of candidates and policy choices occurred. Through its Internet site, Time magazine conducted a straw poll allowing anyone, not just registered voters, to cast a ballot choice for the Republican nominee for president. Senator Orrin Hatch, who was barely visible in most traditional polls, garnered 60% of the votes in the TIME.com survey. Hatch's campaign had hyperlinked the TIME.com poll to its own campaign web site, and Hatch's supporters enthusiastically clicked away (Wasserman, 1999). Similar online polls showed Alan Keyes and Steve Forbes doing far better than their standing in traditional telephone polls.

Online polls conducted at CNN, MSNBC, and other websites immediately after the first presidential debate showed George W. Bush defeating Al Gore in the debate by an overwhelming 40 percentage points. Most polls taken by traditional telephone survey, using random digit dialing, found that Bush and Gore were more or less even in the debate (Manjoo, 2000).

Often straw polls are created for the sole purpose of rallying the faithful, typically through fund-raising appeals. One example was a "poll" accompanying a fund-raising solicitation from the National Republican Senatorial Committee several years ago, sent out over the signature of former vice president Dan Quayle. One of the questions read: "Ted Kennedy and the liberals in the Senate are trying to revive the Clinton plan to ration and control health care by hiding it in a bill for 'children.' Should the Republican-led Senate continue to fight this deceptive scheme?" The purpose of this "poll," of course, was not to solicit public opinion, but to make a partisan point and to rally the faithful to give money (Johnson, 2001).

An online version of this kind of straw poll came from the Democratic National Committee's website in 2000, which featured a weekly public policy "poll." One week, the question was this: "As the nation approaches a new millennium, what are the most important priorities facing our next president? Saving Social Security, strengthening Medicare and paying down the debt, or implementing George W. Bush's $1.7 trillion risky tax scheme that overwhelmingly benefits the wealthy?" The question was misleading, full of bias, and reaching for a certain answer; further, online respondents could cast as many votes as they wanted. Republican operatives took up the challenge and retaliated with an organized effort of electronic ballot stuffing, clicking away at the website poll so that more than two-thirds of the votes favored Bush's tax plan. The DNC quickly retracted the poll (Suellentrop, 2000).

Pseudo-polls, whether conducted in the old-fashioned way or online, often blossom during an election year. The Internet, the great democratic leveler, allows anyone to be a book critic, author, artist, or even a pollster. Some websites give amateurs all the tools they need to conduct "polls." Mister Poll, which dubs itself the Internet's largest online polling site, encourages visitors to create their own surveys: "Our database has thousands of polls on every imaginable topic from the controversial to the downright zany." The Mister Poll homepage acknowledges that "none of these polls are 'scientific,'" and, in tiny print, notes that the polls are for entertainment purposes only (Mister Poll website, undated).

Whether the online poll comes from a decidedly amateurish source like Mister Poll or is generated by a prestigious university, it can suffer from the same fundamental methodological flaw—the inability to reach a true random sample of the population, drawn from a pool by the researchers themselves.

In the second category fall the legitimate public polls, the hundreds of polls taken during any presidential election, which are meant to be taken seriously and not simply to entertain. The VNS exit polls, cited above, are a part of the extensive media poll coverage. Exit polls are one-day-only polls, conducted on Election Day. The purpose is to determine, with the use of representative samples of voters from carefully selected precincts matched up with past voting history, how the entire state (or subdivision) will vote. Online polling does not fit into the calculus of exit polling, where the raw data come from literally asking questions of those who have just stepped away from the voting precinct, trying to determine how and for what reasons they cast their votes.

However, online polling has been used, and will be used more extensively, in tracking polls, which determine almost day by day who is ahead and by what percentage, and in instant analysis polls, such as reactions to the presidential debates, party convention speeches, or other events.
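The precinct-based projection behind exit polling can be sketched in miniature. The toy example below is not the VNS model; the precincts, vote shares, and weights are invented for illustration. Each sampled precinct's result is weighted by its expected share of the statewide vote, estimated from past voting history.

```python
def project_statewide(precincts):
    """Weighted projection from a sample of exit-polled precincts.

    precincts -- list of (candidate_share, expected_share_of_state_vote),
                 where the weights come from past voting history
    """
    total_weight = sum(weight for _, weight in precincts)
    return sum(share * weight for share, weight in precincts) / total_weight

# Hypothetical three-precinct sample, for illustration only
sample = [
    (0.55, 0.20),  # urban precinct: 55% for the candidate
    (0.42, 0.50),  # suburban precinct
    (0.61, 0.30),  # rural precinct
]
print(f"projected statewide share: {project_statewide(sample):.1%}")
```

The sketch makes the Florida failure easy to see: the weights matter as much as the interviews. Absentee voters never walk past an interviewer at a precinct, so their share of the vote must be modeled separately, and, as described above, VNS modeled it badly.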

SETTING STANDARDS FOR ONLINE POLLING

With the extraordinary number of polls being published, which ones can be considered legitimate and believable? How can the media know that a poll has been conducted correctly? Survey research can go wrong in three fundamental ways: in sampling error, when the sample selected is not representative of the relevant population; in measurement error, when questions are poorly designed or badly arranged; and in specification error, when a theory is inappropriate for the opinion that the poll is trying to measure (Brady & Orren, 1992). With online polls, the first problem, sampling error, is the most difficult to overcome. To assist the media in understanding survey research and to help assure that working journalists do a thorough and credible job in covering polls, the National Council on Public Polls has published "20 Questions a Journalist Should Ask About Poll Results" (Gawiser & Witt, 2000). The questions are not meant to be a primer on conducting polls, but pose basic questions that journalists should ask when looking at survey results: who conducted the poll, who paid for it, how many people were interviewed, how were the people chosen, are the results based on the answers of all the people interviewed, what is the sampling error, what facts can skew the poll results, and when was the poll done?

Andrew Kohut, president of the National Council on Public Polls, announced the creation of a three-member Polling Review Board, which would regularly speak out on polling issues, media coverage, and online polling practices during the 2000 election. Warren Mitofsky (Mitofsky International), with strong views against online polling, and Humphrey Taylor (Harris Interactive), whose company is betting its future on online surveys, were joined by Harry O'Neill (Roper Starch Worldwide), who serves as chairman. The Polling Review Board posed its own set of questions for journalists to consider when looking at the results of an online survey: Is the Internet-based survey designed to be representative, and if so, of what population? What evidence is there that the sample is representative of the population it claims to represent? How was the sample drawn? What steps does the organization take to prevent people from voting more than once? How were the data weighted? What is the evidence that the methodology works and produces accurate data? What is the organization's experience and track record using Internet-based polls? What is its track record in survey research using traditional methods? Does the organization subscribe to the code of conduct of AAPOR, NCPP, or CASRO (Council of American Survey Research Organizations)? When the answers are unsatisfactory, the Polling Review Board concluded, the survey results are "not worthy of being reported" (NCPP, 2000).

Even with these basic warnings, journalists can get it wrong. In one recent example, visitors to the ABC.com website were asked to click on their choice in a hypothetical Democratic primary matchup among Vice President Al Gore, former Senator Bill Bradley, and actor Warren Beatty. This was nothing but a straw vote, with all the evident flaws. Further, it was not conducted by the ABC News polling unit. The big news was that Beatty captured 25% of the online vote. Columnist Maureen Dowd of the New York Times, astonished by the results, concluded that we are truly entering the "nutty season" of presidential politics when an actor like Beatty can pull so many votes. There was not a word, however, in the Dowd column about the obvious flaws of the online poll (Dowd, 1999; Morin, 1999). In response to criticism both inside and outside the company, ABC finally decided to add this disclaimer to its polling results: "Not a scientific poll; for entertainment only."

Many online polls may be good entertainment, but they tell nothing about public opinion. The reaction against online polling has been particularly harsh and swift. There was heated discussion and debate on online methodology at the annual meeting of the American Association for Public Opinion Research (AAPOR) and at its regional chapter meetings in New York and the Midwest. Murray Edelman, former president of AAPOR and current president of Voter News Service, simply dismissed Internet polls. His advice to journalists: "the best thing to do [with an Internet poll] is just not talk about it, just don't give it any credence at all" (O'Brien, 2000). Perhaps the harshest criticism has come from veteran pollster Warren Mitofsky, who argued that the development of online polling would "destroy the survey business." He expressed fear that the economics of online polling would "drive out other survey methods" that are probability based, and that online polling could result in "lots of data, but not valid data" (Collins, 1999).

The AAPOR has set out basic criteria for online polling to be considered scientifically valid. To each of these questions, the answer must be "yes": Does the online poll claim that the results are representative of a specific population, such as the American public? If so, are the results based upon a scientific sampling that gives every member of the population a chance to be selected? Did each respondent have only one opportunity to answer the question? Are the results of the online survey similar to the results of scientific polls conducted at the same time? Further, the AAPOR insists that the response rate for the online survey also must be reported (AAPOR).
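Of the journalists' questions above, "what is the sampling error" has the most mechanical answer. The article does not give the formula, but for a simple random sample the standard back-of-envelope calculation is:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample.

    n -- sample size
    p -- estimated proportion (0.5 is the worst case)
    z -- critical value (1.96 for 95% confidence)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical 1,000-respondent telephone poll:
print(f"+/- {margin_of_error(1000):.1%}")  # about +/- 3.1 points
```

The formula presupposes probability sampling. For the self-selected online polls described above, no sample size, however large, makes it meaningful, which is why the review boards' first questions concern how the sample was drawn rather than how big it is.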


INHERENT PROBLEMS WITH ONLINE POLLING

Professional polling organizations, such as AAPOR and NCPP, scholars, and survey research practitioners argue that there are several fundamental problems with online polling. First, nearly all online polling does not select a random sample of the population. Random digit dialing (RDD) used in telephone surveys has the beauty and simplicity of drawing on all telephone numbers, listed and unlisted, selected either as 10 random digits—the area code and telephone number—or by selecting the last four digits at random from preselected area codes and prefixes. Nothing like this exists in the world of e-mail addresses. E-mail lists are proprietary information, some available to the public and others not; there is a multiplicity of e-mail addresses; they follow no standard format; and there is no central directory of all e-mail addresses (Rosenblatt, 1999, pp. 33–35; Wasserman, 1999).

One of the common ways of recruiting online participants is to have them volunteer their assistance. For example, Zogby International, a Utica, New York–based media polling firm, states that it has assumed a leading role in "pioneering cyberspace" for the science of polling, and asks readers for help: "If you would like to help us bring the art of accurate polling into the 21st century, please complete our registration form. Your opinions do matter, and we are here to make sure they are heard. Help us to help you." (Zogby, 2000). Professor Michael Traugott, former president of the AAPOR, has warned against self-recruitment of participants: "Many online polls are compromised because they are based on the responses of only those people who happened to volunteer their opinions on the survey. For a survey to be scientific, the respondents must be chosen by a carefully designed sampling process that is completely controlled by the researcher." (AAPOR). The AAPOR comes out flatly against such use: If the online poll "uses volunteer respondents, allows respondents to participate in the survey more than once, or excludes portions of the population from participation, it must be characterized as unscientific and is unrepresentative of any population."

Second is the problem of socioeconomic bias. Douglas Rivers, chairman of Knowledge Networks, put it succinctly: "There are two fundamental problems with online polling. The first is that only about 40% of the population is online, and the second is that those 40% aren't necessarily reflective of the rest of the country" (Manjoo, 2000).

A generation ago, telephone surveys supplanted door-to-door interviewing, becoming more efficient, less costly, and easier to administer. The transition was assisted by the overwhelming availability of the telephone. The telephone truly is a public utility, available to virtually everyone, requiring a minimum of technical knowledge, and available at relatively little cost.

In 1998, nearly everybody (94%) in the United States had telephones, but far fewer (26%) had Internet access, according to a U.S. Department of Commerce study (Puzzanghera, 1999). More recent studies indicate that about 60% of the population is online. For some time to come, a large minority of the population will not have online access or capabilities, and those without access will be the elderly, the poor, and those with modest levels of formal education. The Graphics, Visualization, and Usability (GVU) Center at Georgia Tech University, the Pew Research Center for the People and the Press, and others have tracked the demographics of online users, and all have demonstrated the skewed nature of the online population. For example, a Pew Research Center study (Pew, 1999), which drew on a rather small sample (N = 167), focused on registered voters, and found online registered voters were better educated and more affluent than the total population of registered voters (see Table 1).

Table 1. Demographic Profile: On-Line Registered Voters

                   All    Use Internet    Have E-Mail
Men                 46         52              60
Women               54         48              40
18–29               15         22              25
30–49               43         55              57
50–64               22         15              14
65+                 20          7               4
Some Grad.          25         42              47
Some Coll.          24         29              30
HS Grad.            36         25              21
Less than HS        13          3               2
$75K+               15         25              31
$50–$74.9K          15         21              21
$30–$49.9K          27         27              28
$20–$29.9K          15         10               7
Less than $20K      18          9               9

Note: Demographic breakdowns are from those who participated in the online poll, based on unweighted data. N = 167.

Resolving the issues of online sampling presents some of the most important obstacles to using the Web for serious political survey research. Georgia Tech's GVU Center has tried to circumvent the problems of sampling by using banner advertisements on high-exposure pages, e-mail announcements, advertising in the popular media, and cash incentives to increase its chances of having a representative sample. Although these techniques have some potential, they still depend on self-selection of online respondents (Rosenblatt, 1999, pp. 35–36). As seen below, Harris Interactive and Knowledge Networks have resorted to expensive, albeit different, approaches to resolve this problem.

Three other problems have been associated with online polls: first, concern about the lack of rotation of questions or answers; second, the tendency for online responses to bunch toward middle options rather than "strongly agree" or "strongly disagree"; and third, more "don't knows" and "not sures" are reported in online polling than in traditional polling (Collins, 1999). However, these problems are subordinate to the two major questions of randomness and representation.
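The random-digit-dialing procedure described above is simple enough to sketch. The list-assisted variant below is an illustration only, with made-up area codes and prefixes; a production sampling frame would be built from a validated list of working exchanges.

```python
import random

def rdd_sample(exchanges, n, seed=None):
    """List-assisted random digit dialing.

    exchanges -- preselected (area code, prefix) pairs known to contain
                 working residential numbers
    n         -- how many numbers to draw
    Each draw picks an exchange, then appends four random digits, so
    listed and unlisted numbers in that exchange are equally likely.
    """
    rng = random.Random(seed)
    sample = []
    for _ in range(n):
        area, prefix = rng.choice(exchanges)
        suffix = rng.randrange(10_000)  # 0000-9999
        sample.append(f"{area}-{prefix}-{suffix:04d}")
    return sample

# Hypothetical exchanges, for illustration only
frame = [("202", "555"), ("305", "555"), ("716", "555")]
for number in rdd_sample(frame, 3, seed=1):
    print(number)
```

The sketch makes the article's contrast concrete: every telephone number in the frame has a known, nonzero chance of selection, whereas no comparable frame of e-mail addresses exists.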

THE POTENTIAL OF ONLINE POLLING

If done properly, online polling offers some very attractive advantages over traditional telephone surveys. First, once a pool of online participants has been identified, it becomes easy to survey a large and diverse panel of individuals who are willing to participate. Second, large numbers of individuals can be surveyed at once; for example, 10,000 might be surveyed online, rather than 1,000 in a telephone survey. Third, the response rate has the potential to be much higher than in telephone surveys. Online surveys would be less intrusive—no jangling telephone in the middle of dinner—and respondents would be given a set period of time, perhaps 48 hours, in which to complete the online survey and click it back. Early anecdotal evidence suggests that participation is far higher in online polling (Morin, 2000). Fourth, survey results would be available much more quickly than with traditional telephone surveys, perhaps twice as fast. Fifth, online surveys have enormous potential for interactivity and multimedia. Surveys posted on a website can be in full color and high-fidelity sound, and visuals can make it much easier to understand concepts that would take considerable verbal explanation. Finally, one of the most attractive features is the substantially lower price of an online survey compared to a telephone survey. Harris Interactive chairman Humphrey Taylor notes that a typical survey of persons with a particular health care problem might cost $300,000 using a traditional telephone survey; an online survey would cost only $50,000 (Coy, 2000). Once the considerable start-up costs have been absorbed, online polling can lead to extraordinary savings over telephone surveys. The great difficulty is absorbing the initial costs. Harris Interactive, for example, anticipates investing some $35–40 million to build its databases.

TWO ATTEMPTS TO RESOLVE RANDOM SELECTION PROBLEMS

Two survey firms, Harris Interactive and Knowledge Networks, are taking fundamentally different approaches in selecting their online panel participants. Harris Interactive, the firm created in 1999 by Louis Harris & Associates and Harris Black International, first tested online polling during the 1998 elections by running surveys in the two weeks before the November general election. In 21 of 22 races (95%), Harris correctly predicted the winner through its online polls, while it was correct in 49 of 52 races (94%) conducted by traditional telephone polls.

For the 2000 elections, Harris Interactive scored impressive numbers.

The company announced it had achieved 99% accuracy (that is, calling a race correctly or within ±2 percentage points) in predicting the winners in 73 contests. A Harris Interactive press release cited the results of a study by Research Business Report, which noted that Harris Interactive had outperformed telephone polls. The Harris Interactive online polls were off an average of 1.8% for Gore and 2.5% for Bush, while telephone polling was off an average of 3.9% for Gore and 4.4% for Bush (Harris Interactive news release, 2000). During the period October 30–November 6, 2000, a total of 300,000 members of the voting-age population participated in the online polls, with Harris Interactive processing more than 40,000 online interviews per hour, including 7,800 interviews. All these numbers were the largest ever in survey research. These data were then weighted, with the use of a Harris Interactive proprietary system, and tabulated in real time. The proprietary weighting system, which has its critics, involves using several demographic variables (such as sex, age, education, race, and ethnicity) together with a variable representing the propensity of an individual to be online (Harris Interactive website, 2000). Harris Interactive considers these results a turning point in the use of the Internet as a replacement technology for more traditional telephone-based research.

Harris Interactive boasts the largest online market research panel in the world, with over 7 million members from over 200 countries. It has 500,000 participants in its Chronic Illness subpanel, and it has subpanels of Physicians, Movie Critics, Political Issues, Product Advertising and Concept Testing, Information Technology, and Small Office/Home Office, in addition to teenage panelists through HarrisZone and HarrisKidsZone for children 8–12.

A different approach to sampling is taken by the California-based Knowledge Networks. Founded in 1998 by Stanford University research professors Norman Nie and Douglas Rivers, Knowledge Networks uses traditional methodology, probability sampling, combined with the power of the Internet. Knowledge Networks draws its panel of households by first using random digit dialing, selecting at random from a sample of all U.S. households with telephones, “regardless of whether they have Web access, or even a computer” (Knowledge Networks, 2000). One of the key features is that households selected do not have to own a computer. In exchange for their participation, households are given free Internet WebTV hardware and software, a customer care hotline, and a toll-free number. In return, each member of a participating household is expected to complete, at most, a short, five-minute survey once a week. Participants are given a variety of incentives to keep them engaged: cash, gift certificates, free phone cards, movie passes, and airline bonus miles. Knowledge Networks states that

it is the only Web survey provider that can reach people “whether or not they own computers or have ever before used the Internet,” and it intended to have 250,000 households participating by 2001.

One of the recognized problems in asking people to participate multiple times a year (and especially if the request is once a week) is the possibility of panel effects: changes in behavior or attitudes that occur simply because the individual participates in a panel study. Knowledge Networks intends to constantly refresh its pool of participants to avoid respondent fatigue (Morin, 1999).
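The kind of demographic-plus-propensity weighting described for Harris Interactive is commonly implemented as raking (iterative proportional fitting). The sketch below is a generic illustration with invented respondents and margins; it is emphatically not the Harris Interactive proprietary system, only the standard technique it resembles.

```python
# Illustrative raking (iterative proportional fitting) with invented
# respondents and margins; NOT Harris Interactive's proprietary system.

def rake(sample, margins, iterations=50):
    """Iteratively adjust respondent weights until the weighted sample
    matches known population margins on each weighting variable."""
    weights = {rid: 1.0 for rid in sample}
    for _ in range(iterations):
        for var, targets in margins.items():
            # Current weighted total for each category of this variable.
            totals = {cat: 0.0 for cat in targets}
            for rid, attrs in sample.items():
                totals[attrs[var]] += weights[rid]
            grand = sum(totals.values())
            # Rescale so each category hits its population target share.
            for rid, attrs in sample.items():
                cat = attrs[var]
                weights[rid] *= targets[cat] * grand / totals[cat]
    return weights

# Hypothetical online sample: skews male and toward heavy Internet users.
sample = {
    1: {"sex": "m", "online": "heavy"},
    2: {"sex": "m", "online": "heavy"},
    3: {"sex": "m", "online": "light"},
    4: {"sex": "f", "online": "heavy"},
    5: {"sex": "f", "online": "light"},
}
# Population margins, including a propensity-to-be-online style variable.
margins = {
    "sex": {"m": 0.49, "f": 0.51},
    "online": {"heavy": 0.30, "light": 0.70},
}
weights = rake(sample, margins)
```

After raking, the weighted shares of men and of heavy Internet users match the stated population margins; this is the sense in which an unrepresentative opt-in pool is “corrected” on the chosen variables, and also why critics note the correction only works for variables the pollster thought to weight on.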

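The random-digit-dialing step behind the Knowledge Networks panel can likewise be sketched in a few lines. The prefixes and numbers below are invented; a production sampling frame would be built from actual telephone-exchange records.

```python
# Toy sketch of random digit dialing (RDD); prefixes are invented, and a
# real sampling frame would come from telephone-exchange records.
import random

def rdd_sample(prefixes, n, seed=None):
    """Draw n distinct phone numbers by appending random four-digit
    suffixes to working area-code/exchange prefixes, so listed and
    unlisted lines in those exchanges are equally likely to be drawn."""
    rng = random.Random(seed)
    numbers = set()
    while len(numbers) < n:
        prefix = rng.choice(prefixes)      # e.g., "202-555"
        suffix = rng.randrange(10000)      # 0000-9999
        numbers.add(f"{prefix}-{suffix:04d}")
    return sorted(numbers)

prefixes = ["202-555", "415-555", "650-555"]
batch = rdd_sample(prefixes, n=5, seed=1)
```

Because the suffix is generated rather than taken from a directory, every household with a telephone in the covered exchanges has a known chance of selection, which is what makes the resulting panel a probability sample rather than an opt-in pool.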
WHAT LIES AHEAD?

With the conclusion of the 2000 elections, where does online survey research stand, and what are its prospects in future elections? At the end of the next presidential cycle in 2004, people may look back at 2000 as the horse-and-buggy days of electronic communication, just as today people look back on 1990, the year before the creation of the World Wide Web. Internet usage undoubtedly will grow, perhaps arithmetically rather than geometrically, but it will not reach the saturation level found in telephone usage. Online survey research is now a reality, and by 2004 or soon thereafter, the thorny questions of random selection may be worked out.

By 2004, there will be greater acceptance and use of private online polling by candidates. This, again, will depend on the availability of random samples or, in the Harris Interactive model, weighted samples. Many polling firms that do business with candidates depend on the more lucrative and stable business clients. Only a firm with multimillion-dollar revenues could afford to create databases similar to those of Harris Interactive or Knowledge Networks; other firms will have to rent database access from the handful of survey firms that have been able to invest the money in such expensive operations.

The economic barrier to having a truly random, and thus reliable, sample is quite high. The economic barrier to having a straw poll is almost nonexistent. It will be interesting to see which of the television networks, newspapers, and other media sources are willing to invest in the needed survey infrastructure, which will subcontract with firms that maintain reliable, randomly selected panels, and which will simply decide to offer straw polls and junk surveys, labeling them for entertainment purposes only. It will also be interesting to see what steps, if any, the media take toward reporting junk polling for what it really is. In 2004, there will be an abundance of polls touting that year’s Warren Beatty or Orrin Hatch. The real question is: will the media be able to discern the difference between legitimate and junk polls, will they report them as such, and, ultimately, will such distinctions filter into the consciousness of citizens and voters?


REFERENCES

American Association of Public Opinion Research. (2000, September 28). Web-based surveys unlikely to represent views of all Americans, warns pollsters’ association. News release. Retrieved December 16, 2000 from the World Wide Web: http://www.apor.org/press/web_survey.

Brady, H. A., & Orren, G. R. (1992). Polling pitfalls: Sources of errors in public opinion. In T. E. Mann & G. R. Orren (Eds.), Media polls. Washington: Brookings Institution.

Collins, K. (1999). Will online polling destroy the survey business? Freedom Forum Online. Retrieved December 8, 2000 from the World Wide Web: http://www.freedomforum.org/technology/1999/10/1onlinepolling.asp.

Coy, P. (2000, January 11). Harris Interactive: A high opinion of online polling. Businessweek Online. Retrieved December 27, 2000 from the World Wide Web: http://www.businessweek.com/ebiz/0001/ec0111.htm?scriptFramed.

Dowd, M. (1999, August 15). Liberties; will you, Warren? New York Times, p. A35.

Gawiser, S. R., & Witt, G. E. (2000). 20 questions a journalist should ask about poll results. National Council on Public Polls. Retrieved December 2, 2000 from the World Wide Web: http://www.ncpp.org/qajsa.htm.

Harris Interactive website. (2000). Retrieved December 17, 2000 from the World Wide Web: http://www.harrisinteractive.com/about/ourpanel.asp.

Harris Interactive. (2000, December 14). 2000 election winners: George W. Bush . . . and online polling. News release. Retrieved December 26, 2000 from the World Wide Web: http://www.louisharris.com/news/index.asp?HI_election_election.

Johnson, D. W. (1999). The cyberspace election of the future. In B. I. Newman (Ed.), Political marketing handbook (pp. 705–724). Thousand Oaks, CA: Sage.

Johnson, D. W. (2001). No place for amateurs: How political consultants are reshaping American democracy. New York: Routledge.

Knowledge Networks website. (2000). Retrieved December 12, 2000 from the World Wide Web: http://www.knowledgenetworks.com/science/difference.html.

Kurtz, H. (2000, December 22). Errors plagued election night polling services. The Washington Post, p. A1.

Manjoo, F. (2000, October 16). Online polls: Can we trust them? Wired News. Retrieved December 14, 2000 from the World Wide Web: http://www.wirednews.com/news/print/0,1294,39462,00.html.

Mister Poll website. (undated). Retrieved January 2, 2001 from the World Wide Web: http://www.misterpoll.com.

Morin, R. (1993, March 21). Voting against this ‘referendum,’ Perot invites a skewed response, public opinion researchers say. The Washington Post, p. A1.

Morin, R. (1999, September 7). Online polling: No substitute for the real thing. The Washington Post. Retrieved January 2, 2001 from the World Wide Web: http://www.washingtonpost.com/wp-srv/politics/polls/wat/archive/wat090799.htm.

Morin, R. (2000, February 28). Online polling done right. The Washington Post. Retrieved December 2, 2000 from the World Wide Web: http://washingtonpost.com/wp-srv/politics/polls/wat/archive/wat022800.htm.

National Council on Public Polls, Polling Review Board. (2000b). Errors associated with “instant” and overnight polls. Retrieved January 13, 2001 from the World Wide Web: http://ncpp.org/instant.htm.

O’Brien, R. (2000, January 7). Review board will watch, evaluate net polls in 2000. Freedom Forum Online. Retrieved December 26, 2000 from the World Wide Web: http://www.freedomforum.org/professional/2000/1/7polls.asp.

Orton, B. (1992). Phony polls: The pollster’s nemesis. Public Opinion, 5, 56–60.

Pew Research Center for the People and the Press. (1999). The Internet news audience goes ordinary. Retrieved December 9, 1999 from the World Wide Web: http://people-press.org/reports/display.php3?ReportID=72.

Puzzanghera, J. Online polling experiment stirs excitement, skepticism. Silicon Valley News. Retrieved December 13, 2000 from the World Wide Web: http://www.mercurycenter.com/svtech/news/indepth/docs/poll101899.htm.

Rosenblatt, A. J. (1999). Online polling: Methodological limitations and implications for electronic democracy. Harvard International Journal of Press/Politics, 4, 30–44.

Suellentrop, C. (2000, January 11). Why online polls are bunk. Slate: Net Election. Retrieved December 27, 2000 from the World Wide Web: http://slate.msn.com/netelection/entries/00-01-11_68660.asp.

Wasserman, E. (1999, October 14). Taking the voter’s pulse online. Slate: Net Election. Retrieved November 4, 1999 from the World Wide Web: http://www.slate.com/netelection/entries/99-10-14_36671.asp.

Wu, W., & Weaver, D. (1997). On-line democracy or on-line demagoguery? Public opinion ‘polls’ on the internet. Harvard International Journal of Press/Politics, 2, 71–86.

Zogby International website. (2000). Retrieved December 28, 2000 from the World Wide Web: http://www.zogby.com/onlinepolls/index.cfm.

Correspondence regarding this article should be sent to: Dennis W. Johnson, The George Washington University, Graduate School of Political Management, 805 21st Street, N.W., Washington, DC 20056 ([email protected])