
© Health Libraries Group 2003 Health Information and Libraries Journal, 20, pp. 116–118

Blackwell Publishing Ltd.

Using research in practice

In search of the evidence

Andrew Booth

In the first column in this series (March 2003) we highlighted the contribution that general outputs from research and development programmes can make to evidence-based information practice. Our first example was taken from the domain of marketing and promotion and dealt with the knowledge base around administering questionnaires. In this issue we switch to another of the six domains of evidence-based librarianship identified by Crumley & Koufogiannakis, namely information access and retrieval.1 Again we start with a scenario:

You have been recruited to an academic unit to produce systematic reviews for a Naughty Institution for Cost Effectiveness. As you meet with a review team to scope out your first search, one of the researchers asks 'Why don't we just conduct all the searches on MEDLINE and not bother about all the other databases?'. Meanwhile another researcher, on secondment from a Spanish agency, asks 'Why don't you include the Spanish literature in your search strategy?'. You are about to reply with the official line that we search multiple databases to ensure comprehensive coverage of the literature, and we don't search for languages other than English because of the time and cost of translation. However, you pause and ask yourself: what evidence is there that these now-established methods are best practice?

Notwithstanding the fact that involvement by information specialists in systematic reviews has grown dramatically over recent years, total numbers participating are still quite small. Yet, at the same time, local NHS librarians increasingly act as local advocates for systematic review-based products such as the Cochrane Library, the Database of Abstracts of Reviews of Effects (DARE) and the report series issued by the NHS Health Technology Assessment Programme. Can we have confidence in these products? Are we aware of the specifications that underpin their production? If not, can we truly claim to be evidence-based practitioners?

Challenging the assumptions

Systematic reviews are mammoth undertakings, typically taking at least 1 year to complete and costing between £50 000 and £70 000. Although such figures pale into insignificance when ranged alongside the cost of a sufficiently powered randomised controlled trial, expenditure on review projects is a major concern. From a professional viewpoint, systematic reviews command our attention because, unlike original research, where contact with a library may be variable and spasmodic, they place a premium on information skills and a supporting library infrastructure. To the costs of staff time and fees for access to databases can be added the time spent in managing references and in obtaining photocopies from stock and interlibrary loans from elsewhere. Truly, it is essential that we examine whether such expenditure is justified.

The impression that we might get when reading a systematic review is that the investigators have made every attempt to locate every study that might inform the review question for that particular topic. In actuality, a review team will decide how much effort it can afford to expend: for the different stages of the review and for the different techniques within each of the stages. So, for example, it is no more desirable to spend all the team's resources searching the literature, and thus leave no time for critical appraisal and synthesis, than it is to spend the entire searching phase conducting hand searches of relevant journals. In reality, a systematic review is a 'make-do' product of trade-offs and compromises, but it remains the best we have. We should not be surprised at this because the neatly ordered format within which a systematic review is presented is no less a feat of cosmetic surgery than the sleight of hand by which a chaotic primary research project is transformed into a paragon of orderliness as soon as it hits the pages of a journal article. The consumer of a systematic review is asked only to judge whether the review team has taken 'reasonable' steps to ensure the comprehensiveness of the search strategy.

So how does a review team handle the gaps that 'reasonable' rather than ideal searching might leave? Basically, they employ 'educated guesses', embodied in established review techniques, to handle areas of uncertainty. For example, a device known as a 'funnel plot' is used to help detect publication bias. If the inverted funnel shape produced by plotting the sample size of studies against their effect size is asymmetrical, or if there is a gap in it, then this is taken to indicate that some studies have either not been published or (as in our area of interest) not been located.2 Rather disturbingly, however, it might merely indicate that these studies have not even been conducted!
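As a rough illustration (not taken from the column itself), here is a minimal Python sketch of a funnel plot in the convention described above, plotting each study's effect size against its sample size; all figures are invented for demonstration:

```python
import matplotlib.pyplot as plt

# Hypothetical data: each tuple is (effect size, sample size) for one trial.
# Small studies with weak or negative effects are deliberately missing,
# leaving the lower-left of the funnel empty, as publication bias would.
studies = [(0.48, 40), (0.55, 60), (0.42, 90), (0.35, 150),
           (0.30, 300), (0.28, 500), (0.26, 900), (0.60, 30)]

effects, sizes = zip(*studies)

plt.scatter(effects, sizes)
plt.axvline(0.28, linestyle="--", label="pooled effect (illustrative)")
plt.xlabel("Effect size")
plt.ylabel("Sample size")
plt.title("Funnel plot: a gap at lower left suggests missing studies")
plt.legend()
plt.show()
```

In a bias-free literature the points should scatter symmetrically about the pooled estimate, narrowing as sample sizes grow; an empty lower corner is precisely the 'gap' referred to above.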

Another technique for handling uncertainty is Rosenthal's File Drawer calculation,3 whereby the researcher tries to calculate how many studies showing a null result, as yet unidentified (i.e. lurking undetected in someone's filing cabinet), would need to exist before the effect shown by the systematic review would be overturned or, at least, negated. The lower this number, it is argued, the more likely it is that further searching would help to undermine the review's current result.
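The calculation itself is not reproduced in the column, but on Rosenthal's standard formulation it reduces to one line of arithmetic: k located studies with summed z-scores ΣZ remain significant at p = 0.05 (one-tailed, critical z = 1.645) until X = (ΣZ)²/1.645² − k null studies are added. A minimal sketch, with invented z-scores:

```python
def fail_safe_n(z_scores, z_crit=1.645):
    """Rosenthal's fail-safe N: how many unpublished null-result studies
    would drag the combined result below significance.

    The combined z of k studies plus x null studies is sum(z) / sqrt(k + x);
    setting this equal to z_crit and solving for x gives the formula below.
    """
    k = len(z_scores)
    total = sum(z_scores)
    return (total ** 2) / (z_crit ** 2) - k

# Hypothetical z-scores from five located trials.
print(round(fail_safe_n([2.1, 1.8, 2.5, 1.2, 2.9]), 1))  # ~35.7 hidden null studies
```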

Finally, researchers can attempt to quantify the likelihood that the population of studies captured by a review is in fact representative of the total population of all available studies on that topic. They do this using a technique known as capture-recapture sampling.2 Suppose you had a lake full of fish and you wanted to know how many carp there were. You would take a sample of the fish (Sample A) and count the number of carp. You would then mark the carp and replace them in the lake. You would take a second sample of the fish (Sample B) and count how many of your original marked fish are in the second sample. The total number of carp in the lake can then be estimated as the number of carp in Sample A multiplied by the number of carp in Sample B, divided by the number of (marked) carp that appear in both samples. Pat Spoor and colleagues in Leeds have written a BMJ paper applying this technique to searching within a systematic review.4 For carp read papers relevant to the review!
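The arithmetic described above is the Lincoln–Petersen estimator. A minimal sketch applying it to two overlapping literature searches, with invented counts (the function name is ours, not Spoor's):

```python
def lincoln_petersen(n_a, n_b, overlap):
    """Estimate a total population from two independent samples.

    n_a: relevant papers found by search A (carp in Sample A)
    n_b: relevant papers found by search B (carp in Sample B)
    overlap: papers found by both searches (marked carp recaptured)
    """
    if overlap == 0:
        raise ValueError("No overlap: the two searches give no basis for an estimate.")
    return n_a * n_b / overlap

# Hypothetical: search A finds 60 relevant papers, search B finds 45,
# and 30 papers appear in both sets.
estimate = lincoln_petersen(60, 45, 30)
print(f"Estimated total relevant papers: {estimate:.0f}")  # 90
located = 60 + 45 - 30  # 75 unique papers actually in hand
print(f"Estimated coverage: {located / estimate:.0%}")     # 83%
```

An estimated coverage well short of 100% suggests that further searching is still likely to surface relevant papers.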

Addressing the scenario

Having demonstrated exactly how much uncertainty surrounds the apparent 'gold standard' of a systematic review, with a methodological and theoretical aside, we are now in a position to address the scenario. In searching for evidence we identify another Health Technology Assessment report entitled: How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Empirical study.5

In this study, Egger and colleagues identified 159 systematic reviews based on comprehensive literature searches and including the outcomes of at least five controlled clinical trials. They then reanalysed the studies in the following subgroups: unpublished versus published (60 studies), other languages versus English (50 studies) and non-MEDLINE-indexed versus MEDLINE-indexed (66 meta-analyses). They found that unpublished trials show less beneficial effects than published trials, whereas non-English-language trials and non-MEDLINE-indexed trials tend to show larger treatment effects. They also found that the importance of what they group together as 'difficult to locate trials' varies from speciality to speciality. For example, a large proportion of complementary medicine trials are difficult to locate.

The bottom line

Egger and colleagues conclude:

Systematic reviews that are based on a search of English language literature that is accessible in the major bibliographic databases will often produce results that are close to those obtained from reviews based on more comprehensive searches that are free of language restrictions.5

So, our Spanish researcher's question can be answered by stating that, based on available evidence, the inclusion of Spanish language materials is unlikely to have a significant impact on the findings of a systematic review. How, though, do we treat the suggestion that searches can be conducted on MEDLINE alone? Here we must distinguish between the methodology used by Egger and colleagues (where they compared MEDLINE with non-MEDLINE) and what common sense and, indeed, other published evidence might tell us. We believe that the overlap in coverage between databases such as MEDLINE and CINAHL is comparatively small.6 We also know that the indexing on any specific database is likely to be inconsistent7 and therefore retrieval of trials is likely to be suboptimal.8 Given that it is comparatively easy to access and search these additional databases, we can optimize our coverage by using an agreed core set of bibliographic databases. Of course, this research does not go anywhere towards indicating what these databases might be or how many of them should be included in our optimal number.
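To make the overlap argument concrete, here is a small sketch using Python sets: treat each database's relevant results as a set of record identifiers and measure how much any one database misses on its own. The database names and counts are purely illustrative:

```python
# Hypothetical sets of relevant records retrieved from each database,
# keyed by an imaginary shared identifier.
medline = {"r01", "r02", "r03", "r04", "r05", "r06"}
cinahl = {"r04", "r05", "r07", "r08", "r09"}
embase = {"r02", "r05", "r09", "r10", "r11", "r12"}

all_records = medline | cinahl | embase
print(f"Unique relevant records across all three: {len(all_records)}")  # 12

for name, db in [("MEDLINE", medline), ("CINAHL", cinahl), ("EMBASE", embase)]:
    missed = all_records - db
    print(f"{name} alone misses {len(missed)} of {len(all_records)} records")
```

When the pairwise overlaps are small, as here, no single database approaches the coverage of the combined core set.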

The reviewers go on to recommend that 'when planning a review, investigators should consider the type of literature search and the degree of comprehensiveness that are appropriate for the review in question, taking into account budgetary and time constraints'.5 Although in isolation this might read like 'motherhood and apple pie', we have the reassurance that this is not the mere pragmatism of a review team speaking but that it is also supported by empirical research. Of course, many more such questions remain to be asked, and the worrying thing is that some of these may already have been answered, but even now the answers may be lying unidentified in some researcher's file drawer!

References

1 Crumley, E. & Koufogiannakis, D. Developing evidence-based librarianship: practical steps for implementation. Health Information and Libraries Journal 2002, 19, 61–70.

2 Earl-Slater, A. The Handbook of Clinical Trials and Other Research. Oxford: Radcliffe Medical Press, 2002.

3 Rosenthal, R. The ‘file drawer problem’ and tolerance for null results. Psychological Bulletin 1979, 86, 638–41.

4 Spoor, P., Airey, M., Bennett, C., Greensill, J. & Williams, R. Use of capture-recapture technique to evaluate the completeness of systematic literature searches. BMJ 1996, 313, 342–3.

5 Egger, M., Juni, P., Bartlett, C., Holenstein, F. & Sterne, J. How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Empirical study. Health Technology Assessment 2003, 7, 1 (whole issue).

6 Brazier, H. & Begley, C. M. Selecting a database for literature searches in nursing: MEDLINE or CINAHL? Journal of Advanced Nursing 1996, 24, 868–75.

7 Booth, A. How consistent is indexing? A few reservations. Health Libraries Review 1990, 7, 22–6.

8 Dickersin, K., Scherer, R. & Lefebvre, C. Identifying relevant studies for systematic reviews. BMJ 1994, 309, 1286–91.