
Using research in practice

Collective decisions

Andrew Booth

© Health Libraries Group 2003 Health Information and Libraries Journal, 20, pp. 185–188. Blackwell Publishing Ltd.

Although this column opportunistically reviews evidence as it comes to light, it is somewhat apposite that the first three articles in the regular ‘Using Research In Practice’ series each focus on a different one of the ‘Six Domains of EBL’ previously identified in this journal by Crumley & Koufogiannakis.1 So the first column (March 2003) highlighted the domain of Marketing/Promotion and looked at use of questionnaires, while the second (June 2003) dealt with Information Access and Retrieval and examined the comprehensiveness of literature searches. This, our third article, deals with a domain that appears, on first viewing at least, to be exclusively a concern of librarians, that is Collection Development. In actuality, our keen interest in journal quality is shared by the evidence-based practice community, with its concerns with the validity, applicability and reliability of articles placed within those journals.

As in previous columns we start with a scenario:

You are part of a Working Group looking at purchase of a ‘bundle’ of electronic journals for your local health community. The Working Group includes various stakeholders including managers, educators, clinicians and doctors in training. In discussing your community’s requirements, a representative of the Postgraduate Deanery queries the process by which journal titles are selected for local libraries. She asks: ‘What is the best way of judging whether a journal is of high quality and should be purchased from the core budget?’ There then follows an extensive debate during which the reliability of such factors as circulation figures, peer review, impact factor and presence on a core list is weighed. ‘Alas!’, you muse, ‘if only someone had conducted a systematic review to look at journal selection criteria’.

While the prospect of some deus ex Medline may seem remote, relief is at hand. Recent preoccupations with the peer review process are reflected in a special issue of JAMA where several apparent indicators of quality are analysed.2 You decide to take a closer look at the results.

About the study

The authors’ hypothesis is that peer review and bibliometric methods (such as impact factors, circulation figures, indexing on MEDLINE or presence on a core list) may be useful in evaluating the quality of a journal. These methods are said to be controversial because of biases in citation, impact factor and what are described as ‘inherent limitations of the sources of information used to calculate them’.2 The authors claim that none of these bibliometric parameters has been validated in connection with journal quality. They use research article methodological quality as a proxy measure for journal quality.

The authors use a computer-generated list of random numbers to select 30 journals at random from 107 general internal medicine journals classified as such by the Institute for Scientific Information. They excluded journals that were not in English or were unavailable through the University of California library system. Original research articles were identified by searching MEDLINE and … from January to December 1999. They excluded a large number of publication types, focusing instead on randomised controlled trials and on other non-RCT empirical studies. They extracted data on seven factors thought to have a possible bearing on journal quality (sketched, for illustration, after the list below):

1 Peer review.
2 Citation rate.
3 Impact factor.
4 Circulation.
5 Manuscript acceptance rate.
6 Indexed on MEDLINE.
7 Listed in the Brandon/Hill Library list.
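For readers who think more readily in code, the sampling-and-extraction design just described can be summarised in a few lines. Everything below, from the journal names to the field names and the use of `random.sample`, is an illustrative assumption rather than the authors’ actual procedure or data:

```python
import random

# Hypothetical pool of 107 general internal medicine journal titles,
# standing in for the ISI-classified journals used in the study.
candidate_journals = [f"Journal {i}" for i in range(1, 108)]

# Computer-generated random selection of 30 journals (the authors'
# stated sample size); seeding only to make the illustration reproducible.
random.seed(2003)
sampled = random.sample(candidate_journals, k=30)

# For each sampled journal, data were extracted on seven candidate
# quality factors. The template below simply names those factors;
# the None values are placeholders, not real data.
def extraction_template(title: str) -> dict:
    return {
        "journal": title,
        "peer_reviewed": None,        # 1 Peer review
        "citation_rate": None,        # 2 Citation rate
        "impact_factor": None,        # 3 Impact factor
        "circulation": None,          # 4 Circulation
        "acceptance_rate": None,      # 5 Manuscript acceptance rate
        "medline_indexed": None,      # 6 Indexed on MEDLINE
        "brandon_hill_listed": None,  # 7 Brandon/Hill Library list
    }

records = [extraction_template(j) for j in sampled]
print(len(records), "journals sampled for data extraction")
```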

Is peer review an indicator of quality?

Unfortunately this particular research study, though purporting to examine the importance of peer review, is unable to do so. Why? Because of the methods by which they selected their sample. They selected journals for inclusion from an issue of Journal Citation Reports. Then, as already mentioned, they restricted the journal articles selected to those identified by searching MEDLINE. Although not all such journals are necessarily peer reviewed, selection of MEDLINE journals attaches such weight to peer review that it is not surprising to find that all journals in the authors’ selection are, without exception, peer reviewed. This is similarly true of being indexed on MEDLINE. Clearly, you cannot analyse the importance of a factor as a variable if there are no exceptions to this factor within the group you are analysing.
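The point about a factor with no exceptions is easy to demonstrate: a predictor that takes the same value for every journal in the sample has zero variance, so no association with quality can be estimated from it. The toy numbers below are invented purely for illustration and are not drawn from the study:

```python
import numpy as np

# Invented quality scores for ten sampled journals (illustration only).
quality_score = np.array([62, 71, 55, 80, 68, 74, 59, 77, 65, 70], dtype=float)

# 'Peer reviewed' coded 1/0. In the authors' sample every journal is
# peer reviewed, so the variable is constant.
peer_reviewed = np.ones(10)

# A constant predictor has zero variance...
print(np.var(peer_reviewed))   # 0.0

# ...so its correlation with the quality score is undefined
# (numpy returns nan and emits a runtime warning).
print(np.corrcoef(peer_reviewed, quality_score)[0, 1])   # nan
```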

This situation is exacerbated if you look at the two exclusion criteria. We know that MEDLINE journals are more likely to be in English. Excluding journals in languages other than English increases the likelihood that the remaining journals are indexed in MEDLINE. As mentioned above, they are thus more likely to be peer reviewed. Exclusion of journals that are unavailable within the University of California library system is even more problematic. What factors will likely have determined whether journals were originally selected for the library? Back in the mists of time some librarian probably considered ‘Is it peer-reviewed?’ and ‘Is it indexed in MEDLINE?’!

So what can we conclude? That journals listed in Journal Citation Reports are likely to be indexed on MEDLINE and are also likely to be peer-reviewed! That journals selected for the University of California library system are also likely to be peer-reviewed and indexed in MEDLINE! Not exactly rocket science; in fact quite the reverse: an apparent methodological flaw that, ironically, appears in an article that takes issue with the limitations and biases of bibliometric methods. Surely it would have been more meaningful to have sampled from Ulrich’s International Periodicals Directory (already on hand as a source for circulation figure data), where neither being peer-reviewed nor being indexed in MEDLINE necessarily determines inclusion.

Is impact factor a predictor of quality?

Within many academic units, journal impact factor has assumed an almost mythical importance because of its past association with the Research Assessment Exercise, that quinquennial review of academic output from UK universities. Although criteria for judging research output have broadened beyond this much debated and almost absurdly authoritative measure, it still holds sway among the factors that determine where a researcher will target a proposed article (the conventional calculation is sketched below). Does it have a bearing on journal quality?
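As a reminder of what the measure actually captures, the standard two-year impact factor is simply a ratio of citations to citable items. The figures below are invented for illustration and do not describe any real journal:

```python
# Two-year journal impact factor for year Y, as conventionally defined:
#   citations received in Y to items published in Y-1 and Y-2,
#   divided by the number of citable items published in Y-1 and Y-2.
# All numbers here are made up for illustration.

citations_in_2002_to_2000_2001_items = 1450
citable_items_2000 = 210
citable_items_2001 = 190

impact_factor_2002 = citations_in_2002_to_2000_2001_items / (
    citable_items_2000 + citable_items_2001
)
print(round(impact_factor_2002, 3))  # 3.625
```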

One will resist the temptation to comment further on the fact that the journal sample was selected not from some value-free listing but rather from an issue of Journal Citation Reports published by the ISI. The authors were able to observe a ‘significant association’ between journal quality score (using an instrument that they had developed) and both impact factor (P < 0.001) and citation rate (P < 0.001). In addition, when controlling for RCT status, citation rate was the factor with the smallest P-value (P < 0.001); a purely illustrative sketch of such an adjusted analysis follows this passage.

The authors conclude that articles of higher methodological quality are published in journals with a higher citation rate and impact factor. However, they do not comment on whether the higher citation rate and impact factor may themselves be attributable to the fact that the articles are of higher methodological quality. Would you not be more likely to read and then cite a high quality paper than a lower quality one?
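The paper does not spell out its statistical model in this column, so the sketch below should be read only as an illustration of what ‘controlling for RCT status’ might look like in practice: an ordinary least-squares fit of quality score on citation rate plus a 0/1 indicator for RCT, using invented data. It is not the authors’ actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data for 60 articles: citation rate, RCT status (1/0) and a
# quality score loosely driven by both, plus noise. Illustration only.
citation_rate = rng.uniform(0, 20, size=60)
is_rct = rng.integers(0, 2, size=60)
quality = 40 + 1.5 * citation_rate + 8 * is_rct + rng.normal(0, 5, size=60)

# Design matrix: intercept, citation rate, RCT indicator.
X = np.column_stack([np.ones(60), citation_rate, is_rct])
coef, *_ = np.linalg.lstsq(X, quality, rcond=None)

# coef[1] is the estimated association between citation rate and quality
# score after adjusting for RCT status.
print("intercept, citation-rate slope, RCT effect:", np.round(coef, 2))
```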

Are circulation figures related to quality?

The authors observed a circulation range of between 1080 copies and 3.7 million copies in the journals they studied and found that a higher circulation figure was associated with higher methodological quality (P = 0.001). So higher quality journals are more widely read! While such an observation might not necessarily pertain among such literary outputs as national newspapers, where objectives other than edification are frequently involved, one would be surprised if clinicians chose to read journals in which they could not place professional confidence. Consider the two main purchasers of medical journals: individual subscribers and libraries. Individual subscribers are unlikely to spend money from their own pockets on a journal of low perceived quality. Even more clearly, libraries frequently feel duty bound to purchase only those journals that cross some threshold of ‘quality’.

Is being on a core list of journals an indicator of methodological quality?

Core lists of journals, such as the Core Collection of Medical Books and Journals3 in the UK and the well-known Brandon/Hill lists4 in the US, have long persisted as external arbiters of journal quality. Ironically, as Eldredge recognizes, core lists also have a long pedigree of being challenged as to their validity.5 Eldredge describes how, in 1946, William D. Postell began the first known cohort study6 to question the practice of using recommended books and titles from ‘authoritative lists’ to guide selection decisions. Postell used a cohort design to determine how journals ranked highly on a list of recommended journals compared with the journals actually used by his clientele.

This particular paper2 found that presence on the Brandon/Hill Library list was significantly associated (P < 0.001) with higher article quality scores. Again, we might be concerned that inclusion in a core list might be determined by some of the factors already examined separately, such as whether the journal is peer reviewed, whether it is included in MEDLINE, and its circulation figures.

Implications for practice

From a brief appraisal we have concluded that, although the authors affirm that ‘articles of higher quality are published in journals whose articles are cited more frequently, read more widely and scrutinized more carefully’, they have likely overlooked the fact that many of the measures they use display a complex interdependence. So peer review and inclusion in MEDLINE may have a bearing, not only on the inclusion and exclusion criteria themselves (availability in the library and likelihood of being in English), but also on inclusion in a core list and possibly on the magnitude of citation and impact factors.

In a recent editorial, Plutchak challenges ‘evidence-based collection development’, that is ‘the notion that one should do those things that have been proved to work in similar situations under scientifically valid conditions, rather than relying on convention, common practice, or some sort of intuitive, educated guess about what the best thing might be’.7 Significantly, he does not take issue so much with the quality of the evidence but more ‘because we do not really have a clear notion of what the problem is that we are trying to solve’. He goes on to say that:

We can (and do) speak in general terms about what a good collection is ... a complex balance of budget, usage, and good, educated guesses on the part of the librarians doing the choosing. While impact factors, selected lists, and other tools may be useful guides, they cannot be mechanically applied to defining what a good collection is.7

At this point it is worth taking stock of exactly what we might want to achieve by practising evidence-based information practice. Would the cause of our professional standing be furthered had we been able to reduce the process of journal selection to the simplicity of a mechanistic formula? Is it not more important to demonstrate the complexity of our professional decision making by making others aware of all the factors that may have a bearing on our final journal selection? In fact, we can use papers such as this one to reaffirm a definition of evidence-based librarianship that involves the complex interplay of research-derived, practitioner-observed and user-reported factors.8 Every librarian should learn to critique articles such as this one2 so that the reality of decision-making informed by multiple factors such as peer review, database coverage and circulation figures is not hidden by the impression that these seven factors all operate independently of each other. So, although we might take issue with Plutchak’s implied rebuttal of evidence-based practice, we can certainly agree that:

It is tempting, because we like to have things we can measure, to put our faith in impact factors or selected lists or other seemingly objective measures. ... The evidence-based movement pushes us in the direction of measurable goals and objectives. We need to be careful, however, not to confuse measurement with efficacy.7

However Plutchak’s subsequent and concluding statement that ‘Collection development, as with so much else in librarianship (and medicine, for that matter), remains more of an art than a science’7 comes dangerously close to the appeals to magic or mysticism by which an elite traditionally defends its claim to specialist expertise. Instead, when faced with challenges such as that from the Working Group of our scenario, let us champion our profession as ‘evidence informed’ rather than ‘evidence based’, for as Plutchak reminds us:

Better data and better definitions will certainly help us, but, in the long run, we will continue to rely on the educated judgement of professional librarians who know the literature and, most important of all, know the needs of the people who use it.7

References

1 Crumley, E. & Koufogiannakis, D. Developing evidence-based librarianship: practical steps for implementation. Health Information and Libraries Journal 2002, 19, 61–70.
2 Lee, K. P., Schotland, M., Bacchetti, P. & Bero, L. A. Association of journal quality indicators with methodological quality of clinical research articles. JAMA Peer Review Congress IV 2002, 287, 2805–8.
3 Hague, H. (comp.) Core Collection of Medical Books and Journals 2001, 4th edn. London: Medical Information Working Party, 2000.
4 Hill, D. R. & Stickell, H. N. Brandon/Hill selected list of print books and journals for the small medical library. Bulletin of the Medical Library Association 2001, 89, 131–53.
5 Eldredge, J. D. SCC milestone in EBL history. South Central Connection (MLA/SCC Chapter Newsletter) 2003, 13, 2, 10, 14.
6 Postell, W. D. Further comments on the mathematical analysis of evaluating scientific journals. Bulletin of the Medical Library Association 1946, 34, 107–9.
7 Plutchak, T. S. The art and science of making choices. Journal of the Medical Library Association 2003, 91, 1–3.
8 Booth, A. Exceeding Expectations: Achieving Professional Excellence by Getting Research Into Practice. LIANZA 2000, Christchurch, New Zealand (October 15th–18th). Available from: http://www.shef.ac.uk/scharr/eblib/Exceed.pdf (accessed 23 May 2003).