
A quest for questionnaires


Using research in practice

Andrew Booth

© Health Libraries Group 2003. Health Information and Libraries Journal, 20, pp. 53–56. Blackwell Publishing Ltd.

If using research in practice conjures up visions of librarians poring over issues of Health Information and Libraries Journal or the Journal of the Medical Library Association, then think again. Admittedly, the increasing number of research-orientated articles appearing in our professional literature is a phenomenon very much to be welcomed. Nevertheless, this merely represents the 'tip of the iceberg' where evidence-based information practice is concerned. Not to be underestimated is the contribution to be made by the more general outputs of research and development programmes, whether within health or elsewhere. Take, for example, the following scenario:

As the recently appointed Primary Care Knowledge Manager for Apathey Primary Care Trust you have been charged with conducting an information audit to assess the information and training needs of your health community. You decide to supplement a series of planned interviews with local stakeholders with more comprehensive coverage using a questionnaire. Being relatively inexperienced in research methodologies, you wonder whether there might be some way of increasing the notoriously low response rates within your Trust.

Your attempts to be evidence based appear to be rewarded immediately as you retrieve two very pertinent systematic reviews. The first is the initial published output from a Cochrane methodology review, entitled 'Increasing response rates to postal questionnaires'.[1] The second is from the NHS R&D Health Technology Assessment (HTA) Programme, under the title 'Design and use of questionnaires: a review of best practice applicable to surveys of health service staff and patients'.[2] You decide to survey the contents of both reviews to answer a list of questions that you have compiled about questionnaire response rates.

Questionnaires are probably the most common research instrument that librarians encounter in their day-to-day practice. Your first questions refer to the choices that you have with regard to the physical production of your questionnaire:

Appearance of the questionnaire

Should I use coloured paper or stick to white?

Eight trials, including a total of 14 797 participants, provide an answer to this question.[1] These conclude that your chances of getting a response are 6% greater if you use coloured paper than if you use white paper (odds ratio 1.06). However, the 95% confidence interval for using coloured paper instead of white runs from 0.99 to 1.14. As this interval crosses unity (i.e. an odds ratio of 1.00), we can conclude that the observed effect is not statistically significant.
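For readers who want to see where such figures come from, the arithmetic is easy to reproduce. Below is a minimal sketch in Python of the standard log-odds method for deriving an odds ratio and its 95% confidence interval from a 2 × 2 table, together with the 'crosses unity' check described above. The helper name and the response counts are purely illustrative, not the review's actual data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table, via the log-odds method.
    a/b = responders/non-responders with the intervention (e.g. coloured paper),
    c/d = responders/non-responders in the control arm (e.g. white paper)."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lower, upper

# Illustrative counts only: 620/1000 responded to coloured paper, 600/1000 to white.
or_, lower, upper = odds_ratio_ci(620, 380, 600, 400)
significant = not (lower <= 1.0 <= upper)  # a CI that crosses unity is not significant
print(f"OR = {or_:.2f}, 95% CI {lower:.2f} to {upper:.2f}; significant: {significant}")
```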

Conclusion for practice. Clearly there is not a convincing reason to conclude that coloured paper will produce a better response rate. If the cost of coloured paper is no different from that for white paper, then you may decide to experiment with an alternative colour. There is a certain appealing argument (unsubstantiated by this research) that placing a questionnaire on brightly coloured paper might reduce the chance of it getting mislaid or mistaken on a busy GP's desk. Perhaps your library has a distinctive colour to its 'livery' that might be strengthened by association with use of this colour in marketing and survey activity. However, if you are planning to photocopy questionnaire returns, it will be very unhelpful to use a dark-coloured paper, with regard to both legibility and the economics of photocopier toner consumption! A legitimate reason for using different colours of paper was illustrated recently in a project that colleagues from the Trent Institute for Health Services Research and I undertook to examine the information and training needs of staff in social care. Here we used a different colour for each 'patch' (e.g. Nottinghamshire, Sheffield, Derbyshire), and this helped in compiling the returns for each locality when questionnaires were returned anonymously by post. Of course, a similar effect might be obtained by using a single running number sequence, although in this instance we had no reason to discriminate between respondents at an individual level.
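If you do need to discriminate between localities (though not individuals), the running-number alternative is straightforward to set up. A minimal sketch follows; the patch names are taken from the example above, but the per-patch counts are hypothetical (chosen only so that they sum to the 595 questionnaires of the social care project):

```python
# Hypothetical per-patch counts; each locality gets a contiguous block of
# running numbers printed on its questionnaires.
patches = {"Nottinghamshire": 200, "Sheffield": 200, "Derbyshire": 195}

id_ranges = {}
next_id = 1
for patch, count in patches.items():
    id_ranges[patch] = range(next_id, next_id + count)
    next_id += count

def patch_for(questionnaire_id: int) -> str:
    """Map the printed ID on an anonymous return back to its locality."""
    for patch, ids in id_ranges.items():
        if questionnaire_id in ids:
            return patch
    raise ValueError(f"unknown questionnaire ID: {questionnaire_id}")

print(patch_for(205))  # -> Sheffield (IDs 201-400 in this illustrative split)
```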

It is interesting to consider the effect that this evidence has had upon our decision-making process. As there is no compelling research-based reason to support our choice, we are able to admit other factors into the equation, such as cost and logistics. If, however, there had been a strong result favouring either alternative, then there might well have been tensions between research and practicalities.

Does it help to personalize the questionnaires?

Thirty-eight trials involving 39 210 participants were identified to address this question.[1] In this case your chances of getting a response are 16% greater if you personalize the questionnaires than if they are not personalized (odds ratio 1.16). The 95% confidence interval for personalizing questionnaires versus not doing so runs from 1.06 to 1.28: a clear result in favour of personalization.

Conclusion for practice. Personalizing a questionnaire is potentially time-consuming and resource intensive. However, if you have access to the required human resources then it appears to be worth doing. Practical considerations, such as the need to get all questionnaires distributed within a narrow window of opportunity, might well run counter to this finding. The number of questionnaires to be distributed may well be an overwhelming consideration. In the above-mentioned social care project we did not have the administrative and clerical support needed to personalize the 595 questionnaires required. This illustrates that occasionally an 'evidence-based' option might be denied to us because of practical considerations, a not-unfamiliar situation within the context of health care rationing.

Of a total of six factors relating to appearance examined by the review,[1] only one other appeared to have a significant positive effect: using coloured ink rather than standard ink (OR 1.39; 95% CI 1.16–1.67). However, this was the result of only a single trial. Although using a brown envelope in preference to a white one seems to have a marked effect (OR 1.52), the confidence interval is wide and crosses unity (95% CI 0.67–3.44), indicating that this result was not significant.

Methods of delivery

Does a stamped addressed envelope increase response over and above a business reply or franked envelope?

Fourteen trials involving 38 259 participants address this particular issue.[1] There is a clear effect in favour of the stamped addressed envelope (OR 1.26), with a narrow confidence interval (95% CI 1.13–1.41). Your chances of getting a response are therefore 26% better if you use a stamped addressed envelope. Interestingly, there is some evidence, although not statistically significant, that a stamped outward envelope performs worse than a franked one (OR 0.95; 95% CI 0.88–1.03).

Conclusion for practice. The method of return is very much determined by organizational policies, constraints and facilities. In practical terms it may be easier to be self-sufficient within your library by enclosing stamped addressed envelopes. Franked envelopes may require the participation of a central post room, with whom relationships may already be fraught because of the volume of the questionnaires themselves! Business reply envelopes require extra effort and resources if they are to be produced and licensed; if these costs have not been anticipated they may prove prohibitive. However, many organizations are very wary about bulk purchase of postage stamps, because it might present problems for financial audit, with the additional fear that the stamps might be used for purposes other than the project. Here we have a vivid demonstration of how the wider picture may limit our options to pursue an evidence-based approach. Disconcertingly, the single most important delivery factor is that of recorded delivery versus standard postal delivery (OR 2.21; 95% CI 1.51–3.25): a sure way to get your manager's alarm bells ringing! A more realistic aspiration might be to send the questionnaires first class (OR 1.12; 95% CI 1.02–1.23).

Incentives

Should I offer respondents the chance to win a bottle of champagne?

Forty-five trials involving 44 708 participants show[1] that a non-monetary incentive is advantageous over no incentive (OR 1.19; 95% CI 1.11–1.28). However, two other incentive strategies perform better than a non-monetary incentive. Providing a monetary incentive has a greater than two-fold advantage over no incentive (OR 2.02; 95% CI 1.79–2.27). Perhaps more practically, offering an incentive with the questionnaire, rather than only to those who respond, also has a favourable effect (OR 1.71; 95% CI 1.29–2.26).

Conclusion for practice. Although your organization's ability to pay an incentive may be limited, particularly if you are working within a public sector organization such as the NHS or a university, it is certainly worth giving some thought to the type of incentive that might appeal to your particular audience. Sometimes an item of equal value to another may have greater appeal because of how it is perceived (compare a book token with a bottle of champagne). Only a detailed knowledge of your audience, its drivers and its value systems, will help you get the most out of any incentive that you (or your sponsors) have to offer.

Communication

Should I give a deadline date for response?

Four trials, involving 4340 participants, examined this particular issue.[1] The overall effect is unity (OR 1.00), with the 95% confidence interval in the range 0.84–1.20. In other words, there is an equal likelihood of return regardless of whether there is a response deadline or not.

Conclusion for practice. Conventional wisdom has it that you should always have a response deadline when sending out a questionnaire. We can see from the above that this appears unsubstantiated by rigorous research. Nevertheless, there are various other factors, beyond any anticipated effect on the response rate, that might argue in favour of a deadline date. First, there is the ethical issue of whether it is desirable to have busy health practitioners spending time filling out a questionnaire when we know that we will not be using their responses, having already collated the returns. Secondly, there is the project management aspect, within which we might find it helpful to set a definite date on which to commence analysis rather than wait until the (optimistic) flood slows to a trickle.

Conclusion

In taking a topic that could practically involve any library or information worker, whether involved in the initiation of a service or its ongoing monitoring and evaluation, we hope to have demonstrated two specific truths about evidence-based information practice:

1 That it is feasible to use research evidence in a practical, and hopefully realistic, setting.

2 That the evidence base is not confined to the health information press, nor even to the information science literature in general, but can be identified within a wider body of research to which we will increasingly need to gain access.

In fact the questions covered above, dealing specifically with response rates, are a mere fraction of the issues related to questionnaire design and administration covered by the research literature.[2]

Conflict of interest should be sufficient reason for me not to give undue emphasis, for example, to the fact that having a university as sponsor or source increases your chance of response by 31% (OR 1.31; 95% CI 1.11–1.54)[1]! Perhaps the greatest disadvantage under which we all have to labour is that a more interesting questionnaire is advantageous over a less interesting one by a factor of almost 2.5 (OR 2.44; 95% CI 1.99–3.01).[1] That rules out asking anything about books and libraries then!

Of course, with our critical appraisal skills duly honed, we will resist the temptation to be too accepting and uncritical about such research. As the authors themselves point out, without information about what the baseline response would have been, some strategies may only succeed in pushing 70% response rates up to 80%, while others may work particularly well at the lower end of the response rate continuum. Neither can we conclude that combining all the successful strategies continues to increase success, particularly once an acceptable threshold has been reached. One thing is sure, however: informing your questionnaire activities through the results of systematic reviews is infinitely preferable to basing them upon the folklore of 'how-to-do-it' textbooks. It has the added advantage of allowing you to claim that, in this aspect at least, you are practising evidence-based information practice!
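The baseline caveat is easy to demonstrate: an odds ratio multiplies the odds of response, not the response rate itself, so the same OR produces different absolute gains at different starting points. A minimal sketch, applying (purely for illustration) the monetary-incentive OR of 2.02 to three hypothetical baseline response rates:

```python
def apply_odds_ratio(baseline_rate: float, odds_ratio: float) -> float:
    """Response rate implied by applying an odds ratio to a baseline rate."""
    odds = baseline_rate / (1 - baseline_rate)
    boosted_odds = odds * odds_ratio
    return boosted_odds / (1 + boosted_odds)

# The monetary-incentive OR of 2.02 at three hypothetical baselines:
for baseline in (0.30, 0.70, 0.90):
    print(f"baseline {baseline:.0%} -> {apply_odds_ratio(baseline, 2.02):.0%}")
# -> roughly 46%, 82% and 95% respectively
```

The absolute improvement shrinks as the baseline rises (16, 12 and 5 percentage points in this illustration), which is exactly why pooled odds ratios should not be read as fixed percentage gains.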

References

1 Edwards, P., Roberts, I., Clarke, M., DiGiuseppi, C., Pratap, S., Wentz, R. & Kwan, I. Increasing response rates to postal questionnaires: systematic review. British Medical Journal 2002, 324, 1183–5.

2 McColl, E., Jacoby, A., Thomas, L., Soutter, J., Bamford, C., Steen, N., Thomas, R., Harvey, E., Garratt, A. & Bond, J. Design and use of questionnaires: a review of best practice applicable to surveys of health service staff and patients. Health Technology Assessment 2001, 5(31), 1–250.