DOI:10.1111/j.1471-1842.2011.00976.x

Learning and teaching in action

Abstract

This article considers how information literacy training initiatives delivered by health library services are evaluated. It presents three validated assessment and evaluation models and, using examples from practice, discusses how these can be used to establish the impact of information literacy training and to improve current evaluation practices.

Keywords: education and training, evaluation qualitative, evaluation quantitative, information, information skills, literacy.

Evaluating educational interventions for information literacy

Paul Stevenson
Health Information Specialist
Airedale NHS Foundation Trust
E-mail: [email protected]

Introduction

Providing users with good information literacy skills is an important function of library services, and most health libraries carry out some level of user education activity.

Library services are increasingly being asked to prove their worth and justify their function. Evaluation has always been an important aspect of auditing and managing training activities, but as finances in our organisations become limited, it is becoming increasingly important to have robust, high-quality information that validates the provision of library services.

Training evaluation is the systematic collection of data regarding the success of training programmes.1 Reviews of information literacy training have consistently found that evaluations of these education interventions are poorly executed and often use insufficient or inappropriate outcome measures.2,3 Historically, evaluation of information skills training has primarily relied on questionnaires, and evaluation is carried out solely by measuring learner reaction to a course.4 Not only is this an unreliable way of evaluating the effectiveness of training,5,6 but it also, to some extent, fosters a view that ‘having a good time becomes the mark of excellence in training, a valuing of entertainment over education’.7 Some evaluations have included additional components such as classroom observation of task completion and tests of declarative knowledge.

Overall, there is no consistency in the quality or methodology of information literacy training evaluation activities, with a multitude of approaches and methods being used.

The poor quality of evaluation is not unexpected. The evaluation of training activities is often a labour-intensive and complex task. Fortunately, there is plenty of research available which suggests practical approaches that can be used to guide the evaluation process.8,9

Kirkpatrick’s typology of evaluation and current practice

By far the most widely used model of evaluation for information literacy training is based on Kirkpatrick’s work,10,11 which has been adapted for use in health education by Barr and colleagues.12 This model highlighted four distinct aspects that should be included in the evaluation of training activities (Fig. 1).

The Kirkpatrick four-level model has dominated the evaluation of information literacy education interventions and continues to provide a useful and worthwhile starting point with which to approach evaluation.

Figure 1 Kirkpatrick Model: 4 levels of evaluation. (1) Learner reaction. (2) Modification of learner attitudes & perceptions, and learner acquisition of knowledge & skills. (3) Changes in learner behaviour. (4) Benefits to the organisation/patient resulting from learner performance.

However, the model has some limitations. There is often an assumption with Kirkpatrick’s model that, because someone can do something in the training environment, they will do it back in the workplace. However, it is often the case that a learner will be able to demonstrate knowledge during a classroom test or observation, yet this new knowledge is not implemented in their day-to-day work activity.13

Figure 2 Baldwin & Ford: 3 influencing factors of transfer. (1) Aspects of the training, including course design and course content. (2) Characteristics of the learner, including motivation, personality and ability. (3) Features of the work environment, including opportunity to use learning and organisational culture.

With its focus primarily on the learner, evaluations carried out using the Kirkpatrick model regularly fail to consider contextual influences on training. Training and education do not happen in isolation. To truly understand the impact of an educational intervention, we need to widen our measures beyond ‘the learner’ and begin to include ‘environmental’ aspects. For example, if there is no change in learner behaviour, it is often viewed as a failure because of ineffective training. While that may be the case, evidence shows that the failure to implement is more likely to be due to environmental factors such as lack of resources, organisational structure, lack of job autonomy or lack of management support.8,14–16 Environmental factors have a profound effect on the way new skills and knowledge are used, and on how training and education are best delivered.

While Kirkpatrick’s model can give us a robust understanding of the impact of training on the learner, it will not provide a complete evaluation because it fails to contextualise the learner and the training. We need to move away from one-directional evaluation: while it is important to show how the learner uses new knowledge and the impact the learner has on the work environment, it is also necessary to evaluate how the work environment influences the learner and to identify potential and actual barriers to knowledge transferral.

The transfer of learning

While academic training and education are primarily concerned with giving learners new knowledge and schemas on which they can build future skills, workplace training needs to demonstrate behavioural change and measure the impact of educational activities on the learner’s job performance. The main purpose of occupational training is the transferral of new practice into the workplace.

Baldwin and Ford17 were among the first researchers to construct theory guiding the transfer of training into practice. In their model, training transfer is seen as a function of three sets of factors (Fig. 2).

Subsequent research has confirmed organisational environment factors as key influences on the transfer of training.18,19 Thus, it is increasingly clear that even when learning occurs, there is still significant opportunity for the workplace environment to inhibit application of the learning. Alliger’s5 meta-analysis of training transfer concluded that providing the right organisational environment was just as important as the actual training itself. Yet most evaluation of training activity fails to incorporate outcomes that measure the impact of the organisational environment on the transfer of training.

This trinity of factors (learner, environment and training) is also present in Rogers’20 seminal text on the diffusion of innovation. Rogers’ work shows three key factors that impact on the likelihood of adoption of new thinking and behaviour change (Fig. 3).

Figure 3 Rogers: Influencing factors of diffusion. (1) Receiver variables (the learner) – personality of the learner, previous experiences, perceived need for change. (2) Social system variables (the workplace environment) – organisational structure, workplace culture, job autonomy. (3) Perceived characteristics of the innovation (training) – perceived complexity of the task, trialability, observability.

Lewin’s equation,21 B = f(P, E), which states that behaviour is a function of the person and the environment, is still a valid and relevant statement. A comprehensive evaluation should look not only at the educational intervention, but also at the workplace environment.

Adapting this to evaluate the impact of an educational intervention, we get BC = f(L, E, T): behavioural change is a function of the learner, the environment and the training. This leads us away from the narrow focus of attributing everything to the educational intervention and expands the evaluation process to a more systems-based approach. Future evaluations of information literacy training activities should include data collection and outcome measures which incorporate all three of these distinct areas.
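
To illustrate how all three areas might be captured in a single evaluation plan, the following minimal Python sketch (not part of the original article) groups outcome measures under learner, environment and training headings. The measure names are assumptions chosen purely for illustration.

    # Minimal sketch of a three-area evaluation plan (BC = f(L, E, T)).
    # The measure names below are illustrative assumptions, not prescribed instruments.
    from dataclasses import dataclass, field

    @dataclass
    class EvaluationPlan:
        learner: list = field(default_factory=list)      # L: measures focused on the learner
        environment: list = field(default_factory=list)  # E: measures of the workplace environment
        training: list = field(default_factory=list)     # T: measures of the training itself

    plan = EvaluationPlan(
        learner=["pre/post knowledge test", "confidence self-rating"],
        environment=["perceived-barriers survey", "manager support interview"],
        training=["course design review", "learner reaction questionnaire"],
    )

    # A complete evaluation draws on all three areas, not on the learner alone.
    for area, measures in vars(plan).items():
        print(area, "->", ", ".join(measures))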

We have established that many training programmes fail to achieve their objectives because of organisational factors. It is therefore sensible during the design of training and educational activities to carry out an organisational analysis in addition to the standard training needs analysis. Currently, it is rare for organisational analysis to be part of the design process in information literacy training activities. The purpose of an organisational analysis is to highlight the system-wide components of the organisation that may affect the delivery of a training programme.

To summarise, rather than having the four levels of the Kirkpatrick model, the transfer of learning literature highlights three distinct areas that can be used in evaluation. The subtle but important difference between this and Kirkpatrick is that rather than focus solely on the impact of the learner on their environment, we also measure the impact of the environment on the learner. By looking at wider aspects of how, where and by whom new ways of working are implemented, we can start to gain a true picture of how new learning is pragmatically used in the workplace.

Systems approach

Examining the potential reasons why training may fail to be implemented in the workplace shows that many of the barriers are organisational and not under the direct influence of the trainer. By viewing training and library services as part of a larger nation-wide system, we can gain new insight into how and where transferral of learning is influenced.

Often, managers and employees consider a two-hour training course an instant fix. By basing training evaluation on a systems approach, we have an opportunity to highlight, through two-way dialogue, that training alone is only part of the solution: the learner must feel confident and willing to implement the skills, and managers must be aware that they need to provide a psychologically safe environment, with access to sufficient resources, to facilitate the transfer of learning.

This shares responsibility for evaluation and ensures that all parties involved understand their responsibility and commitment to changing behaviours. By using the evaluation process to identify the potential organisational barriers to change, we can start to influence areas outside of our normal professional jurisdiction. This allows us the opportunity to work collaboratively with others in the workplace and share the responsibility for the effectiveness of educational interventions.

Evaluation is often something that is considered as an afterthought, bolted on after the training has been designed and delivered. This leads to poor-quality data that have limited meaning or relevance. Evaluation should be considered at the start of the training design process. The initial training needs analysis will identify where training is needed, what needs to be taught and the learning outcomes to be achieved. The training needs analysis should be an activity that involves the potential learners, heads of departments and other relevant stakeholders. By involving others in the early development stage of training, it is easier to gain their commitment to assisting in the evaluation.


Example measures

There is some good work being done in evaluating information literacy educational interventions, and tools exist that can help produce a valid and useful evaluation. Some examples of these follow.

The environment

The BARRIERS tool was developed by Sandy Funk22 to assess the perception of barriers to the use of research literature. This is a reliable and valid tool that can be used to gather data. Alternatively, learners could be asked to complete a simple force field analysis23 as part of the evaluation process. Force field analysis is a useful technique for looking at all the forces for and against an action. By carrying out the analysis, you can identify and plan to strengthen the forces supporting information literacy and reduce the impact of opposition to it. However, this relies on the learner being actively aware of the barriers: it is often the case that the organisational cultures and processes that prevent the practice of information literacy are so abstracted and habitual that the learner may not be consciously aware of them. A more labour-intensive approach can be to conduct a task analysis24 or cognitive work analysis.25 This can be very useful for identifying where potential barriers to implementing information literacy exist within an organisation.
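
As a simple illustration of how a force field analysis might be tallied during evaluation, the minimal Python sketch below scores driving and restraining forces. The forces and weights are invented for illustration and are not part of the BARRIERS tool or any published instrument.

    # Minimal force field analysis tally; forces and weights are hypothetical examples.
    driving = {
        "manager encourages literature searching": 4,
        "protected time for professional development": 3,
    }
    restraining = {
        "no ward-based access to databases": 5,
        "heavy clinical workload": 4,
    }

    driving_total = sum(driving.values())
    restraining_total = sum(restraining.values())

    print("Driving forces:", driving_total)
    print("Restraining forces:", restraining_total)
    if restraining_total > driving_total:
        # Evaluation output: barriers to target before and after training.
        print("Restraining forces dominate; plan actions against these barriers.")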

The training: valid assessment tools

Giving the learner a test is a commonly used method of evaluation. If the learner can carry out a specific task or answer specific questions, they are demonstrating the application and use of the newly learned knowledge. However, often the assessment tools and tests used have not been validated or tested for reliability.26

It is important to have a validated assessment tool to show that learning has occurred. An example of good practice is the use of the Fresno test, which is used to assess competency in evidence-based practice. The Fresno test has been shown to have good internal reliability, discriminatory ability and construct validity.27–29 Similarly, the Berlin Questionnaire has been developed as a valid and reliable method to assess critical appraisal skills and knowledge.30 A systematic review by Shaneyfelt et al.26 provides a good overview of these and other validated assessment tools for all aspects of evidence-based practice.

Using these existing assessment tools not only increases the validity and reliability of the results but also leads to continuity in measurement that allows greater scope for synthesis and comparison between different studies. Universal adoption of these assessment tools as part of the evaluation process would be a useful future development for research in information literacy education.

Establishing the impact of information literacy training on the wider environment, such as patient care, can be time-consuming and complex, and can require compliance and assistance from multiple areas across the organisation. One pragmatic option is to introduce a critical incident analysis31 as part of the evaluation process. Issuing course attendees with ‘critical incident sheets’, to record examples of where they required the use of information and the outcomes of that situation, can provide valuable qualitative data that can be used in evaluation. Critical incidents provide real-life demonstrations of the impact of training and information literacy skills that can help to justify the continuation of services and training.
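
As an illustration only, a critical incident sheet could be represented as a simple structured record. The field names in this minimal Python sketch are assumptions about what such a sheet might capture, not a format prescribed by this article.

    # Hypothetical structure for a critical incident sheet record; field names are assumptions.
    from dataclasses import dataclass

    @dataclass
    class CriticalIncident:
        date: str
        information_need: str      # what information was required
        action_taken: str          # how the learner tried to meet the need
        outcome: str               # effect on the decision, patient or service
        barriers_encountered: str  # anything that hindered use of the new skills

    incident = CriticalIncident(
        date="2012-01-15",
        information_need="Evidence on choice of wound dressing",
        action_taken="Database search using techniques from the training session",
        outcome="Dressing protocol discussed and updated at the team meeting",
        barriers_encountered="Limited time on shift to complete the search",
    )
    print(incident)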

Conclusion

Healthcare knowledge and library services expend considerable resources on training staff to carry out the information literacy skills associated with evidence-based practice. It is increasingly important to identify whether this expenditure of resources is providing the results that are desired.

It may be that while this training is successful in giving people new skills and knowledge, these skills will not be used in the workplace. Environmental and organisational barriers may prevent the good work done by libraries from having the impact it should. It is only by identifying these barriers (through comprehensive evaluation) that we can begin to change the workplace environment to make it more conducive to the implementation of information literacy skills.

If training is not producing the desired behavioural change, then a high-quality evaluation should provide the information to identify where issues may lie in the system and suggest ways to address them. It can show that training is effective in its immediate aim (to give individuals the skills and knowledge to practise new working methods) and that other aspects of the organisation may be creating barriers to implementation and skill transferral. It will also generate information that can show how these organisational barriers can be lessened.

The purpose of workplace training is not to entertain or educate staff, but to change working practice. Any evaluation should take into account the surrounding factors that affect the transfer of learning and hinder this change to working practice. If we are to truly understand the effects of information literacy training, we must expand the focus of our evaluation to incorporate aspects of the organisational environment as well as aspects of the learner and learning.

References

1 Goldstein, I. L. Training in Organizations: Needs Assessment, Development & Evaluation. Monterey, CA: Brooks, 1993.

2 Brettle, A. Information skills training: a systematic review of the literature. Health Information and Libraries Journal 2003, 20(Suppl. 1), 3–9.

3 Brettle, A. Evaluating information skills training in health libraries: a systematic review. Health Information & Libraries Journal 2007, 24(Suppl. 1), 18–37.

4 Bailey, D. Post Qualifying Mental Health Training. London: National Institute of Mental Health, 2002.

5 Alliger, G. M., Tannenbaum, S. I., Bennett, W. & Traver, H. A meta-analysis of the relations among training criteria. Personnel Psychology 1997, 50, 341–358.

6 Khan, K. S. & Awonuga, A. O. Assessments in evidence based medicine workshops: loose connection between perception of knowledge and its objective assessment. Medical Teacher 2001, 23, 92–94.

7 Michalski, G. V. & Cousins, J. B. Differences in stakeholder perceptions about training evaluation: a concept mapping investigation. Evaluation and Program Planning 2000, 23, 211–230.

8 Salas, E. & Cannon-Bowers, J. The science of training: a decade of progress. Annual Review of Psychology 2001, 52, 471–499.

9 Kraiger, K., Ford, J. K. & Salas, E. Application of cognitive, skill-based, and affective theories of learning outcomes to new methods of training evaluation. Journal of Applied Psychology 1993, 78, 311–328.

10 Kirkpatrick, D. L. Evaluation of training. In: Craig, R. L. & Bittel, L. R. (eds.) Training & Development Handbook. New York: McGraw-Hill, 1967: 87–112.

11 Kirkpatrick, D. L. Evaluating Training Programs. San Francisco: Berrett-Koehler, 1994.

12 Barr, H., Freeth, D., Hammick, M. & Reeves, S. Evaluation of Interprofessional Education: A United Kingdom Review of Health and Social Care. London: CAIPE, 2000.

13 Coomarasamy, A. What is the evidence that postgraduate teaching in evidence based medicine changes anything? A systematic review. BMJ 2004, 329, 1017–1022.

14 Cromwell, S. E. & Kolb, J. A. The effect of organisational support, management support, and peer support on transfer of training. In: Egan, T. & Lynham, S. A. (eds.) Proceedings of the 2002 Academy of Human Resource Development Annual Conference. Bowling Green, OH: The Academy of Human Resource Development, 2002: 537–544.

15 Ford, J. K. & Kraiger, K. The application of cognitive constructs and principles to the instructional systems design model of training: implications for needs assessment, design, and transfer. International Review of Industrial & Organizational Psychology 1995, 10, 1–48.

16 Rouiller, J. Z. & Goldstein, I. L. The relationship between organizational transfer climate and positive transfer of training. Human Resource Development Quarterly 1993, 4, 377–390.

17 Baldwin, T. & Ford, J. K. Transfer of training: a review and directions for future research. Personnel Psychology 1988, 41, 63–105.

18 Gumuseli, A. I. & Ergin, B. The managers’ role in enhancing the transfer of training. International Journal of Training and Development 2002, 6, 80–97.

19 Ford, J. K. & Weissbein, D. A. Transfer of training: an updated review and analysis. Performance Improvement Quarterly 1997, 10, 22–41.

20 Rogers, E. M. Diffusion of Innovations. New York: Simon & Schuster International, 2003.

21 Lewin, K. Field Theory in Social Science: Selected Theoretical Papers. New York: Harper & Row, 1951.

22 Funk, S. G., Champagne, M. T., Wiese, R. A. & Tornquist, E. M. BARRIERS: the barriers to research utilization scale. Applied Nursing Research 1991, 4, 39–45.

23 Lewin, K. Defining the field at a given time. Psychological Review 1943, 50, 292–310. Republished in: Resolving Social Conflicts & Field Theory in Social Science. Washington, DC: American Psychological Association, 1997.

24 Kirwan, B. & Ainsworth, L. (eds.) A Guide to Task Analysis. London: Taylor and Francis, 1992.

25 Jenkins, D., Salmon, P. & Walker, G. Cognitive Work Analysis: Coping with Complexity. Surrey: Ashgate Publishing, 2008.

26 Shaneyfelt, T., Baum, K. D., Bell, D., Feldstein, D., Houston, T., Kaatz, S., Whelan, C. & Green, M. Instruments for evaluating education in evidence-based practice: a systematic review. JAMA 2006, 296, 1116–1127.

27 Ramos, K. D., Schafer, S. & Tracz, S. Validation of the Fresno test of competence in evidence based medicine. BMJ 2003, 326, 319–321.

28 McCluskey, A. & Bishop, B. The adapted Fresno test of competence in evidence-based practice. Journal of Continuing Education in the Health Professions 2009, 29, 119–126.

29 Tilson, J. K. Validation of the modified Fresno test: assessing physical therapists’ evidence based practice knowledge and skills. BMC Medical Education 2010, 10, 38.

30 Fritsche, L., Greenhalgh, T., Falck-Ytter, Y., Neumayer, H. & Kunz, R. Do short courses in evidence based medicine improve knowledge and skills? Validation of Berlin questionnaire and before and after study of courses in evidence based medicine. BMJ 2002, 325, 1338–1341.

31 Davis, P. Critical incident technique: a learning intervention for organizational problem solving. Development and Learning in Organizations 2006, 20, 13–16.

For details on how to contribute to this feature please contact:

Hannah Spring
Learning and Teaching in Action Feature Editor
Senior Lecturer: Research and Evidence Based Practice Support
Faculty of Health and Life Sciences
York St John University
Lord Mayor’s Walk
York YO31 7EX
Tel: +44 (0)1904 876813
E-mail: [email protected]
