
Evidence:

How do you get clinicians involved in quality improvement? An evaluation of the Health Foundation’s Engaging with Quality Initiative – a programme of work to support clinicians to drive forward quality

August 2010

Identify Innovate Demonstrate Encourage


How do you get clinicians involved in quality improvement?
An evaluation of the Health Foundation’s Engaging with Quality Initiative – a programme of work to support clinicians to drive forward quality

Final report

Tom Ling
Bryony Soper
Martin Buxton
Stephen Hanney
Wija Oortwijn
Amanda Scoggins
Nick Steel

Foreword

The Health Foundation is an independent charity that aims to improve the quality of healthcare across the UK. We are here to inspire and create the space for people, teams, organisations and systems to make lasting improvements to health services. Working at every level of the healthcare system, we aim to develop the technical skills, leadership, capacity, knowledge, and the will for change that are essential for real and lasting improvement.

In 2004 we launched the Engaging with Quality Initiative (EwQI). It supported eight projects led by Royal Colleges to improve the quality of care in a range of conditions and diseases in acute and mental health care, including inflammatory bowel disease, chronic obstructive pulmonary disease and prescribing for serious mental illness. EwQI was inspired by evidence suggesting that clinicians are attentive to the need to improve quality but are often not sufficiently engaged in efforts to achieve this. It was designed to tap into the enthusiasm of clinical leaders operating in professional bodies and in multi-professional networks and was based on the premise that clinician-led improvement work is critical to engaging clinical communities. Most projects ran audits as a core improvement intervention, supplemented by a range of other improvement methods.

We commission independent evaluation of all our major activities in order to generate robust and convincing evidence about improvements in care and provide learning for individual programmes of work. We seek to stimulate debate about the best methods to evaluate complex continuous improvement interventions and contribute to the science of improvement. In 2005 we appointed a consortium of RAND Europe and the Health Economics Research Group at Brunel University to undertake a four-year evaluation of the initiative.

The evaluation reports that EwQI was successful in engaging clinicians and service users in effective processes of change. It also engaged policy makers and decision makers, promoted the capacity of the healthcare system to deliver improvement and contributed to the knowledge base about improving quality. Projects reported greater standardisation of professional practice, more equitable care, greater quality control and improved patient satisfaction. Improvements in clinical outcomes were reported to be real, but modest and patchy. Given the limited nature of the improvement interventions – with hindsight, the projects’ strong focus on clinical audit, coupled with limited change mechanisms, was unlikely to produce a step change in outcomes – it is very encouraging that the evaluation found improvements in structures, processes and cultures.

The evaluation concludes: ‘professionally-led QI [quality improvement] in acute care can successfully mobilise large numbers of clinicians across a wide range of organisational settings. In acute settings it also appears that this engagement has more to do with the professional identity of clinicians than with any pecuniary gain.’

The evaluation provides powerful learning about the challenges of undertaking improvement work and makes important recommendations about delivering, supporting and evaluating improvement interventions. Crucially, it demonstrates that improving quality is part of clinicians’ professional identity and that tapping into this can be a powerful motivator for change.

Dr Dale Webb
Director of Evaluation & Strategy
The Health Foundation

Contents

Foreword iii

Figures and tables vi

Executive summary viii

List of abbreviations xiii

Acknowledgements xv

CHAPTER 1 Introduction 1

1.1 The Engaging with Quality Initiative 1

1.2 Aims and objectives of the EwQI evaluation 2

1.3 The evolving policy context of the EwQI 3

1.4 The quality gap 3

1.5 Evaluation, causality and our approach 4

1.6 Methods adopted 7

1.7 Theory of change approaches and this evaluation 9

1.8 Building the ‘contribution story’ 10

CHAPTER 2 The project teams’ approaches to quality improvement 11

2.1 Contributing to improvement 11

2.2 The projects 12

2.3 Locating the EwQI theories of change 18

CHAPTER 3 What was achieved by the projects and what effort did it require? 20

3.1 Data available about achievements 20

3.2 An overview of the extent of implementation of each project 22

3.3 Measurable improvements in patient care 25

3.4 Reported changes in other outcomes 35

3.5 Effort expended by the project teams 45

3.6 Variation within the EwQI projects 54

3.7 Conclusions 56

CHAPTER 4 Engaging with the initiative 58

4.1 Introduction 58

4.2 Engaging clinicians 58

4.3 Engaging patients and their representatives 64

4.4 The involvement of NHS managers 69

4.5 Conclusions 70

CHAPTER 5 Leadership and building the capacity to deliver lasting improvement 71

5.1 Introduction 71

5.2 The leadership in question 72

5.3 The capacity in question: building a platform for change, motivating action, and sustaining improvements 72

5.4 Leading sustainable change 76

5.5 The role of the royal colleges and professional bodies 79

5.6 The role of information in delivering QI 82

5.7 Conclusions 83

CHAPTER 6 Conclusions and recommendations 84

6.1 Introduction 84

6.2 What was achieved? 85

6.3 Was it worth it? 90

6.4 Explaining success and failure: why is the return on effort greater for some QI activities than others? 91

6.5 What should be done differently in the future? 93

APPENDICES

Appendix A: Engaging with Quality Initiative projects 97

Appendix B: The projects’ logic models 100

Appendix C: The evaluation protocol 109

Appendix C: Annex 1: Data collection for external evaluation 122

Appendix D: Data collection for external evaluation 124

Appendix E: EwQI evidence base 125

Appendix F: Delphi survey analysis 129

Appendix G: Measures of clinical quality 141

REFERENCES 147

Figures and tables

Figure 1: The Engaging with Quality Initiative pyramid of impacts 90

Table 1: The Engaging with Quality Initiative (EwQI) objectives 1

Table 2: Aims of the EwQI external evaluation (with related tasks identified by the external Evaluation Team) 6

Table 3: Theories of change underlying POMH-UK quality improvement activities 14

Table 4: Sources of data from the EwQI projects about improvements in patient care 21

Table 5: Sources of data from the EwQI projects about outcomes other than patient outcomes attributed or related to the projects 21

Table 6: Overview of the extent of implementation of the EwQI projects 22

Table 7: Achievement of audit standards for patients in the Colorectal Cancer project 26

Table 8: Number of respondents taking part in project team surveys about self-harm outcomes at baseline and re-audit 27

Table 9: Self-harm outcomes 27

Table 10: POMH-UK measurable patient outcomes – topics with re-audit data 28

Table 11: NCROP measurable patient outcomes 29

Table 12: PoISE measurable patient outcomes – overall 30

Table 13: PoISE measurable patient outcomes from standard dissemination 30

Table 14: PoISE measurable patient outcomes from the opinion leader and web-based education tool intervention 31

Table 15: PoISE measurable patient outcomes from the PDSA intervention 31

Table 16: EPI-SNAP measurable outcomes 32

Table 17: IBD selected measurable outcomes 34

Table 18: Increase in the capacity and infrastructure for quality improvement in the professional bodies involved in the project 35

Table 19: Increase in the knowledge base 38

Table 20: Sustainable arrangements for improving the quality of care 39

Table 21: A transferable system of quality improvement to other areas of medicine 41

Table 22: An increase in knowledge and understanding of quality improvement in healthcare 43

Table 23: Clinicians’ opinions of the EwQI projects 43

Table 24: Guide to time commitment of service users in the Self-harm project 46

Table 25: Rough estimate of time commitment required for each local team member in the Self-harm project 47

Table 26: Rough estimate of time commitment required for each local team lead in the Self-harm project 47

Table 27: PoISE costs (estimated across the whole PDSA intervention in all five trusts, unless stated otherwise) 50

Table 28: PoISE costs (estimated across opinion leader + web intervention in all five trusts, unless stated otherwise) 51

Table 29: PoISE costs associated with standard dissemination 52

Table 30: Increase in levels of professional engagement in QI as a result of the EwQI 60

Table 31: Skills needed in the central project team 74

Table 32: Summary of the benefits from the EwQI 89

Table 33: Key questions to be answered in the EwQI self-evaluations 122

Executive summary

The Engaging with Quality Initiative

In 2004, the Health Foundation invited national professional bodies and specialist societies in the UK to bid for funds for projects under the Engaging with Quality Initiative (EwQI). The three objectives of the EwQI were to:

– engage clinicians in leading quality improvement projects that would achieve measurable improvement in clinical quality

– identify effective strategies for clinical quality improvement that could be replicated and spread across the healthcare system

– increase capacity for clinical quality measurement and improvement in the UK by developing the infrastructure and skills within professional bodies.

Eight projects run by professional bodies or specialist societies were selected:

1. National Bowel Cancer Audit Programme; lead organisations: Imperial College London, Association of Coloproctology of Great Britain and Ireland

2. The Use of Regional Collaboratives to Improve Services for People Who Have Self-harmed; lead organisation: Royal College of Psychiatrists

3. The Prescribing Observatory for Mental Health; lead organisation: Royal College of Psychiatrists

4. The National COPD Resources and Outcomes Project; lead organisation: Royal College of Physicians

5. Peri-operative Fasting Implementation Study Evaluation; lead organisation: Royal College of Nursing

6. Epilepsy and Community-acquired Pneumonia Scottish National Audit Project; lead organisations: Royal College of Physicians of Edinburgh and Royal College of Physicians and Surgeons of Glasgow

7. UK Inflammatory Bowel Disease Audit; lead organisation: Royal College of Physicians

8. Perineal Assessment Repair Longitudinal Study; lead organisation: Royal College of Midwives.

The evaluation

It was the Health Foundation’s intention that evaluation should be conducted at the same time as, and be integral to, the EwQI, and that it should operate at two levels: evaluations of the individual projects (self-evaluation) and an evaluation of the overall initiative (external evaluation). The overall aims were to determine progress against the EwQI objectives, identify and measure outcomes, assess the processes adopted, and explore the thinking behind the projects in order to identify the factors associated with success.

A consortium led by RAND Europe with the Health Economics Research Group, Brunel University, was invited by the Health Foundation to provide the external evaluation, and this is the subject of this report. The external EwQI evaluation was intended to be both formative and summative, and the Evaluation Team worked closely with the project teams to help them develop their self-evaluations, upon which the external evaluation was built. Our further tasks were to:

– analyse and synthesise data from the projects’ self-evaluations

– assess increases in clinical engagement in quality improvement

– explore the wider implications of the EwQI

– identify any related changes within the royal colleges and professional bodies

– assess the sustainability and cost consequences of the projects.

The EwQI had specific characteristics that should shape how our conclusions and recommendations are interpreted. First, the initiative was from the outset conceived as a demonstration of what could be achieved through clinician-led quality improvement (QI) activities, with active support from the royal colleges and professional bodies and with the engagement of patients and their representatives, rather than as a scientific exploration of QI. Both the Health Foundation and the project teams wanted to show the improvements that could be made. However, if the inspiration and our evaluation approach were pragmatic, our evaluation was also scientific in the sense that we were seeking systematic evidence to support or weaken causal claims. We evolved an evaluation protocol to reflect the challenge of doing ‘pragmatic science’.1

The second issue is that the EwQI was one of two programmes funded by the Health Foundation, and was focused on acute care. The second programme, reporting in 2011, concerns primary care, and it will be important to combine the findings from both of these studies for an overall account of the contribution of clinician-led QI in the NHS. Third, the model of QI being studied was about changing the behaviour of individual clinicians through the provision of information, peer review, training and other supports. Intended outcomes for patients included improved clinical outcomes, improved or more equal access to healthcare, improved patient experience, and a healthcare system that is more responsive to need.

The EwQI promoted one particular approach to QI. Other approaches include patient safety systems, accreditation schemes, a stronger role for commissioning, for management (through organisational quality management programmes), for (non clinician-led) standard setting, and so on.2 These are all part of the potential mix of instruments for raising standards and improving quality, and the analysis here does not form a judgement about how to optimise this mix.

What the evidence told us

The EwQI:

– Demonstrates that in acute care, clinician-led approaches to identifying standards, auditing against these, and developing improvement plans can successfully engage other clinicians in a process of change.

– Demonstrates that QI requires a complex mix of skills, including leadership, communication, management and a knowledge of how the activities pursued fit within the wider processes of the NHS.

– Supports other findings that clinicians have an appetite to collaborate with their peers to identify and implement improvements in healthcare delivery, but the effort required to implement successful improvements in the current UK healthcare system is considerable.

– Supports other findings that patient perspectives and user voices, when carefully integrated, can strengthen QI.

– Demonstrates that, within the timescales of this initiative, even well-founded and well-conducted QI programmes may have patchy and limited impacts on measurable health outcomes.

– Demonstrates that professional bodies can play a supportive role in providing QI activities with legitimacy and visibility, should they choose to develop this role.

– Underlines the need for further work to understand the causal chain linking QI activities to healthcare and patient outcomes, and how, given more time and greater institutional support, QI activities such as those promoted through the initiative could represent better value for money in the mix of measures designed to create a healthcare system that is safe, fair, effective and efficient.

From this, we make seven recommendations:

1. A springboard for action

Any QI project should have a springboard, consisting of a team with sufficient capacity to manage the complexity of that project. This report demonstrates the considerable extent of this requirement in terms of project and people management, user engagement, data collection and analysis, communication, trust building, and understanding of the wider NHS environment.

TARGET AUDIENCE: those planning and leading QI; NHS bodies hosting QI activities; funders.

TIMESCALE: immediate.

TASKS: develop a short ‘capability check’ that QI project teams could use to reflect on their capability for action.

2. Sparking change and mobilising resources

QI activities typically require a change from routine practice and must overcome inertia to get started. Successful projects require leadership capable of sparking enthusiasm and maintaining a momentum suitable to the scale of that inertia and to the ambition of the aims to be realised. Patient voices can be an important support in this. Large, complex projects, such as those in the EwQI, require a range of leadership skills to facilitate action and organise multi-professional, multidisciplinary collaborations, using structures carefully adapted to local circumstances.

TARGET AUDIENCE: healthcare leaders; clinical leadership educators; funders of leadership programmes; NHS Institute for Innovation and Improvement; professional bodies; funders of health service research.

TIMESCALE: medium-term development of leadership capacities in healthcare.

TASKS: build on existing literature on leadership and change to audit current skills against requisite skills; continue to use leadership support programmes as part of QI activities; continue to develop leadership courses and training.

3. Sustaining change and aligning with the direction of change in the health system

QI activities cannot easily swim against the tide of wider changes in the healthcare system. To provide sustainable benefits, QI activities should, where possible, be aligned with the mainstream allocation of resources in healthcare, supported through professional training, and through commissioning and regulation, and be integrated into the management of services. This alignment is also likely to include engagement with service users. Should all this not be possible, alternative and sustainable supports should be identified.

TARGET AUDIENCE: commissioners of care; managers in NHS bodies hosting QI activities; funders of QI; deliverers of QI.

TIMESCALE: immediate.

TASKS: QI projects should address sustainability at the outset rather than towards the end and should identify how changes in the healthcare system can be harnessed to achieve sustainable improvements.

4. Supporting QI: the role of healthcare institutions

A large-scale QI project should only be funded if the healthcare institution hosting that project has the necessary project management capacity, leadership, monitoring and evaluation skills to ensure that the project has the best chance of delivering and measuring improvements in the quality of healthcare, and of sharing positive results. However, a balance should be struck to ensure that this does not inhibit innovative approaches ‘bubbling up’ from below. Of particular importance is the support that service users, carers and their representatives can provide.

TARGET AUDIENCE: funders of QI; healthcare bodies hosting QI activities.

TIMESCALE: medium term.

TASKS: develop a ‘capability checklist’ to be used before arriving at any decisions about funding large-scale QI.

5. Supporting QI: the role of the royal colleges and professional bodies

Each royal college and professional body should consider how, if at all, it wishes to provide leadership, legitimacy, organisational support, and professional training in relation to QI.

TARGET AUDIENCE: the royal colleges and professional bodies.

TIMESCALE: medium term.

TASKS: the royal colleges and professional bodies to use their inter-institutional networks to take forward the debate about what is possible and desirable in general and to develop an internal dialogue on what is appropriate for each institution. As guardians of professional standards, they could also solicit the views and expectations of service users and the wider public.

6. Supporting QI: the role of education and training

QI should be part of the education, training and appraisal of health professionals. This not only concerns ‘heroic’ leadership but also dispersed leadership and the ability to maintain effective dialogue with managers, service users and other clinicians.

TARGET AUDIENCE: educators.

TIMESCALE: medium to long term.

TASKS: review the ongoing changes to the current curriculum and propose inclusion of knowledge about QI and skills in its delivery.


7. Strengthening learning

Professionals, funders, QI practitioners and evaluators should strengthen learning about the effectiveness and cost effectiveness of QI by developing a better and more widely shared understanding of the requirements for evaluation, and of its benefits and limitations.

TARGET AUDIENCE: clinicians; QI planners; evaluators; funders.

TIMESCALE: medium term.

List of abbreviations

ACPGBI – Association of Coloproctology of Great Britain and Ireland
BSAC – British Society for Antimicrobial Chemotherapy
BTS – British Thoracic Society
CAP – Community acquired pneumonia
CCQI – College Centre for Quality Improvement
CEEU – Clinical Effectiveness and Evaluation Unit
CLAHRC – Collaborations for Leadership in Applied Health Research and Care
CMO – Context-Mechanism-Outcome
Colorectal Cancer – National Bowel Cancer Audit Programme
COPD – Chronic obstructive pulmonary disease
COREC – Central Office for Research Ethics Committees
CQC – Care Quality Commission
DH – UK Department of Health
EDIS – Emergency Department Information Service
EPI-SNAP and SNAP-CAP – Epilepsy and Community-acquired Pneumonia Scottish National Audit Project
EwQI – Engaging with Quality Initiative
EwQPCAS – Engaging with Quality in Primary Care Award Scheme
HCC – Healthcare Commission
HERG – Health Economics Research Group, Brunel University
HQIP – Healthcare Quality Improvement Partnership
IBD – UK Inflammatory Bowel Disease Audit
IHI – Institute for Healthcare Improvement
IOM – Institute of Medicine
IT – Information Technology
ITT – Invitation to Tender
LPT – Local project team
LREC – Local Research Ethics Committees
MDT – Multidisciplinary team
MINAP – Myocardial Infarction National Audit Project
NBOCAP – National Bowel Cancer Audit Programme
NCAP – National Clinical Audit Programme
NCROP – The National COPD Resources and Outcomes Project
NHS – National Health Service
NHS QIS – NHS Quality Improvement Scotland
NICE – National Institute for Health and Clinical Excellence
NSF – National Service Framework
PARiHS – Promoting Action in Research Implementation in Health Services
PDSA – Plan Do Study Act
PEARLS – Perineal Assessment Repair Longitudinal Study
PICU – Psychiatric Intensive Care Unit
PLAN – Psychiatric liaison accreditation network
PoISE – Peri-operative Fasting Implementation Study Evaluation
POMH-UK – The Prescribing Observatory for Mental Health
QI – Quality improvement
QOF – Quality and Outcomes Framework
QQUIP – Quest for Quality and Improved Performance
RCN – Royal College of Nursing
RCP – Royal College of Physicians, London
RCPE – Royal College of Physicians of Edinburgh
RCPSG – Royal College of Physicians and Surgeons of Glasgow
RCPsych – Royal College of Psychiatrists
RCT – Randomised Controlled Trial
R&D – Research and development
RSE – Record of Significant Event
SAPG – Scottish Antimicrobial Prescribing Group
Self-harm – The use of regional collaboratives to improve services for people who have self-harmed
SER – Self-evaluation report
SIGN – Scottish Intercollegiate Guidelines Network
SPICE – Scottish Programme for Improving Clinical Effectiveness
ToC – Theory of Change
UCL – University College London
UKCRC – UK Clinical Research Collaboration

Acknowledgements

We wish to thank the eight EwQI project teams for working not only collaboratively but also with considerable good will and responsiveness throughout the initiative, and for participating in and supporting the tasks of the external evaluation.

We would also like to thank colleagues at RAND Europe who have made contributions to this report. In particular, we wish to thank Emma Disley for managing the project safely home and, along with Christopher Austin, for helping to analyse the projects’ self-evaluation reports; Greg Falconer for helping to interview people in professional bodies; and Ellen Nolte and Peter Burge for their useful and insightful comments during the quality assurance process.

We would also like to thank the EwQI team and all of those involved with the initiative at the Health Foundation for their support and interest in debating the ideas contained in this report as they evolved over the years. Finally, Jocelyn Cornwell and Diana Jakubowska ran the Support Programme with vision and efficiency, and also engaged constructively and collaboratively with us throughout the evaluation.

Any remaining errors, despite their best efforts, are ours alone.

Tom Ling
Bryony Soper
Martin Buxton
Stephen Hanney
Wija Oortwijn
Amanda Scoggins
Nick Steel


Chapter 1

Introduction

In this chapter we first describe the aims and objectives of the Engaging with Quality Initiative (EwQI) and its evaluation. We then summarise the policy context into which the EwQI was launched and within which it was implemented, and discuss the problem that the initiative was seeking to address. We then turn to the external evaluation of the EwQI, describing the thinking that shaped our approach and outlining the methods we adopted. We finish with a description of the principles that underpinned our evaluation.

1.1 The Engaging with Quality Initiative

In 2004, the Health Foundation invited national professional bodies and specialist societies in the UK to bid for funds for projects to engage clinicians in making measurable and sustainable improvements in the quality of clinical care under the EwQI. The three objectives of the EwQI are given in Table 1.

Table 1: The EwQI objectives

– To engage clinicians in leading quality improvement projects that will achieve measurable improvement in clinical quality

– To identify effective strategies for clinical quality improvement that can be replicated and spread across the healthcare system

– To increase capacity for clinical quality measurement and improvement in the UK by developing the infrastructure and skills within professional bodies

Eight projects, run by professional bodies or specialist societies, were commissioned in various areas of acute care and at the interface between acute and primary care. These projects are listed below in the order in which they completed and with the lead organisation identified in each case. For brevity their shorter names (in parentheses) will be used throughout the rest of this report.

1. National Bowel Cancer Audit Programme (Colorectal Cancer); lead organisations: Imperial College London, Association of Coloproctology of Great Britain and Ireland

2. The Use of Regional Collaboratives to Improve Services for People Who Have Self-harmed (Self-harm); lead organisation: Royal College of Psychiatrists

3. The Prescribing Observatory for Mental Health (POMH-UK); lead organisation: Royal College of Psychiatrists

4. The National COPD Resources and Outcomes Project (NCROP); lead organisation: Royal College of Physicians

5. Peri-operative Fasting Implementation Study Evaluation (PoISE); lead organisation: Royal College of Nursing

6. Epilepsy and Community-acquired Pneumonia Scottish National Audit Project (EPI-SNAP and SNAP-CAP); lead organisations: Royal College of Physicians of Edinburgh and Royal College of Physicians and Surgeons of Glasgow

7. UK Inflammatory Bowel Disease Audit (IBD); lead organisation: Royal College of Physicians

8. Perineal Assessment Repair Longitudinal Study (PEARLS); lead organisation: Royal College of Midwives.

An overview of the EwQI projects is provided inappendix A, and each project is described in moredetail in chapter 2.

In total, the Health Foundation provided £4.6 million for the EwQI. In addition to the funds allocated to the projects, this figure includes funding for three external teams which were commissioned to support the project teams during the initiative. They were: an EwQI Support Team, whose brief was to help the project teams learn from each other and learn about quality improvement methods from independent experts; a team of leadership consultants to work with the project teams on team development and leadership skills; and a team from RAND Europe and the Health Economics Research Group (HERG) at Brunel University to undertake the external evaluation of the initiative as a whole. The project teams were expected to cooperate with all three external teams as they developed and implemented their projects. Underpinning this approach was the notion that the EwQI should be developmental in nature, and that project protocols should not be fixed and irrevocable from the start but should develop as each project was implemented through an iterative process of reflection and redesign.

This report describes the external evaluation of the EwQI, which includes an evaluation of the contribution of the Support Team but not of the leadership consultants.

1.2 Aims and objectives of the EwQI evaluation

It was the Health Foundation’s intention that the evaluation should be conducted at the same time as, and be integral to, the EwQI, and operate at two levels:
– evaluations of the individual projects (self-evaluation)
– evaluation of the overall initiative (external evaluation).

At project level, the aims of the evaluation were to:
– assess the extent to which individual projects achieve measurable improvements in patient care and identify the range of factors associated with success.

At initiative level, the aims were to:
– work with award holders on the development and implementation of their evaluation plans
– synthesise the data and findings from the project-level evaluations
– measure increases in professional engagement in clinical quality improvement
– measure the effectiveness of the award scheme (during its life) in leveraging external commitment to clinical leadership of quality improvement
– evaluate the increase in competency and infrastructure for quality improvement in the professional bodies involved
– assess the policy influence and cost consequences of the initiative.

The external evaluation and the project self-evaluations were both expected to determine progress against the EwQI objectives, identifying and measuring outcomes, assessing the processes adopted, and exploring the thinking behind the projects in order to identify the factors associated with success. But there was a difference in focus: the external evaluation was expected to address all three EwQI objectives, whereas the project self-evaluations were expected to concentrate mainly on the extent to which individual projects had achieved measurable improvements in patient care.

Our approach to the external evaluation was, therefore, shaped by three key factors: the developmental approach adopted by the Health Foundation; the need to work closely with the project teams on their emergent project designs in an iterative exchange that reflected their growing understanding of the EwQI and our growing understanding of their aims and environments; and the need to retain objectivity as we assessed the EwQI as a whole.

In the rest of this chapter we describe the context in which the EwQI developed and the problems it was seeking to address. We then outline our approach, the reasoning that underpinned that approach and the methods we adopted.

1.3 The evolving policy context of the EwQI

The EwQI was launched in April 2005 in an environment in which UK government policy explicitly acknowledged both variability in the quality of healthcare and the role of professionals in leading improvement3,4,5. Since then this policy and regulatory context has evolved, with an increasing emphasis on patient choice and the quality of healthcare.

In England the thrust of change established in the NHS Plan in 2000 and reiterated in 2004 has largely been continued subsequently6. The intentions (if not always the delivery) of these reforms were to give patients and users a stronger voice in choosing care, to strengthen effective commissioning to provide incentives to improve services, and to encourage a diversity of providers with more freedom to innovate. Among other things, this led to an expectation that NHS trusts would ensure that they audit their clinical performance. In Scotland (where one of the projects operated) the context differed in terms of scale, structure and culture. In particular, the use of incentives as a lever for change was less apparent and there was a more overtly whole-government approach to delivering improvement7.

In November 2006, the then Secretary of State for Health in England, Patricia Hewitt, wrote:

In all public services, we are making a radical shift from top-down, target-driven performance management to a more bottom-up, self-improving system built around the individual needs of service users and influenced by effective engagement with the public. Increasingly, improvement will be driven by the choices made by service users and healthy competition between different service providers. The NHS and adult social care services are no exception8.

Since then there have been further developments. In January 2008, the Department of Health announced new arrangements for clinical audit, with the management of the National Clinical Audit Programme (NCAP) awarded to a consortium involving the Royal College of Nursing (RCN), the Academy of Medical Royal Colleges and the Long Term Conditions Alliance. Simultaneously, wider reforms continued in the NHS in England. Those particularly relevant to the EwQI included: the NHS Next Stage Review, which was published in June 2008 and aimed to put quality at the heart of the NHS, empowering staff and giving patients choice6; a heightened concern with patient safety, manifested in a continued emphasis on clinical governance; changes to medical training9 and to healthcare commissioning10; re-organisation of primary care trusts; and an expanded role for foundation trusts.

The global economic downturn will, inevitably, impact on the NHS. It will become increasingly important to consider not only the efficacy and effectiveness of initiatives aiming to improve clinical care in the NHS but also their cost effectiveness.

1.4 The quality gap

Healthcare in high-income countries is in many ways a story of improving effectiveness. Based on an analysis of US data, Bunker estimated that life expectancy in the US had increased by some eight years in the past 50 years and that around half of this increase could be attributed to healthcare11. However, healthcare in high-income countries is also characterised by substantial gaps between recommended care and the actual care received. McGlynn and colleagues have produced compelling evidence that, in the US, care received matches recommended care on only some 55% of occasions12. A systematic review of quality of clinical care in general practice in Australia, New Zealand and the UK found that, even in the best-performing practices, only 49% of patients with diabetes had undergone routine foot examinations, and only 47% of eligible patients had been prescribed beta blockers after a heart attack13. In the UK, Steel and colleagues confirmed this finding and suggested that the situation is especially poor for those over fifty and in areas associated with disability and frailty14, with consequences for the health outcomes and quality of life of patients.

There is a link between health outcomes, quality of life and quality improvement. In 2001, the US Institute of Medicine (IOM) published Crossing the quality chasm15. In this seminal document the IOM stated that quality in healthcare concerns the extent to which the healthcare system is:
– safe
– effective
– patient-centred
– timely
– efficient
– equitable.

The IOM definition of quality is:

The degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge16.

Quality improvement (QI) and related activities such as performance measurement and audit seek to consolidate performance around what is already known about best practice. By 2005 there was evidence of a very wide range of quality improvement initiatives with wide variation in terms of their impact and success17. Research conducted jointly by RAND, University College London and the Harvard Medical School suggested that there are important organisational and cultural foundations to sustaining quality improvement in healthcare, and that these are varied and complex18. The literature at that time also suggested that there was a very wide range of organisational settings within which a clinician-led microsystem of quality improvement might thrive, and indicated how professional bodies might actively contribute19. The evaluation of the NHS R&D Implementation Methods Programme by HERG explored many of these ideas20, and noted that researchers in this field were increasingly moving from studying single interventions aimed at individual clinicians to looking at broader change strategies that paid more attention to structure, processes and culture21.

A review of the literature on the effectiveness and efficiency of different activities intended to improve clinical quality (such as guideline dissemination and implementation strategies) was undertaken by Grimshaw and colleagues in 200422,23. While the quality of many of the studies identified was poor and the review acknowledged many unknowns, it was clear about the potential benefits to be gained from engaging clinicians in quality improvement and about the difficulties in delivering and evaluating this. The immediate inspiration for the EwQI came from work by Leatherman and Sutherland24, who concluded that clinicians in the UK are attentive to the need to improve quality, but are not fully engaged. The Health Foundation’s decision to invest in projects run by professional bodies or specialist societies reflected Leatherman and Sutherland’s findings that clinicians listen and learn best from their peers, and that these bodies have a legitimacy and authority that command clinicians’ respect.

The propositions underpinning the EwQI were therefore: that QI initiatives are expected to improve clinical and therefore patient outcomes by engaging clinicians in QI activities; that this engagement can be facilitated by leadership from the royal colleges and professional bodies; and that the experience of doing so should influence policy and practice. This report explores what light our evaluation throws on these propositions.

1.5 Evaluation, causality and our approach

An evaluation aims to understand what difference a service, regulation or other activity makes, at what cost, and who bears the costs and receives the benefits. It is therefore concerned with the contribution made to achieving desirable outcomes and minimising undesirable costs and consequences. The realistic evaluation approach is that both mechanism and context need to be understood in order to interpret outcomes25. This requires consideration of attribution, contribution and causality, often in the context of complex interventions that may evolve over time. The methodological debate about how this can best be achieved is complex.


The randomised controlled trial (RCT) is the standard approach for evaluating medical interventions and is generally acknowledged as the best way to get to the ‘truth’ about effective care. This approach is exemplified in many of the papers reviewed by Grimshaw and colleagues, whose review used the methods proposed by the Cochrane Effective Practice and Organisation of Care Group22. In medicine, the development of RCTs has allowed evidence to become more important than belief, but RCTs have limitations, particularly for evaluating complex social changes such as healthcare quality improvement initiatives25,26. RCTs can be inconclusive about the benefits of a complex intervention due to problems in trial implementation and a methodologically deliberate lack of information about the context of the intervention being studied. This means that RCTs may fail to show benefits where they in fact exist26. Before a quality improvement initiative can be generalised to other settings, we need to know why the initiative works, as well as whether it works. The debate is about epistemology, about what type of evidence should be sought. Those suggesting alternative approaches argue that there should be a strong relationship between what is studied and how it is studied; and in the context of quality improvement Berwick talks about pragmatic science, by which he means methods of observation and reflection that are systematic, theoretically grounded, often quantitative, and powerful, but are not RCTs27,28.

Our evaluation included many ‘why’ questions inviting causal explanations. In the following chapter we will outline the ‘contribution stories’ of the EwQI projects, which imply that certain beneficial effects will follow from the project teams’ activities. We aimed to understand under what circumstances, if any, these propositions are likely to hold true. But our approach to this evaluation was, and had to be, pragmatic in the sense defined by Berwick. The eight projects were commissioned as separate studies with varying approaches to study design and to self-evaluation. Half the individual projects involved some form of controlled or quasi-experimental design, but the overall design of the EwQI meant that such an approach was not available to us in the external evaluation. There were also differences in timing – project start dates ran from April 2005 to November of that year – and in duration, which initially ranged from three to four years. In addition, there was heterogeneity within each project – all the project teams planned to recruit large cohorts of participants from different sites across the NHS to implement their selected improvement interventions.

We also had to take account of the fact that project teams learned, adapted and evolved their activities as the projects were implemented. We were sensitive to the warning of Bokhoven and colleagues:

We know that most interventions are, in practice, heterogeneous and self-limiting and that long-term beneficial interventions require multifaceted and evolving strategies. This requires non-linear, complex and emergent evaluation strategies. Since most evaluations don’t do this, most evaluation information is weak and fails to convincingly deal with attribution or accountability29.

We have therefore pursued a ‘non-linear, complex and emergent evaluation strategy’, but have done so within a systematic framework shaped by the brief given to us and the project teams by the Health Foundation. The need to explore change at many levels and in many contexts, and to investigate the values, knowledge and roles of all those involved, shaped our methodological approach. The brief for the evaluation was not only to establish ‘what worked’ but also to understand why it worked (or failed to work), ie: what worked, in what contexts and for whom. We concluded that to capture information about why the projects were working (or not) the external evaluation had to be methodologically pluralistic; we therefore adopted an approach based on logic modelling within a framework informed by realist evaluation.

Realistic evaluation aims to establish clear and measurable relationships between a project and its outcome. It assumes that there is an underlying theory of change behind the project explaining how it brought about the measured change. It is also sensitive to the context in which the project is delivered, identifying a series of Context-Mechanism-Outcomes (CMOs) for each intervention. One difficulty with this approach is that any intervention can have a large number of CMOs30. We planned to use the professional, tacit and formal knowledge of the EwQI project teams to narrow this number, working with them to develop illustrative logic models for each project and to identify those aspects of their projects that they regarded as important in achieving improvement in clinical care. Within this framework, we took the six aims of the external evaluation and identified a series of tasks under each aim (table 2). There was some overlap between the six aims and this was reflected in links between the component tasks.


Table 2: Aims of the EwQI external evaluation (with related tasks identified by the external Evaluation Team)28

Aim 1: To work with award holders on the development and implementation of their evaluation plans
Tasks
– Work with the project teams to support their self-evaluations, including data identification and validation.
– Assess the experiences of the users as ‘active partners’ in the projects.
– Consider how the counterfactual for each project can be addressed to assess how much change was attributable to the project, and how much to secular activity.

Aim 2: To synthesise the data and findings from project-level evaluations
Task
– Synthesise the data and findings from project-level evaluations.

Aim 3: To assess increases in clinical engagement in quality improvement
Tasks
– Gauge current clinical engagement through an examination of documentary evidence from the projects.
– Assess the change achieved by supporting each project in designing, implementing and analysing a survey of relevant clinicians.
– Conduct a web-based Delphi survey of clinicians participating in the EwQI.

Aim 4: To measure the effectiveness of the award scheme (during its life) in leveraging external commitment to clinical leadership of quality improvement
Task
– Support a workshop on leveraging external commitment, identifying barriers, facilitators, processes and outcomes.

Aim 5: To evaluate the increase in competency and infrastructure for quality improvement in the professional bodies involved in the EwQI
Tasks
– Carry out in-depth interviews with each relevant professional body.
– Look at what the professional bodies involved in the EwQI have done.

Aim 6: To assess the policy influence and cost consequences of the initiative
Tasks
– Influence of the EwQI: evaluate the projects’ legacy plans.
– Cost consequences: work with the projects to explore what data they can provide to estimate costs.


1.6 Methods adopted

Aim 1: Supporting the EwQI project teams and assessing service user involvement

The first task was to support the project teams’ self-evaluations, and this continued throughout the initiative. What we were trying to do was three-fold: ensure that the project teams understood what was required from the EwQI evaluations at both project and initiative level; develop our own understanding of the projects; and ensure that the data collected by the project teams supported both levels of evaluation.

At the start we used logic models, working with the project teams to track the unfolding aims, activities, outputs and outcomes of each project, and identify the assumptions on which project design had been based. The complete set of these initial logic models is given in appendix B. The logic models, however, enjoyed mixed success with the project teams. Subsequently, and in order to promote the project teams’ understanding of the requirements of the self-evaluations and also to obtain data from them in a common format, we worked with the Health Foundation to develop a project ‘self-evaluation report’ (SER). This required the project teams to address nine key questions, and we asked them to use the SER as a form of project diary, updating it regularly throughout the project (see table 33 in appendix C). These documents became the foundation of our interactions with the project teams and formed the basis of yearly formal discussions between the Evaluation Team and each team. They achieved the same ends that we initially hoped to pursue through a systematic use of logic models. We also had other, more informal contact with the project teams at all the initiative-wide events organised by the Support Team (at which we ran occasional sessions on issues such as cost consequences) and visited teams to provide further support. This was a deep immersion, providing us with both formal and tacit knowledge.

To explore the experiences of EwQI service users, we built on the understanding gained through task 1 and conducted semi-structured interviews with eight service users (one from each project) to explore their role in the projects. These were supplemented by four interviews with project managers to get their views on service users’ involvement in the EwQI (see chapter 4 for more details).

Aim 2: Analysing and synthesising the data from the projects’ self-evaluations

Our initial agreement with the Health Foundation was that our evaluation would be based on data collected by the project teams, and that we would not replicate these collections. An important part of our interaction with the project teams was therefore to ensure that we understood their approaches to data collection, validation and analysis. We also needed to identify any significant changes to these approaches as the projects were implemented. For example, one project team moved from a double-audit cycle (baseline audit, improvement intervention, re-audit) to continuous data collection. In addition, we encouraged the project teams to address any significant gaps in the data that they were proposing to collect (for details, see aim 3 below). This detailed work enabled us to proceed on the basis that the final SERs received from the project teams were accurate and provided an honest account of the projects.

To enable us to analyse and, where possible, synthesise the data from the final SERs, these documents were imported into the software NVivo, where they were coded and analysed. The starting point for this analysis was two-fold. First, the Evaluation Team suggested a number of ideas and themes that might structure the analysis, based on their prior experience of the projects. Second, we initially and deliberately used experienced qualitative researchers who had had no previous experience of the initiative, and they took a ‘grounded approach’. This meant that rather than having a list of analytical categories in advance, these were allowed to ‘emerge’ from the data, thus ensuring that the ideas and thoughts of the clinicians operating the projects guided the identification of themes for analysis. A final list of categories was identified that included both the ‘grounded’ categories and those suggested by the Evaluation Team, and the SERs were then read and coded a final time to ensure consistency. This analysis attempted two tasks: to draw out lessons and themes that could be generalised beyond the specific QI projects, and to provide detailed analysis of individual projects. The analysis also sought to outline the theories of change behind each of the projects: why did the project teams think that their selected QI activities would lead to better outcomes for patients?

The categories identified in this SER analysis shaped the way we have reported in the following chapters on the project teams’ achievements, the efforts that they made and the capacities available to them. The final SERs (with associated RSEs) formed the project teams’ final reports to the Health Foundation. Many project teams also submitted other material, including audit reports with quantitative data on patient outcomes. These supplemented the understanding we gained from the SERs. Where appropriate, we performed statistical tests on key findings to allow for formal statistical comparison of achievements in relation to patient care across each project. (Details of all the sources of data from the projects are given in appendix E.)

Aim 3: Assessing increases in clinical engagement

Another requirement of the EwQI evaluation was to measure increases in clinical engagement in QI. Originally only half the project teams planned to undertake surveys of, or interview, participating clinicians, and these exercises tended to focus on clinicians’ confidence in managing a particular clinical condition or on their attitudes to audit. We encouraged three of the remaining projects to undertake a survey of participating clinicians and, with less success, asked all the project teams to widen the scope of their surveys to include attitudes to, and understanding of, QI. We suggested that the project teams ask participating teams to maintain project diaries to help to identify local contextual issues affecting QI activities, although this advice was systematically followed in only one project (NCROP). Towards the end of the initiative we also undertook our own Delphi survey of clinicians who had participated in the EwQI in order to explore their attitudes to quality improvement more generally. (Details of the Delphi study are provided in appendix F.)

Aim 4: Exploring the wider implications of the EwQI

A roundtable discussion of the broader implications of the emerging findings from the EwQI evaluation was held in September 2009; it was attended by 11 senior NHS staff, policymakers and commentators, Health Foundation staff and members of the Evaluation Team.

Aim 5: Identifying changes in the capacities of professional bodies

To explore change in the capacities of professional bodies, we drew on three sources of data: the project teams’ final SERs, which included reports on this issue; a series of eight semi-structured interviews which the Evaluation Team undertook with key individuals (mainly quality/standards leads) in each relevant professional body; and related work undertaken during the initiative by the EwQI Support Team (described in chapter 5).

Aim 6: Assessing the sustainability and cost consequences of the projects

Part of our work under aim 1 was to support the project teams in developing legacy plans, and the EwQI Support Team also encouraged the project teams to think about sustainability at an early stage. Therefore, our main data source on the sustainability and spread of the projects was the project teams’ final SERs. We also worked, though with less success, with the EwQI Support Team to encourage the EwQI teams to identify the cost consequences of their projects (see chapter 3).

The various sources of our data are listed in appendix D. The sources of the data we received from the project teams, including the SERs, are listed in appendix E.

In summary, ‘non-linear, complex and emergent’ evaluations involve a number of data collecting and analytical activities leading to an exercise of judgement31. Our approach has been to develop arguments that aim to reduce uncertainty surrounding the ‘contribution stories’ of the EwQI projects, rather than aiming for certainty about what works and in what contexts. However, as will become apparent later in this report, our approach continued to be informed by the realist interest in understanding how contexts and mechanisms interact to produce outcomes, grounded in a specific theory of change.

1.7 Theory of change approaches and this evaluation

Our approach took as its starting point the argument of Weiss that:

The concept of grounding evaluation in theories of change takes for granted that social programs are based on explicit or implicit theories about how and why the program will work … The evaluation should surface those theories and lay them out in as fine detail as possible, identifying all the assumptions and sub-assumptions built into the program. The evaluators then construct methods for data collection and analysis to track the unfolding assumptions. The aim is to examine the extent to which program theories hold … the evaluation should show which of the assumptions underlying the program are best supported by the evidence32.

In this sense, theory of change is an approach rather than a methodology, and its successful delivery requires harnessing a range of methodologies such as those adopted by the EwQI projects. The importance of theories in healthcare and research has long been attested33, and there is growing appreciation of the use of theories when developing and implementing improvement interventions and for understanding the underlying processes34,35,36.

Our theory of change approach in this evaluation followed five principles. Individually these principles are, in our view, neither controversial nor radical, but taken together they provide a pragmatic base for conducting complex evaluations.

1. The approach required us not only to look atthe outcomes of the programme but to payequal attention to processes. This contrasts withmore classical evaluation approaches whichtend to look at outcomes first and then to lookfor evidence to support attribution. Asmentioned above, we spent considerable effortencouraging the project teams to make theiractivities explicit, to report on them, and to

identify their intended outcomes. 2. The approach required a more ‘embedded’

evaluator working closely with the projectteams (and also with policy makers and endusers) to understand and elaborate a sometimeschanging theory of change. Without losing ourindependence, we sought to understand theworld of the project teams, practitioners andservice users, including what motivates theirbehaviour. As described above, this was donemost formally through our regular meetingswith the projects, focusing on the SER andassociated documents, but we also participatedin all the initiative-wide events organised by theSupport Team and the Health Foundation.

3. The approach required an ability to reconstructand represent the sequence of events as theprojects were implemented and to explore howthese contributed to the outcomes identified,identifying statistical co-variations and, wherepossible, the causal mechanisms at work. At thestart we used logic models, later replaced by theSERs.

4. The approach was sensitive to the possibilitythat during the life of a programme orintervention, initial theories of change maychange in response to learning or exogenousevents, and that the evaluation should capturethese changing understandings and actions.

5. The approach was also sensitive to the fact that different and potentially conflicting theories of change might be simultaneously pursued within any one project.

Collectively, these five principles describe an interest not only in causal effects (what happens when an independent variable changes) but also in causal mechanisms (what connects causes to their effects); not only what project teams and practitioners say they do, but also what the evidence shows they do; and not only what contribution stories practitioners tell themselves and others, but also what really contributes to patient benefit or healthcare improvement.



1.8 Building the 'contribution story'

In putting these rather abstract arguments into practice we followed what Mayne calls the 'contribution story'37 in order to understand why project teams and participating clinicians, managers and service users believed that their use of resources (money, authority, expertise, time and so on) would contribute to the intended health system and patient benefits, and why side effects and unintended outcomes would be manageable. We then checked to see how our data supported or weakened these stories. Pragmatically, we agree with Mayne that in 'most cases what we are doing is measuring with the aim of reducing uncertainty about the contribution made, not proving the contribution made'38. In practice, we needed tools to develop and understand the contribution story and make sense of the (sometimes varying) claims made. These tools comprised the logic model to encourage a formal focus on cause and effect, the SER to develop narratives of change, and the face-to-face meetings with the project teams to explore the more informal aspects of these narratives. These were supported by the other interactions listed above. Clearly more resources, wider interviews, and more non-participant observations would have strengthened our understanding, but we are entirely confident that we have a strong understanding of the project teams' contribution stories.

As discussed above, our initial approach was to collaborate with each project to develop a formal logic model (see appendix B), and our expectation was that we could track the evolving understanding of the project team, as the project teams regularly updated these models. This, we hoped, would then provide the basis for the theory of change for each project. However, while we had initial acceptance of the logic models produced for each project by the Evaluation Team (and their engagement in modifying these), from early on there was some unhappiness about this approach. In short, the categories most relevant to evaluators (for example, inputs, processes, outputs and outcomes) often did not resonate with the experiences of clinicians, patients and managers delivering these projects. Consequently, there was always a sense that the contribution stories, which they were happy to tell, were being shoe-horned into a form that made less sense to them. At this stage we therefore moved away from using the logic models and depended more on the SERs. These provided more opportunity for the projects to describe (in their own words) what they were doing and discovering, and what they hoped to change, why, and how.

This did not mean leaving behind realist evaluation as an organising principle, but it did mean that our hoped-for, crisp hypotheses linking the sequential stages of the logic models were replaced by an understanding of what the projects were seeking to do. Reducing uncertainty around the likelihood of success became more important. To repeat, we were interested in testing these narratives against independent evidence that either supported or weakened the contribution stories. We believe that this approach has yielded considerable insights, but it has also made clearer the considerable task that remains in teasing out the effects and impacts of QI activities. This task will involve a range of methodologies, including the ones deployed here, but more besides. We have therefore remained aware of the need to be sensitive to context, reflecting the realistic evaluation mantra that 'mechanism + context = outcomes'25. The importance of context encourages caution before believing that success achieved in one place can automatically be replicated elsewhere. We turn now to the project teams' approaches to QI.



2.1 Contributing to improvement

To understand what the EwQI projects could tell us about the wider implications of doing, funding and analysing QI, we needed a clear sense of what the projects were seeking to do, of the contexts in which they were operating and of the outcomes they achieved. But in obtaining this it was important to avoid, on the one hand, the implication that because they were all 'branded' as QI they were all essentially similar; and, on the other hand, the risk of becoming so immersed in the detail of each project that it became hard to compare and contrast them with each other, with QI activities outside the initiative, and with other ways of delivering patient benefit and system improvement (such as payments or regulation).

We aimed to do this by developing 'thick' descriptions of what each project was trying to do and what they thought would bring about the hoped-for improvements – each project's theories of change. Essentially this is the implicit or explicit 'story' describing how the project teams connected what they were doing to their intended outcomes. In later chapters, we will examine the evidence that they and we have produced to support or weaken these theories. In this chapter, we describe the projects and explore what it was they believed they were contributing.

As described in chapter 1, we co-constructed these stories with the project teams using logic models initially, and later the teams' self-evaluation reports (SERs). We were interested in what the teams chose to focus on, how they described their activities, the evidence they used of outputs and outcomes, how they hoped to achieve lasting benefits, and so on. It is important to note that the projects did not start with a clearly articulated account of a theory of change involving clear measurable goals, an evidence-based explanation about how these might be met, and tools to assess progress towards these goals. Patchy clarity about measurable goals, strategies to meet them, and tools to monitor progress has been associated with failed QI activities in industry as well as in healthcare39, and it was part of the deliberately 'emergent' approach of the EwQI that these theories of change should be clarified and developed in discussion with the Evaluation Team. This process of reviewing plans periodically with an external team is one of a number of features distinguishing the EwQI projects from other QI activities. Hence the SERs evolved over time as the teams' understanding of the EwQI and its objectives developed, and as the wider context changed. The SERs provided evidence supporting summative judgements but also helped to inform the development of each project. Given this evolution of thinking on the part of the projects, in this chapter we use their final SERs to identify their mature theories of change, using the analytic approach described in chapter 1.

The rationale for our overall approach can be found in our 'Evaluation Protocol' in appendix C and in Soper and colleagues28. In what follows, we outline the projects in turn, drawing out their explicit and implied theories of change and noting any significant modifications adopted during implementation.


Chapter 2

The project teams' approaches to quality improvement


2.2 The projects

Eight projects were funded through the EwQI. An overview of the projects, including their duration and funding, is provided in appendix A.

National Bowel Cancer Audit Project (Colorectal Cancer)

Lead organisations: Imperial College London and Association of Coloproctology of Great Britain and Ireland

Partners: Bowel Cancer Campaign

This project measured six aspects of performance in the management of patients with colorectal cancer and compared actual practice to National Institute for Health and Clinical Excellence (NICE) guidelines. The project was based on a self-reported voluntary audit (the National Bowel Cancer Audit Project, established in 2001), and therefore built on existing work which collected audit information from participating units and produced an annual report that allowed those units to see how they had performed relative to others.

The key method of change for this project is probably the provision of information, for the individual consultants and trusts, for the professional bodies and also for government bodies40.

The theory of change implicit in this is that when local surgeons and units are made aware of how they perform relative to others, they will realise the potential for improvement and thus be able to plan actions to make improvements.

It was felt that increasing awareness of the standards required among surgeons would also help to improve the patient outcomes.

This did happen in some participating units.

Units are taking note of where they are failing to meet current NICE guidelines for quality, as highlighted by the NBOCAP [National Bowel Cancer Audit Programme] report. More importantly, the units that had recognised this failing then performed a thorough analysis of their results, verifying the accuracy of NBOCAP data and identifying how they could change processes within their trust to improve results.

But other than the audit and the annual feedback through the published audit report, no improvement intervention was offered. However, if we assume that clinicians and units have the motivation and capacity to change (a significant assumption), then the provision of information could in itself be an effective lever. The major focus of the project team was on the processes of audit and data collection, with much emphasis on producing high quality data. Changes to these processes during the project included:

– a growing understanding among the project team of the causal connections between the quality of care and the structures and processes within trusts (a key finding from their survey of participating trusts)

– a growing awareness of the need not just for prompt feedback of high quality data on performance but also for trusts to subsequently develop and implement action plans to address areas of underperformance

– adoption of online data submission

– consideration of a move towards open reporting (in part, in response to external pressure from, among others, the Healthcare Commission [now Care Quality Commission] and patient organisations)

– the adoption of a simplified 'essential data set' to reduce the burden of data collection on consultants, improve the quality of data collected and reduce difficulties of cleaning/merging data by the central team.

Improving the quality of care for people who self-harm (Self-harm)

Lead organisation: Royal College of Psychiatrists

Partners: Intercollegiate Faculty of Accident and Emergency Medicine; Mind; Royal College of Nursing; and Royal College of Psychiatrists, Faculty of Liaison Psychiatry

This project aimed to assess the provision of care in ambulance and in acute and general mental health services against NICE standards. It was based on a double-audit cycle, with a baseline measurement of current care, the introduction of targeted improvement interventions supported by peer review, and a follow-up audit to assess whether change had occurred. Service users played a key role throughout. The inspiration for the



project came from published NICE guidelines (2004) which 'concluded that improving staff knowledge and attitudes is the key to better services and reduction in the substantial morbidity and mortality associated with self-harm'. Participating trusts chose from a range of improvement interventions developed by the central project team and designed to improve staff understanding of self-harm. These interventions included:

– educational material for staff, such as slide sets, information leaflets, online training exercises, a good practice checklist and assessment tools

– information for service users, such as poster displays, helpline numbers, a booklet of local support groups/voluntary organisations and a list of alternatives to self-harm.

The rationale behind this team's approach was explicit, and it:

– built on established, evidence-based guidance

– brought staff and service users together to seek improvement

– shared best practice information locally and nationally to avoid teams re-inventing the wheel

– used peer review to enable local teams to learn from witnessing good practice first hand: 'Teams are more likely to be open about shortcomings when talking to peers.'

In the long run this project was meant to become self-supporting through subscriptions from participating trusts, although in the first year participation was free. The main problem the Self-harm project encountered was recruiting participants once a charge was introduced. Discussing this, the team commented: 'The main interest and energy came from mental health staff, but if they could not get their acute colleagues on board, they could not sign up. This may have been a factor, as may have been the slightly narrow focus on self-harm.'

The Prescribing Observatory for Mental Health (POMH-UK)

Lead organisation: Royal College of Psychiatrists

Partners: British Association for Psychopharmacology; College of Mental Health Pharmacists and UK Psychiatric Pharmacists Group; Rethink; Royal College of Nursing; and the Royal Pharmaceutical Society of Great Britain

This project set up a prescribing observatory to improve pharmacotherapy in specialist mental health services across the UK. It is based on a double-audit cycle, with a baseline measurement of current care, the introduction of targeted improvement interventions, and a follow-up audit to assess whether change has occurred. Participating mental health services pay a subscription for the service and work with service users to select topics. Seven topics in which prescribing practice has been compared with clinical guidelines have been covered to date. The selected topics were:

Topic 1: Prescribing of high dose and combined antipsychotics in acute adult inpatient settings

Topic 2: Monitoring the physical health of community patients receiving antipsychotics

Topic 3: Prescribing of high dose and combined antipsychotics for patients on forensic wards

Topic 4: Benchmarking prescribing of anti-dementia drugs

Topic 5a: Benchmarking the prescribing of high dose and combination antipsychotics on adult acute and PICU wards (time-series benchmarking)

Topic 5b: Continued benchmarking as topic 5a, using time-series charts

Topic 5c: Continued benchmarking as topic 5b, using time-series charts

Topic 6: Assessment of side effects of depot antipsychotics

Topic 7: Monitoring of patients prescribed lithium.

The starting point for this project was that known prescribing practices deviate from evidenced best practice. To get actual practice closer to best practice, POMH-UK uses a range of intervention tools, including rapid feedback of audit data to lead clinicians, educational activities and materials, and the encouragement of local champions in participating trusts.



The provision of benchmarked data was described by the project team as both an intervention and a method of data collection:

POMH-UK taps into a strong desire by MDTs [multidisciplinary teams] to improve practice and supports their wish to meet the clinically credible and realistic standards against which we audit. For any change process, awareness of the issue and one's own practice in context is an essential first step, and this can be achieved with the benchmarked data reports.

Speed of feedback is also important, as are access to feedback and the ability to customise it.

Within the interventions, there are slightly differing theories about how information is assimilated and acted upon. These are set out in table 3.

The National COPD Resources and Outcomes Project (NCROP)

Lead organisation: Royal College of Physicians

Partners: British Thoracic Society (BTS) and British Lung Foundation

This project aimed to compare four key services for patients with chronic obstructive pulmonary disease (COPD) in acute hospitals with BTS and NICE guidelines. It was a quasi-experimental study based on a double-audit cycle, with a baseline measurement of current practice, the introduction of targeted improvement interventions supported by peer review, and a follow-up audit to assess whether change had occurred. One hundred hospitals were paired according to whether they had more or fewer of a pre-specified list of organisational indicators (such as non-invasive ventilation, pulmonary rehabilitation and an early discharge scheme). This information was obtained from data submitted to NCROP in 2005, and, where possible, pairings were arranged with differing indicators to maximise the potential for sites to learn from each other. The pairs were then randomised to a control arm (audit and feedback only) and an intervention arm (audit and feedback plus a day-long peer review visit). The hospitals in each pair reviewed each other – sending a multidisciplinary team (which included service users) to visit a team in their paired hospital, review their practice with them and agree an action plan for change.

The hypothesis underlying this approach was that sharing good practice through mutual peer-review visits between paired hospital units, combined with the production of agreed action plans, would result in improvements in care.


Table 3: Theories of change underlying POMH-UK quality improvement activities

Intervention: Posters summarising information about a prescribing topic
Underlying theory of change: Displaying these raises staff awareness and plugs knowledge gaps, thus improving practice.

Intervention: Workbook for clinicians to review their own practice
Underlying theory of change: Encourages reflection on the part of the clinician and improves knowledge of best practice.

Intervention: Lifestyle management support pack, which gives clinical staff materials to use in their day-to-day practice
Underlying theory of change: These provide information about good practice and act as a tool which structures interactions with patients and prompts clinicians to conduct assessments in particular ways. The mechanism is both improving training and imposing a structure for clinicians to follow in day-to-day practice.

Intervention: A change management workshop for local clinicians
Underlying theory of change: This supplements the mechanism of improved knowledge of good practice with an additional element – encouragement to implement that knowledge. The assumption is that improved knowledge alone might not be enough.

Intervention: Academic detailing to help pharmacists
Underlying theory of change: This has similarities with the 'cascading' element of the PEARLS project (see below). The mechanism here is that pharmacists are able to 'influence other clinical team members and educate them about antipsychotic polypharmacy'. It is a different method of improving practice through better training of clinicians.


The NCROP team emphasised that the provision of information on comparative performance is not a final goal in itself and does not necessarily lead to improvements in patient outcomes. Rather, they saw the data as an essential starting point to enable understanding of where performance is poor. The team also cited lessons from previous audits, including the need for organisational buy-in.

The results from the RCP Stroke audit attempted to facilitate change by running multidisciplinary regional feedback meetings but found that the rate of change was disappointingly slow ... This highlighted the results from the Action on Clinical Audit partnership ... [which] showed that change only occurs if it accords with the aims of the organisation and has the buy-in of all parties. ... This highlighted the need for the hospitals to take ownership of their data and to involve professionals from all areas of the NHS to drive forward improvement.

This project was the only one to sponsor its own external qualitative evaluation of resources and outcomes at local level (by Queen Mary, University of London). The team stressed the importance of this exercise: 'The re-audit will give some indication of the change but is currently viewed as less critical than the project evaluation'.

Peri-operative Fasting Implementation Study Evaluation (PoISE)

Lead organisation: Royal College of Nursing

Partners: Royal College of Anaesthetists; Virtual Institute for Research in Health Care Practice; and the Peri-operative Fasting Guideline Development Group

This project aimed to improve implementation of national clinical guidelines on peri-operative fasting before and after elective surgery41. It was a pragmatic randomised trial based on a double-audit cycle, with a baseline measurement of current care, the introduction of targeted improvement interventions, and a follow-up audit to assess whether change had occurred. Nineteen acute hospitals in the UK were randomised between the three arms of the study: standard dissemination; a web-based educational package, championed by an opinion leader; and generation of ideas for change by staff using Plan Do Study Act (PDSA) cycles.

In the first arm, all the participating trusts received a pack of information, including the guidelines on fasting aimed at patients and clinicians. The mechanism implicitly at work in this intervention was that clinicians are unaware of the best practice in fasting and will fill knowledge gaps by accessing guidance online, and change their practice accordingly.

In the second arm, a web-based resource was created which included guidance on fasting, good practice examples, and so on. Locally appointed opinion leaders promoted the existence and use of this resource. The mechanism or theory of change here was similar to that used in the first arm but with the addition of promotion by the opinion leader to remind, encourage and persuade clinicians to use the information. The assumption was that clinicians need an extra incentive or push from credible and respected experts to read the guidelines.

In the third arm, PDSA was used as a framework for implementing quality improvement, and was employed through multi-professional meetings within the participating trusts. The key role for PDSA was in structuring the approach of the local teams, focusing, and therefore improving, the process of identifying and implementing change within each trust.

PDSA is a framework to conduct and implement quality improvement initiatives. The purpose is to make a 'change', then test that change to see whether it brings about an improvement in the system or process. This then is ideally followed by further changes or revised changes in order to continue the improvement of the process or system.

This project team also applied a general theory of change to the evaluation of their project: the Promoting Action on Research Implementation in Health Services (PARiHS) conceptual framework42. This is based upon the interplay of evidence, context and facilitation when implementing research.

Epilepsy and Community-acquired Pneumonia Scottish National Audit Project (EPI-SNAP & SNAP-CAP)

Lead organisation: Royal College of Physicians of Edinburgh and Royal College of Physicians of Glasgow



Partners: Epilepsy Scotland; and the Information Services Division, NHS National Services Scotland

These two projects compared current practice on the diagnosis and treatment of epilepsy in adults with Scottish Intercollegiate Guidelines Network (SIGN) guidelines, and current practice on the management of community-acquired pneumonia with British Thoracic Society (BTS) guidelines. The overall aim was to develop innovative data capture and feedback mechanisms in two very different clinical applications, and to explore the feasibility of a single universal model of clinical quality improvement for physicians. However, because they had not only two distinct sets of aims but also different mechanisms for achieving these aims, we have separated these two arms of the project in later discussions.

EPI-SNAP

There were two separate EPI-SNAP sub-projects, focused on improving driving advice given to patients referred to first seizure services in Scotland and on improving the quality of the annual review in primary care stipulated by the Quality and Outcomes Framework (QOF). As a whole, EPI-SNAP therefore focused largely on care at the primary/secondary care interface. Both sub-projects were based on a double audit cycle and the use of existing IT systems to put improvement interventions in place. The reason for the focus on driving advice is made explicit in the final SER:

Reminding the referring doctor to issue appropriate driving advice (a patient should be advised not to drive following a suspected seizure) will in fact reduce the number of referrals to first seizure clinics of patients who are non-epileptics and improve the medico-legal position of the referrer.

The reason given for identifying the annual review as a focus for change was that it could be improved by providing patients with more information and by giving clinicians greater guidance on how to conduct the review.

SNAP-CAP

The community-acquired pneumonia project focused on improving the management of the disease in acute hospitals. The team identified the US Institute for Healthcare Improvement (IHI) Breakthrough Collaboratives43 as the basic model for improvement. Their approach involved monthly data collection and short cycle tests of change (PDSA), associated with the development and implementation of a community-acquired pneumonia (CAP) care bundle.

The care bundle contains 'the essentials', clinical actions that are known to improve patient outcomes, and contains only a few items. The BTS [British Thoracic Society] CAP guidelines contain over 100 recommendations, with varying levels of supporting evidence and covering a broad range of CAP treatment issues. The care bundle is a simplification, easier to impart to staff and leaving less room for oversight of key aspects of acute management. The care bundle can also be added to locally, incorporating local treatment protocols, giving it flexibility and allowing it to evolve.

The original intention had been to base this project on a double audit cycle, like EPI-SNAP. However, developing the care bundle and the accompanying data set took much longer than anticipated, with knock-on delays in database development. Work on the database stalled when the developer moved on, and the team took the opportunity to review their whole approach. They subsequently commissioned the IHI extranet (which is not a database but produces run charts on process measures from data entered by project participants); these charts then informed the next PDSA cycle. The project team saw this change as beneficial, producing 'data for improvement, not judgement or research'.

The project is about improvement, not data collection or analysis. Data for improvement should be just enough to drive improvement.

In addition, to address participants' concerns about the availability of outcomes data to prove that the CAP care bundle was having an effect, an outcomes analysis was also planned (using Information Services Division and General Register Office data) to look at the effect of the care bundle on mortality within 30 days of admission.



UK Inflammatory Bowel Disease Audit (IBD)

Lead organisation: Royal College of Physicians

Partners: British Society of Gastroenterology; Association of Coloproctology of Great Britain and Ireland; and National Association for Colitis and Crohn's Disease

This project aimed to improve standards of care for IBD patients throughout the UK. It set up the first national clinical audit of inflammatory bowel disease, including the development of national standards with which to compare current practice.

There has never been a national audit of the care pathways for patients with IBD. The guidelines are largely consensus-based and there is no way of knowing how compliant sites are with their standards. This project will raise the profile of gastroenterology via IBD care and we would expect this to benefit the field as a whole – perhaps by acting as a driver for NICE guidance or an NSF [National Service Framework].

The IBD project was based on a double audit cycle, with a baseline measurement of current care, the introduction of targeted improvement interventions supported by a limited number of action planning visits, and a follow-up audit to assess whether change had occurred. As in NCROP, recognition that 'the provision of information on comparative performance is not a final goal in itself and does not necessarily lead to improvements in patient outcomes' was one of the drivers of project design. Another was the desire to test the proposal that:

Change may occur faster through bringing together the experience of those who have achieved change to services with those who have not.

Interventions for all sites comprised: dissemination to all hospital teams and chief executives of their results compared to national data; presentation of the data at local and national meetings where change implementation was discussed; the development of an action plan to facilitate local change; and a web-based document repository containing sample business cases and care protocols.

Project design changed during implementation.

The original three-part study design involved two-thirds of participating sites reviewing their results and preparing a 'local action plan' that identified five key points for change. Half of those sites (one-third of all participants) would then provide the central project team with monthly updates on progress towards the agreed local targets. But in 2007 the IBD steering group changed this design in order to 'provide a wider benefit to all IBD services and to make the intervention more manageable'. A 'model action plan' for IBD services was developed and made available to all IBD services in the UK to adapt for their own service. In addition, supported action planning visits were undertaken at 23 sites by members of the steering group. These visits engaged directly with clinicians and raised the profile of IBD care with local management.

The hypothesis was that a site review would improve the quality of the IBD service ... We were aware of the need to ensure 'buy-in' from all key stakeholders, including managers of health bodies at various levels.

Commenting on their final approach, theteam said:

The sharing of knowledge across hospital teamsseems to be an extremely valuable exercise and canhelp to spread good practice, however it could be tooambitious to believe that a model action plan ofitself can achieve change without a committed teamtaking responsibility for making a change locallythat meets the needs of their individual situation.

Perineal Assessment and Repair Longitudinal Study (PEARLS)

Lead organisation: Royal College of Midwives

Partners: Royal College of Obstetricians and Gynaecologists; National Childbirth Trust; Thames Valley University; University of Keele Medical School; and University Hospital of North Staffordshire NHS Trust

This project aimed to improve clinical care in line with evidence-based Royal College of Obstetricians and Gynaecologists (2004) guidelines in order to: enhance the assessment and management of perineal trauma, reduce maternal postpartum morbidity, and improve women’s experiences of maternity care. It was a quasi-experimental study

How do you get clinicians involved in quality improvement? 17


based on a double audit cycle, with baseline measurement of current care and training practices, a Delphi survey of patients’ views on outcomes, the introduction of a targeted improvement intervention, and a follow-up audit to assess whether change had occurred.

Implementation was through a paired cluster design: eleven matched pairs of units were randomised to implement an early or late intervention.
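The paired cluster allocation just described can be sketched in a few lines of code: within each of the eleven matched pairs, one unit is drawn at random for the early intervention and its partner receives the late intervention. This is an illustrative sketch only, with invented unit names, and not the PEARLS team’s actual randomisation procedure.

```python
import random

def randomise_pairs(pairs, seed=0):
    """Allocate one unit of each matched pair to the early arm
    and its partner to the late arm, at random within each pair."""
    rng = random.Random(seed)
    early, late = [], []
    for a, b in pairs:
        first, second = (a, b) if rng.random() < 0.5 else (b, a)
        early.append(first)
        late.append(second)
    return early, late

# Eleven hypothetical matched pairs of maternity units (invented names)
pairs = [(f"unit_{i}a", f"unit_{i}b") for i in range(1, 12)]
early, late = randomise_pairs(pairs)
print(len(early), len(late))  # 11 11
```

Because the allocation is made within each matched pair, the two arms stay balanced on whatever the pairs were matched on, which is the point of the design.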

The main improvement intervention was a standardised, evidence-based training package. The initial plan was for the project team to deliver training, but, on statistical advice and with the Health Foundation's encouragement, the project grew in size. In each unit, it became necessary to cascade the training through facilitators, who were appointed and trained by the project team. This change was seen as beneficial in itself. Using local research facilitators enabled local ownership of the project: practitioners were more likely (for practical reasons) to attend locally delivered training, and it could be more responsive to local needs and circumstances. And, as was the case in all the EwQI projects, the existence of the PEARLS project itself raised awareness of the need for improvement and thus generated a more responsive context.

Implicitly, the theory of change in this project was that clinicians want to improve practice, and that providing appropriate training and audit data that allows them to see how they are doing will achieve this aim.

2.3 Locating the EwQI theories of change

The aim of the EwQI was to engage clinicians, through their professional organisations, in projects to improve the quality of clinical care in the UK. The immediate inspiration was Leatherman and Sutherland’s finding that clinicians listen and learn best from their peers and that professional bodies have a legitimacy and authority that command clinicians’ respect24. Other considerations that shaped the EwQI were well-supported by evidence and common sense. These included the need to base clinical improvement on sound evidence of best practice; the wish to build, where possible, on existing high quality audits or other performance measurement and reporting systems; the desire to involve users (patients and carers) from start to finish44,45; and the importance of developing sustainable improvements in quality46,47.

All the projects were led by professional organisations, and all were expected to build on or develop high quality clinical audits or other performance measurement systems. All were expected to produce measurable patient benefits. The emphasis in the approaches reported in the SERs is therefore on professionally approved guidelines, professionally led audit and professionally led action plans. The SERs say very little about other potential mechanisms for improving the quality of care such as incentives, regulation and managerial control. And, perhaps surprisingly, they say little about the role of service users and their representatives in supporting QI (although we are aware that this is an important feature, and it is discussed in chapter 4). To this extent there were commonalities between the projects.

However, when it came to their roles in the EwQI, project teams saw these differently. Some saw themselves as researchers, others as clinicians developing clinical audit, others as members of established departments in professional bodies dedicated to improving the quality of care. Project design reflected these differing views28. Project design also reflected the varying circumstances of the projects (such as whether guidelines already existed, the role of specific groups of professionals, the role of user groups and the perceived nature of the problem) and the theoretical framework within which each project team was working (derived from the varying influences on the project teams, such as visits overseas, reading, conferences, and local experience). Appreciating these initial differences, one of the aims of the EwQI Support Programme was to enable the project teams to share and develop their understanding of QI and its complexities.

The view that ‘medical journals and research funders are mainly concerned with practical factual research, not with research that develops theories’48 may not be as widely held today as it was ten years ago, but we still found that the concept of a theory of change was unfamiliar to some project teams. Some of the theories of change drawn out during our analyses are implicit in the SERs, rather than being stated explicitly. Other project teams were explicitly testing and refining theories that they had developed themselves (such as the PARiHS framework used by PoISE42, and the Royal College of Psychiatrists’ Centre for Quality Improvement’s approach used by POMH-UK and Self-harm) or testing theories developed elsewhere (such as the use of the Institute for Healthcare Improvement-inspired care bundle by SNAP-CAP).

In locating the EwQI in relation to wider efforts to improve services, it is also worth noting the particular dimension of ‘quality’ shared by the projects in the EwQI. The emphasis was on achieving improvements in measurable patient outcomes in the domain of clinical effectiveness. Other dimensions of quality – such as safety, patient-centredness, timeliness, efficiency and equity15 – were not irrelevant to the project teams, but their focus on patient outcomes reflected the original call from the Health Foundation.

All the projects were complex. They involved large numbers of local sites (from 19 to over 100), and differed in terms of the complexity of their scope and the nature of the improvements sought. It was also possible to identify various levels of complexity in these improvements, ranging from a specific change in suturing practices among midwives to a wide-ranging change in attitudes to self-harm among transient, multidisciplinary professional teams in the emergency services. Other than the generally adopted mechanism of audit and feedback, the improvement intervention(s) used also varied. Some project teams (such as POMH-UK and Self-harm) used a variety of mechanisms; others focused largely on one approach (such as the peer-review visits in NCROP and the training package in PEARLS); yet others, such as PoISE, tested one approach against another. Some projects were open to, or even encouraged, a variety of local responses (POMH-UK and Self-harm), while others hoped to encourage conformity to high standards through local activities (PEARLS).

By definition, complex interventions comprise a number of components, each of which may act independently and interdependently49. As described in chapter 1, we attempted to ensure that the evaluation methodology adopted by the project teams in their self-evaluations, and by ourselves in the external evaluation, matched this complexity.

This chapter has described what the project teams intended to achieve. In the next, we outline what the project self-evaluations tell us about what was implemented, what was actually achieved and what efforts this involved.


Page 36: Evidence: How do you get clinicians involved in …...How do you get clinicians involved in quality improvement? v CHAPTER 4 Engaging with the initiative 58 4.1 Introduction 58 4.2

Chapter 3
What was achieved by the projects and what effort did it require?

The external evaluation and the project self-evaluations were both expected to determine progress against the EwQI objectives. They were intended to identify and measure outcomes, assessing the processes adopted and exploring the thinking behind the projects in order to identify ‘the factors associated with success’. In their SERs the project teams were asked to report on seven types of outcomes50:

1. Measurable improvements in patient care
2. Increase in the levels of professional engagement
3. Increase in the capacity and infrastructure of the professional bodies involved in the project
4. Increase in the knowledge base
5. Sustainable arrangements for improving quality of care in this field of medicine
6. A transferable system of quality improvement to other areas of medicine
7. Increase in knowledge and understanding of quality improvement in healthcare.

This chapter sets out and assesses the available evidence from the project teams about the achievements of the EwQI projects and about the efforts expended.

This involves the presentation of a lot of complex data. But these are the data made available by the projects, and, if we are to communicate effectively the different array of impacts and activities, it is necessary to engage with this level of detail. For ease, we have divided this long chapter into sections, arranging the data, wherever possible, in tabular form and maintaining a commentary throughout to clarify the key issues. Section 3.1 covers the data that were available from the projects, identifying the sources we have used to assess the achievements and efforts associated with the projects. Section 3.2 outlines the extent to which each project was implemented in practice; we need this in order to understand how far the ‘contribution story’ of each project, outlined in the previous chapter, has progressed. Section 3.3 is the longest and focuses on patient outcomes, which were the project teams’ primary concern. In section 3.4, we look at the other outcomes in turn, bar the second (which is dealt with in chapter 4). Then we turn to the efforts expended, which are covered in section 3.5. While the overall outcomes from each project were the main focus of the project teams’ reports, they also reported on variation between participating sites within each project, and section 3.6 summarises these accounts. We conclude, in section 3.7, with some observations about costs and consequences in QI.

3.1 Data available about achievements

Tables 4 and 5 set out the data from the projects on which we draw in this chapter. In their SERs the project teams were asked to provide quantitative data, where possible, and they largely did so in relation to patient outcomes (based on interpretations of their audit data) and increases in knowledge (based on the number of papers produced and presentations given). Reports of the other outcomes were mainly descriptive, based on qualitative evidence from surveys and the teams’ own expert judgements. Table 4 shows the three sources of data relating to improvements in patient care. These data were not available for all the

Page 37: Evidence: How do you get clinicians involved in …...How do you get clinicians involved in quality improvement? v CHAPTER 4 Engaging with the initiative 58 4.1 Introduction 58 4.2

projects (a full breakdown of all the data sources received from the project teams is given in appendix E). The only final output which the project teams were required to produce was an SER, with an associated RSE (see appendix C). But many projects also submitted other material, including


Table 4: Sources of data from the Engaging with Quality Initiative projects about improvements in patient care (all data from 2009 reports unless otherwise indicated)

Project             Final self-evaluation   Associated record of       Audit reports
                    report (SER)            significant events (RSE)
Colorectal Cancer   ✓                       ✓                          ✓ (2006 and 2007)
Self-harm           ✓                       ✓                          ✓ (2007)
POMH-UK             ✓                       ✗                          ✓
NCROP               ✓                       ✓                          ✓ (2008)
PoISE               ✓                       ✓                          ✓
EPI-SNAP            ✓                       ✓                          ✓
SNAP-CAP            ✓                       ✓                          ✓
IBD                 ✓                       ✓ (2008)                   ✓
PEARLS              Outcomes data and final report not available as at 30 September 2009

✓ = available
✗ = not available

Table 5: Sources of data from the Engaging with Quality Initiative projects about outcomes other than patient outcomes attributed to or related to the projects (dates vary – further details are available in appendix E)

Project             SER   RSE   HF reports   Audit reports   Clinician q.   Patient q.   Other
Colorectal Cancer   ✓     ✓     ✓            ✓               ✓              ✗            ✗
Self-harm           ✓     ✓     ✓            ✓               ✓              ✓            ✗
POMH-UK             ✓     ✗     ✓            ✓               ✓              ✗            ✓
NCROP               ✓     ✓     ✓            ✓               ✓              ✓            ✓
PoISE               ✓     ✓     ✓            ✓               ✓              ✓            ✓
EPI-SNAP            ✓     ✓     ✓            ✓               ✗              ✗            ✗
SNAP-CAP            ✓     ✓     ✓            ✗               ✓              ✓            ✗
IBD                 ✓     ✗     ✓            ✓               ✓              ✗            ✓
PEARLS              Interim SER and RSE only (final reports not available as at 30 September 2009); HF reports ✓; audit reports ✗; clinician q. ✗; patient q. ✓; other ✗

SER = final self-evaluation report; RSE = associated record of significant events; HF reports = The Health Foundation reports; Clinician q. / Patient q. = clinician/patient questionnaire undertaken by project team; Other = other surveys undertaken by project team and other data sources
✓ = available
✗ = not available


audit reports with quantitative data on patient outcomes. Where appropriate, we have performed statistical tests on key findings to allow for formal comparison of achievements in relation to patient care across each of the projects. Table 5 sets out the data sources for the other outcomes.

3.2 An overview of the extent of implementation of each project

Before looking at the evidence of achievements against the seven types of outcomes set out above, we briefly review the extent to which each EwQI project was implemented on the ground. Looking at the extent of practical implementation is important. If the evidence from a project suggests that little has changed ‘on the ground’, we might doubt whether any changes in patient care that show up in an evaluation are really due to that project.

These assessments are based on what the project teams said they planned to do in their original proposals and what they report was implemented during the project. For each project an overall assessment is made and key milestones are highlighted.


Table 6: Overview of the extent of implementation of the EwQI projects

Colorectal Cancer
Overall assessment: this project was based on an existing, gradually expanding audit, but during the period in which the Health Foundation funded the audit (May 2005–Dec 2008) trust participation fluctuated. In 2006 there was ‘little increase in the recruitment of trusts, no increase in case ascertainment and no improvement in the completeness of collection of the essential data items’. This, the team commented, was undoubtedly partially due to clinicians not prioritising the time required.
– The audit changed during the project with the introduction of online data collection and a new minimum data set, as well as a growing recognition among clinicians of the need for open reporting of findings, for formal action plans following feedback of audit data, and for an increased involvement of trust chief executives and managers.
– Over half the participating trusts used the annual report as a basis for formal annual surgical outcome meetings, and surgeons used the data to benchmark their quality of care against the national average.
– Individual trusts used audit data as a form of quality control, taking note of where they were failing to meet current NICE guidelines for quality and identifying how they could change processes.

Self-harm
Overall assessment: the project was largely implemented as planned, with strong service user engagement. Despite considerable initial interest, there were some delays in recruitment to the first 18-month wave (which was free) and, in some trusts, subsequent problems with obtaining re-audit data. Recruiting to the next two waves (for which a charge was introduced) proved even more difficult, and it became apparent that the project would not be sustainable through subscription. A more broadly focused follow-on project (PLAN), based on Self-harm, has now been developed.
– Planned regional collaboratives were not implemented, although there was a series of regional learning events.
– Numerous improvement interventions were introduced, including mandatory peer-review visits.
– Rapid feedback of audit results was achieved.
– A number of teams claimed to have developed or improved their policies and working arrangements for the management of patients who self-harm.
– A survey revealed that not all the interventions were widely used because of lack of time, their similarity to existing tools or late arrival for the re-audit.

POMH-UK
Overall assessment: the project was implemented as planned and membership of the observatory continues to grow despite the introduction of a membership fee. Trusts have regularly signed up to re-audits of earlier topics. There is every indication that the observatory will in future be sustainable without the Health Foundation funding.
– A large number of improvement interventions were introduced, and the team commented that ‘all the interventions had been used by at least some trusts’ but that, overall, more passive interventions were more likely to be implemented.
– One local team managed to achieve nearly 100% compliance with the standards in one topic.

NCROP
Overall assessment: the project was implemented to schedule as planned, national audits were successfully developed with high participation rates, peer-review visits between trusts carried out, and participants submitted information as requested.
– Fifty-four teams were randomised to carry out reciprocal peer-review visits (the largest ever voluntary review programme run in the UK) and action plans were received from the clinical lead at each site.
– Participants used audit data as evidence of local performance. For example, one participant risk-assessed the non-invasive ventilation (NIV) service at her trust using National COPD Audit data. As a result, NIV was on the trust risk register, meaning that it was reviewed at board meetings and an action plan was implemented to bring about change.
– One hundred teams were asked to complete change diaries and, while these were generally thought to be onerous, participants returned 93 completed diaries.
– A qualitative sub-study of process and context was completed.

PoISE
Overall assessment: the project was implemented to schedule as planned, despite initial concerns about trust recruitment and the resulting delays. It sometimes proved difficult at a local level to engage all the relevant members of the multidisciplinary teams involved in peri-operative care, but overall the engagement of clinical teams worked well. The importance of local facilitators emerged as a key issue.
– There is qualitative evidence that the standard dissemination package and web-based resource were well-used, but that the PDSA intervention was more difficult to implement.
– Seven trusts received the standard dissemination package.
– For the opinion leader/web-based resource, there were 1,278 hits on the website during the intervention period.
– Five out of six trusts implemented the PDSA intervention, although the model prescribed by the project was conducted in a limited way by trusts.

EPI-SNAP
Overall assessment: this two-part project was very considerably delayed, mainly due to difficulties in securing adoption of interventions within national IT systems. As a result some aspects of what was originally proposed, such as the planned surveys of clinicians and patients, were not implemented.
First seizure clinic
– Baseline and follow-up audit were implemented as planned. All four first seizure clinics in Scotland participated in audit, including consultants, GPs with specialist interest in epilepsy and epilepsy specialist nurses.
– Referral form was developed and made available (after a delay) and was implemented (although at different times in different health boards).
– Different regions showed variation in progress.
– The project’s aims will be realised through new NHS Quality Improvement Scotland (NHS QIS) standards for neurological services.
Annual review
– Unresolved problems blocked agreement to use the data set held by the Scottish Programme for Improving Clinical Effectiveness (this would have meant that data on information given to patients by GPs could have been collected automatically). The team commented that overall there was no clear way to realise this aspect of the EPI-SNAP project.

SNAP-CAP
Overall assessment: this project was also delayed by IT issues, and by a fundamental redesign of the project protocol. The double-audit cycle was replaced by continuous data collection and the US Institute for Healthcare Improvement’s model of improvement based on short-cycle tests of change, associated with the introduction of a care bundle. There were delays in reaching a consensus on the items to be included in the care bundle.
– The aim was to get engagement from each health board. By the beginning of the fourth year of the project, at least one hospital in 50% of the health boards in Scotland had signed up to SNAP-CAP – feedback indicated that clinical teams felt overwhelmed by government-directed quality improvement projects, and SNAP-CAP was deemed of lower priority.
– In participating hospitals the care bundle was fully implemented, despite delays, and outcome measurement fully implemented to schedule, although there were some gaps in data collection in some participating hospitals.
– One hospital made permanent changes to its admission systems that embed the SNAP-CAP bundle.
– SNAP-CAP will continue under the Scottish Antimicrobial Prescribing Group.

IBD
Overall assessment: the project was implemented to schedule largely as planned, although with some modification to provide wider benefit and make the intervention more manageable. There was excellent participation in the national audit, and the action planning visits and follow-up audits were successfully conducted and generally well-received.
– Audit data collection, analysis and reporting were completed successfully and on time, and national reports were published following each of the two rounds of the audit.
– Model action plan was created and made available via a website.
– Supported action planning visits were carried out.
– At least one site indicated that the lack of adequate service resource highlighted in their first round report had a direct influence on the decision of trust management to fund an IBD nurse specialist post.
– Local teams sometimes found it difficult to identify concrete ideas on how to implement changes to IBD care, and it is not clear how successful local teams have been in implementing their action plans.

PEARLS
Overall assessment: the project is, largely, being implemented as planned. But there have been considerable delays as a result of difficulties in obtaining ethical approval for the project and in recruiting trusts.
– Surveys of training needs were carried out, although poor response rate from some practitioner groups meant the surveys had to be resent.
– Two Delphi surveys of patients (to identify outcome measures) were completed.
– A patient questionnaire was developed and implemented.
– Local research facilitators were trained, and training has been ‘cascaded’ rather than, as originally planned, being done entirely by the project team.

Sources: As identified in table 4


3.3 Measurable improvements in patient care

It was a requirement of the Health Foundation funding that the EwQI project teams identify ‘a clinical problem or deficiency in care for which there is a scientific evidence base and/or consensual professional guidelines. The clinical area of interest must have reliable data as well as objective and credible measures of clinical process and/or outcome. The guidelines or standards may be selected from an authoritative national or international source – for example, a royal college, a specialist society, a National Service Framework, the National Institute for Clinical Excellence (NICE) or the Scottish Intercollegiate Guidelines Network (SIGN) – or the clinical/research literature51.’ Appendix G provides details of the guidelines underpinning the projects and of how the specific standards used in each project were identified. The development of these standards was, in itself, an important and lasting contribution to quality improvement in the clinical fields covered by the projects (see table 20 below).

In the context of the standards developed, all the project teams identified key clinical outcomes against which they intended to measure their performance. These were mainly measures of improvement in the processes of care – changes in practice, such as improved lymph node harvest (Colorectal Cancer), better staff attitudes (Self-harm), better referral practice (EPI-SNAP) and prescribing practice (POMH-UK) – rather than end points, such as long-term improvement in mortality and morbidity levels. In this section we look at the evidence that the projects achieved improvements in patient care. We also consider the clinical and, where appropriate, the statistical significance of these findings. In some cases, improvement was an increased rate of a particular practice, such as an increase in the recording of circumferential margin involvement (Colorectal Cancer). In other cases, improvement was a decreased rate of practice, such as a decrease in the prescription of first and second generation antipsychotics in combination (POMH-UK). The percentage change from baseline to re-audit has therefore been assessed along two dimensions: whether change was an improvement (IP) or not (NI), and whether change was statistically significant (*) or not. The tests of statistical significance were done by the external Evaluation Team unless otherwise indicated.

The data in the tables in this section come from the project teams. The exact source of the data is given below each table. In some cases (NCROP and IBD) we have reproduced (and annotated) tables provided by the teams; otherwise we have tried to present the tables in a common format. The main concern in all cases was to identify whether there had been a statistically significant improvement in patient outcomes during the project. The standards in each table are the key indicators identified by the project teams.
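As an illustration of the kind of test involved, the chi-square comparison of a baseline and re-audit proportion (without continuity correction) is equivalent to a two-proportion z-test, which can be computed with the standard library alone. The sketch below is ours, not the Evaluation Team’s actual code; it reproduces the p-value reported in table 7 for the 30-day mortality standard.

```python
import math

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided test for a difference between two proportions,
    equivalent to a 2x2 chi-square test without continuity correction."""
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (x1 / n1 - x2 / n2) / se
    # two-sided p-value from the standard normal distribution
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 30-day mortality (table 7): 441/7,471 in 2005 vs 560/11,287 in 2006
p = two_proportion_p(441, 7471, 560, 11287)
print(round(p, 3))  # 0.005
```

Applying a continuity correction, as some chi-square implementations do by default, would give a slightly larger p-value.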



Colorectal Cancer

Comment: The 2006 and 2007 NBOCAP reports note that the major difficulty with cancer surgery is that the real endpoints of interest are long-term, such as five-year survival, both cancer-specific and disease-free; but there is no mechanism by which high quality data from national audit can be linked to long-term outcomes. This, combined with the time lag involved, means that surrogate measures of surgical outcomes are required to drive and monitor quality improvement in the short term. Such markers are identified in table 7, and include some with an obvious short-term impact, such as 30-day mortality rates, and others linked to disease recurrence and long-term survival, such as circumferential resection margin involvement rates following rectal cancer excision. In four of the five measures identified there has been some small improvement. But the caveats about the quality of these data are important: the project team warn against placing any weight on any analysis that looks at such a short interval of time52. Our difficulty here is that this is all we had: no analysable data were available to the Evaluation Team for years 2007 and 2008 because the NBOCAP report for 2008 had not yet been released. Furthermore, and as the team also acknowledge in their SER, it is ‘difficult to separate the improvements in care attributable to this project from those due to other ongoing initiatives across the UK which aim to improve the access to services, diagnostics and treatment. There being no built-in ‘control’ group’.


Table 7: Achievement of audit standards for patients in the Colorectal Cancer project

1. 30-day mortality rates
   2005: 441/7,471 (5.9%); 2006: 560/11,287 (4.96%); change –0.94% (IP*); p = 0.005 (chi-square)
   Continues existing downward trend, but ‘at the moment there is no validation of the completeness of collection of post-operative deaths’. NBOCAP report 2007

2. Recording of circumferential margin involvement
   2005: 177/3,542 (5%); 2006: 389/2,945 (13.2%); change +7.2% (IP – although rates are still very low)
   Statistical significance not determined because ‘the amount of missing data seriously compromises interpretation’. NBOCAP report 2007

3. Abdomino-perineal excision of rectum (APER) rate
   2005: 584/2,800 (19.6%); 2006: 499/2,370 (21.1%); change +1.5% (NI)
   Statistical significance not determined because ‘It is ... very difficult to be confident in the numerators and denominators for this ... calculation.’ NBOCAP report 2007

4. Length of stay
   2005: median = 11 days (N=7,471); 2006: median = 10 days (N=11,287) (IP)
   Continues existing downward trend

5. Number of lymph nodes harvested (NICE guidelines suggest 12)
   2005: median = 11.85 (N=7,471); 2006: median = 12.2 (N=11,287) (IP)
   Continues existing upward trend

IP = improvement in care
NI = no improvement in care
* Statistically significant

Sources: Data from National Bowel Cancer Audit Project (NBOCAP) reports 2006 and 2007 (NBOCAP report 2008 not yet released). Table produced by the Evaluation Team and p-values calculated by Evaluation Team.


Self-harm

Comment: The audit/intervention/re-audit cycle for Self-harm took 18 months, and there were three waves of recruitment to this project. Surveys of staff and service users before and after the intervention period suggested that for waves 1 and 2, moderate positive change took place. The figures in table 9 compare baseline audit data for wave 1 (which was free and involved 30 trusts) with re-audit data. But, as the team noted in their SER, even from this – the largest wave – the available data on outcomes are limited because of poor response rates to re-audit: ‘even though there are positive signs, it is hard for us to say that with great confidence.’ In view of this comment, we have not applied a statistical test to these findings. Wave 2 (six trusts and the first wave for which a charge to trusts was made) was too small to yield meaningful data. The project team collected wave 3 baseline data (11 trusts) but has not yet produced final results.


Table 8: Number of respondents taking part in project team surveys about self-harm outcomes at baseline and re-audit

Number of respondents Baseline Re-audit

Service users 206 87

Staff 964 568

Table 9: Self-harm outcomes

Outcomes at baseline (Jan–Mar 2006) and re-audit (Feb–May 2007), with percentage change:
– Local project teams involving service users in the delivery of training: 30%; 40%; +10% (IP)
– Service users rate staff as 'excellent' or 'good': 48%; 60%; +12% (IP)
– Staff felt that people who self-harm are given the same respect and understanding as patients with other injuries: 52%; 72%; +20% (IP)
– Staff feel that people who self-harm are offered the same quality of physical treatment as other patients: 71%; 85%; +14% (IP)

Source: Palmer L, Strevens P, Blackwell H (2007). Better services for people who self-harm. Wave 1 follow-up data: summary report. Royal College of Psychiatrists, CCQI. Tables produced by the Evaluation Team.


POMH-UK

Comment: Seven topics have been covered to date by POMH-UK:
Topic 1: High dose and combined antipsychotics in acute adult inpatient settings
Topic 2: Monitoring the physical health of community patients receiving antipsychotics
Topic 3: Prescribing of high-dose and combined antipsychotics for patients on forensic wards
Topic 4: Benchmarking prescribing of anti-dementia drugs
Topic 5: Benchmarking the prescribing of high dose and combination antipsychotics on adult acute and PICU wards (time-series benchmarking)
Topic 6: Assessment of side effects of depot antipsychotics
Topic 7: Monitoring of patients prescribed lithium.

The SER provides re-audit data on the first three, some of which show statistically significant improvements, and table 10 summarises those findings.

POMH-UK is an on-going programme with the capacity to review and revisit topics. This continuity has been beneficial in a number of ways, especially when there has been little change in the first year. In topic 1, some very 'modest' changes were shown in the first re-audit. But the project team concluded that the intervention had not had a demonstrable impact, at least in the short term and for the majority of wards, and identified PRN (pro re nata, or 'as required', prescribing) as a major contributor to high prescribing rates. A supplementary audit,


Table 10: POMH-UK measurable patient outcomes – topics with re-audit data

Percentage change is over the years identified; p-values are chi-square, baseline to re-audit.

Topic 1 – acute wards (N=3492 patients in 2006; N=3271 in 2007; N=1505 in 2008; change 2006–08):
1. Prescribed high dose antipsychotic: 1265 (36%); 1120 (34%); 566 (38%); +2% (NI); p=0.35
2. Prescribed more than one antipsychotic: 1503 (43%); 1287 (39%); 597 (40%); –3% (IP*); p=0.03
3. Prescribed 1st and 2nd generation antipsychotics in combination: (31%); (29%); (30%) (N not given); –1% (IP)

Topic 2 – assertive outreach teams (N=1966 patients in 2006; N=1516 in 2007; N=1035 in 2008; change 2006–08):
1. Patients prescribed antipsychotics who have their BP, BMI, blood glucose or HbA1c and lipids measured at least once per year: 219 (11%); 350 (23%); 191 (18%); +7% (IP*); p<0.001

Topic 3 – forensic wards (N=1848 patients in 2007; N=1997 in 2008; change 2007–08):
1. Prescribed high dose antipsychotic: 624 (34%); 631 (32%); –2% (IP); p=0.15
2. Prescribed more than one antipsychotic: 842 (46%); 805 (40%); –6% (IP*); p<0.01
3. Prescribed 1st and 2nd generation antipsychotics in combination: 578 (31%); 536 (27%); –4% (IP*); p<0.01

IP=improvement in care, NI=no improvement in care
*Statistically significant
Source: POMH-UK annual topic reports. Table produced by the Evaluation Team and p-values calculated by the Evaluation Team.


undertaken a year later, confirmed that in some units change did occur over time, but that it took longer than the one-year audit cycle. POMH-UK is currently undertaking qualitative research that aims to understand PRN and may develop further interventions on the back of this work. Topic 2 showed the value of identifying barriers to change prior to the study and tailoring improvement interventions accordingly, and POMH-UK has adopted this approach in subsequent topics.


NCROP

Comment: The median scores are identical (2008 compared to 2007) in both groups for two of the scores (provision of non-invasive ventilation and oxygen provision). For pulmonary rehabilitation, the median score for the intervention group improved while the control group remained unchanged, and for early discharge, the median score for the control group worsened, while the intervention remained the same. Thus, and as the team also confirmed, in two of the four areas the intervention appears to have some positive impact on the scores, but the slight improvement is not statistically significant. Commenting on these findings the team said: 'It may be that NCROP and peer review has failed to influence service or it may be that more time is required for significant service

Table 11: NCROP measurable patient outcomes

Quality standard scores in key COPD service areas (median; IQR; N).

2008 audit:
– Non-invasive ventilation: intervention 67; 58–79; 51. Control 71; 63–79; 45
– Pulmonary rehabilitation: intervention 86; 77–91; 51. Control 86; 73–95; 45
– Early discharge (if EDS): intervention 89; 83–89; 25. Control 89; 72–94; 35
– Oxygen provision: intervention 79; 61–86; 51. Control 79; 61–86; 45

Change (2008 minus NCROP 2007 baseline), with Mann-Whitney test:
– Non-invasive ventilation: intervention 0; –13 to +13; 50. Control 0; –13 to +8; 44. P=0.80
– Pulmonary rehabilitation: intervention 5; 0 to +14; 50. Control 0; –5 to +9; 44. P=0.14
– Early discharge (if EDS): intervention 0; –10 to +4; 24. Control –6; –11 to +6; 34. P=0.47
– Oxygen provision: intervention 0; –4 to +11; 50. Control 0; –13 to +7; 44. P=0.21

IQR=inter-quartile range
EDS=early discharge schemes
Source: The National Chronic Obstructive Pulmonary Disease Resources and Outcomes Project (NCROP) final report53. Table produced by the NCROP team and calculations undertaken by the project team54.
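The change scores in table 11 were compared with the Mann-Whitney test, which ranks the pooled intervention and control values rather than assuming normality. A minimal, stdlib-only sketch of the statistic, using the normal approximation and made-up toy data (the raw NCROP change scores are not reproduced in this report, so the figures below are purely illustrative):

```python
import math

def mann_whitney_p(xs, ys):
    """Two-sided Mann-Whitney U test via the normal approximation
    (no tie correction) - a sketch of the comparison in table 11."""
    pooled = sorted(xs + ys)
    def rank(v):
        # Average rank across tied values.
        lo = pooled.index(v) + 1
        hi = lo + pooled.count(v) - 1
        return (lo + hi) / 2
    n1, n2 = len(xs), len(ys)
    r1 = sum(rank(v) for v in xs)       # rank sum of the first sample
    u = r1 - n1 * (n1 + 1) / 2          # U statistic for xs
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Toy data only, not NCROP scores:
print(round(mann_whitney_p([1, 2, 3], [4, 5, 6]), 2))  # 0.05
```

For the small group sizes in table 11 an exact test or tie-corrected variance would be preferable; this sketch only shows the shape of the calculation.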

Page 46: Evidence: How do you get clinicians involved in …...How do you get clinicians involved in quality improvement? v CHAPTER 4 Engaging with the initiative 58 4.1 Introduction 58 4.2

change to come about. There is evidence for both of these hypotheses from the change diaries and qualitative feedback from participants'. However, the team also noted that a more detailed analysis of individual standards within each of these four categories did demonstrate some small and statistically significant changes, with some units changing from partially meeting or not meeting a standard in 2007 to meeting it in full in 2008. But other units changed in the opposite direction53.


PoISE

Table 12: PoISE measurable patient outcomes – overall

Food fast:
– Number of patients in sample: 1435 pre intervention; 1777 post intervention
– Mean (SD) fasting time in hours: 13.97 (4.86) pre; 14.17 (4.92) post
– Median (quartiles) fasting time in hours: 13.75 (11.00, 16.50) pre; 14.25 (11.00, 17.00) post
– Range in hours: from 1.00 to 57.75 pre; from 2.50 to 56.25 post

Fluid fast:
– Number of patients in sample: 1440 pre intervention; 1761 post intervention
– Mean (SD) fasting time in hours: 9.59 (5.19) pre; 8.91 (4.84) post
– Median (quartiles) fasting time in hours: 9.00 (5.25, 13.00) pre; 8.00 (4.74, 12.75) post
– Range in hours: from 0.50 to 51.50 pre; from 0.50 to 32.75 post

Table 13: PoISE measurable patient outcomes from standard dissemination

For each site: intervention; N (observations pre- and post-intervention) for fluid (fl) and food (fd); percentage change in mean duration of fluid fast (shorter = IP, longer = NI) with p-value (chi-square, baseline to re-audit); percentage change in mean duration of food fast with p-value.

– Site A: SD not disseminated, feedback not disseminated. Pre 116 fd, 114 fl; post 86 fd and fl. Fluid –14.3% (IP*), p=0.027; food +3.3%, p=0.438
– Site B: SD not disseminated, feedback post-intervention. Pre 67 fd and fl; post 91 fd, 92 fl. Fluid +8.1% (NI), p=0.34; food +5.2%, p=0.361
– Site C: SD, disseminated by key contact, feedback unknown. Pre 140 fd, 139 fl; post 109 fd, 108 fl. Fluid +9.3% (NI), p=0.163; food +3.8%, p=0.382
– Site F: SD, disseminated by key contact, feedback unknown. Pre 104 fd, 105 fl; post 44 fd, 43 fl. Fluid +0.2% (NI), p=0.979; food +3.2%, p=0.522
– Site L: SD, disseminated, no feedback. Pre 41 fd and fl; post 7 fd and fl. Fluid –26.9% (IP), p=0.461; food –3.6%, p=0.522
– Site Q: SD disseminated and feedback. Pre 57 fd, 56 fl; post 51 fd and fl. Fluid –38.6% (IP*), p<0.001; food +10.1%, p=0.051
– Site P: SD not disseminated, feedback not disseminated. Pre 115 fd and fl; post 143 fd and fl. Fluid –13.46% (IP*), p=0.041; food +1.6%, p=0.816

IP=improvement in care
NI=no improvement in care
SD=standard dissemination
*Statistically significant


Table 14: PoISE measurable patient outcomes from the opinion leader and web-based education tool intervention

For each site: intervention; N (observations pre- and post-intervention) for fluid (fl) and food (fd); percentage change in mean duration of fluid fast (shorter = IP, longer = NI) with p-value (chi-square, baseline to re-audit); percentage change in mean duration of food fast with p-value.

– Site D: OL disseminated and feedback. Pre 56 fd and fl; post 85 fd, 81 fl. Fluid –16.0% (IP), p=0.133; food 3.8%, p=0.629
– Site E: OL disseminated and feedback. Pre 130 fd, 128 fl; post 131 fd, 129 fl. Fluid 13.5% (NI), p=0.062; food 10.3%, p=0.059
– Site H: OL disseminated and feedback. Pre 66 fd and fl; post 58 fd, 59 fl. Fluid 12.7% (NI), p=0.088; food 4.0%, p=0.447
– Site J: OL disseminated, no feedback. Pre 115 fd and fl; post 143 fd and fl. Fluid –27.4% (IP*), p=0.001; food 6.2%, p=0.123
– Site M: OL disseminated, with feedback. Pre 37 fd and fl; post 71 fd and fl. Fluid –8.6% (IP), p=0.488; food 12.2%, p=0.73
– Site S: OL disseminated, no feedback. Pre 81 fd, 80 fl; post 159 fd, 156 fl. Fluid 22.5% (NI*), p=0.021; food 1.8%, p=0.637

IP=improvement in care
NI=no improvement in care
OL=opinion leader
*Statistically significant

Table 15: PoISE measurable patient outcomes from the PDSA intervention

For each site: intervention; N (observations pre- and post-intervention) for fluid (fl) and food (fd); percentage change in mean duration of fluid fast (shorter = IP, longer = NI) with p-value (chi-square, baseline to re-audit); percentage change in mean duration of food fast with p-value.

– Site G: PDSA, no audit, no feedback. Pre 87 fd, 100 fl; post 125 fd, 127 fl. Fluid –14.6% (IP), p=0.075; food 6.3%, p=0.243
– Site I: PDSA, no audit, no feedback. Pre 55 fd and fl; post 73 fd and fl. Fluid 11.8% (NI), p=0.18; food 15.3% (longer), p=0.011*
– Site K: PDSA, with feedback. Pre 93 fd, 92 fl; post 96 fd and fl. Fluid –10.7% (IP*), p=0.010; food 1.1%, p=0.808
– Site N: PDSA, no audit, feedback. Pre 47 fd and fl; post 92 fd, 99 fl. Fluid –23.7% (IP*), p=0.001; food –5.1%, p=0.309
– Site O: PDSA, no audit, feedback. Pre 79 fd, 78 fl; post 92 fd, 91 fl. Fluid –3.1% (IP), p=0.722; food 2.0%, p=0.664
– Site R: PDSA, audit, feedback. Pre 34 fd and fl; post 97 fd, 96 fl. Fluid –13.4% (IP), p=0.180; food –6.5%, p=0.324

IP=improvement in care
NI=no improvement in care
PDSA=Plan-Do-Study-Act
*Statistically significant

Sources of data in all PoISE tables (tables 12 to 15): PoISE data synthesis report and the PoISE duration of fasting findings report (May 2009). Tables produced by Evaluation Team and p-values calculated by Evaluation Team.


Comment: Overall, for both food fasting and fluid fasting, there was no statistically significant difference in the mean fasting time across the four time points at which data were collected. There was also no evidence of a trend across time55. However, in relation to fluid fasting, six sites (out of 19) did show a statistically significant improvement, and only one site had a statistically significantly worse mean fluid fasting time after the intervention56. Commenting on this finding, the team noted that within these six sites, there were no obvious patterns or differences that could explain why they (as opposed to the other sites) achieved statistically significant changes. The team also pointed out that, even at these sites, the post-intervention mean still hugely exceeded the guideline recommendation of two hours for the fluid fast. More broadly, the team suggested that PoISE had an impact in other ways: the evidence for this is based on interview data that show that some participants believed that attitudes towards local fasting practice had shifted.


EPI-SNAP

Table 16: EPI-SNAP measurable outcomes

For each first seizure clinic site and audit topic: N (%) at baseline audit 2007; N (%) at re-audit 2008–2009; percentage change; p-value (chi-square), baseline to re-audit.

Ayrshire and Arran
Primary care (N=45; N=34)
– Driving status not documented: 29 (64); 21 (62); 2 (IP); p=0.81
– Driving advice not documented: 25 (56); 19 (56); 0; p=0.98
– Driving advice not recalled by patient: 4 (9); 5 (15); 6 (NI); p=0.49#
Secondary care (N=46; N=30)
– Driving status not documented: 31 (67); 18 (60); 7 (IP); p=0.51
– Driving advice not documented: 30 (65); 18 (60); 5 (IP); p=0.55
– Driving advice not recalled by patient: 7 (15); 4 (13); 2 (IP); p=1.00#

Fife
Primary care (N=5; N=6)
– Driving status not documented: 5 (100); 2 (33); 67 (IP); p=0.06#
– Driving advice not documented: 1 (20); 1 (17); 3 (IP); p=1.00#
Secondary care (N=13; N=10)
– Driving status not documented: 11 (84); 9 (90); 6 (IP); p=0.70
– Driving advice not documented: 8 (62); 4 (40); 22 (IP); p=0.41#
Primary and secondary care (N=18; N=16)
– Driving advice not recalled by patient: 2 (11); 9 (56); 45 (NI); p=0.01*#

continued


Comment: Documentation of whether appropriate driving advice had been given was used in EPI-SNAP as an indicator of good practice in referral to first seizure clinics57. This indicator has also been adopted within the national standards for neurological services being developed by NHS Quality Improvement Scotland58.

These results indicate that statistically significant improvement was achieved in documented driving advice, and in patients' recollection of that advice, in one out of the four areas.

SNAP-CAP

Detailed work on outcomes is still to be completed and will not be available until the end of September 2009. However, the SER does give some (limited) figures about bundle compliance and the measures recorded on the Extranet. The reported improvement in median scores from December 2006 – January 2009 was:
– CURB65 score recorded: 11%–85%
– Antibiotics in 4hrs: 85%
– Oxygen therapy: 78%–87%
– Bundle compliance: 5%–35%


Table 16: EPI-SNAP measurable outcomes – continued

For each first seizure clinic site and audit topic: N (%) at baseline audit 2007; N (%) at re-audit 2008–2009; percentage change; p-value (chi-square), baseline to re-audit.

Grampian
Primary care (N=19; N=17)
– Driving status not documented: 10 (53); 12 (71); 18 (NI); p=0.27
– Driving advice not documented: 10 (53); 11 (65); 12 (NI); p=0.46
– Driving advice not recalled by patient: 3 (16); 6 (35); 19 (NI); p=0.26#
Secondary care (N=12; N=37)
– Driving status not documented: 5 (42); 15 (41); 1 (NI); p=0.95
– Driving advice not documented: 4 (33); 13 (35); 2 (NI); p=1.00#
– Driving advice not recalled by patient: 0 (0); 8 (22); 22 (NI); p=0.17#

Tayside
Primary care (N=42; N=40)
– Driving status not documented: 33 (79); 24 (60); 19 (IP); p=0.07
– Driving advice not documented: 34 (81); 19 (48); 33 (IP); p<0.01*
– Driving advice not recalled by patient: 19 (45); 4 (10); 35 (IP); p<0.01*#
Secondary care (N=23; N=10)
– Driving status not documented: 20 (87); 7 (70); 17 (IP); p=0.25
– Driving advice not documented: 14 (61); 5 (50); 11 (IP); p=0.56
– Driving advice not recalled by patient: 6 (26); 1 (10); 16 (IP); p=0.40#

IP=improvement in care
NI=no improvement in care
*Statistically significant
#Fisher's exact test used when small numbers (less than 5 per cell)
Source: All data taken from EPI-SNAP first seizure clinic audit results. Table produced by Evaluation Team and p-values calculated by Evaluation Team.
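Where cell counts were small (the rows marked #), the Evaluation Team used Fisher's exact test in place of chi-square. A stdlib-only sketch of the two-sided test (our illustration, not the team's code) sums all hypergeometric outcomes no more probable than the observed table:

```python
from math import comb

def fisher_exact_p(table):
    """Two-sided Fisher's exact test for a 2x2 table."""
    (a, b), (c, d) = table
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, col1)
    def prob(k):
        # P(top-left cell == k) under fixed row and column margins.
        return comb(row1, k) * comb(n - row1, col1 - k) / denom
    p_obs = prob(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    # Tiny tolerance guards against floating-point ties.
    return sum(prob(k) for k in range(lo, hi + 1)
               if prob(k) <= p_obs * (1 + 1e-9))

# Fife, primary and secondary care combined: driving advice not
# recalled by 2 of 18 patients at baseline vs 9 of 16 at re-audit.
print(round(fisher_exact_p([[2, 16], [9, 7]]), 2))  # 0.01
```

This reproduces the starred Fife result in the table (p=0.01), consistent with the # footnote.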


Table 17: IBD selected measurable outcomes

Number of patients with ulcerative colitis and Crohn's disease meeting audit standards in different sites. For each audit standard: count meeting the standard in 2006 and 2008; percentage change baseline to re-audit; p-value (chi-square).

1. Dedicated GI wards (sites)
– Action planning sites: 11 of 23 (2006); 19 of 23 (2008); +35% (IP*); p=0.013
– Non action planning sites: 97 of 132 (2006); 101 of 127 (2008); +7% (IP*); p<0.0001
– Overall: 108 of 155 (2006); 120 of 150 (2008); +10% (IP*); p=0.038

2a. IBD nurse on site (sites)
– Action planning sites: 8 of 23 (2006); 12 of 23 (2008); +17% (IP*); p<0.0001
– Non action planning sites: 83 of 131 (2006); 88 of 127 (2008); +6% (IP*); p=0.005
– Overall: 91 of 154 (2006); 100 of 150 (2008); +8% (IP); p=0.172

2b. Patient visited by IBD nurse (patients)
– Action planning sites: 80 of 626 (2006); 103 of 617 (2008); +4% (IP); p=0.052
– Non action planning sites: 763 of 3,348 (2006); 929 of 3,070 (2008); +7% (IP*); p<0.0001
– Overall: 843 of 3,974 (2006); 1,032 of 3,687 (2008); +7% (IP*); p<0.0001

3. Patient given prophylactic heparin (patients)
– Action planning sites: 359 of 626 (2006); 453 of 617 (2008); +16% (IP*); p<0.0001
– Non action planning sites: 1,866 of 3,349 (2006); 2,255 of 3,070 (2008); +17% (IP*); p<0.0001
– Overall: 2,225 of 3,975 (2006); 2,708 of 3,687 (2008); +17% (IP*); p<0.0001

4a. Stool sample sent for standard stool culture (patients)
– Action planning sites: 246 of 476 (2006); 306 of 476 (2008); +12% (IP*); p<0.0001
– Non action planning sites: 1,424 of 2,546 (2006); 1,459 of 2,323 (2008); +7% (IP*); p<0.0001
– Overall: 1,670 of 3,022 (2006); 1,765 of 2,799 (2008); +8% (IP*); p<0.0001

4b. Stool sample sent for CDT (patients)
– Action planning sites: 177 of 476 (2006); 269 of 476 (2008); +20% (IP*); p<0.0001
– Non action planning sites: 1,141 of 2,547 (2006); 1,275 of 2,323 (2008); +10% (IP*); p<0.0001
– Overall: 1,318 of 3,022 (2006); 1,544 of 2,799 (2008); +11% (IP*); p<0.0001

5. Timetabled meetings (gastroenterologists and colorectal surgeons) (sites)
– Action planning sites: 15 of 23 (2006); 18 of 23 (2008); +13% (IP); p=0.326
– Non action planning sites: 103 of 132 (2006); 91 of 127 (2008); –6% (NI); p=0.236
– Overall: 118 of 155 (2006); 109 of 150 (2008); –3% (NI); p=0.485

IP=improvement in care
NI=no improvement in care
*Statistically significant
Sources: Evaluation of the UK IBD audit action planning visits (2009) and the UK IBD audit, 2nd round (2008) report. (Table from IBD team, p-values calculated by Evaluation Team, using a chi-square test. The Evaluation Team used the chi-square test to facilitate comparison across all the EwQI projects. But we recognise that this may, in some cases, have stretched the usefulness of this test. The IBD project team do not believe that it is entirely appropriate for the data that they have generated.)

IBD


Comment: As reported in the Evaluation of the UK IBD audit action planning visits (September 2009), virtually all these indicators improved between the two audit rounds in the visited sites. Our analysis also indicates that the majority of these improvements were statistically significant. The team commented that most improvements were matched by similarly sized improvements in non-visited sites, though there are exceptions (mainly in organisation and structure) where improvements in visited sites were considerably more marked than for non-visited sites. But the team also noted that these improvements need to be balanced against the fact that the first-round performance of the visited sites was generally lower, probably because of an over-representative sample from more resource-poor areas in IBD. Their final conclusion was, however, cautiously positive: 'there is a signal from these results that informal site visits may be beneficial in improving service quality above that of simple feedback of results and access to online quality improvement tools'59.

PEARLS

At the time of writing, the PEARLS project was not complete and had not generated any quantitative data on patient outcomes or any data about changes in practice.

3.4 Reported changes in other outcomes

In this section, we summarise what the project teams reported on outcomes other than measurable changes in patient care (and changes in clinical engagement, which are covered in chapter 4). These other outcomes included:
1. Increase in the capacity and infrastructure for quality improvement of the professional bodies involved in the project (table 18)
2. Increase in the knowledge base (table 19)
3. Sustainable arrangements for improving quality of care in this field of medicine (table 20)
4. A transferable system of quality improvement to other areas of medicine (table 21)
5. An increase in knowledge and understanding of quality improvement in healthcare (table 22)
6. Clinicians' opinions of the EwQI projects (table 23)


Table 18: Increase in the capacity and infrastructure for quality improvement of the professional bodies involved in the project

Colorectal Cancer
– The Association of Coloproctology of Great Britain and Ireland (ACPGBI) has supported the audit since it began (in 2001) and continued this support during the Health Foundation project. In that time, it has encouraged debates among its members that have led to changes in the audit, such as the introduction of electronic data collection and the essential data set, as well as a move to open reporting.
– The ACPGBI has worked with the former Healthcare Commission to secure funding for continuation of the audit and the existing infrastructure on which it is based at Imperial College.

Self-harm (see also in conjunction with POMH-UK)
– The project made the Royal College of Psychiatrists' Centre for Quality Improvement (CCQI) aware of the need to promote projects like Self-harm within the college as a whole. The project influenced the follow-up programme – the psychiatric liaison accreditation network (PLAN). To set this up CCQI approached the college's Liaison Faculty from the outset and worked with it to develop PLAN. The faculty chair is the co-chair of the PLAN steering group and is very supportive of the work.
– PLAN will work with the college's policy unit to develop a position statement arguing for better funding for mental health services.
– The project team provided an argument for the 'Linking physical and mental health' section of the Royal College of Psychiatrists' Fair Deal Campaign manifesto – and also helped to write the section on service user involvement.
– It also raised the profile of user involvement within the college.

continued


Table 18: Increase in the capacity and infrastructure for quality improvement of the professional bodies involved in the project – continued

POMH-UK
– The CCQI already provided a supportive environment that allowed POMH-UK and Self-harm to draw on in-house expertise and support from the peer network of QI projects. In turn, the project teams were able to influence others in the CCQI.
– Within the CCQI, POMH-UK and the Self-harm project have led the way on service user involvement.
– The understanding that clinical leads have been 'invaluable' to the success of POMH-UK may influence how future projects within the CCQI are set up.
– The team gained expertise in developing audit and providing rapid and accurate feedback which was shared across CCQI.

NCROP
– The project built on and expanded the British Thoracic Society's (BTS) experience of peer review, and the Royal College of Physicians' experience of audit and benchmarking studies.
– It provided opportunities, through BTS winter meetings, for junior colleagues to learn about QI and how to analyse data, submit abstracts, write for publication and present findings.
– The project sits within the Clinical Standards Department at the Royal College of Physicians, which is headed by the Clinical Vice President. This demonstrates a commitment to support this work, and provides an infrastructure to do so.

PoISE
– The Royal College of Nursing (RCN) has been in a state of flux during the life of PoISE, which has impacted on how the project was perceived.
– RCN staff attached to PoISE did, for a period, contribute to the infrastructure of the RCN with respect to implementation, but otherwise the project did not have a wider impact on the RCN's capacity and infrastructure for QI.
– But the previously separate RCN QI programme recently merged with a learning and resource team, which includes an audit component. Following these changes, the PoISE team has been in discussions about how its findings can be disseminated through the college's networks, and whether the implications from PoISE are relevant to the new programme's work.

SNAP (in general)
– The direct involvement and support of the Royal College of Physicians, Edinburgh (RCPE) and the Royal College of Physicians and Surgeons of Glasgow was through an overarching strategic group, and project staff were recruited and employed by RCPE on behalf of the two colleges.
– But there was a 'lack of a pre-existing infrastructure within the Colleges to support a project of this nature and scale'60.
– The SNAP projects did raise the profile of QI within the colleges: a meeting of the Bi-collegiate Physicians' Quality of Care Committee with the specialist societies in Scotland in October 2007 was devoted to promoting a culture of quality improvement within the medical profession, and specific proposals were developed.
– Against this background the team made a series of recommendations about the need for the colleges to undertake further QI projects, to use professional networks to promote clinical standards and QI (as well as education and training), and to work with specialist societies to share knowledge and experience.

continued


Comment: It is not possible to generalise about the positions of the royal colleges and professional bodies involved in the EwQI either before or after the initiative. They started with differing capacities to support QI, and some were better able than others to support the EwQI projects and to learn from the process. There is some indication that understanding of QI has increased, but it is unclear how much this change can be attributed to the EwQI. These issues are discussed further in chapter 5.


Table 18: Increase in the capacity and infrastructure for quality improvement of the professional bodies involved in the project – continued

SNAP-CAP
– In addition to all the above, SNAP-CAP 'had an impact on the capacity and infrastructure' of the British Society for Antimicrobial Chemotherapy (BSAC) and the BTS.
– BSAC adopted 'breakthrough collaborative' as its basic model for quality improvement, and has established an annual Quality Improvement Workshop to facilitate sharing of experience.
– BTS invited a member of the project team to talk at one of its meetings and liaised with the project on developing the new CAP guidelines.

IBD
– There was already a strong infrastructure within the Clinical Standards department at the Royal College of Physicians, including the Clinical Effectiveness and Evaluation Unit (CEEU).
– Beneficial links were developed with other departments and units within the college, such as the Health Informatics Unit and the College Press Office.
– The audit was a direct catalyst for the development of the National Service Standards for the healthcare of people with IBD.
– Improved understanding of the role of patients in audit developed within the college.
– Improved understanding about the need for a communications strategy in audit, and the importance of establishing this upfront, also developed within the college.

PEARLS
– There was a 'lack of infrastructure within the Royal College of Midwives to support a major project'.
– Understanding of QI within the college has grown and informed the work which the Royal College of Midwives does with other organisations on quality and audit.
– The lessons learned from supporting this project will be taken forward and used to advise others planning major large-scale studies.

Sources: As identified in table 4


Comment: This table lists publications from the projects (other than audit and project reports), as well as presentations and posters, identified by the project teams as of September 2009. This is a snapshot only, and it is reasonable to expect that this list will expand as the projects all complete and write up their findings. We have not attempted any further analysis at this point.


Table 19: Increase in the knowledge base

Colorectal Cancer
– Eight publications in professional journals
– Three published abstracts
– Seven presentations at conferences and meetings61

Self-harm
– Publication of Better services for people who self-harm: quality standards for healthcare professionals (2006). London: Royal College of Psychiatrists Centre for Quality Improvement.
– Four articles in the Emergency Nurse Journal
– One magazine article in Mental Health Today
– Eight reports (including baseline audit and re-audit reports)
– Eleven presentations at RCP Quarterly Meeting, national conferences and other meetings
– Published seven types of materials for staff working with people who self-harm, including an information leaflet and online training exercise manuals
– Published four resources for service users

POMH-UK
– Five papers in peer-reviewed journals including British Journal of Psychiatry, Schizophrenia Bulletin and Acta Psychiatrica Scandinavica
– Thirteen conference presentations and posters, including talks at the ACNP, ECNP, the International Congress of Schizophrenia Research and the Maudsley Forum

NCROP
– Eight reports62
– Six papers in peer-reviewed journals
– Twenty-eight abstract presentations at nine conferences

PoISE
– Four presentations at national and international conferences (2006–08), such as the RCN International Research Conference and the International Guideline Implementation Network (GIN) conference
– Three posters presented on the PoISE project (2007–09) at the Getting evidence into practice: Quality and Safety in Healthcare conference in Berlin, the first invitational Knowledge Translation Conference in Canada, and the Knowledge Utilisation Colloquium in Stockholm

EPI-SNAP
– Two posters at conferences

SNAP-CAP
– Three conference presentations
– Six posters at conferences
– Evidence synthesis publicly available on the SNAP-CAP website at www.scottishmedicines.org.uk/smc/7280.221.245.html; this includes considered judgements and evidence tables for items in the care bundle, and justification for items not in the care bundle

IBD
– Six journal papers and articles
– Nine conference presentations
– Eleven publications in which the audit data was used for training or professional development

PEARLS
– No information was available about publications or conference presentations
– Training DVD

Sources: As identified in table 4


Table 20: Sustainable arrangements for improving the quality of care
(For each EwQI project: ongoing support for audit or measurement system; follow-on programmes; standards identified and further developed; material/tools from the project made widely available; associated policy changes which the team claim were influenced by the project.)

Colorectal Cancer
– Ongoing support for audit or measurement system: Funding obtained from HCC (now the Care Quality Commission) until 2009
– Standards identified and further developed: Yes – and an ‘essential data set’ of 40 data items developed to simplify data collection in future audits
– Material/tools made widely available: Validated risk adjustment model; NBOCAP database

Self-harm
– Ongoing support for audit or measurement system: No
– Follow-on programmes: Psychiatric liaison accreditation network (PLAN)
– Standards identified and further developed: Yes – and in Self-harm quality standards for health professionals, published on the web
– Material/tools made widely available: Yes – through Self-harm website
– Associated policy changes: The Academy of Medical Royal Colleges’ report on meeting urgent mental health needs in the general hospital, and forthcoming No Health without mental health report. The Royal College of Psychiatrists’ Fair Deal Campaign. PLAN will work with the college’s policy unit to develop a position statement arguing for better funding for mental health services.

POMH-UK
– Ongoing support for audit or measurement system: Yes – funded through annual trust subscription
– Follow-on programmes: Trust subscription system
– Standards identified and further developed: Yes – and an ongoing system developed to support future POMH topics
– Material/tools made widely available: Yes – through POMH-UK website
– Associated policy changes: Mentioned in the Healthcare Commission’s (now the Care Quality Commission) report Talking about medicines; the management of medicines in trusts providing mental health services (2007). Mentioned in the national report Risk, rights, recovery (Mental Health Act Commission, Twelfth Biennial Report 2005–07). Work with the Healthcare Commission (now the Care Quality Commission) on a self-assessment tool for mental health trusts for medicines management, including a standard around participation in POMH programmes.

NCROP
– Ongoing support for audit or measurement system: Not yet – failed to obtain funding from HQIP but pursuing this further
– Standards identified and further developed: Yes – feeding into the National Strategy for COPD
– Material/tools made widely available: Plan to make a COPD audit web-based data collection tool available for teams undertaking local audits in the intervals between the full National COPD Audits
– Associated policy changes: Development of the National Clinical Strategy for COPD

PoISE
– Ongoing support for audit or measurement system: No
– Standards identified and further developed: Yes – and details made available on the web
– Material/tools made widely available: Plan to make products (eg: PDSA packages, data collection forms, guideline packages) available via the RCN Learning and Resource Unit and other websites

EPI-SNAP
– Ongoing support for audit or measurement system: No
– Follow-on programmes: Managed clinical networks for epilepsy will continue audit of driving advice
– Standards identified and further developed: Yes – feeding into NHS QIS neurological standards
– Material/tools made widely available: Referral template available in SCI gateway to be used by all referrers in primary care; stand-alone referrals available for secondary care referrers
– Associated policy changes: NHS QIS neurological standards (re documentation of driving advice on referral); BPQCC work to promote QI

SNAP-CAP
– Ongoing support for audit or measurement system: No
– Follow-on programmes: Adopted by Scottish Antimicrobial Prescribing Group
– Material/tools made widely available: Yes – SNAP-CAP care bundle also adapted at sites outside Scotland
– Associated policy changes: SNAP-CAP model adopted by SAPG for other antimicrobial stewardship interventions

IBD
– Ongoing support for audit or measurement system: Yes – funding obtained from 2010 National Clinical Audit and Patient Outcomes Programme
– Follow-on programmes: Continuation of audit
– Standards identified and further developed: Yes – feeding into ongoing plans for further use of standards
– Material/tools made widely available: Plan to produce a self-assessment web-based tool for IBD services
– Associated policy changes: IBD audit included in the 2009 Annual Health Check process; catalyst for the development of the National Service Standards for IBD (Feb 2009)

PEARLS
– Ongoing support for audit or measurement system: No
– Standards identified and further developed: Yes – with plans for further use of standards
– Material/tools made widely available: Plan to make training package available
– Associated policy changes: Development of Safer childbirth (2008), which has led to inclusion of perineal repair in mandatory training

Sources: As identified in table 4


Comments: The EPI-SNAP team point out that sustainability depends on three favourable circumstances: ongoing supply of resources, influence on health policy, and measurable benefit. Table 20 illustrates what the teams told us about the projects’ impact on the first two of these; measurable benefits were discussed in section 3.3. These findings draw on the project teams’ reports of two audit cycles, usually over one or two years. Given this timescale, it is too early to make well-founded comments about the long-term sustainability of these changes. However, some appear to be promising. For example, POMH looks set to be self-sustaining through trust subscription by 2010. And in PoISE, trust participants committed themselves to continuing with 35 activities beyond the life of the project: their plans to continue or extend activities included rolling out changes in fasting practice to other clinical areas, continuation of the web-based tool as an educational activity, and providing ongoing reminders to sustain change63.

Participating in the EwQI has also had wider benefits for project team members, including new skills in using data or developing communications, being given a platform to speak with national decision makers and developing training materials. These issues are discussed further in chapter 5.


Table 21: A transferable system of quality improvement to other areas of medicine

Colorectal Cancer
– The NBOCAP risk adjustment model for predictive mortality has also been applied to upper gastrointestinal surgery.
– The methods of data collection and the database have also been used to create a database of patients with inflammatory bowel disease undergoing restorative proctocolectomy.

Self-harm
– The QI methods adopted in Self-harm (service user involvement, self-reviews, peer reviews, accreditation, providing interventions) are not, in themselves, new, and are already used in other areas. But using all these together in one initiative is not that common.

POMH-UK
– The POMH-UK model could be transferred to acute secondary care, although it would be important to apply the lessons from POMH-UK (eg: focus on small manageable problems in prescribing for a particular condition, such as CAP or Parkinson’s disease).
– The potential for transferring the model to primary care is less certain because the ‘relevant systems and drivers’ are very different. For example, the development of audit methodologies that required tracking individual patients across the primary/secondary care interface would be challenging and resource-intensive.
– However, the POMH-UK team had made contact with the Royal College of General Practitioners, with whom it was in discussion about how a joint programme with primary care might be taken forward in due course.
– Formal links established with the National Patient Safety Agency.

NCROP
– NCROP methodology: lessons from the peer review process are of interest as a tool to support quality improvement in other teams (eg: the National Lung Cancer Audit team).
– The National COPD Audit methodology, specifically with regard to engaging primary care and patients, is of interest to other national audit project teams, such as IBD.
– There is interest in the National COPD Audit in Europe: in Austria, to support its own National COPD Audit; and in relation to a wider European audit (to be discussed at the European Respiratory Society Congress in Vienna, September 2009).
– There is also interest in the COPD Audit from colleagues in New South Wales, Australia.

PoISE
– Planning to make the PDSA guide freely available through a website.
– The fasting data collection sheet could be used as an audit tool by trusts. This will also be made freely available.
– A web resource has been launched for all RCN members.
– The guideline pack (including implementation guidance) could be used by trusts.

EPI-SNAP
– The before-and-after double audit cycle, using a custom-built database, could be used by other QI projects.

SNAP-CAP
– Two specialist groups (gastroenterologists and rheumatologists) are looking at the IHI Extranet as a possible model for data collection and feedback. However, it is not clear to what extent this is attributable to SNAP-CAP.
– The project team plans to publish a review of its experience developing a care bundle, which will include thoughts on the kind of disease areas or care settings where this might be an appropriate model for QI.
– SNAP-CAP methods are being transferred to other infection areas to support antimicrobial stewardship.

IBD
– Model action plan: the method can be transferred to other QI projects but would need to be supported by stronger mechanisms for supporting local action planning and implementation.
– Supported action planning visits: this might be a relevant mechanism for other projects to employ. It is cheaper, easier to organise and less time-consuming than formal peer review; and it embraces the concept of a ‘clinical champion’, which is critical to driving QI. It also enables/forces local teams to consider their action plan carefully and what they are going to do to implement it.

PEARLS
– The model of professional education and the use of standardised evidence-based material will be of interest to all of those engaged in professional education of midwives, doctors and other professional staff.

Sources: As identified in table 4

Comment: This summary and the one in table 22 below suggest that there have been gains from the EwQI in terms of understanding of QI and the development of models for use in other settings. We have not at this stage been able to analyse these gains further.


Table 22: An increase in knowledge and understanding of quality improvement in healthcare

Colorectal Cancer
– Data and published work on audit findings and the NBOCAP methodological approach.

Self-harm
– Online training exercises – focused largely on service user viewpoints.

POMH-UK
– Publications in scientific journals, and posters and presentations at national and international conferences.
– Training workshops on ‘Bringing about Change’ in local teams and academic detailing for pharmacists – these aimed to educate clinical staff in the issues around high-dose and combination prescribing.
– Presentation at RCP annual meeting (July 2009).

NCROP
– Publications in academic, peer-reviewed journals; presentations at national and international conferences.

PoISE
– Raising awareness of guidelines among the participating clinicians was an explicit part of the web-based/opinion leader arm of the project: the impact of this was assessed through participant interviews.
– More generally, the project itself raised awareness.

EPI-SNAP
– Referrers were made more aware of the importance of issuing driving advice64.

SNAP-CAP
– Evidence of measurable increases in knowledge and understanding of quality improvement by participants64
– Plan to work on measurable capacity building with NHS Education Scotland, ensuring that QI is an integral component of learning needs for antimicrobial management teams, and continuing to work with the Doctors Online Training System.

IBD
Practical experience of:
– Model action planning.
– Supported action-planning visits.
– Developing national standards.

PEARLS
– Training DVD.

Sources: As identified in table 4

Table 23: Clinicians’ opinions of the EwQI projects

Colorectal Cancer
– Feedback from surgeons suggests that they find the data useful and of ‘personal and clinical advantage’.
– Results of a survey (of 171 trusts and 549 consultants) undertaken by the project team showed that 82% of the 105 consultants who had read the annual audit (NBOCAP) thought that it was useful as a benchmark and to raise awareness within units about surgical outcomes.

Self-harm
– ‘We believe that improvements were made by a number of the services that worked with us and we know that many people who used our change interventions found them to be useful.’
– 19 of the 22 teams that answered the Wave 1 evaluation survey undertaken by the project team said that the programme had helped them make improvements to the care of people who self-harm.
– At the end of online training, participants were asked whether the exercise was helpful and whether it might change the way they work with people who self-harm. The majority of respondents answered positively to these questions.
– Additional anecdotal feedback was that people found these exercises easy to use, quick and informative.
– The information leaflet ‘Working with People Who Self-harm’ received positive feedback.
– The aggregated report was rated as useful, but slightly less so than the local reports.

POMH-UK
– ‘The general view given in telephone feedback events is that trusts are happy with the work that POMH-UK does. No major improvements were suggested.’
– Several interventions were listed by the POMH-UK teams as having received positive feedback from clinicians. These included the ready reckoner, workbook and lifestyle management support pack, and academic detailing training.
– The audit reports and feedback were found to be useful.

NCROP
– ‘What is clear is that the majority of participants in the peer review group and a sizeable minority in the control group found benefit within NCROP. The former for a wide range of reasons that include the provision of a quality framework, sharing of good practice and the bringing together of commissioners and providers to the team focused outcomes of better morale and self-awareness that was achieved53.’
– Feedback on the National COPD Audit 2008 was ‘overwhelmingly positive’. There was only one negative comment, from a participating clinician who perceived the audit process to be ‘disheartening’.
– The SER includes comments and quotations from several influential figures praising the audit. It is worth noting, however, that this praise is directed at the audit and acknowledges the effort that went into collecting and analysing such data, rather than talking about improvements in patient care.

PoISE
– The SER states that PDSA cycles, one of the three improvement interventions, were considered useful by participants.
– Of the local investigators, 44% felt that PoISE had improved fasting times.

EPI-SNAP
– The SER says that while there was good uptake of the first seizure referral intervention in some areas, in others there was resistance from both primary and secondary care clinicians, some of whom thought that issuing driving advice was ‘not their job’.

SNAP-CAP
– BTS said that SNAP-CAP was ‘an example to follow’ – although the SER does not specify which parts of the programme were particularly praised.
– Seven participating local teams were asked to complete a ‘Maturity Matrix’, which included questions about whether the programme was being implemented as expected and whether it was having an effect on patient care. The project team concluded: ‘responses from the Maturity Matrix indicate that we have achieved the basic level of progress, and for three hospitals that firm progress has been achieved’.
– Contributors believe that ‘the programme is having an impact on clinical practice in Scotland’.
– When asked what difference the project made, the central project team comment that it is ‘unlikely that changes in all bundle processes … occurred for other reasons’, although ‘changes in outcome measures could have multiple causes, including bias and confounding … from other measures introduced at the same time as CAP.’

IBD
– ‘We found that sites were keen and eager to become involved in the action planning process. They were happy to present to their peers on aspects of their service.’
– The SER reports feedback from a clinical lead at one hospital: ‘I think the site visit process is certainly worthwhile’.
– The SER includes quotations from several clinicians praising the project. The focus of the praise is on the quality of the audit, but there is also recognition that it was an important first step in ensuring that services of appropriate quality are available to IBD patients throughout the UK: ‘I am very impressed with the IBD community’s engagement with the UK IBD Audit … the audit has shown that there are huge opportunities to improve the care for people with IBD throughout the UK.’

PEARLS
– Facilitators ‘demonstrated high levels of commitment and enthusiasm in the main’.
– Facilitators have stated that there had been a positive change in practice.
– ‘There are early indicators and anecdotal evidence that PEARLS has been successful in providing training that changes practice.’

Sources: As identified in table 4

Comment: The participating clinicians’ opinions of the EwQI projects are instructive. Many of those who were engaged in these projects are experienced and knowledgeable professionals in their fields, and their views on the usefulness and value of the projects are likely to carry weight with their peers and with policy makers more generally. These issues are considered further in chapter 4.

3.5 Effort expended by the project teams

In order to assess the total effort expended in each project we needed details of the time and effort expended by the central project team, as well as that expended by local participating teams. But it proved difficult to collect cost data, or details of the time spent by those involved in the projects, from the project teams. The Evaluation Team ran sessions at initiative-wide ‘away days’ and visited project teams to provide support in obtaining data on cost consequences. But such is the paucity of cost information in the NHS, and so limited was the priority given by the teams to collecting data on the time spent, that we can offer only a partial insight into the effort required by the project teams, either centrally or locally in the participating sites. Nor is the data available in a way that facilitates contrast and comparison. However, the scale of the effort (if not the detail) is broadly apparent from the data presented below. The majority of these data are about time commitments; only two project teams provided any cost data, and only one of them did so comprehensively.

Colorectal Cancer

No quantitative data were provided in the SER about the time spent by the central team, although the range and scope of the team’s published papers indicate the effort and resource that went into developing the audit and ensuring that the data reported annually were of high quality. This commitment to producing high quality audit data was shared by other project teams – as the POMH-UK team said in their SER, ‘the quality of data is paramount’.

At local trust level, the main activity was participation in the audit. There was no other improvement intervention in this project. Data collection for the audit moved from a paper-based to a web-based system during the course of this project. The NBOCAP report 2005 provides accounts of both these approaches, and from these it is apparent that there was far less time pressure once electronic data collection had been introduced.

The first visit to the database for a new patient is following an operation for colorectal cancer. For those who, for whatever reason, do not have a surgical procedure, then data input follows the MDT [multidisciplinary team] discussion (vide infra). At present the database is separate from the PAS system and so demographic details are needed at this stage. For those unfamiliar with the system these are minimal, hence the term ‘minimum dataset’. Demographic details, referral type, tumour characteristics, etc. are all entered and the audited time for such an activity is less than ten minutes. By the time the post-operative discussions are held at the MDT, the patient will have been discharged. Following the MDT meeting the dataset is visited for the second, and last, time. Details of the pathology are entered, the date of discharge or death is noted, post-operative complications entered, and decisions with regard to adjuvant therapy put into the relevant section. Again, this visit for an individual patient is never more than 10 minutes and can be done as the letters are dictated following the MDT meeting. Patients not undergoing surgery will have been discussed at these meetings and their data are similarly entered at this stage .... The time taken for this activity is as minimal as the dataset ...65

In line with this the SER notes that the estimated cost impact per patient for the NBOCAP data collection is approximately £15–£20 (based on taking 15–20 minutes per patient to collect and enter data). The NBOCAP report 2007 summarises the findings of a survey of colorectal surgeons, responses to which were received from 159 of 549 consultants contacted (29%) and 117 of 171 hospitals (66%). Of the 74 consultants (46.5%) who were not currently submitting data, the main reasons given were lack of IT support (23.6%) and lack of funding (19.6%); only 18.9% said they lacked dedicated audit time66.

Self-harm

No details are given in the SER about the time spent by the central project team, although we were told in interviews and meetings that the efforts involved were very considerable, and the SER confirms this when it talks about ‘three years of hard work, a substantial grant and a lot of planning and working across boundaries.’

However, the SER does contain some rough estimates of the time spent by service users and local participating teams on developing and implementing interventions and making changes, and the tables which summarised this information are reproduced below. The project team commented that the time spent varied a lot, depending on how much the local team decided to do. Many of the interventions provided to teams were ready to use, but some teams developed their own changes from scratch. Based on these rough estimates the project team estimated that the time commitment for the entire team over an 18-month period was approximately 116 days.

Table 24: Guide to time commitment of service users in the Self-harm project
(activity: estimated number of days per user)
– Attend introductory workshop (June/July 2007): 1 day
– Contact with a local user group or service to encourage responses to the service user survey (between July–Oct 2007), by telephone, email, writing or in person: 1 day
– Meet with project team to discuss results of audit and prepare for peer reviews (Jan/Feb 2008): 0.5 day
– Attend a peer review with team mates (Jan/Feb 2008): 1 day
– Receive a peer review visit with team mates (Jan/Feb 2007): 1 day
– Attend a feedback workshop (Apr 2008): 1 day
– Attend additional project team meetings: 1.5 days
– Extra reading or preparation: 1 day
– Total: 8 days

Source: Self-harm SER

Comment: These are figures per service user. But, as the project team noted, ‘Most teams should have had two service users who will share responsibilities, so the average time spent in total could be estimated as about 15 days’.


Table 25: Rough estimate of time commitment required for each local team member in the Self-harm project (each team had about four clinical team members)
(activity: estimated number of days per team member)
– Attend introductory workshop (June/July 2007): 1 day
– Help with data collection: 2 days
– Help with interventions and making changes: 4 days
– Project team meetings: 3 days
– Attend a peer review with team mates (Jan/Feb 2008): 1 day
– Receive a peer review visit with team mates (Jan/Feb 2007): 1 day
– Attend a feedback workshop (Apr 2008): 1 day
– Admin (arranging travel, etc): 1 day
– Additional reading, communication or preparation: 1 day
– Total: 15 days per team member

Source: Self-harm SER

Comment: These are figures per team member. As the project team pointed out, some team members will be more active than others, but four team members spending 15 days each equals 60 days.

Table 26: Rough estimate of time commitment required for each local team lead in the Self-harm project
(activity: estimated number of days per team lead)
– Complete joining form and secure funding (except for Wave 1 teams, which did not pay): 3 days
– Get team together and get sign-up from chief executive: 3 days
– Attend introductory workshop (June/July 2007): 1 day
– Communicate with the central project team: 4 days
– Communicate/meet with the local project team: 5 days
– Communicate with the local service users: 3 days
– Prepare for data collection period: 2 days
– Undertake data collection: 4 days
– Receive local report: 1 day
– Use of interventions and making changes: 10 days
– Meet with project team to discuss results of audit and prepare for peer reviews (Jan/Feb 2008): 1 day
– Attend a peer review with team mates (Jan/Feb 2008): 1 day
– Receive a peer review visit with team mates (Jan/Feb 2007): 1 day
– Prepare presentation for feedback workshop (Apr 2008): 1 day
– Attend a feedback workshop (Apr 2008): 1 day
– Attend additional project team meetings (including preparation): 5 days
– Additional reading, communication or preparation: 2 days
– Total: 48 days

Source: Self-harm SER

POMH-UK

The SER provides no quantified details about the effort and time required by the central team, but does say that POMH-UK recruited 43 members in 2008, each paying an annual membership fee of £3,500+VAT, or £10,000+VAT for three years. It also reports that so far the project has cost £401,830 to run, and £141,944 has been collected in subscription fees. POMH-UK is on track to becoming self-sufficient by 2010. Fees for trusts were based on a long-term, ambitious target of continued engagement of 50 mental health trusts, which it was near to reaching.

In line with a similar finding in the Colorectal Cancer project, the SER notes: ‘We do not think that data collection is particularly onerous for trusts; particularly as it forms part of the audit work that needs to be done anyway.’ However, other aspects of the measurement and feedback processes were time-consuming. The development of data collection tools ‘required input from local teams at regional meetings, along with meetings with specialist advisers. If trusts were to do this on their own, the whole exercise would duplicate effort, be costly and of variable quality, and not allow for benchmarking.’ And with regard to the data analysis and cleaning undertaken by the central project team, the SER notes: ‘Careful examination of submitted data for data entry errors and data cleaning is extremely time-consuming but necessary to ensure high quality data ... the quality of data is paramount’.

Overall, the SER reports that local participation costs, other than the subscription fee, varied. While this variation was not quantified, the reasons for it are given: ‘data collection methods and the data collection period vary, as do the team members that need to be involved. In addition, trusts generally choose their own sample size, which is often dependent upon trust capacity. While some trusts were expected by the project team to put in all eligible cases, others were expected to include just a sample.’

NCROP

The SER provides no quantified details about the effort and time required at either central or local level. However, in the SER, and also in the Final report of the qualitative sub-study (2008), there are detailed qualitative descriptions of the scope of the audit undertaken, of the efforts of the central project team to support data collection, and of the efforts of local teams to carry out peer review visits and fill in change diaries. We summarise these briefly below.

The SER notes that the audit achieved participation from all sectors – primary and secondary care NHS organisations, GPs and patients.
– 98% of acute NHS trusts participated in the audit of COPD resources and organisation of care element.
– 96% of acute NHS trusts participated in the clinical audit of COPD exacerbations.
– 73% of primary care organisations participated in the survey of COPD resources and organisation of care.
– 2,728 general practices returned a survey of COPD care (best estimate of participation is 43%).

Page 65: Evidence: How do you get clinicians involved in …...How do you get clinicians involved in quality improvement? v CHAPTER 4 Engaging with the initiative 58 4.1 Introduction 58 4.2

– 2,864 surveys were received from patients (bestestimate of response rate is 45%).

There was regular contact by the central project team with all participating sites through weekly email updates during data collection, and these updates were also posted within the audit web-based data collection tool. The web-based data collection tool had an administrative facility that allowed the project team to monitor activity by participating sites and to deliver targeted support. The central project team supported trusts that had difficulty collecting and submitting data by:
– offering telephone support with web-tool queries
– talking callers step by step through the process of data submission, validating, locking or exporting data
– suggesting where data could be found
– sharing examples of how teams in other units organised themselves to identify patients and collect data
– developing a tool for teams to use locally when identifying patients to be included in the audit: the ‘log of patients entered to the audit’ was designed to help teams keep track of the patients they entered into the audit and of when various components of the audit had been completed (eg: survey sent to GP, questionnaire given to patient).

All this involved close working with the two clinical associate directors connected with the project, who made themselves available at ‘any time’ to support the work.

At local level a lead clinician and a clinical audit lead were nominated at each site. Data collection and input to the web-based tool were completed by people in various posts – most frequently these were specialist registrar grade doctors, specialist respiratory nurses or respiratory consultants. The SER reports that there was an enthusiastic response to NCROP activities: ‘the tremendous participation in the audit and embracing the new, complex methodology is testament to the desire to improve – this was a lot of work for acute teams, speaking with the project team during conferences, attending meetings we have hosted, engaging in debate during abstract presentations, accepting invitations to join the project team at the Health Foundation related activities’. Overall, intervention site participants felt that the NCROP was a good use of time, but adverse comments about the time burden were frequent.

In addition to the timetabled day, each peer-review visit involved another half day in last-minute preparation time. On top of that, there was documentation to fill in both before and after each visit, as well as change diaries to complete. Despite considerable anxieties about the length of time taken (local teams felt that the day of the visit itself was rushed and depth of knowledge compromised by ‘cramming’ everything into one day), participants said that the RCP-recommended 10.00 to 16.00 one-day timetable enabled a sufficiently accurate appreciation of services at the reviewed site for the process to be useful. In summary, clinicians wished to engage but were anxious that time away from patient care was in some sense ‘lost time’.

The NCROP change diary was intended to help local teams monitor progress against action plans developed following peer review visits and to provide feedback to the project team. The SER notes that an average of 30% of returns was received each month and that twenty sites regularly submitted a completed diary. But the diaries were unpopular: participants found the monthly return onerous and poorly matched to the (slower) pace of change in trusts.

PoISE

The SER comments: ‘It was not possible to do cost consequences analysis because no significant difference of effect could be shown by the time series data.’ However, the team does provide information on the time and costs of each improvement intervention (PDSA, opinion leader and web model, and standard dissemination) at national and trust level, and the tables in which they summarised this data are reproduced below. The team estimates that the cost to a national organisation of providing implementation support to all 170 acute trusts would be £153,700 for the PDSA model and £67,300 for the opinion leader + web model. The SER also provides an estimate of the total running costs of the project. This was over £550,000, a figure that includes the Health Foundation funding but does not include the commitment of in-kind time by the RCN.
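To put the team's national estimates in perspective, a simple per-trust calculation divides each total by the 170 acute trusts assumed. This calculation is our own illustration, not one made in the SER.

```python
# Back-of-envelope per-trust cost of national implementation support,
# dividing the PoISE team's national estimates by the 170 acute trusts
# they assume. Purely illustrative; the SER reports only the totals.

N_TRUSTS = 170


def per_trust_cost(national_total: float, n_trusts: int = N_TRUSTS) -> float:
    """Average support cost per trust, in pounds."""
    return national_total / n_trusts


pdsa_per_trust = per_trust_cost(153_700)   # roughly 904 pounds per trust
ol_web_per_trust = per_trust_cost(67_300)  # roughly 396 pounds per trust
```

On this reading, the opinion leader + web model would cost a national organisation less than half as much per trust as the PDSA model.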


Table 27: PoISE costs, estimated across the whole PDSA intervention in all five trusts unless stated otherwise

Costs associated with external implementation support activities:
– Developing and printing PDSA guidebook: RF 51 hrs, SRF 10 hrs; staff £2,835; materials £3.80 per trust
– Developing training event: RF 14 hrs; staff £611
– Running training event (per event): RF 40 hrs (£1,744), Admin 3 hrs (£79); travel/expenses £459
– Support for first meeting (per trust): RF 12 hrs (£560), Admin 1.4 hrs; travel/expenses £157
– Preparation of diagnostic report: RF 18 hrs; staff £785
– Analysis of ORC (per trust): RF 1 hr; staff £44

Costs associated with PDSA activities within the trusts (cost per trust, assuming all activities took place according to the PDSA model):
– Attending training event: Facilitators, 7 hrs; staff £742; travel/expenses £567
– Recruitment of PDSA team: Facilitator, 6 hrs; staff £636
– Distributing ORC: Facilitator, 2 hrs; staff £212
– First meeting: Facilitator + PDSA team*, 3 hrs; staff £1,575; travel/expenses £11
– 5 subsequent meetings**: Facilitator + PDSA team*, 5 x 2 hrs; staff £5,250; travel/expenses £55
– Communication and liaison between meetings: Facilitator, 2 hrs x 26 weeks; staff £5,512
– Implement change: Senior nurse, 12 hrs; staff £372
– Study/audit change: Junior nurse, 6 hrs; staff £144
– Records and reports: Facilitator, 6 hrs; staff £636

RF = Research Fellow; SRF = Senior Research Fellow.
*The PDSA team consists of one nurse manager, two senior nurses, four junior nurses, one ward clerk, one consultant anaesthetist and one consultant surgeon.
**Assumes all meetings held and fully attended.

Source: PoISE SER


Table 28: PoISE costs, estimated across the opinion leader and web intervention in all five trusts unless stated otherwise

Costs associated with external implementation support activities:
– Running the training event (two events): SRF 28 hrs (£1,711), Admin 15 hrs (£397); travel/expenses £419
– Developing materials for training event, web resources and publicity: SRF 42 hrs (£2,567), RF 10 hrs (£436), Admin 5 hrs (£132)
– Producing and disseminating publicity materials (per trust): Admin 2 hrs (=13/6), £57; materials £43
– Developing key questions: RF 70 hrs (£3,053), SRF 14 hrs (£856)
– Liaising with key contact (per trust): RF 2.3 hrs (=14/6); staff £102
– Web tool development (in-house public sector team): RF 100 hrs; staff £4,361

Costs associated with OL + web activities within the trusts (cost per trust, using average activity data):
– Attending training day: OL*, 7.5 hrs; staff £560; travel/expenses £81
– Identification of opinion leaders: key contacts**, 4 hrs; staff £124
– OL activities (excluding training): OL*, 26 hrs (1 per week); staff £1,944

OL = opinion leader; RF = Research Fellow; SRF = Senior Research Fellow.
*Opinion leaders are a mix of band-7 nurses and consultants.
**Key contacts: assumed band-7 nurse.

Source: PoISE SER

Table 29: PoISE costs associated with standard dissemination

Costs associated with provision of standard dissemination materials:
– Short guideline (recommendations), 5,000 copies: editing £300, designing £575, printing and delivery £2,868; total £3,743; £0.75 per copy; £3.74 per trust
– Poster, 7,000 copies (editing and designing costs included within short guideline): printing and delivery £844; £0.12 per copy; £0.60 per trust
– Patient leaflet, 850 copies: designing (RF x 28 hrs) £1,221, printing £400; £1.91 per copy; £9.53 per trust
– Implementation guide: designing (Admin x 4 hrs, SRF x 8 hrs, RF x 16 hrs) £1,293; £1.52 per copy; £7.60 per trust
– Materials and packaging: £2.24 per copy; £11.20 per trust
– Posting: £1.15 per copy; £5.75 per trust

Source: PoISE SER

EPI-SNAP

The SER provides no quantified data about the time commitments of local participants, but a questionnaire sent to all steering group members offers some information about their commitments, which included the following:
– attending steering group meetings: 126 hours (excluding travel time)
– referral form development: 12 hours
– attending conferences: 13 hours
– attending external meetings as an EPI-SNAP representative: 80 hours
– preparing abstracts: four hours
– project promotion: 20 hours
– other work relating to the audit: 12 hours.

The SER also notes that time and effort could have been saved by having a better understanding of planned changes to NHS information systems and greater involvement of relevant external organisations.

SNAP-CAP

The SER provides some data about the time commitments of steering group (SG) members and of local participants.

Estimates obtained from the questionnaire sent to all SG members:
– bundle development (SG meetings plus additional time): 167 hours of clinicians’/pharmacists’ time
– attending steering group meetings: 190 hours (excluding travel time)
– external database: 105 hours implementation time
– Extranet set-up: set up in two weeks by audit coordinator
– project promotion: 50 hours from all SG members
– preparing routine data: four hours
– preparing abstracts: 20 hours
– attending external meetings as a SNAP-CAP representative: 34 hours
– attending conferences related to SNAP-CAP: 118 hours.

Estimates were obtained from the questionnaire sent to all participating sites – at least one response was requested from each site:
– data collection: between one and 10 hours per month by foundation year 1 doctors (FY1s), specialist registrars (SpRs), clinical effectiveness staff, audit staff, respiratory consultants and pharmacists
– data entry: between 15 and 30 minutes per month by respiratory consultants, SpRs, pharmacists, nursing staff and audit staff
– feedback: 15–60 minutes per month feeding back data via email, and verbally at antimicrobial team meetings and monthly medical team meetings, between respiratory consultants, SpRs, acute physicians and nursing staff
– teleconferences: one hour per month, attended by respiratory consultants, infectious disease (ID) consultants, acute physicians, SpRs, clinical audit staff, charge nurses and pharmacists
– learning sessions: one day per year (2007, 2008), attended by acute physicians, A&E consultants, ID consultants, respiratory consultants, audit staff, clinical effectiveness staff, SpRs, foundation year doctors, nursing staff and pharmacists.

The SER also notes: ‘significant time and some direct costs could have been saved if we had initially taken the IHI approach of using an Extranet, as opposed to trying to develop a relational database’.

IBD

This project is being completed as this report is written. At our last meeting (July 2009) the project team told us that a questionnaire had been sent to all IBD sites to explore who had been involved in the audit and how many hours had been spent; 25–35% of the sites had responded. This data is currently being finalised, but was not available to us at the time of writing. The SER (June 2009) does note, however, that ‘the most common method seemed to be that sites would initially complete the data entry forms by hand and then transfer the data onto the website’, indicating that even web-based data entry can be a time-consuming process. The project team also noted that the Royal College of Physicians estimates that the central costs of a national audit are £120,000–£150,000 per year.

PEARLS

This project has not yet been completed and no cost or timing data are available.

Below are some of the common themes about the effort required that emerged from these accounts.

Central costs of the projects: We know what funding each project received from the Health Foundation. We had some estimates of steering group time from the two SNAP projects. We also had some qualitative accounts of the efforts made by the central project teams and of the commitment of central clinical leads and other project champions (see, for example, the NCROP account of the central support required for local data collection).

Central costs of audit: All the EwQI projects, bar one (SNAP-CAP), involved audit, and five of the projects were aiming at national coverage. Between them the projects provided a large volume of qualitative data (much of it from interviews and meetings, some from questionnaires) about the efforts involved centrally in developing audits, recruiting and retaining participants, and maintaining the quality of data collected, and in central data cleaning, analysis and feedback. But this was not supported by quantitative data, although IBD gave us an estimate of the central costs involved.

Local costs of data collection for audit: The effort and time involved in local web-based data collection for audit were not always thought to be onerous (for example, Colorectal Cancer, POMH-UK), and indeed were often regarded as an already-funded part of a clinician’s job (as is the case for mandatory national audits). Paper-based data collection from case notes (as in IBD) was much more time-consuming.

Local costs of improvement interventions: One project (PoISE) provided quantitative data on the time and costs of local trust involvement in specific improvement interventions. A second (Self-harm) provided estimates of the time involved by local trust teams and by service users in each 18-month wave of the project, and a third (SNAP-CAP) provided similar estimates, based on a questionnaire sent to participants. The POMH-UK team told us what they charged trusts for the central functions supplied through the observatory but provided no quantitative data on additional trust involvement. We had extensive qualitative evidence about local efforts from other project teams (eg: NCROP and IBD), which suggests a considerable commitment, particularly by clinical leads, but there was no quantitative data to support this. This is a very mixed picture, and we consider below why it was so difficult to obtain better quantitative data.

3.6 Variation within the EwQI projects

The aim of the EwQI was to reduce variation in clinical practice in relation to an agreed national standard. In response to this challenge, some project teams sought to introduce a common intervention (such as the PEARLS training package) across all participating trusts, while others (such as POMH-UK) provided a package of different improvement interventions and let participating trusts select those that seemed most appropriate to their needs. Yet others (such as NCROP) encouraged trusts to develop their own tailored improvements through action plans developed following peer review.

The extent to which these variations in approach related to a measurable reduction in variation in local clinical practice is, of course, a key question. We can see from table 7 to table 17 above that some EwQI projects reported real but relatively modest impacts, despite evidence of considerable commitment. We have also identified wider outcomes. However, it is also the case that within the projects there was much variation in outcomes, and we review some of this variation in this section. Again, we draw on the project teams’ own reporting here, as well as on the understanding we gained through interviews and meetings with project team members.

Colorectal Cancer

A survey sent by the project team to participating consultants showed that, on the whole, trusts with more staff allocated to data entry and with the resources to fund dedicated time for audit were more willing to participate in the project. However, the project team also commented: ‘it should not be assumed that smaller units do not participate, as often in these instances it is one or two dedicated individuals who continue to drive submission, using their own time to enter data’.

The SER also comes to an important conclusion about the reasons for variation in the quality of care:

The results of the survey suggest that the differences in surgical outcomes may be due to differences in quality of care, through structure and processes within the trusts. Volume of cases per hospital, the number of consultants and specialist nurses per surgical unit and larger ITU [intensive care unit] and HDU [high dependency unit] facilities appear to play a role in 30-day mortality, adequate resection margins and adequate sampling of lymph nodes at the time of operation. Hospital trusts that use a fast track discharge scheme were more likely to discharge patients in less than 15 days. Although this data is not from a large sample population with outcomes taken over time, it does provide an indication that the organisational infrastructure and processes of the trust are also important in determining patient outcomes, in addition to the more frequently cited volume of cases per hospital trust.

POMH-UK

Trusts took different approaches to the implementation of change and preferred different interventions – this was not seen as a bad thing by the central team and, in fact, highlighted the need for those promoting QI to provide a range of interventions to allow trusts to choose the best for their local circumstances. Some of the interventions were used in slightly different ways than anticipated: ‘for example, a workbook designed to be used by individual clinicians, has tended to be used by clinical teams to inform group training.’ The team also commented: ‘Some trusts left and re-joined the programme, and in these areas we can expect there to be variation in practice and possibly outcomes.’

The team reported that change in practice across participating trusts had been varied:

There are a number of issues that need to be taken into account when interpreting POMH-UK data. Variation in the performance of individual teams might be accounted for by the different contexts in which they work and the particular group of patients treated. Where small sample sizes are entered by teams, this will also contribute to the variation seen.

Self-harm

Some local teams engaged well, others less so. The project team commented: ‘We are aware that our positive feedback about the project came from those teams which engaged well. We believe that we had much less impact with some teams, but the difficulty is knowing how many fall into each category.’

One local team lacked management support, and there was a poor response rate from some local teams to the staff surveys sent out by the central project team. Hinting at the importance of local factors such as the commitment of individual project teams and local funding arrangements, the central project team commented: ‘Some of the teams we worked with are now more poorly resourced than before they started, so they may increasingly struggle to find the time and space for QI. However, those that see a significant value in QI may actually try harder to build this into their work.’

NCROP

The median scores for all participating units showed no statistically significant change in any of the key service areas, but the project team commented that this overall picture concealed some small but statistically significant changes within individual units. The participant change diaries were particularly informative about variation in service change:

The service changes described varied enormously: from major service reforms, such as standardisation of COPD care pathways across a district or health sector, to much more specific and small-scale changes, such as revision of an NIV protocol. A number of achievements involved the appointment to new posts that included medical and nursing specialists but also administrative and other support workers. Many respondents reported that service improvements were either agreed but not yet implemented, or that negotiations were ongoing and, in the majority of cases, but not all, appeared to be heading for a successful conclusion. The impression given is that service improvements may take much more than a year to fully implement from the point of inception.

PoISE

Each participating site started from a different position as regards its approach to fasting and its state of readiness to engage in the project. There were different levels of participation in the interventions between different sites. As far as it could, given the resources available, the team attempted to explore this pre-existing variation and its impact on implementation and outcome.

While the interventions were standardised across sites – for example, by providing training and intervention packages – in fact, each site implemented its interventions differently (for example, particularised to site circumstances). This raises a question about methodology: if interventions are being implemented differently within and across sites and intervention arms, what is actually being measured/evaluated? Gathering information from sites about how they implemented interventions provides us with a picture of the extent of ‘fidelity’; however, there are methodological questions about the use of trial designs, which are aimed at measuring like for like.

EPI-SNAP

The processes differed between regions. In Ayrshire and Arran – despite the clinical lead’s own involvement locally and extensive contacts with clinicians, e-Health and medical management – the process went very slowly and encountered active opposition among some secondary care physicians. In Grampian and Tayside, as well as the involvement of consultants responsible for running the first seizure clinics, there were general practitioners with an interest in epilepsy who helped to run these clinics and liaised with the primary care community. In Grampian, however, the introduction of the form was as slow as in Ayrshire and Arran, and here too eventual uptake was very poor.

SNAP-CAP

At the time of writing this report the SNAP-CAP project team was still analysing the outcome data.


However, responses from practitioners to date indicate that the project achieved a basic level of progress and that for three hospitals, firm progress has been achieved. The four hospitals where the project worked best all had consultants involved with the steering group, whose members had significant leadership roles in their organisations.

The top-down approach in SNAP-CAP encountered problems when the care bundle was taken to potential sites. Some lead clinicians commented that the consultation on the bundle should have been wider, and were reluctant to take part as they had not been involved earlier in the development process.

IBD

The first round of audit revealed the differences between sites in terms of the level of service provision at the beginning of the programme. The participating trusts also undertook data collection and data entry differently: in some areas consultants were heavily involved in data submission, in others the task was delegated to junior medical staff or to clinical audit staff. Trusts also participated in the audit in different ways – some showed high levels of commitment, others did not.

On the variation in outcomes the team commented that ‘Sites will inevitably have responded to their participation in the audit with varying levels of commitment and success. Perhaps the more accurate statement might be that we have helped those who wished to make a change but did not know where, or with which evidence, to start to improve their service.’

PEARLS

This project is not yet complete, so it is not possible to comment on variation between participating sites.

In summary: some common themes emerge from the project teams’ comments on local variation which confirm our findings elsewhere in this report:
– the wide variation in the starting points of participating trusts – not only in clinical practice (the rationale for the QI project in the first place) but also in the degree to which trusts had the resources and the prior understanding to engage with the project
– the importance of the organisational context, of existing management structures and processes, and of management support
– the importance of well-motivated clinicians
– the need to allow enough time to capture all the relevant change.

It also seems likely that the same QI approach, even if it were implemented with complete fidelity in each participating unit, would have different outcomes according to the particular capacities and starting points of each unit. The projects were not able to capture data with enough granularity to allow us to understand what enables some units to be more successful than others. Two propositions are worth considering. The first is that some units might already be performing very highly in applying agreed standards and are unlikely to be improved by the QI activity, while others are performing poorly and lack the capacity to improve. Like Goldilocks’ porridge, other units might be ‘just right’, with both the need to improve and the capacity to do so. Equally plausibly, variation in outcomes might be unrelated to current performance but determined by other contextual factors such as morale, leadership and management. Exploring these questions would require a more detailed local study, possibly more ethnographic in nature, than has been possible here.

3.7 Conclusions

The consequences of the considerable effort required to deliver these projects were, in places, significant. However, they never produced an across-the-board step change to a new level of improved quality or efficiency. This conclusion is entirely in line with the systematic reviews of effective practice and organisation of care produced during the past decade – for example, by the Cochrane group67. The EwQI projects involved complex interventions into organisations and processes, which have many determinants in addition to the causal effect of the QI activity. As complex interventions, they cannot be easily controlled and their impact is constrained by the other determinants of organisational life.


Recognising that the EwQI has produced no magic bullets (and judging by the wider evidence base, it was never likely to) is a first step towards learning from the evaluation68. However, many of the project teams were able to tell, and provide evidence to support, a convincing story about ongoing changes in practice and, perhaps more importantly, in attitudes. And some (such as NCROP, who studied these changes in detail through a qualitative evaluation) thought that these were the more important outcomes.

Our second conclusion is that if the effects were modest, so too were they variable. The evidence presented here strengthens the argument from the wider evidence that ‘the organisational context for quality improvement initiatives is a crucial determinant of their effectiveness, and differences in context from one organisation to another mean that, even if a quality improvement activity could be standardised, its effects would still be likely to vary considerably’68.

Third, the double audit cycle model of QI, used by many of the projects, demonstrated that comparative performance data can be employed successfully in promoting improvement. There are continuing and common pitfalls in using such data69, but these can be overcome with careful attention to the quality of the data and diplomacy around how, or if, it should be made public. This study does not cast much light on the question of whether the public release of performance data will improve healthcare quality, but it does support the belief that, for many clinicians, engaging in clinical audit can support their participation in peer review and other improvement activities. However, the evaluation also suggests that clinical audit without action plans is unlikely to produce measurable benefit.

Fourth, the experiences of the projects confirm that it is difficult to establish the opportunity cost of these interventions. Detailed cost data is hard to come by in the NHS, but as important as the difficulties of collecting data were the difficulties the project teams had in conceptualising and analysing what activities, if any, were forgone (despite considerable efforts by the Evaluation Team to support this effort). Indeed, ‘despite the importance of understanding the financial impact of such programmes, there are no established standard methods for empirically assessing QI programme costs and their consequences for small outpatient healthcare organisations’70 – and, we would add, not only for small outpatient healthcare organisations. Without clearer cost data, the costs and benefits of QI activities will be hard to judge and therefore justify.

Nevertheless, while it is hard to be exact about costs, the feedback from the project teams (well illustrated in the estimates provided by Self-harm (see section 3.5.2)) clearly shows that each team felt it was pushing at the very limits of the effort that clinicians could contribute to QI activities without reducing their input to other aspects of their workloads.

Relative to the total costs of the healthcare system, the EwQI costs were tiny. However, they were in addition to the efforts routinely required, and there seemed to be very little opportunity to substitute routine work for QI activities. ‘Quality improvement can be costly, especially in services with little experience or infrastructure to support improvement71.’ We have a healthcare system that appears unable to value QI: it does not know how much QI costs, and it diverts its efforts into other things, often leaving QI to operate at the margins of routine NHS activity. Where QI was successfully pulled into these routine activities (through professional development, working to guidelines and so forth) it appeared to have more traction. We will see that clinicians, patients and user groups can also help to bring about change by consolidating QI within the package of activities that make up a progressive healthcare system. We move on to this issue in the following chapter.

How do you get clinicians involved in quality improvement? 57


4.1 Introduction

As we noted in chapter 2, the aim of the EwQI was to engage clinicians, through their professional organisations, in projects to improve the quality of clinical care in the UK. In this chapter we outline why seeking the engagement of distinct groups in the EwQI was considered important, and discuss the engagement of clinicians, patients and managers in turn. We have already noted that QI activities are complex, context-dependent and emergent. They also depend upon a degree of coordination or alignment between different groups and in different organisational settings. These groups and organisations are not aligned through a single bureaucracy, nor do they respond to the same set of incentives and motivations. This makes them ill-suited to hierarchical control. In such complex systems, simple monetary rewards may encourage perverse behaviours or have unanticipated and unwelcome consequences. Therefore, the mix of mechanisms for delivering QI often includes ‘softer’ characteristics, such as professional commitment, public service ethos and altruism. One way of understanding this is to explore how these relationships can be organised and consolidated through the effective engagement of groups such as patients and carers, clinicians, managers, and policy makers.

However, although professional commitment, public service ethos and altruism can provide flexibility and collaboration for QI, these attributes can be hard to sustain in the face of growing demands on limited resources. The ‘soft values’ necessary may be displaced by a lack of suitably skilled staff, ill-considered regulation, and shifting and confusing policy objectives. People may become disillusioned by the experience of achieving only limited gains and a sense that ‘nothing works’. Successful engagement in the kind of QI under consideration in this report involves building shared goals and constantly re-energising commitment to them, developing agreed performance standards or guidelines that reflect these goals, willingly sharing information and building trust. It requires frameworks within which disputes and differences along the way can be resolved or at least managed. For this reason, collaboration is a key part of the mix of activities needed to underpin sustainable QI72. The EwQI was a testing ground for different ways of engaging key groups in QI, and important lessons were learned.

4.2 Engaging clinicians

The wider literature on healthcare professionals’ views on clinician engagement in QI has been summarised as follows:
– Healthcare professionals express strong support for the principles of quality patient care, but this may not reflect a clear understanding as to how quality might be defined, recognised or improved.
– Healthcare professionals’ espoused beliefs about quality may not translate into changes in everyday practice. Instead, clinicians have shown a variety of responses to quality initiatives, ranging from apathy to downright resistance73.

Our own early experience of the project teams confirmed the first of these conclusions28. Below we explore the balance between the two, drawing on what the project teams reported in their SERs, including the findings from the teams’ own surveys


Chapter 4

Engaging with the initiative


of participating clinicians (see appendix E). We also report on our Delphi survey of clinicians participating in the EwQI, which was undertaken towards the end of the initiative (see appendix F).

The views of the project teams

In their SERs, the project teams describe a variety of engagement activities that were, in their opinion, successful.

Widespread debate and influential advocates

The Colorectal Cancer team held debates for ‘clinicians with reservations and those who strongly supported the project to discuss their ideas in a public forum’. In order to make these debates attractive and encourage attendance, the project team encouraged ‘well-known figures connected to policy bodies . . . [to] take part’. Other projects, such as POMH-UK and NCROP, encouraged regional debates on the findings of their respective audits. In general, all the project teams took it as self-evident that the support of well-known or influential figures in the field would encourage clinicians to take part. The IBD team, for example, noted that the project had benefited from the involvement of respected steering group members who could encourage their colleagues within IBD care across different professions. There was a general view that ‘personalities’ – that is, project champions who were often drawn from senior, respected members of the profession – gave QI activities credibility.

But the project champions were not just drawn from well-known senior figures. The dedication of enthusiastic individuals at many levels was also important in securing wider engagement, and this group included members of local teams as well as members of the central project teams. When this input ended, the gap became apparent. For example, the Colorectal Cancer team noted that when a key professional who had been a strong advocate of the audit retired, the quality of the data collected in his trust reduced significantly. In the PoISE project, initial contact with trusts was often through an enthusiastic individual, and even once that role had formally passed to someone else, the original contact frequently maintained an interest. And when a key contact was not enthusiastic, ‘this did have an impact, with loss of support and commitment ... Enthusiasm of key contacts and many local investigators has been crucial to the success of the project locally’.

The Self-harm team reported that the success of feedback events was ‘very dependent on the energy of the people taking part (and those running it)’. The IBD project team had a very committed clinical lead, who carried out 12 of the 25 local action planning visits and was able to motivate trust staff to take part in the project because he was ‘prepared to engage with them [local teams] directly and not just make pronouncements from a distance’. And at local level, clinical audit staff were often highly committed, taking on the responsibility for the IBD audit on top of their existing commitments and, in some trusts, managing the whole process.

Incentives

The premise of the EwQI was that professional peer pressure works as an incentive. All the projects employed this incentive through a variety of approaches which included comparative audit, regional meetings to discuss findings and peer review visits. Some (such as NCROP) used all these approaches. The Colorectal Cancer team commented that busy consultants need incentives to prioritise the audit. Similarly, the EPI-SNAP team reported: ‘an incentive was needed to encourage GPs to complete all fields of the annual review screen. “Quality” was not a big enough incentive for GPs to complete annual review fields that fall outside the QOF clinical indicators for epilepsy’, suggesting that financial incentives also work, at least in primary care. Being clinician-led was also sometimes sufficient on its own; as the Colorectal Cancer team put it:

The fact that the project was started by consultants helps it to appeal to other colleagues, in that the goals of the group are common to the profession as a whole – to improve patient outcomes – and that the markers of quality are based on the experience and knowledge of the profession, rather than meaningless standards set by a bureaucrat with no knowledge of the system.

But this was not always the case. When clinicians from different specialties need to be involved, conflicting views of quality improvement may prevail and impede engagement73. In these circumstances, professional pressure from clinicians outside a clinician’s own specialty will not necessarily be effective – as demonstrated by the lack of engagement of surgeons in the PoISE project.

Fit with professional aims and identities

The IBD and NCROP teams commented that their experiences accorded with the results of a previously conducted audit, which showed that change only occurs when it is aligned with the aims of the organisation and has the buy-in of all parties. This was also the experience of the EPI-SNAP team, who reported that some clinicians did not see it as part of their role to provide advice to patients about driving licences and therefore were reluctant to comply with this aspect of the project.

Further details of professional engagement in QI that were reported in project SERs are given in the table below.


Table 30: Increase in levels of professional engagement in QI as a result of the EwQI

Colorectal Cancer
– Trust participation rose to 81% throughout England and Wales, with 18,504 patient records for the reporting period 2006–07.
– The team added to their existing publications on the National Bowel Cancer audit (NBOCAP) during the project, and commented that this had ‘meant that there is increased awareness of the work that NBOCAP is doing and also the results are available for review at an international and national level by clinicians’.

Self-harm
– Feedback from Wave 1 participants (through an evaluation survey undertaken by the project team) indicated that almost all respondents felt the programme had helped to improve both their understanding of self-harm and of services for patients. But, as the team commented, this threw little light on whether levels of engagement in QI had increased: ‘Perhaps some of these respondents already held positive views on QI anyway (we didn’t ask)’.
– However, the project team also noted that some local teams were more poorly resourced than others, and in some there was little capacity for QI. The project team speculated that those teams that see a significant value in QI might actually try harder to build this into their work.

POMH-UK
– The POMH-UK team did not know the exact number of participating clinicians from all the trusts involved, but it was large: 209 adult acute and intensive care wards submitted data for topic 1; 35 assertive outreach teams entered data for topic 2; 155 wards submitted data for topic 3. Each ward/team would have had at least one consultant psychiatrist and one junior doctor, as well as nursing and pharmacy staff.
– The team referred to the good reputation the POMH-UK project has established with participating trusts, demonstrated by their willingness to continue subscribing to the observatory. Numbers of trusts subscribing grew from 37 in the first year to 48 in 2009.

NCROP
– ‘Our experience is that colleagues participating in the NCROP have been very keen to be involved in the project and to improve the quality of care for COPD patients.’
– There were good levels of participation in the project: 100 hospitals took part.
– 93 out of 100 teams fully completed baseline and final change diaries.
– The 2008 national COPD audit included more organisations than the original BTS audit in 2003 (trust participation rate 95%). Participation rates were:
  – acute trusts: 98% for the resources and organisation of care audit, and 96% for the clinical audit
  – primary care organisation survey: 73%
  – GP survey: approximately 43%
  – patient survey: approximately 45%
– The NCROP team reported their involvement in BTS meetings, publicity events and conferences.
– Summarising its qualitative findings from the participant change diaries, the team commented, ‘What is clear is that the majority of participants in the peer review group and a sizeable minority in the control group found benefit within NCROP. The former for a wide range of reasons that include the provision of a quality framework, sharing of good practice and the bringing together of commissioners and providers to the team’.

PoISE
– The team commented that local engagement in the project was strong, particularly on the part of the local investigators, and ‘some key contacts benefited from personal development, greater knowledge of trust organisation and an increase in contacts.’
– But although nearly 200 NHS staff were directly involved in the PoISE project (and there was anecdotal evidence of the indirect involvement of many more), the team also reported problems with the engagement of some groups of clinicians (surgeons), and with some key individuals, such as some senior anaesthetists in some trusts. The team reported that some clinicians were risk-averse when adopting new practices.
– Inter-professional relations, for example between doctors and nurses, also posed difficulties; different professionals had different approaches to change, different leadership structures and so on. All this made engagement across groups difficult.
– There were also problems in coordinating practice across small sub-units within each trust.

EPI-SNAP
– All first seizure clinics in Scotland participated in the audit, including consultants, GPs with specialist interest in epilepsy, and epilepsy specialist nurses.
– But engagement was different in different areas and among different groups of clinicians.

SNAP-CAP
– Fifteen teams out of a possible 25 acute hospitals in Scotland had participated, as of March 2009.
– Participants were asked by the central project team if participation in SNAP-CAP had contributed to the development of individual contributors: out of the seven teams who responded, two said there had been firm progress, four said that a basic level of development had been achieved, and one said that no progress had been made.

IBD
– The team noted that there was ‘a notable increase in participation across both rounds [of audit], driven – we believe – by the engagement of clinicians in the process. Hospital representation and feedback from the initial second round regional meetings indicate that clinicians are very much engaged in the process, and that they are very keen to lead the development of their services towards meeting the new IBD Standards. A number of clinicians present at the meetings have also expressed their interest in participating in the suggested change implementation pilot project that seeks to maximise the impact of the project.’
– All of the key professional groups represented on the steering group gave their full support to the development of the successful bid for further funding for the audit as part of the National Clinical Audit and Patient Outcomes Programme.

PEARLS
– This project is not yet complete. The team noted: ‘QI [is] being discussed professionally as important markers of patient reported outcomes – this will be more measurable later’.

Sources: As identified in table 5


The views of clinicians participating in the EwQI

In chapter 3 we considered what the SERs say about participating clinicians’ opinions of their projects (see table 23). In order to explore the views of clinicians participating in the EwQI on professional involvement in QI more generally, we also undertook an initiative-wide Delphi survey. This covered a small sample of clinicians participating in six EwQI projects (n=97 in the first round and n=53 in the second round). (Further details are given in appendix F, which reports our findings in full.) In summary, we found that:
– Overall, clinicians in all six projects perceived the role of clinician engagement in successful QI to be fairly important, tending towards very important.
– The top activities identified as improving quality through clinicians’ involvement were providing training for clinicians and managers and keeping clinicians up to date through the development and promulgation of clinical practice guidelines. Taking part in regular formal discussions with colleagues was ranked as one of the three most important activities for engaging clinicians by participants in three out of the six projects, but was not one of the six top priorities of the overall population.
– With regard to providing support for clinical engagement in QI, the three most effective ways were seen to be: securing good inter-professional relationships, communicating candidly and often about QI, and involving patient organisations. But there was also wide divergence of opinion. On the question of how best to support clinical engagement, most participants identified at least one effective approach that was not among those ranked as the six highest by the overall population.
– The top barrier to engaging clinicians in QI was identified as the limited number of staff available for QI. Other ‘small’ obstacles included the lack of widely shared knowledge and lack of leadership. Lack of financial rewards, lack of performance targets, use of financial sanctions and poor protocols were ranked as ‘minimal’ obstacles.
– Greater standardisation of professional practice, more equitable care, greater quality control and improved patient satisfaction were perceived as the most important consequences of engaging clinicians in quality improvement.
– The Delphi also sought clinicians’ views about their attitudes toward the value of engaging clinicians in QI. Clinicians were asked to list the three most important activities they viewed as quality improvement. In total, they listed 64 activities, which can be found in appendix F. The four activities that were perceived by clinicians as the most important were clinical audit (cited 58 times), engaging with patients/service users (cited 23 times), communication (cited 21 times), and continuing medical education (cited 18 times).
– Clinicians in all six projects perceived clinicians’ engagement as at least ‘fairly important’, tending towards ‘very important’.
– Clinicians also rated the success of their EwQI project in engaging the respondents in QI on a five-point scale. The average rating was between ‘neither unsuccessfully nor successfully’ and ‘fairly successfully’. Respondents gave various reasons for this, including already being engaged in QI projects and an excellent service already being provided before the EwQI. However, one respondent said, ‘the EwQI has been a steep learning curve and leading the project had probably been the hardest project that I have ever undertaken but also the most rewarding’.
– Attitudes about the value of engaging clinicians in QI did not change dramatically as a result of involvement in the EwQI. Respondents from SNAP-CAP reported the biggest change in attitude: the mean response from that team was that they had changed their attitude ‘moderately’, whereas the mean response from other project teams was that attitudes had changed ‘a little’. When respondents were invited to elaborate on their answers, one respondent pointed out that they had always valued involving clinicians and that their positive attitude had only increased slightly.

The Delphi therefore reinforced the argument discussed in chapter 1, that is, that context matters greatly. In each project very different views were expressed about what works best to facilitate engagement. The Delphi also confirmed the conclusion of Davies and colleagues that clinicians support the principle of QI73.



Lessons learned

First, we can conclude that professionally led QI in acute care can successfully mobilise large numbers of clinicians across a wide range of organisational settings. It was relatively easy to count the numbers, although – as the Colorectal Cancer and Self-harm projects demonstrate – even doing that meant accounting for considerable fluctuation over time. But it proved much harder to establish more detail, for example, about who was involved in each trust, what their professional background and motivation was, what they actually contributed, and whether these particular individuals or teams remained involved throughout the project. The clinician surveys undertaken by the projects might have helped here, but in practice they largely focused on issues of special relevance to each project, such as what the clinicians thought of the NBOCAP audit (Colorectal Cancer) or staff attitudes towards training (Self-harm). They therefore provided little information about clinicians’ attitudes towards, or understanding of, QI per se.

Ideally, we could have supplemented the projects’ clinician surveys with a more general initiative-wide before-and-after survey of all participating clinicians covering issues such as:
– the leadership behaviours that each team engaged in (that is, setting achievable goals, making concrete plans and establishing measurable milestones) and the frequency of that engagement
– the extent to which each trust supported innovation (along the seven dimensions of risk, resources, information, targets, tools, rewards and relationships)74
– the clinical teams’ previous experience of QI methods, and the extent to which these had led to changes in practice75.

But the scale of the initiative and the large national audits associated with many of the projects created obstacles. The project teams, of necessity, focused their attention in the early stages of the EwQI on trying to recruit large numbers of participants. To have sought detailed information from participants about their attitudes to QI at that time might have deterred them from participating and would therefore have been counterproductive, as indeed some teams told us. There was a certain conflict of objectives here – the project teams’ main concern was to get on with their projects and get clinicians engaged. For them, evaluation came second. Nor would such an endeavour have been aided by the general lack of understanding of QI that then prevailed among clinicians, which might well have produced a low response rate to a general survey.

An alternative approach would have been for the projects to undertake detailed local qualitative studies to explore these questions. But it was clear from discussion with the project teams that the resources for this were not, in most cases, available. The project teams did, however, do what they could. PoISE, for example, undertook a series of qualitative interviews with some of their participants, and NCROP did find funds for a qualitative study which, among other things, looked at participants’ experiences, beliefs, views and expectations of the intervention76. And this work provided some information about what the participating clinicians saw as benefits, and therefore what sort of things immediately motivated them. These included:
– opportunities for an exchange of ideas, giving insight into the workings of other teams
– a structured framework for a systematic critical appraisal
– time to reflect on their own services, providing an opportunity that the teams did not normally have because of work pressures
– ‘food for thought’, provided because a team with a fresh eye was evaluating the site’s services and processes
– a validating and reassuring experience that left teams feeling that they were ‘not alone’
– a networking opportunity in which mentoring relationships could be formed
– a chance for team working and team building, and for building bridges between clinicians, managers and commissioners
– a status-enhancer and a promoter of individual and departmental services to managers and commissioners
– an additional lever in business case arguments.

Resources for QI matter too, but nowhere in this list is monetary reward for individual clinicians mentioned. Our second general conclusion is that in acute settings, clinician engagement has more to do with the professional identity of clinicians than with any pecuniary gain. The POMH team, for example, noted that:



The drivers for change are varied. For some clinicians, the desire to improve the quality of care is a sufficient driver; for others the participation in audit, which can be used as evidence for engagement in CPD and revalidation, is an additional incentive. An example of the former is the strong support from clinicians for a recent programme on the monitoring of the side effects of depot anti-psychotic injections. Traditionally this has been a relatively neglected area. But 500 clinical teams took part in this programme, submitting data on nearly 6,000 patients.

Third, it was clear from the project teams’ accounts of their work that they regarded raising clinicians’ awareness as an essential first step in their engagement in QI. The POMH team put this succinctly:

POMH taps into a strong desire by MDTs to improve practice and supports their wish to meet the clinically credible and realistic standards against which we audit. For any change process, awareness of the issue and one’s own practice in context is an essential first step, and this can be achieved with the benchmarked data reports.

But given that, our fourth finding must follow post-haste on the heels of our third. Raising awareness is an essential first step, but on its own it achieves little. All the project teams recognised this, some coming to appreciate the force of this conclusion more fully during the course of the initiative, although others, such as NCROP, were clear from the start.

The results from the RCP Stroke audit attempted to facilitate change by running multidisciplinary regional feedback meetings but found that the rate of change was disappointingly slow ... This highlighted the results from the Action on Clinical Audit partnership ... [which] showed that change only occurs if it accords with the aims of the organisation and has the buy-in of all parties77.

This recognition of the need for ‘action on clinical audit’ also proved stronger and more widespread among participating clinicians than some project teams had perhaps initially anticipated. As mentioned in chapter 2, the IBD steering group redesigned its project to give all participants earlier access to their ‘Model Action Plan’, and the Colorectal Cancer team commented on the ‘growing awareness of colorectal surgeons of the need not just for prompt feedback of high quality data on performance but also for trusts to subsequently develop and implement action plans to address areas of underperformance’. The implications for further QI work and for national audits are clear.

Our fifth and final finding is that the enthusiasm and commitment of a small number of central and local staff can be vital in motivating others to take part. The downside is that, unless a project is well supported by the system in which it operates, too heavy a reliance on single individuals can pose a threat to sustainability. The key question, therefore, is: whose responsibility is it to mobilise the energy and enthusiasm of clinicians (and of service users and managers)? We return to this in our final chapter.

4.3 Engaging patients and their representatives

One of the specific requirements of the EwQI was that the projects should not only engage clinicians but also ‘work with patients’ representatives and expert patients, and encourage participating clinicians to work with patients’78. The case for engaging patients in all aspects of healthcare, including research and QI work, has been made in the UK in the statutory requirement that NHS organisations involve and consult patients and the public about health service planning79, and through initiatives such as the NHS Institute for Innovation and Improvement’s work on user involvement in QI projects, the Department of Health’s Expert Patients programme, and The Health Foundation’s own work through its Quest for Quality and Improved Performance (QQUIP).

The concept of ‘patient involvement’ includes two rather different things. First, it can refer to shared decision making between a patient and a practitioner. Second, it can imply a process of collaboration in some aspect of healthcare more widely – in this case, in QI activities. None of the projects explicitly had a focus on the former, but all of them intended to include ‘patient involvement’ in the second sense. As discussed in chapter 2, the project teams did not, on the whole, start with clearly articulated theories of change. Nor did they start with clearly developed theories of patient involvement. It would have been hard for them to have done so. The need to involve patients in activities such as QI, research and service development is widely accepted, but the evidence base for doing so is still weak80,81. Moreover, the barriers to effective involvement are considerable. Discussing the involvement of patients in service development, Coulter and Ellins cite a list of constraints which include ‘lack of clarity about aims and objectives; resource limitations and organisational constraints; professional or managerial resistance; problematic relationships between stakeholders; and concerns about representativeness’79. And in their paper on the use of patient survey data in QI, Davies and Cleary cite a wide range of organisational, professional and data-related barriers82.

The original EwQI proposals made it clear that patients and/or their representatives were present on the steering groups of all the projects and involved in project design and the development of outcome measures. But there were few further details. Given the interest of the EwQI evaluation in engagement (and the requirement to engage service users), members of the Evaluation Team therefore held semi-structured interviews with eight people involved in the EwQI project teams as service users or user representatives during the second year of the initiative. To explore experiences of user involvement, we also interviewed four of the EwQI project managers specifically about this issue. The aim was to identify what, in the EwQI context, had helped and/or hindered effective involvement, and what such involvement had, and in future should, entail. This section therefore covers not only what we found, but also some of the key questions that these findings raise.

Who should be involved?

People from a wide variety of backgrounds were involved as service users in the EwQI projects. They included patients, carers, chief executives, and employees of charities. As a term, ‘service user’ is broad and encompasses many roles. It can be difficult to generalise about who should be involved in a particular project. For example, projects addressing the needs of service users with chronic conditions may need to involve a very different set of service users than, say, projects about elective surgery. The characteristics needed by service users also differ according to the nature of the improvement mechanisms selected, requiring different kinds of background and experience.

At the start of their review of patient-focused interventions, Coulter and Ellins note, ‘There is a growing belief among policy makers that patients/citizens can contribute to quality improvement at both an individual and a collective level’79. And, taking this further, Williamson compares the complexities of patients’ and clinicians’ views of various aspects of healthcare provision and its quality, and distinguishes the ‘structure’ of the patient side into patients, patient groups and patient representatives83.

In all the EwQI projects, service users were involved centrally as members of the project teams and on steering groups: sometimes as a lone voice, but in most cases with some support from at least one other service user. In five of the projects, considerable efforts were also made to encourage participants to involve service users locally, building on and/or developing local service-user networks.

Selection of service users

All the interviewees had experience of involvement as service users or patient representatives before their involvement in the EwQI. Most were selected through personal contacts; others through advertisement. Many interviewees (five out of eight) had known some member(s) of the EwQI project team before the project (in some cases for a number of years). They stressed the importance of the mutual respect gained through such established relationships, although, as one interviewee pointed out, this respect was not a consequence simply of longevity but because ‘there were good people involved’.

The fact that the selection of service users in the EwQI projects was clearly not random raises questions. Should involvement be accidental or occur through personal contacts? Is there an undue risk of bias in such circumstances? Is there a case for a formal recruitment process? Would there be any negative outcomes if there were formal recruitment, for example, limiting the pool of those involved? How were other members of the project team recruited? How much does experience matter, and what experience and skills are required? The EwQI projects recruited individuals to provide a service-user perspective on the basis of relevant experiential or professional expertise, and also on the understanding that their motivations were in some sense aligned with the aims of the project. But whose experience matters? A recent study on consumer involvement in research found that only three out of eight principal investigators had previously worked with service users/carers84. There may be a lack of relevant experience among all members of a project team. Specifically, we need to understand more about how to recruit, and subsequently motivate, individuals to bring a service-user perspective to a project team’s decision making.

How do you get clinicians involved in quality improvement? 65

Motivation (and payment)

Asked about their motivation, interviewees mentioned the same combination of ‘reasons related to their personal situation, their experiences of health and/or social care services (often negative) as well as … a more general commitment to getting involved and bringing about change’85 that is found in studies of why people get involved in research. But, as one interviewee noted, time and resource constraints create a risk of bias: there is a tendency for those involved to be relatively financially secure (such as some retirees) or to be on benefits, or to be salaried patient representatives from medical charities. This raises the issue of payment. Interviewees’ experiences differed, and so did their views. We were told that payment could be a ‘double-edged sword’: without it service users lack parity with others attending the same meetings and being reimbursed for their time, but paying people might attract them for the wrong reasons. Many interviewees mentioned the significant amounts of time they had given to the project. We found that where a fee had been paid – for example, for attendance at meetings – there had also sometimes been attempts to ensure parity with other professionals as a ‘matter of principle’. Sometimes only expenses were paid. Some interviewees got nothing. Some sought nothing.

The interviews encouraged us to consider questions such as: how can an appropriate cross-section of people be attracted? Does payment help? If so, how should it be organised? Does payment reflect the true cost of patient involvement? Is it right that only patient representatives and not patients are paid for their time? INVOLVE (a national advisory group funded by the National Institute for Health Research that aims to support and promote active public involvement in NHS, public health and social care research) issued a detailed policy on payment of people involved in research in August 200686. The policy covers all the issues raised above and has been used to guide subsequent Health Foundation-funded QI projects.

How and when should service users be introduced to the project and to the team?

All interviewees stressed how important it was that service users had an adequate and appropriate understanding of a project’s aims and objectives. In part, this depends on timing. If service users are not involved early, they will be ‘left in the dark about decisions already taken and about the rationale behind them’87. Interviewees involved in the design of their projects and in the application to the Health Foundation also told us what an important bonding experience this had been. If they are to contribute fully and effectively, service users working with the central project team need to be involved as early as possible in the design and planning of the project.

The issue, therefore, is not when users should be involved but, given early involvement, how much additional prior understanding is also required. Is a detailed understanding of QI methodology and/or research techniques required? Interviewees thought not. Is a detailed understanding of the relevant disease and current approaches to care, as well as current gaps in that care, also needed? Interviewees thought that this was something the service user or patient representative should be able to offer. But more important are trusting and open relationships within the project team that allow all its members, including service users, to ask questions when they don’t understand something. And these relationships, in turn, depend on how service users are introduced to the project team and/or steering group. Interviewees emphasised how important it was for service users to be ‘introduced early and as an equal member of the project team’ if tokenism and tendencies to see users as ‘fashion accessories’ are to be avoided. One interviewee thought that this was so crucial that it might be necessary to offer training in presentational skills to potential service users to help them handle this initial step as well as possible. Service users need to be introduced as an equal member of the team.

But what of service users working with local participants who out of necessity are recruited after the early planning has been done? They too need to understand the project and work out what is needed. Explaining the project’s aims and objectives and the potential contribution from service users in terms that they can understand is therefore a key aspect of the central project team’s communication strategy. And there should also be a clear expectation that, as in the central team, local service users are seen, and see themselves, as equal members of participating teams. People’s time is important; equal membership of a team (at any level) means members having equal opportunities to walk away from a project if involvement seems to them to have become purposeless.

Service users’ role in the projects

Interviewees described their roles in the projects by outlining what they had done. Activities included attending meetings, helping to design the project and its communication strategy, setting outcome measures, helping to design questionnaires, interviewing, interpreting data, discussing how findings should be reported, writing reports and giving presentations. Some had also played a large role in supporting other service users at a local level.

Most EwQI service users felt that their role in the project had been clear from the start and that all involved had understood it and supported them well. But this happy situation was not shared by all. In at least some cases there was confusion around key questions:
– How was the service user’s role in the project defined, and by whom?
– Was the service user involved in this process?
– Was that role clear and explicit from the start?
– Was the service user able, if necessary, to adapt that role over time?
– Were the roles of other members of the project team clearly defined?

Avoiding potential confusion on these questions was seen to be important in securing effective engagement. One study on patient involvement in research projects documents how service users were asked to describe their role using four categories: researcher, service user, carer and other84. Out of 61 respondents, 10 described themselves both as service users and as researchers. But, as the report of the study pointed out, this view of their role was not necessarily shared by others in the project, or even by the respondents themselves at the start of the project. In other words, roles can be unclear and therefore disputed, and can also change over time.

Support for service users

The support provided to service users varied significantly between the projects, but all the project teams found it more time-consuming and resource-intensive than anticipated. Practical support included: provision of access to IT equipment and training; training in presentation skills; willingness to explain and discuss the more technical aspects of the project; help with transport to meetings and care in timing meetings to meet the needs of sick people; timely and understandable information about the project; and so on. Several interviewees also mentioned the crucial need for moral support for people who were often unwell themselves and were working in an unfamiliar setting with recognised experts in the field.

Principles of service-user involvement in QI

As we have seen, all the interviewees commented on the need for service users to be treated as equals by other members of the project team and/or the steering group. They also mentioned the need for respect and trust among those members. In the absence of these characteristics, the effectiveness of the service user’s interactions with the team was undermined. How this parity was achieved varied from project to project, and different approaches included:
– service users who already had good relations with members of the project team, established before (in some cases, well before) the Health Foundation-funded project, and were able to build on these relations
– steering groups who recruited multiple service users/patient representatives in an attempt to ensure an appropriate balance on the group of professionals and users

– positive attempts by project team/steering group members to identify and utilise all the relevant skills and expertise of all their members, including service users
– chairing meetings in ways that recognised nuances of understanding among members, and people’s possible contributions
– developing relations of trust and understanding among team members, so that people were not afraid to ask questions to clarify something
– providing external support to service users (including support from external mentors, such as a leadership development consultant or another external ‘expert’ service user, buddy systems and telephone help lines)
– providing training, both informally (through involvement in the project) and formally. (Several interviewees stressed the need to train service users alongside the professionals also engaged in QI88,89.)

It is possible, on the basis of this list and other work90,91,92, to develop a set of principles to cover this relation between project teams and service users at both central and local levels. One such list includes:
– varied and effective methods of communication (such as regular telephone contact and easily understandable language)
– respect for the knowledge and insights of service users
– strong personal commitment from everybody to ensure service user involvement improves the project and its outcomes
– willingness to accept the additional time/resources required84.

Outcomes – the experience of the EwQI service users and the project teams

The majority of interviewees were happy with their role in the projects and felt that they had had a positive impact. Their achievement met and, in one case at least, exceeded their expectations. We were not able to explore the views of other members of the team in all cases, but where we did, it was clear that they shared this view. The Self-harm team, for example, talked in its SER about:

… striking gold by finding some excellent service user representatives who really took the project by storm. The service users worked with us to help us understand how best to involve them (and users nationally), and we worked hard to respond to this. The willingness to adapt to new ways of working and a commitment to joint working (on both parts) was key ... not only did this enhance our project and the work completed by local teams, it has also influenced the way in which we work at the CCQI [the Royal College of Psychiatrists’ Centre for Quality Improvement].

And the PoiSE team mentioned, ‘It was a good experience to work closely with patients as partners in the research. Having their perspectives in all stages of the project has helped to prevent the balance shifting to what clinicians want/need or their views to dominate.’

The POMH team continues to build on its experience:

POMH has endeavoured to support service user involvement at all levels (project team, steering group, topic groups, LPTs [local project teams]) from the very start ... some local project teams have been very successful in involving service users in their POMH work, others have found this challenging. The structure of LPTs varies widely depending on practical issues such as trust geography eg: some teams are virtual, communicating by email alone, where services are spread out and meeting in person is difficult to arrange. At the beginning it was expected that each LPT would involve core membership of a senior pharmacist, psychiatrist, clinical audit person, a service user representative and nursing. However, subsequently some trusts have found it more useful to co-opt service users and nurses onto the LPT for each particular programme so that relevant, specific expertise can be gained. For each programme, we aim to develop a change intervention for service users. For example, a patient-held card was developed for programme 2 which proved popular, and a service user information pack is being developed for programme 7 in collaboration with the NPSA [National Patient Safety Agency]. Service users are also invited to the regional meetings, where they can influence future development of programmes. The POMH service user strategy is currently being reviewed to reflect the evolving role of service user representatives at all levels within POMH.

Evaluating service user involvement

Barnard and colleagues suggest that the experience of service users covers the following parameters (which can be either positive or negative):
– empowerment – mutual respect, valuing different knowledge and experience, development, learning, growth, expressing a potential, and having a recognisable impact
– support – empathy, sensitivity and individual contact
– communication – need for clarity of roles and responsibilities, expectations and the use of appropriate language
– resources – time, skills and money
– motivation – enthusiasm, commitment and inspiration84.

This raises questions about how the impact of service users on future projects should be measured and evaluated. The following list of possible outputs against which service user involvement might be evaluated is adapted from Barnard and colleagues:
– changes to the design of the project
– new/revised questionnaires, interview designs, etc, created by service users/carers
– finding new ways of collecting data
– suggesting patient-relevant outcomes and ways of measuring them
– access to other service users to provide relevant data
– explanations of the data relating directly to how people experience the services
– access to service-user networks to tell people about the findings of the project
– use of findings – for example, suggesting ways to change services, based on the findings of the project.

And, in the context of the EwQI and our own findings from this set of interviews, we would concur with these and also add:
– developing a communication strategy for the project
– advising on the form in which findings should be released and, in particular, whether they should be anonymised
– exploring and developing links with policy makers.

In summary, and given this range of potential contributions, we conclude that project teams (and steering groups) need to be clear from the start about the actual contributions they are seeking – not just from service users but from all the members of the team, including clinicians, project managers, statisticians, and so on93. And project teams also need a good understanding of the potential contribution that service users could make to a QI project, given favourable circumstances, and of any specific limitations to that input. The evidence from the EwQI is that the project teams rose to the challenge posed by the obligation to include service users in a variety of ways. In almost all cases there was attention to the ‘softer’ aspects of ensuring that service users felt included and, for their part, service users felt able to participate in ways that at least facilitated the project and at best made a distinct and significant contribution to its success.

4.4 The involvement of NHS managers

The project teams also report in their SERs on the involvement of NHS managers in the EwQI. This involvement was not a main focus of the initiative, although the project teams recognised its importance and some, such as IBD and EPI-SNAP, involved NHS managers in their steering groups. Others, such as POMH, deliberately, and in this case successfully, tried to market their project to trust chief executives. The need ‘to engage the hospital management in the audit and to make them aware of the importance of the data collection as a quality improvement method’ was also increasingly recognised by the Colorectal Cancer team as that project progressed – it sought to involve managers in the field by sending annual audit reports to trust chief executives. And the NCROP team listed managers as intended users of audit findings, while the EPI-SNAP team included managers in their communication strategy.

The IBD team was clear from the start ‘of the need to ensure buy-in from all key stakeholders, including managers of health bodies at various levels’, and the project team encouraged local trusts to involve managers in its local action planning visits, although this did not always happen:

It would have been beneficial to make more concerted efforts to support the local teams in inviting management to attend the meetings. The presence of trust staff at a management level at the meetings was the exception rather than the rule and where they were present there were signs that there was a greater recognition of the issues faced by local clinical teams and what would be required to address any issues highlighted through the audit.

But this initiative was deliberately clinician-driven. A management voice on a steering group (when available) could have provided advice on influencing senior management in NHS trusts, but, in practice, this does not appear to have reliably affected the commitment and involvement of senior NHS management in the field. There were reports that clinicians failed to get access to management at the local level. The evidence we have from the SERs is patchy, but it does suggest that poor commitment and support from NHS managers hinders QI activities. For example, the Self-harm team reported that, although feedback from local teams about the project was generally positive, the one team that had not found the project helpful in improving patient care attributed lack of progress ‘largely to the lack of support from senior management within their trust’. Similarly, the SNAP-CAP team reflected, ‘In retrospect we should have ensured commitment from chief executives or medical directors to supporting SNAP-CAP and to ensuring that their hospital participated in measures for improvement.’ This team also said that clinical governance support and direction was needed to support the project.

Government policies and priorities are an important influence on QI strategies94. Interest from trust chief executives was more likely when a project accorded with a national or local priority; for example, the NCROP team found that chronic obstructive pulmonary disease was a local priority in many PCTs. But even this status does not guarantee systematic and sustained improvement.

4.5 Conclusions

We began this chapter by noting that the engagement of clinicians, service users and managers is often thought to be a key to successful QI. The evidence from the EwQI strengthens this view. We found that many clinicians were willing to become involved in QI but faced substantial barriers, including lack of time and, often, a failure among all concerned to understand QI and the effort needed to effect sustainable change. We also found that service users were, generally, well integrated into the project teams and felt that they had been able to make a positive contribution. But barriers exist here too, and we learned much about these and about the substantial commitment required to overcome them. While it was not the specific focus of the EwQI, we also learned something about the importance of the involvement of managers. Their potential contribution to QI requires further investigation. More generally, our study has confirmed that engagement activities need to be tailored to the circumstances of the QI activity. No one size fits all, but good information contributes positively (and this includes clinical audit and feedback), as does clear communication among all of those involved, whether they are clinicians, patients or managers. It is also clear that successful QI must attend to the softer aspects of aligning and motivating different groups, as well as to the technical aspects of achieving project aims. We go on to consider all this in the following chapter.

We conclude this chapter with a quote from the POMH-UK team, which did, literally, succeed in selling its project to trust chief executives:

We are aware of the need to appeal to people at different levels within trusts, both in marketing the project and in the service that we provide, ie: quality improvement programmes. Our experience in relation to this has included:
– Senior managers, such as trust chief executives or clinical audit managers, who were impressed by the POMH-UK report sent to them directly, and encouraged others in the trust to participate and gave positive feedback to teams on their performance.
– Pharmacists have viewed POMH-UK work as raising awareness and the profile of medicines management, and a legitimate element of their professional role, as well as being an objective measure of their own effectiveness.
– The presence of a POMH-UK champion within a trust has been important for stimulating participation, both generally and for individual programmes. In the latter case, different staff members have emerged as champions, encouraging data collection and reflection on the results, and further implementation and changes in practice.


Chapter 5

Leadership and building the capacity to deliver lasting improvement

In the previous chapter we discussed the issue of engagement. Here the focus is on leadership and building the capacity to deliver QI, including the management of the projects and the initiative, the role of information and communication, the contribution of the royal colleges and professional bodies, and the relevance of contingency and path dependency.

5.1 Introduction

In this chapter we explore two closely related questions: what kind of leadership was needed to deliver the EwQI, and what capacity within the central teams and within participating units did the projects require?

For the Health Foundation, leadership has a key role to play in delivering QI. Since 2003, the Health Foundation has identified supporting leadership as a key strategic aim and has supported some eight separate leadership schemes at different times. In an evaluation of these leadership schemes for the Health Foundation, Walmsley and Miller comment that:

The loose articulation of the links between leadership development and quality improvement is reflected in the Foundation’s strategic plan 2004–09, approved by the board in 2004, which identified five strategic aims, of which one was ‘developing leaders’ ... The strategic aims acknowledge an interconnection between leadership and improving quality, but this remained very much an exploratory relationship, rather than a clearly articulated theory of change95.

This ‘exploratory relationship’ characterised the role anticipated for leadership in the EwQI. The expectation that leadership would be important was associated with a willingness to work with the project teams to develop a tailored approach. Therefore, during the first year of the initiative, the project teams were offered a package of resources tailored to the particular needs of each project and delivered by a leadership development consultant. The thinking behind this approach is explained in Walmsley and Miller’s evaluation:

The Foundation’s initial commitment to developing leaders acknowledged a potential interconnection between leadership and improving healthcare quality... Although, … where the relevant literature is reviewed, the empirical evidence base is relatively weak, our evaluation indicated that unless leadership and a focus on improving quality were clearly articulated in schemes, participants tended to focus on personal development without a parallel drive to impact on patient care. Consideration of how to embed improvement in scheme aims was given additional impetus as the Foundation developed its theory of change during 2006–07. The contribution ‘developing leaders’ could make, in particular in ‘building will, skills and capacity’, sharpened our thinking about focus. This led the Foundation to seek to embed technical expertise in improving quality alongside development of leadership skills, with varying degrees of emphasis, and to articulate the idea of a ‘leader in quality improvement’, someone with the capability not only to personally address problems with quality, but to inspire and enable others to do likewise95.

It was not part of our brief from the Health Foundation to formally evaluate the role that leadership development consultants played in the EwQI. But we were interested in the role played by leadership and how this articulated with the capacity of projects to deliver quality improvement.

5.2 The leadership in question

The wider literature identifies a range of ways in which leadership can be conceptualised. Lucas has summarised these as:
1. Great ‘man’ – ‘Leaders are born and not made’
2. Traits – ‘It is clear that there is a list of personal and professional skills/qualities which leaders have [or need to acquire]’
3. Behavioural – ‘There is an assumed and shared view of human behaviour (ie: people are inherently lazy or everyone has potential), and this influences the way leaders act’
4. Transformational – ‘Leaders are essentially there to inspire people to change and may well have an explicit theory of change’
5. Situational – ‘Different situations call for different skill sets/attributes’
6. Principle-centred – ‘Leadership assumes certain moral principles – for example, a need to serve others. Consequently, how it is exercised is as important as its outcomes’
7. Distributed – ‘Leadership is a shared activity and no longer the preserve of one person’

The approach taken by the Health Foundation, according to Lucas, is principally transformational and distributed96. That is to say, leaders play an active role, often involving a theory of change, in changing the way others behave, and this function may be distributed rather than found in a small number of leaders at the top of a hierarchy.

But, following Ferlie and Shortell, it is also important to appreciate that:

These initiatives are unlikely to achieve their objectives without explicit consideration of the multilevel approach to change that includes the individual, group/team, organisation, and larger environment/system level. Attention must be given to issues of leadership, culture, team development, and information technology at all levels97.

Leadership in delivering QI activities in the health services can take place at all levels, from the point of care to political leadership. Leadership for QI is concerned with influencing others to change their behaviour. The evidence of what leaders can do with respect to changing the behaviour of others in delivering QI is limited. Ovretveit poses the question ‘Can leaders influence improvement?’ and comments:

They can certainly stop improvement, and there is evidence that their actions or failure to act is associated with harm to patients and poor quality care. There is some evidence that leaders can establish structures, systems and processes in their organisation for generating improvement, which, in turn, are thought to improve patient care and reduce waste98.

However, Ovretveit finds little certainty in the research base on either the impact of leadership or the most effective style of leadership.

5.3 The capacity in question: building a platform for change, motivating action, and sustaining improvements

The first task of the EwQI project teams was to build a capacity to manage the complex activities for which they had taken responsibility. These involved intervening in multidisciplinary contexts, with support from service users, and coordinating a national project with a range of local activities.



These activities also involved sharing evidence, collaborative learning, and developing trust among all the participants in the project, centrally and locally. The skills required to deliver such projects went beyond the normally accepted skills of project management. A further task was to ensure that participant organisations and the teams that they recruited were supported to carry out their responsibilities, and within this, that individuals were suitably informed, equipped and motivated. Third, as part of the initiative, the project teams had an additional set of tasks relating to their responsibilities to operate within the terms of their contract with the Health Foundation, and this included meeting with, interacting with, and providing data for, the Evaluation Team.

Leading all these activities required a combination of organisation, diplomacy and energy. In Ferlie and Shortell’s typology97, this was leadership at both team and organisation level. However, it also involved influencing the larger system/environment. Building a ‘platform for change’ is easier when powerful facilitators are available. In the EwQI this involved mobilising the weight of professional opinion and the authority of the royal college/professional body behind each project. Leading these activities was also about more than building a platform for action; in each project, leaders were required not only to inspire initial enthusiasm but also to maintain commitment and, ultimately, to spread the lessons beyond the project – into the future and into other areas of healthcare. At team and organisation level, this called for a different form of leadership – the ability to motivate others in many spheres and to sustain their motivation. We shall see from the evidence provided in their SERs that the project teams implicitly recognised the need for both these forms of leadership.

Securing ethics approval

One early, and relatively prosaic, task facing the project teams was to secure ethics approval for their proposed activities. Meetings with the project teams revealed wide differences in their approach to formal ethics approval. Some teams were clear that they were undertaking research and therefore required ethics approval for their study in the usual way; others were equally clear that what they were doing was a service evaluation, which they believed did not need formal approval. A third group felt that some aspects (research) of their project did need approval, while other aspects (audit) did not. In the event, obtaining formal ethics approval proved to be a significant barrier to the smooth setup of some EwQI projects. For example, the PEARLS team experienced considerable delays and in hindsight said that it should have allocated more time: ‘A major issue hindering the project has been getting MREC [Multi-centre Research Ethics Committee] approval (which took six months plus).’

The same team also commented on the length of time it took to go through local NHS R&D governance processes:

The local R&D processes [have] … taken one year three months … R&D departments have proved more difficult and lengthy as all have required the completion of several forms; many have required additional copies of paperwork and information and waiting for committee meeting dates. However, the design of the study means that approval had to be achieved in all units before the formal cascade began.

The PoISE team also commented on the delays caused by research governance procedures, which in turn caused delays in gaining access to the trusts.

These difficulties, and associated concerns that significant delay in securing ethics approval might delay data collection and so compromise the final evaluation, encouraged the Evaluation Team to look at this issue in more detail. At that time there was an ongoing re-organisation of the role of Local Research Ethics Committees (LRECs) and an associated debate about the scope of their activities. The Central Office for Research Ethics Committees (COREC, now the National Research Ethics Service) had made a helpful distinction between audit, service evaluation and research; and it was clear that LREC activity only applied to the latter99. But the position of QI projects in which there is a mix of activities was less clear. We therefore worked with COREC and with some of the project teams to facilitate a smoother application process for the EwQI applications. In the process we have also been able to contribute to the national debate on how best to ensure the ethical integrity of QI projects in the UK, and to the understanding of the ethical requirements of future Health Foundation-funded QI projects.


The wider skill set needed in the central project team

The project teams were specifically asked to comment in their SERs on the skills needed to design, implement and evaluate their projects. Their comments, summarised in table 31, show that the project teams thought that they generally had the necessary skills (although, arguably, they would tend to say this): PoISE commented that the central project team members had complementary skills; PEARLS thought the team had the necessary skills and experience to design, implement and evaluate the project; EPI-SNAP thought that the necessary key skills were to be found within its team; IBD stated, ‘the Implementation and Steering Group members provide a huge amount of expertise that has been harnessed for all aspects of the project’s work’.

The PoISE team recommended bringing in appropriate outside expertise, as needed, during the course of the project, and many of the project teams did this, particularly with regard to health economists or statisticians. In these specific areas of expertise, the PEARLS team also reported that it had benefited from the ‘technical seminars’ led by the Evaluation Team on cost benefit analysis.

The project teams may or may not have felt that it was in their interests to state that they had the necessary skills to deliver their projects, but they had no reason to be other than frank about the list of skills required to deliver QI and, as can be seen from table 31, these are considerable.


Table 31: Skills needed in the central project team

Clinical/QI skills

– Clinical expertise in the project’s specialist field
– Skills in developing and quality assuring the tools, training materials, guidelines, and so on, which made up the QI activities (this also required clinical expertise)
– Ability to train others in all the above

Day-to-day project management

– Administration skills
– Budgeting proficiency

Specific expertise

– Expertise in seeking ethics approval and making ethics applications
– Skills in health economics – some teams brought in outside experts
– Skills in designing and developing web-based tools
– Statistical analysis – teams brought in experts with these skills
– Database design competence
– Ability to understand the service users’ perspective and communicate it to others

Relationship building

– Ability to build productive working relationships between members of the central project team – for example, between clinical and service user members
– Ability to build relationships with collaborators, such as professional bodies and patient groups, and to maintain contact with them
– Aptitude for successful recruitment of, and ongoing contact with, trusts (eg: troubleshooting, information seeking)

Audit, research and evaluation skills

– Proficiency in designing and implementing large-scale audits
– Ability to design, use and analyse research/QI instruments such as questionnaires and surveys

continued


However, the term ‘skills’ can focus attention on a rather static ‘stockpile’ of competencies rather than the ‘flow’ of skilled actions that characterises successful QI. In each project, momentum was maintained not only by drawing upon a static reservoir of competencies, but also by the injection of enthusiasm and commitment, and through ongoing interactions between the central project team and local participants. This required leadership in a variety of the senses outlined above, including diplomatic but persistent input from project managers, intellectual leadership at conferences and workshops, and maintaining and building trusted relationships with the professions, with service users, and with key organisations (such as the royal colleges). Leading peer review visits also required a combination of authoritative input and a supportive approach. The royal colleges are especially well-placed to support these sorts of activities, especially those that involve building relationships within and across professions.

Other resources needed to lead and implement the projects

The time needed to implement change

As the PoISE team noted, ‘Our experience raises both practical and methodological implications; it may be unrealistic to implement a project like this with anything less than a five-year time frame.’

The evidence from the projects is that QI moves slowly and that effective leadership needs to be sustained over long periods (arguably longer than those allowed for in the initiative). Sustained leadership also needs to remain focused on delivering the project when personal interests, political priorities, and improvement fashion may have moved on. For example, due to initial delays in the PEARLS project there was quite a long period between the training of local research facilitators and the time when they finally started to cascade that training to local teams. The PoISE team noted that some sites did not start implementing the initiative until three months into a six-month implementation period, ‘meaning that any changes and practice improvements had limited time to … be instigated, be adopted, and subsequently show any impact’. The team suggested that six months was not long enough to implement the changes required.

The POMH-UK team commented that one of the key lessons from the first topics it covered was about the amount of time it took for QI to be implemented and to bring about changes in practice. It took longer than anticipated for information to pass from the central team to the trusts involved, then to individual clinicians, then for the changes to be considered and implemented. The team said that one year was ‘not long enough for an audit cycle and not long enough to make changes’. Trusts said that they needed time to discuss and approve changes. In later topics, POMH-UK extended the period of time between baseline and re-audit for this reason.

Table 31: Skills needed in the central project team – continued

Data analysis and IT skills

– Ability to identify and develop IT tools
– Expertise in collating/cleaning/sending out data
– Expertise in data validation and analysis

Promotion and communication

– Competence in developing and implementing a communications strategy
– Flair for writing up papers for publication
– Ability to present at conferences and meetings.

Leadership from NHS managers

We discussed what the project SERs say about the involvement of NHS managers in chapter 4. Our findings can be interpreted in two different ways. On the one hand, the SERs generally report uneven leadership from management in support of the EwQI projects. The project teams’ efforts to get local senior managers to participate in the projects worked in some contexts, but not in others. This could be viewed as a failing on the part of management. On the other hand, unlike clinicians and patients (who had to be involved from the outset by the terms of the Health Foundation funding), managers were given very little opportunity to shape the form or conduct of the EwQI projects. On this basis, we could also argue that if leadership from managers is required to deliver QI, then at the very least managers should be invited to help shape these activities. We have too little evidence to make any firm judgements on this, but can speculate that lack of management input could explain some of the patchiness of impacts noted earlier in this report.

Local clinical leadership

Several teams highlighted difficulties in leading behaviour change at the local level, attributing this to a lack of resources rather than a failure of local leadership:

– Colorectal Cancer: The majority of consultants who failed to submit data to the audit said that this was due to a lack of resources within the trust.

– Self-harm: Some of the trusts participating in this project were reported to have become less well resourced during the course of the project, and thus struggled to find ‘the time and space for QI’.

– POMH-UK: In topic 1, there was a lack of resources to provide all clinical staff on participating wards with copies of the workbook on combined antipsychotics. In topic 2, resource constraints meant that it was not possible to collect data about all the patients in the participating assertive outreach teams – a sample of patients was included instead.

– PoISE: ‘Human resources pressures’ were reported in the local teams. Local staff were sometimes too busy to undertake their EwQI tasks, and needed protected time away from their posts in order to participate. One trust was unable to commit resources to a PDSA facilitator.

Most opinion leaders felt they did not have enough time to conduct their role and associated activities as well as they would have liked. In the most extreme case, one opinion leader had not been able to do anything other than raise awareness of the web tool by email. Other planned activities were prevented by having to prioritise other work responsibilities. This opinion leader had very limited clinical contact and felt that it would be better to have given the role to someone based in clinical practice who could combine [QI] activities with their usual role.

– PEARLS: NHS funding difficulties may have reduced ‘motivation to complete questions and surveys’ and also had an adverse impact on midwives’ ability to attend updating sessions.

Some clinicians were suspicious of innovation. EPI-SNAP reported that, initially, two GP sub-committees had resisted adopting the protocol for referral to the first seizure clinic (although both were eventually persuaded). The PoISE team commented that surgeons were not involved in the programme, but should have been. In the IBD project, local teams often found it difficult to get the required people (multidisciplinary team members and management) together to hold local meetings for action planning. The project teams did not always have the traction to overcome these local resistances.

There is a sense of the fragility of support for QI at the local level. Any one of a number of unpredictable events could prevent progress. QI is often not regarded as a part of the core business of the NHS and has to be undertaken at the margins of mainstream activities.

5.4 Leading sustainable change

Just as delivering successful QI requires leadership at different levels within NHS organisations, so too does sustaining and spreading the benefits. This involves an ability to bridge from one organisation to another and to continue improvement into the future. We discussed this issue briefly in chapter 3 and provide a more detailed analysis below.

Spreading improvement by influencing change in national initiatives

The efforts the teams put into influencing wider national changes have been considerable. The work of the two SNAP teams provides a good example. SNAP-CAP developed a primary care bundle which is similar to the secondary care bundle. It lists the antibiotics to be given in primary care and the clinical indicators that would indicate admission, and will help bridge the gap between primary and secondary care for patients with CAP. The secondary care bundle will be introduced to all hospitals within Scotland, not just acute hospitals, as part of the Scottish Antimicrobial Prescribing Group (SAPG) work plan, in which the care bundle becomes ‘best practice’. The SNAP-CAP team has thus secured the continuation of the project under SAPG in response to a national agenda and framework that was not evident at the start of the project.

Similarly, the EPI-SNAP team attempted to embed its interventions in routine practice through alignment with existing audit and clinical management systems. Although this left the project vulnerable to changes in policy and IT infrastructure at national level, the project has now produced a result. The EPI-SNAP first seizure project has been instrumental in the inclusion of a standard on the giving of driving advice in the new Scottish National Standards for neurological services. It is also likely that the project’s standardised referral form, or a form based on it, will become mandatory across Scotland.

Another example comes from the IBD project. The then Healthcare Commission included key (organisational) data items from the IBD audit in its annual health check. The team commented, ‘This was a major achievement in healthcare in England, raising the profile of IBD within trust management’. The audit was also accepted into the National Clinical Audit and Patient Outcomes Programme, funded by the Department of Health.

What these examples illustrate is the need for acuity in change management if improvements are to be sustained. They also highlight the difficulties faced by individual projects operating against a background of shifting agendas and priorities.

Institutionalising change rather than depending upon the enthusiasm of individuals

As already discussed, the EwQI projects have highlighted the importance of individuals as a catalyst for change, but they have also illustrated that an overdependence on individuals, rather than institutionalised processes, can be fragile. A preferred route within the EwQI to institutionalising change was to promote the adoption of standards and guidelines, building local capacity to lead change through training and use of IT. For example, the idea of cascading training was central to the PEARLS project.

[The] project is designed to create a system that encourages sustainable good practice. The research facilitators will be highly trained and supported, as they then cascade the training to the practitioners in their locality. Therefore the facilitators themselves and the colleagues that they have taught will be able to continue to practice the learning they have received.

The PoISE team spoke in similar terms about developing the skills of people working within trusts to build capacity. The Self-harm team plans to make the training materials produced by the project permanently available on the internet, and commented that the programme had improved joint working in many local teams.

Securing sustainable funding

From the start, the POMH-UK team intended that in the long term the prescribing observatory should be supported through a subscription fee from participating trusts (although the first topic was initially free). It changed its original fee structure and is now close to achieving this goal.

Moving to an annual subscription fee has given the project greater financial stability and has been a response to the finding that changes in practice require lengthy timescales and continued commitment from trusts. Developing trust subscriptions has been crucial in ensuring the long-term viability of POMH-UK past the length of the Health Foundation grant.

Both the EPI-SNAP and IBD teams recognised that, if they were to be sustainable, QI activities must use existing resources more efficiently or be resource-neutral, rather than demand extra funding.

An important aspect of the [EPI-SNAP] project has been the attempt to promote sustainability by developing interventions that can be supported by existing practices and initiatives, in order to minimise the need for ongoing commitment of resources during and beyond the project.

Successful programmes are more likely to be sustained if these successes can be backed up with evidence. Although project teams often struggled to fully quantify their successes, the point was generally well understood. The EPI-SNAP team commented that:

The likelihood of the topics chosen by the project being sustained beyond the end … of the project will be greatly enhanced by the demonstration that they are delivering benefits, as defined by each arm of the project. As well as improving the quality of care for patients with epilepsy, there are potential benefits for health care providers, in terms of updating knowledge and skills and acting as a focus for developing a team approach, in circumstances which may not always be conducive to this (eg: acute medical receiving). Actions which promote these aspects are therefore also being addressed within the project, in order to maximise the benefits and promote sustainability.

Promoting awareness of the problems and potential benefits

One mechanism through which QI activities change practice is by raising awareness of a problem and its possible resolutions. The NCROP team noted that ‘The team will need to pro-actively seek opportunities to sustain the work by maintaining an awareness of both national and local healthcare agendas.’

Similarly, the PoISE team talked about dissemination of results and work to raise the profile of better fasting practice:

Ensure that the resources developed will be more widely available after the project is completed – eg PDSA book, standard dissemination pack including patient resource and implementation presentation, web-based resource.

And the Self-harm project team commented that the creation of the psychiatric liaison accreditation network (PLAN), which was developed ‘as an expansion of the self-harm project’, will offer participating teams further opportunities to improve and develop.

Aligning QI activities with national policies

In chapter 1 we summarised the policy context into which the EwQI was launched and implemented. All the teams recognised the value of aligning their projects to national policy initiatives. For example:

Both projects [EPI-SNAP and SNAP-CAP] have also demonstrated the value of aligning and collaborating with other groups and initiatives at national level. This may be important for the success of the project operationally … or methodologically, or politically.

With increasing public demand for accountability and the drive for greater transparency, both from the government and within the medical profession, it is vital that [colorectal] surgical units can demonstrate their outcomes to patients and the local community and be able to compare results with other trusts’ results.

Some project teams were able to go further and become directly involved in relevant national initiatives. For example, members of the PEARLS project team were involved in the development of the Department of Health’s Safer childbirth document (2008), which is now used in mandatory training of midwives – a strong lever to get local units to take notice.

But although alignment with national changes can often help to promote a project and aid implementation, national changes may also pose a threat to QI activities. For example, the POMH-UK team commented that while a recently introduced requirement for trusts to develop quality improvement plans is a driver for participation in POMH-UK:

…[they] would want to be assured that the parameters in the QI plans are the right ones. If these were based around participating in reflective practice, then this would be a positive step. However, we would not support the data being used to assess performance [if the QI plans were inappropriate].

The PoISE team was attempting to recruit to posts for the project in May 2006, at which time there were budget deficits, and commented on the resulting obstacles, ‘This project has been conducted at the time of NHS trust deficits, re-configuration, re-organisation, staff shortages and role changes’.

The Self-harm team commented that changes to mental health teams in hospitals were ongoing during the EwQI, but the effect of change was mixed:

Liaison teams we worked with no longer exist due to financial pressures, whilst in other areas liaison services are being established or expanded.


5.5 The role of the royal colleges and professional bodies

‘The quest for quality’

In chapter 1 we highlighted the importance for the EwQI of Leatherman and Sutherland’s finding that clinicians listen and learn best from their peers and that professional bodies have a legitimacy and authority that command clinicians’ respect24.

Leatherman and Sutherland argued:

Given the extent of their influence and voice, it would seem incontestable that the royal colleges could and should play a critical role in the Quality Agenda. … The key question here is not whether the royal colleges have a pivotal role in the Quality Agenda, but rather how to engage them most constructively in a set of critical tasks.

However, they were also sceptical about what the royal colleges collectively had contributed to date, ‘a crucial role, unevenly adopted at the time of writing, is the development of routine data collection, analysis and reporting capability in order to monitor quality’. Leatherman and Sutherland clearly looked to them to play a more important role:

As England’s Quality Agenda matures, it must increasingly move from the current, legitimate and vital emphasis on national capacity building, led largely by government, to professionally dominated initiatives of routine analyses of quality and the implementation of corrective actions to remediate deficiencies. It is in this body of work that the royal colleges should be close collaborators, if not leaders.

One of Leatherman and Sutherland’s ‘Recommendations for the Quality Agenda’ is about the importance of engaging the professions:

Getting the professions ‘on board’ is an essential factor that is currently deficient. … A robust and professions-led initiative for standard setting, specification of quality measures and comparative peer review is essential.

They note that resources may be an issue:

Whilst the royal colleges clearly have an important role to play, the requisite skills and capacity are not available in each royal college. There are exemplary capacities in several sites, such as in the Royal College of Physicians (demonstrated in the MINAP and Sentinel Stroke Audit Programmes), the Royal College of GPs (demonstrated in the quality-measure development) and in the Intensive Care National Audit and Research Centre.

The report then makes the specific recommendation picked up by the Health Foundation in framing the EwQI:

Develop a published strategy detailing how the major clinical professions, presumably through the royal colleges, will: a) set quality standards; b) develop quality measures; c) collect data and perform analyses; d) conduct peer review; and e) publish results and aim for improvement. The strategy must include resource requirements and provisions, as well as plans to leverage current capabilities (such as are present in several colleges). Possible funders to seed this activity include charitable foundations and the Modernisation Agency, as part of its intention to evolve and embed capabilities and functions in the health service and professions. Independent health foundations, in collaboration with interested royal colleges, should accept responsibility for convening the necessary meeting(s) to develop the strategy.

The evidence for the Leatherman and Sutherland report came partly from a series of 47 interviews, although only two of these were with people primarily from royal colleges.

Cornwell and Jakubowska’s review of the role of the royal colleges and professional bodies

The potential role for the royal colleges and professional bodies in supporting QI continued to be a matter of debate in the EwQI, and in December 2006, the Health Foundation published a working paper written by Cornwell and Jakubowska (the EwQI Support Team)100. Based on interviews with senior people from 12 of the 16 royal colleges and professional bodies involved in the EwQI projects, the paper explored the role of these organisations, and discussed the levers available to them, as well as the barriers and obstacles they face in supporting QI. The paper concluded:

– All the royal colleges/professional bodies have the potential to build the capability for clinical quality improvement in their membership.


– There are positive signs of leadership from a handful of bodies.

– There are fundamental questions about the ability of some bodies to realise the potential, and about the appetite for reform of governance arrangements.

– NHS reform and recent and ongoing changes in medical training, employment, careers and regulation make this a critical time for the medical bodies. Paradoxically, the crisis may provide the catalyst that is needed for them to learn from each other and to work together.

– The wide variation between the bodies reflected in survey findings is becoming the subject of a wider debate. In the not too distant future, it seems possible that the medical royal colleges will find themselves exposed to some measure of public scrutiny – low level, perhaps, but unfamiliar and therefore uncomfortable.

The number of royal colleges and professional bodies involved in the EwQI, and the timing, mean that clinicians involved in the EwQI may be in a position, individually and as a group, to challenge and support the host bodies to take a public stand on the need for improvement in the quality (and safety) of patient care and to use their influence with members to:

– further enhance clinical participation in quality measurement, clinical audit and improvement interventions
– further develop the technical knowledge and skill improvement methods of doctors, nurses and midwives
– adequately involve users in the development of the agenda100.

These are important conclusions and they resonate with our own conclusions in the following chapter. We also had an opportunity to discuss these issues further at a later date with people from a number of the royal colleges, and we report on these discussions in the following section.

End-of-project interviews with the royal colleges and professional bodies

Towards the end of the evaluation, we interviewed eight senior figures (mainly quality/standards leads) from seven of the royal colleges and professional bodies involved in the EwQI. All the interviewees occupied senior managerial or leadership roles, and all were aware of the initiative. They reported a range of engagement with QI activities within the royal colleges and professional bodies, including two institutions that had dedicated QI units or centres and at least one other that was seeking to clarify how to address QI in a more structured way. Most interviewees had had some involvement with QI activities in addition to the EwQI.

Interviewees reported that the EwQI had had a range of significant impacts on the royal colleges and professional bodies101. In one institution, the EwQI had encouraged a shift from data collection on its own to developing interventions based on these data and had also led to greater patient involvement in QI activities. In another, it had strengthened the patient focus and the communication of best practice. In a third, the EwQI had been a catalyst for change which had ‘arrived at the right moment’, and in yet another it had helped make QI a key strategy. At the time of the interviews, interviewees perceived QI to be a priority of the colleges and professional bodies – typically, somewhere near the top of the list. And while interviewees reported that the EwQI had not precipitated a ‘sea change’ in culture (and indeed this should not be surprising given the scale and scope of the initiative), it had allowed more time to be spent with clinical communities.

Interviewees identified a variety of roles for the royal colleges and professional bodies, including supporting research, coordinating national audits, using their reputation to boost clinician participation, and facilitating joint working between clinicians. Members were informed about these activities through a range of media, including websites, conferences, newsletters and (less commonly) departmental name changes. None of the royal colleges and professional bodies required members to participate, preferring incentives to compulsion, but all reported that they anticipated changes in the future – in particular, with regard to revalidation. There was no consensus over how these activities should be funded, and barriers identified included:
– lack of time
– a ‘cultural divide’ between those engaged with audit and those involved with QI and research
– lack of funding for QI (meaning that, as subscription-based organisations, they were not funded to deliver improvements on behalf of the NHS)
– insufficient senior buy-in
– practitioners’ apathy
– unsuitable IT systems
– short-term contracts (although creative efforts to maintain the skill base by extending temporary contracts were also reported).

However, the factors identified differed from one body to another.

Despite noting some resistance to QI in most royal colleges, interviewees also believed that most colleges were willing to support further progress, although several expressed the view that change would be gradual, and that accreditation and revalidation offered opportunities that could be used to support QI. More specifically, QI-related activities included:
– a dedicated research team
– practical support in administering audits
– coordinating responses from specialist societies
– facilitating joint working
– communications support.

Interviewees also discussed the potential role of other organisations. Most were very open to the idea of establishing partnerships with other organisations (although they reported different levels of current partnering). Examples of partners mentioned were:
– academic institutions
– SIGN
– HQIP
– the Health Foundation
– regulators
– government health departments in other countries
– NHS trusts
– charities
– patient groups
– lay advisory groups.

The various ways of working with these partners mentioned by interviewees included: strategic partnerships with academic partners, individual links to universities, agenda influencing and informal partnerships with charities. The interviewees saw the opportunity to work more effectively with commissioners, GPs, PCTs, patient groups, and (for the royal colleges) professional bodies.

Overall, the interviewees said that the initiative had had a direct, significant and positive impact on the royal colleges and professional bodies. More widely (and partly influenced by their own involvement with the EwQI), they identified a growing role for QI in the work of the royal colleges and professional bodies, and anticipated that this growth would continue. They identified a varied but often significant set of activities and organisational changes. There was a willingness to collaborate with others and no sense that the royal colleges and professional bodies would want to monopolise work in this area. Clinician engagement, already palpable, could be strengthened further through the leadership and authority of the royal colleges and professional bodies.

The future role of the royal colleges and professional bodies in supporting and leading QI

The EwQI suggests that QI is strengthened by being given status, technical support, and the potential to be institutionalised in guidelines, protocols, standards, clinical audit, revalidation and training. The EwQI projects also show that, should they choose to, the royal colleges and professional bodies could play a helpful role in supporting and facilitating QI. Cornwell and Jakubowska’s assessment – that willingness and capacity are variable across the colleges – appears to be right, but this should not prevent any royal college or professional body that so wishes from engaging with the QI agenda. Our end-of-project interviews give added weight to Cornwell and Jakubowska’s conclusion and show what might be possible.

5.6 The role of information in delivering QI

The EwQI model of QI is fuelled by information. Accurate, timely, relevant and easily interpreted data are a key component of the platform supporting the initiative. This is information not only to support monitoring and accountability but also, crucially, as part of a theory of change – by providing new information in a particular way, it is hoped that behaviour will be changed. This is wider than information technologies. However, IT problems did figure as a barrier in the projects’ SERs. In the EPI-SNAP project, it was not possible to get the referral form for the first seizure clinic introduced on the new A&E Emergency Department Information System (EDIS) because of the development problems experienced by EDIS: additional projects like EPI-SNAP were not given priority with regard to system specification. ‘IT access problems’ also caused delays to the PoISE web resource going live.

Generally, however, information featured as a support to improvement. In particular, collecting performance data and feeding this back to hospitals, units and clinicians was central to all the project teams’ activities.

Web-based and electronic systems

At least six of the project teams (Colorectal Cancer, NCROP, Self-harm, POMH-UK, SNAP-CAP and IBD) mentioned online data submission or discussed project-generated audit reports that were available on the web. For example, the Colorectal Cancer team commented that the recently introduced ‘web-based system of data entry is currently active and has allowed trusts to streamline data entry, entering only essential data items in the required format’. And the NCROP team noted: ‘An electronic system of data collection has many advantages and is increasingly acceptable to busy colleagues in a variety of healthcare settings’. In contrast, local IBD teams achieved web-based data entry, but only as part of a more onerous process: data collection involved ‘identifying the appropriate cases for inclusion, reviewing the individual case notes and interpreting the data to be entered’, and it was common for trust staff to initially complete the data entry forms by hand and then transfer the data onto the website.

An ongoing concern among the project teams was minimising the burden of data entry. The IBD team noted: ‘Where possible, data sets will be trimmed down to the minimum questions required to gain the required data. Time frames will also be adapted in terms of case ascertainment and data entry’. In a similar vein, the NCROP team found that its initial plans for completing change diaries (submitted electronically) were too onerous: it was ‘not feasible for clinicians to collect data at such frequent intervals’.

Feedback of audit data

The Colorectal Cancer project aimed to feed back data to clinicians on a one-year cycle. In fact, due to problems in harmonising data submission, data for 2007 and 2008 are not yet available at the time of writing this report. The other project teams stressed the importance of rapid feedback. IBD fed back data within three months of the end of the data entry period. POMH-UK tried ‘to maintain a rapid response of six to eight weeks after last submission of data’. Producing detailed and individualised reports so rapidly was time-consuming for the central team, but the POMH team reported that the participating teams appreciated this quick feedback. The keys to such a quick turnaround were online data submission and advance preparation to refine the audit tool, to ensure it worked and to plan data analysis. Similarly, the Self-harm team aimed to send local teams a report of their performance within four weeks of the end of the data collection period. In EPI-SNAP, the four first seizure clinics received an audit report one month after data collection finished.

The project teams provided unit-specific analyses. For example, IBD provided site-specific reports, and the acute trusts participating in NCROP received individualised reports from the clinical and organisational audits. PoISE produced an individual feedback report for each trust, detailing their food and fluid fasting time, benchmarked against all the trusts in the study. POMH-UK sent teams individualised reports of the audit data, with subsections reporting on national, trust and clinical team data. The Self-harm project provided each team with an individual report of their performance and an aggregated report describing trends across the project, allowing some comparison.

Some EwQI project teams also provided practical assistance in using the data. For example, the IBD project team made supported action planning visits to some local sites at which the audit data were reviewed:

Teams were very open when reviewing their 1st-round data, highlighting areas where they felt they needed to make improvements. The fact that the visits had been arranged was, by the admission of sites themselves, often the catalyst for sites to undergo a more detailed analysis of their first-round results and to begin the process of identifying steps that would need to be taken to improve care.

Similarly, the POMH-UK project team provided an executive summary of each trust’s report, as well as slides to facilitate local dissemination. Self-harm held feedback events for local teams, asking each to share an area of achievement or an idea for improvement.

5.7 Conclusions

The evidence from the projects shows that delivering QI involves highly complex activities, and that the leadership, resources and support available for these are often fragile. The skill set required to provide leadership is extensive, and QI requires leadership at a variety of levels, such as national professional organisations, local professional (often multi-professional) teams, patient groups, and in management and policy making. Furthermore, it is not certain that the interests and values of leaders at all these levels will coincide. Adding to this complexity, the required skill mix and leadership style might have to evolve over time. There is a sequence to the leadership and other skills required when launching, managing, evaluating, and sustaining and spreading the project.

The current model of leadership for the delivery of QI in the NHS is, arguably, unrealistically heroic, requiring exceptional efforts to mobilise and align the work of others because QI is not embedded in the routine systems of the NHS. However, even with a more embedded system, the demands of QI on leadership would be considerable102. In this context, the royal colleges and professional bodies can realistically claim the potential to provide an organisational setting for nurturing and promoting such leadership. But even with such support, there would be a need to develop and exploit leadership skills more widely.


Chapter 6

Conclusions and recommendations

6.1 Introduction

In this final chapter we summarise and discuss our key conclusions. We consider what has been achieved, discuss whether the achievements justified the effort, and explain our conclusions about the reasons for success and failure. On this basis we then identify a set of recommendations concerning three aspects of QI: its delivery, support and evaluation.

Our aims are laid out in table 2 (chapter 1), and we believe that to a large extent we have achieved them. First, we have developed an innovative and effective way to engage with the projects in helping them develop their implementation and evaluation plans. The use of logic models had mixed success, but the use of self-evaluation reports (plus the records of significant events), combined with structured face-to-face meetings, has facilitated a deep understanding of the projects on our part, and a more informed approach to evaluation on their side. Second, we have brought together in this report the data and findings from the projects. However, the aspiration to synthesise these data has only partially been achieved because of the incommensurable nature of the input and outcome data provided by the projects. Third, we have shown significant increases in clinical engagement in QI as a result of the projects and documented both the extent and nature of this. Fourth, we have explored the wider influence of the EwQI in leveraging external commitment to standard setting, clinical audit and patient engagement, especially in relation to the royal colleges and professional bodies. Finally, we have attempted to gauge the costs and influence of the initiative.

When considering the conclusions and recommendations presented in this chapter, a number of things should be kept firmly in mind. The first is that the EwQI has specific characteristics that limit the generalisation of any conclusions. From the outset the EwQI was conceived as a demonstration of what could be achieved through clinician-led QI activities, with active support from the royal colleges and professional bodies and the engagement of patients and their representatives. Both the Health Foundation and the project teams wanted to show the improvements that could be made (including improvements in patient outcomes and adherence to guidelines) by working within the spirit of the initiative. The initiative was not, therefore, conceived as a scientific exercise. However, our evaluation of it was not only pragmatic but also scientific – we were seeking systematic evidence to support or weaken causal claims. The evaluative evidence produced is therefore necessarily limited in two fundamental ways. First, it has a weakly developed counterfactual (while we can speculate about what might have happened in the absence of each project, neither the individual projects nor we were testing the EwQI approach against other approaches to improving particular aspects of healthcare). Second, the evidence from the projects has been produced by them primarily to support their QI activities. We had a number of fruitful discussions with projects about the difference between evidence for research and evidence for QI; we agreed that, while there is a significant overlap, the concerns of the former focus on reliability, validity and replicability, whereas the latter focus on more pragmatic concerns of relevance, usability and timeliness.

The second issue is that the EwQI was one of two programmes funded by the Health Foundation, and it focused on acute care. The other programme, reporting in 2011, concerns primary care, and the findings from both these studies should be combined for an overall account of the contribution of clinician-led QI in the NHS. This will be particularly relevant when considering different models of QI, such as whole system change, which do not feature in this evaluation. Therefore, our conclusions and recommendations here should be understood as relating particularly to acute care.

This brings us to the third issue. The model of QI being studied concerns changing the behaviour of individual clinicians through the provision of information, peer review, training and other supports. Delivering this change involves projects that are clinician-led, patient-involving and royal college-supported. Intended outcomes focus on adherence to guidelines and improved patient outcomes. Clearly, this is not the only model of QI in town. Other potential outcomes include improved or more equal access to healthcare, improved patient experience, and a healthcare system more responsive to the balance of health needs in society. Other mechanisms for change might include a stronger role for commissioning, management and (non clinician-led) standard setting. These are all part of the potential mix of instruments for raising standards and improving quality, and the evidence produced here does not help us form a judgement about how to optimise this mix.

That said, although it might be psychologically attractive for evaluators to hide behind the incompleteness of the evidence, there is also an obligation to make judgements based on the evidence available (while making the strength of the supporting evidence clear). We attempt such judgements in this chapter. On the basis of these findings, we make a number of recommendations at the end of the chapter.

6.2 What was achieved?

Among other achievements, the EwQI successfully secured clinician leadership in clinical audit. Properly understood, clinical audit involves a cycle, or process of change, with a sequence of activities including: selecting the audit topic, agreeing standards of best practice, defining the audit methodology, data collection, analysis and reporting, making recommendations and an action plan, implementing change, and re-auditing. It also involves a complex sequence of challenges, such as building an agreement around the guidelines, evidence or outcomes that should be the focus of the cycle, putting in place processes of data collection, validating and analysing data, agreeing what the data mean, and developing action plans before subsequently re-auditing. The process is technically demanding and value-laden. It is likely to be challenging, and it is not guaranteed that the process will be equally trusted by all stakeholders, nor is it certain that all the relevant parties will collaborate. A role for professionals is desirable and (probably) inevitable; clinicians in the NHS are expected to participate in clinical audits where appropriate. The peer-led, patient-informed approaches to this, which were adopted in the EwQI projects, are revealing about how the clinical audit cycle might be delivered.

Most of the EwQI projects conceived of the audit process as a whole cycle. But in the Colorectal Cancer project, the term ‘clinical audit’ referred only to the first stages of the audit cycle (that is, agreeing the topic, selecting standards, defining the methodology, data collection, analysis and reporting). It did not include developing an action plan and implementing changes which are subsequently re-audited. The project team argued that without a fully trusted and technically reliable first audit, any action plans would be flawed. Even so (at least in terms of relevant surrogate measures), small but real improvements were identified, suggesting that in some circumstances simply making relative performance data available might be sufficient to prompt change. However, the statistical significance of the measured improvement is low and, furthermore, these improvements could have resulted from unrelated changes.

In the Colorectal Cancer project, the overall number of participating units has increased since 2000 (predating the EwQI funding). There is now a greater likelihood that a colorectal surgeon in the NHS will benchmark the quality of the care provided locally against the national average. Although further work may well be needed to ensure that these data are used to improve performance, simply establishing a trusted and rigorous audit of performance against agreed standards represents an important step. The results of a survey undertaken by the project team showed that 82% of the 105 consultants who had read the annual audit thought that it was useful as a benchmark and to raise awareness within units about surgical outcomes66. And perhaps without the audit, quality standards might have fallen in the context of competing demands on services.

Elsewhere in the initiative, the term ‘clinical audit’ was used in its more common sense of a full audit cycle. Professionally led clinical audits were achieved. In Self-harm, for example, the elements of the cycle were implemented as planned, and the reasons for some local units not participating were either linked to problems of resource and timing or because something similar was already happening in that unit. In POMH-UK, participation in audits continued to grow despite fees being introduced, and subsequent changes in behaviour, while patchy, were impressive in at least some instances. In NCROP, 96% of all acute trusts participated in the clinical audit and, of the 54 teams randomised to carry out reciprocal peer reviews, all participated, representing a significant shift in improvement activity in this field of medicine.

We have also seen that a variety of improvement interventions were adopted, with variable success in implementation. For example, in PoISE there is evidence that the standard dissemination package and web-based resources were used, but a more patchy response was reported for the use of PDSA. In EPI-SNAP the central project team successfully designed a referral protocol, but uptake on the ground varied. As in PoISE, this intervention was introduced into a particularly complex, multidisciplinary organisational setting. SNAP-CAP did not achieve the desired level of awareness among local teams, but despite this, the care bundle was implemented, the collection of outcome measures was widely adopted, and SNAP-CAP principles were embedded in the admission systems in at least two hospitals and in other standards. IBD achieved high levels of participation in the audit but more modest achievements in delivering change through supported action planning visits.

We therefore conclude that peer-led audit in the acute sector can achieve high participation with trusted results, and that such audits may lead to successful action plans, but that delivering action plans may take longer and be more complex. To this extent, peer-led processes can work.

We also conclude that the projects not only achieved effective clinician engagement but also that this was followed by measurable changes in clinicians’ attitudes and practice (although such changes might have been influenced by factors outside the EwQI). These changes include improved lymph node harvest (Colorectal Cancer), better staff attitudes (Self-harm), better referral practice (EPI-SNAP) and better prescribing practice (POMH-UK). Furthermore, each project put forward evidence to support the expectation that these changes in behaviour would contribute to improved outcomes for patients. However, we should be careful not to assume that changes in clinicians’ behaviour automatically lead to measurable improvements in patient outcomes.

To understand the achievements of the EwQI projects, it is important to understand that there are at least two ‘layers’ of factors, or two potential gaps, between successful participation in the early stages of the clinical audit cycle and improved patient outcomes.

The first is the development of credible and deliverable action plans in response to the first-round audit data. In NCROP and IBD considerable energy was devoted to supporting peer review and informal action planning visits, respectively, as a route to implementing change. POMH-UK had some success in supporting local improvement through local champions, academic training, change management workshops, work books and easy-to-use monitoring charts. PEARLS appears to have made significant steps towards achieving local change through the use of locally based trainers, who then spread good practice (although the evidence for this is still incomplete). Therefore, we can say with some confidence that in the acute sector, clinician-led approaches can, in the right circumstances, result in the successful development of action plans of one sort or another.

The second gap is that even agreed action plans do not necessarily deliver measurable improvements. POMH-UK showed modest but statistically significant improvements on a range of measures, and the team justifiably concluded that these were at least in part attributable to their activities. In NCROP, evidence of patient benefit was patchy, with measurable (although not statistically significant) improvement in two areas of intervention but not in another two. In PoISE, there were no statistically significant differences in the mean food and fluid fasting times across intervention strategies at the cluster (hospital site) level, although some statistically significant changes on individual wards were recorded. A reasonable explanation of this, in the case of PoISE, is that the local context for change was complex and multidisciplinary, and that improvement would require greater coordination and re-alignment of behaviour than could be achieved, given the resources and time available. EPI-SNAP was successful in securing some improvement in providing driving advice. But, although this measure is widely recognised as an appropriate indicator of good practice in referral, it is not known what impact it has on long-term patient outcomes. SNAP-CAP reported improvements in some measures, such as antibiotics in four hours, oxygen therapy and bundle compliance, which are (again, justifiably) believed to be associated with improved patient outcomes. IBD showed statistically significant improvements in a range of indicators at visited sites. However, non-visited sites also improved (with some exceptions).

On the basis of the evidence available, we come to the conclusion that improved outcomes – in terms of measurable improvements in adherence to guidelines or patient outcomes – have been modest and patchy during the study period of the evaluation. On this narrow definition of outcomes, and against an implicit counterfactual that these quality standards would not have changed without such initiatives, it can be hard to justify the considerable effort put into the EwQI. However, in our view, this does not capture all of what was achieved, and there are considerable additional benefits to consider. These wider benefits include:
– engaging clinicians (and service users) in effective processes of change
– engaging policy makers and decision makers
– enhancing the capacity of the healthcare system to deliver QI
– contributing to the knowledge base on QI
– contributing to the design of evaluations for QI.

We shall now briefly look at these wider impacts in turn. We have already noted that clinical leadership of QI activities was associated with significant engagement by clinicians. In addition, the EwQI sought to engage the royal colleges and professional bodies in QI. In Colorectal Cancer, the project team reported that the ACPGBI’s active support of the project boosted participation. Self-harm and POMH-UK both successfully drew on, and in turn strengthened, resources within the Royal College of Psychiatrists. The NCROP team believed that commitment to the project was shown because it sat within the Clinical Standards Department of the Royal College of Physicians, which provided the infrastructure to support it. The PEARLS team also reported that the Royal College of Midwives now has a greater understanding of the issues around QI as a result of the initiative.

However, although we can state with confidence that the projects contributed to the engagement of the royal colleges and professional bodies in QI, and we can state that their active involvement facilitated change, there is also considerable variation among the royal colleges and professional bodies (see chapter 5), and it is important to avoid over-generalisation. Not only were they all starting from very different places, they are also accountable to different constituencies.

In addition, the EwQI involved service users as members of the central project teams. As we report in chapter 4, both the service users and other team members were extremely positive about the contribution they had made and about the lessons learned in the process.

As a demonstration project, the EwQI had aspirations to promote learning and change among policy makers (broadly conceived). Senior figures within the royal colleges and professional bodies have been engaged from the outset. We have seen that the profile of QI has been raised within these organisations, and their capacity to support QI in more practical ways has been improved. In chapter 5 we outlined interview evidence showing that senior figures within the royal colleges and professional bodies believed that the EwQI had had positive, beneficial and timely effects in raising the profile of QI within these institutions. The EwQI project teams have produced a wide range of contributions to the knowledge base, as is demonstrated by a growing range of publications and conference presentations (see chapter 3). They have also influenced national decision makers and been involved in setting national clinical standards. For example, IBD and EPI-SNAP contributed to National Service Standards. Clinical audit is one plausible route to QI, and the EwQI not only provided additional resources through direct funding but also raised the awareness and profile of such audits.

The Health Foundation also hopes that the EwQI will have a wider influence among policy makers and senior decision makers. For the reasons suggested above, this has already been achieved with senior figures in some royal colleges and in the professions. Some senior managers, such as chief executives, at least have an awareness of the initiative (although in general, the impact on senior management seems to be muted). The contribution of the projects to guidelines and standards is further evidence of this wider influence. We will have to wait until the dissemination of this report and related activities to see whether the aim of influencing senior decision makers in and around the Department of Health has been achieved.

Finally, a report of the protocol for the external EwQI evaluation has already been published and was positively referred to in a Department of Health invitation to tender103. The protocol has also helped to shape several other studies, including: the evaluation protocol for the Health Foundation's 'Engaging with Quality in Primary Care' and 'Closing the Gap through Clinical Communities' schemes; the evaluation of the Department of Health's Integrated Care Pilot Programme; and the evaluation of the NIHR Collaborations for Leadership in Applied Health Research and Care (CLAHRCs). And the approach adopted in this evaluation has been transferred to multilevel, multi-agency emergent projects outside healthcare.

In judging the achievements of the EwQI we might go in one of two directions. The first would be to take stock of all that has been accomplished to date. Despite real changes in peer-led QI and in the behaviour of clinicians, there is limited evidence that this has fed through to improvements in health outcomes. These achievements could be characterised as interesting but limited. An alternative approach would be to try to understand how the legacy of the EwQI might contribute to future improvements. This legacy will be felt by individual patients and by their various organisations, and by clinicians and their professional bodies. It can be considered along at least two dimensions. The first is that some activities will be carried on directly. POMH, for example, has used the investment in the infrastructure funded partly by the EwQI as 'sunk development costs', and will roll its approach forward. Elsewhere, national audits will be repeated. Second, and perhaps as interesting, is the fact that many busy people continued to commit considerable amounts of time throughout the life of the initiative and beyond, and they appear strongly committed to continuing to do so in one way or another. These are also part of a larger group of people (including the Evaluation Team) who have learned a great deal from participating in the process. In the end, the consequences of the EwQI will depend upon the joint efforts of these, and many others, in taking the lessons forward. Table 32 summarises these benefits from the EwQI but leaves to one side any judgements about longer-term impacts.

In summary, we may therefore conclude that efforts to change clinicians' behaviour through clinician-led QI which engages patients were broadly successful. Wider benefits to the health system included:
– improved practical knowledge of QI among the many health professionals and service users involved in the initiative
– improved evaluation techniques
– the engagement of the royal colleges and professional bodies, associated in some instances with an enhanced capacity to deliver QI.

On the other hand, direct benefits in terms of measurable patient outcomes were more limited, and it is important for us to understand what facilitated and hindered such improvements, which we attempt to do in section 6.4.

We have summarised the impacts in Figure 1 below. This shows that the most visible and measurable achievements are in relation to engaging clinicians, as evidenced, for example, in their leadership of, and participation in, clinical audit. Less visible are the impacts on the royal colleges and other influencers of improvement in clinical behaviour, and on the behaviour of participating clinicians. Finally, the impacts on patient outcomes are the least easy to identify.

Another way of interpreting this evaluation of the EwQI 'contribution story' is that we have considerably reduced uncertainty surrounding when, why and how clinicians will engage in QI, and the effect this has on the healthcare system. We have shown that such an approach produces patchy and limited impacts on patient care (without reducing many uncertainties about why this happens). We will consider the wider implications of these achievements, but first, we pose the question 'Was it worth it?'


Table 32: Summary of the benefits from the EwQI

Direct benefits to patients arising from the work of the projects: Modest, patchy but real.

Engaging clinicians and the professions in change: Peer-led QI processes secured effective clinician engagement. Very large numbers of clinicians engaged in the EwQI, mostly positively. Engagement was limited by time and resource rather than interest and enthusiasm.

Engaging service users: Service users were engaged in different but largely successful ways, with benefits reported for both the quality of the project and the individuals involved.

Engaging policy makers and decision makers: The EwQI secured the attention of the royal colleges and professional bodies, and they reported immediate consequences in organisation and practice, and also that the EwQI had either 'catalysed' or supported longer-term trends towards involving them in QI. The royal colleges and professional bodies reported that through the EwQI, they had influenced guidelines, engaged senior champions within professions, and strengthened the patient voice within QI. Influence on wider policy makers is yet to be determined. There is only limited evidence of engagement of senior management in trusts and the healthcare system more widely.

Promoting the capacity of the health system to deliver QI: The EwQI has improved awareness of the practical challenge of delivering QI within the medical profession, improved technical skills such as delivering all parts of the clinical audit cycle (although this varied by project), strengthened the capacity of the royal colleges to support QI, and made some contributions to guidelines and standards development.

Contributing to knowledge usable in improving the quality of care: Findings from projects have been made available through professional conferences and publications, and through peer-reviewed journals. Further dissemination is planned by the project teams, and by the Health Foundation and the Evaluation Team (of which the publication of this Final Evaluation Report will be part).

Contributing to the design of evaluations: Development of an evaluation approach to address emergent, multilevel and multi-agency interventions has been positively received more widely, and has directly contributed to recent evaluation designs for EwQPCAS, CLAHRCs and the Integrated Care Pilots.


6.3 Was it worth it?

In this section, we first identify the effort required to achieve these results and then explore the question 'Was it worth it?' But before doing so, we should note an additional problem which is both obvious and easily overlooked. The effort required to deliver any given level of achievement varied for each project, partly because each had a different starting point. The capacity to deliver QI is not spread equally across either the professions or their professional organisations. For example, the two projects linked to the Royal College of Psychiatrists were housed within a unit which already had considerable experience and understanding of delivering improvement activities. The team based in the Royal College of Nursing had a well-developed analytical approach to understanding improvement interventions. By contrast, the Royal College of Midwives had a commitment to developing such capacities but little prior experience.

Furthermore, there is no simple way to arrive at a monetary cost for the initiative. We know that the Health Foundation made available some £4.6 million for this initiative, and this directly funded much of the support structure to help deliver QI, including the Support Team, the work of the Leadership Programme and the Evaluation Team, in addition to the funding for each project.

However, this provided us with only a partial understanding of what was required by the central project teams and the local participating units. Despite constant prompting and support from the Evaluation Team, the projects often failed to obtain the necessary management and cost information needed to provide estimates of the costs involved. As shown in chapter 3, we were therefore compelled to take a broader view to reveal the amount of effort required to deliver the initiative. Although we were unable to cost these efforts, we could provide a sense of their scale.

We can conclude a number of things. As we saw in chapter 3, the resources required locally to participate in the first stage of the audit cycle were not regarded as excessive by clinicians. The Colorectal Cancer team provided an account by one participating clinician that made it clear that the time required was manageable and not excessive. In POMH-UK, the view of the central team was that local units did not find the data input too onerous – and this was confirmed by their willingness to participate and to pay for this opportunity. In NCROP, almost 100% of the target units were recruited to both the organisational and clinical audits; and the SNAP-CAP team provided an insight into the local resources required to participate in their project.

Figure 1: The EwQI pyramid of impacts. The pyramid shows, from base to apex: peer-led engagement of clinicians in audit and other QI-related activities (successfully involved large numbers of clinicians in peer-led and patient-involving QI); improving clinical behaviour (active interest from professional bodies, service users, professional and academic events and publications, and wider groups); and better patient outcomes (modest, patchy, limited impact on patients). Measurable impact increases towards the base of the pyramid and diminishes towards the apex.

The resources required centrally to develop and deliver the audits were variable but sometimes considerable. The IBD team told us that the average annual central cost of conducting a clinical audit for the Royal College of Physicians was in the order of £120,000–£150,000, but we are also aware of the considerable energy and 'pester power' that was expended to achieve high levels of participation. The evidence from the EwQI, therefore, is that with support from a central unit, participation in clinical audit by local units in acute care is feasible but challenging. And although demanding, the costs of central support are not excessive.

The efforts required to develop and deliver improvement plans were also varied. At the local level, the Self-harm team estimated that the time commitment of each participating clinician was in the order of 15 days per team member (or 60 days for each team), with 15 service-user days and 48 days for each team leader. The NCROP team reported high levels of commitment and participation, but also high levels of anxiety about the time taken to prepare for and deliver visits.

Furthermore, the central support required for these local improvement activities can be considerable. The PoISE team estimated that the annual cost of supporting all 170 trusts with full implementation would be £153,700, and that the total running costs of the whole project were £550,000, excluding in-kind time from the RCN. The SNAP-CAP steering group estimated their commitment at a total of 43 days. PEARLS is yet to provide a final report, and the analysis of the time and effort spent on the IBD audit is being undertaken by an outside team. Though the final analyses are not yet available, it is clear from discussions that both project teams regarded the delivery of their respective improvement packages as expensive and time-consuming. Overall, it is impossible for us to demonstrate that the use of resources was cost-effective.

Given the scale of effort required, and the modest and patchy outcomes for patients, is the EwQI model sustainable? Will commissioners fund activities which may have long-term, system-wide benefits (for example, future cost savings, less demand for social care, more equitable services) that are unrelated to local targets or the patients routinely cared for by local clinicians? If participating in clinical audit is a professional obligation – and we have shown how a professionally led approach to collecting data on performance against agreed standards is feasible and acceptable – then we anticipate that clinician-led measurement will continue, but the implementation of improvement plans may not. This is an important hiatus for policy makers to consider.

The greatest challenge to the model of QI underpinning the EwQI, therefore, involves not the production of trustworthy relative performance data but the delivery of sustainable change (and one without the other is of limited value). In part this is because such change is difficult (witness the outcomes for patients in the EwQI), and in part because it is time-consuming, and time is at a premium for clinicians and others. Clinically led audit produces important new evidence that allows professionals to assess the performance of their units and, on the back of this, consider how to improve the quality of their service. In our judgement, the approach to the first stages of the clinical audit cycle common to all bar one of the EwQI projects represents good value for money104. The problem is that this stage cannot be considered in isolation: the clear understanding of the majority of the EwQI project teams was that audit and feedback on their own do not necessarily achieve change. And it is the second stage (the improvement intervention, the second 'gap' we identified earlier) that appears, on the evidence from the EwQI, to represent variable value for money. Given that, we believe that the opportunity costs of 'doing' QI need to be carefully considered when commissioning projects. As we have shown in chapter 4, these anxieties were widely shared by clinicians themselves. In order to understand more clearly what might represent good value for money from more targeted QI activities, it would be helpful to discuss what appears to have facilitated or inhibited change in the EwQI. We consider this in the following section.

6.4 Explaining success and failure: why is the return on effort greater for some QI activities than others?

As we noted in chapter 1, the variety of the projects' starting points, methods and aims limits generalisations about what others can learn from the achievements of the EwQI. However, that diversity also allows us to understand how different mechanisms might be more or less successful in different settings.

To start with the more straightforward evidence of what works and why, we have seen that the first stages of the clinical audit cycle can be effectively delivered, with widely accepted results, when a small, technically skilled group is sanctioned by fellow professionals (either through a royal college, professional body or other means). In the EwQI projects, the gap between existing practice and guideline-compliant practice was widely acknowledged, and agreed standards for the clinical audit either existed or there was a basis for creating them. Under these circumstances, the first stage of clinical audit could successfully be delivered by a professionally led team of clinicians.

However, projects experienced more difficulty in deciding what to do with the data in order to achieve change. 'Difficulty' took different forms. On the one hand, POMH-UK had a complex model of change involving local champions, academic training, change management workshops, work books and monitoring charts. But it also had a relatively discrete and clear target audience, a clear problem definition, and a solution widely acceptable to clinicians, pharmacists and service users. Statistically significant improvements in practice were achieved in some topics. PoISE, on the other hand, appeared to have an equally clear problem definition (unnecessary peri-operative fasting), and it sought an apparently simple change (adherence to evidence-based standards). It also had a sophisticated and clearly articulated theory of change (arguably more so than any other project), involving complex delivery of information concerning good practice. Yet in this case, there was no overall statistically significant change in practice. A plausible explanation is that the multi-professional context proved to be a barrier, requiring the alignment of anaesthetists, surgeons and nurses, and the management of resources both on the wards and in the operating theatre. The concept of professionally led processes as a unique and important driver of change may need to be modified where the leadership is perceived to come from just one profession and other professional interests are also involved. The consensus and trust that professionals have in processes led by their own professions (see chapter 4) are not necessarily extended to processes led by other professional groups.

Another contrast might be made with the IBD project, which succeeded in influencing both national standard setting and the capacity of the Royal College of Physicians to support QI. This team achieved high participation in the audit and managed well-received action planning visits. Successfully changing individual clinicians' practice here appeared to be associated with a clearly identified and consensually agreed problem, combined with a relatively small and coherent target audience. Their success in influencing national standards owed much to the persistence and influence of the chief executive of the main IBD patient support group (the National Association for Colitis and Crohn's Disease), who was a member of the project implementation group. This ability to span the worlds of service users, QI and the professions might be important in achieving some of the wider influence which the EwQI sought.

The projects also experienced difficulties. Some faced problems with ethics and research governance arrangements (see chapter 5) – PoISE and PEARLS in particular experienced considerable delays. Some, such as EPI-SNAP and SNAP-CAP, had difficulty integrating their project's requirements with existing web-based electronic systems or developing suitable systems themselves. All found that communication skills were more important than they had expected at the outset. And initially none fully anticipated the extent of the obligation to collect monitoring data on the implementation of the project (whereas most models of QI recognise that measurement is a requirement in order to understand what is working). Most projects were able to recruit a high-quality team (a condition of funding), and good managers had a palpable effect on the running of the projects. Some of the improvement interventions selected, such as supported peer review visits, made high demands on the central team. All the teams were stretched by the complexity of delivering change and some occasionally struggled; the change processes embodied in the EwQI model require a significant management capacity.

Other generic factors are time pressures and the unevenness of institutionalised support for QI. Participating clinicians in the central teams and in local units often commented that they were doing QI in addition to their 'day job'. Nor is QI perceived to be a route to advancement in the profession – it is not held in the same esteem as medical research.

There was also the problem of aligning activities at different levels. Centrally driven activities (such as audit) had to be aligned with the capacities of local trusts and with capacities at ward and individual clinician level. Absence of support at any level could jeopardise the whole project. For example, support from trust chief executives was seen by several project teams as critical, and was often sought, but not always obtained. The projects required staff resources, IT provision, clinical skills, project management, training, and access to patients and their representatives. They required communication and financial skills, as well as the technical skills of measuring and cost estimation, and knowledge of techniques such as PDSA. While every project had access to many of these skills and demonstrated a capacity to align activities across the levels of the NHS, they all had limitations, which the Support Team and leadership programme only partially overcame. We conclude that in less well-resourced approaches to QI, elsewhere in the NHS, there is a risk that the delivery platform will be too fragile to implement QI successfully.

6.5 What should be done differently in the future?

We have organised our recommendations into three broad areas: delivering, supporting and evaluating QI. Together these recommendations aim to achieve three things: to facilitate clinician engagement in effectively led QI, to amplify the impact of this engagement on clinical practice and patient outcomes, and to improve our understanding about what works and why.

Delivering QI

A springboard for action

Delivering each EwQI project required a sophisticated platform – a springboard to facilitate action. In some instances, this platform built upon an existing capacity in a royal college (POMH-UK, Self-harm), while in others that capacity had to be built virtually from scratch (PEARLS). In some (IBD), existing relationships could be used to help build the platform; others were seeking to build and develop new relationships as part of the model of delivery (SNAP-CAP). This delivery platform for QI included:
– project management
– data collecting and analysing
– communication
– trust building and diplomacy
– understanding of the wider NHS and its incentives and drivers.

The springboard might be conceived of as the 'entry ticket' for playing the 'QI game'. In the case of the EwQI, there were additional supports available to the teams (not least funding) through the Health Foundation. This will not always be the case for other QI activities. Therefore, our first recommendation is as follows:

Any QI project should have a springboard, consisting of a team with sufficient capacity to manage the complexity of that project. This report demonstrates the considerable extent of these requirements in terms of project and people management, user engagement, data collection and analysis, communication, trust building, and understanding of the wider NHS environment.
TARGET AUDIENCE: those planning and leading QI; NHS bodies hosting QI activities; funders.
TIMESCALE: immediate.
TASKS: develop a short 'capability check' that QI project teams could use to reflect on their capability for action.

Sparking change and mobilising resources

Professional awareness of a gap between the evidence and current practice changes little on its own – these gaps remain persistent. Even having a platform for delivery is insufficient. Each project also demonstrated the importance of leadership and mobilisation. Additionally, each project sought to structure and organise that leadership – a spark to ignite and maintain action.

Leadership took different forms. The role of trusted 'names' in encouraging others and giving the project legitimacy points to a more 'heroic' form of leadership, but the capacity to diplomatically and persistently maintain interest and momentum among clinicians and patient groups who had many competing interests was equally important. Those acting in these roles needed to be credible both among clinicians and in the NHS more widely. These complex projects also needed robust project management.

Successful implementation of these projects depended on multi-professional and inter-organisational collaboration. Peer review, although time-consuming, proved to be an effective way to develop action plans at the local level, but it was resource-intensive. On the other hand, PDSA, which depends upon inter-disciplinary teams sharing an understanding of a problem and agreeing to act and measure any resulting changes, had mixed success. PDSA provides a basis for structuring local collaborations in multi-professional settings. This appears not to have always been deliverable. But the difficulties encountered do not suggest that it could not work well in other settings or over longer timescales. IHI thinking on collaboratives (SNAP-CAP) and action planning visits (NCROP and IBD), and NIH thinking on PDSA (SNAP-CAP, PoISE) all aim to help individuals and their teams learn from each other, engage with experts and develop the evidence base. These aims are fundamental to successful QI. Each project had to adapt these generic approaches to local circumstances, and that often required considerable effort. Even the most intellectually coherent approach can be confounded by the complexities of delivering change on the ground. There is a need for flexibility in planning and realism in expectations about likely achievements.

Our second recommendation is that:

QI activities typically require a change from routine practice and must overcome inertia to get started. Successful projects require leadership capable of sparking enthusiasm and maintaining a momentum suitable to the scale of that inertia and to the ambition of the aims to be realised. Patient voices can be an important support in this. Large, complex projects, such as those in the EwQI, require a range of leadership skills to facilitate action and organise multi-professional, multidisciplinary collaborations, using structures carefully adapted to local circumstances.
TARGET AUDIENCE: healthcare leaders; clinical leadership educators; funders of leadership programmes; NHS Institute for Innovation and Improvement; professional bodies; funders of health service research.
TIMESCALE: medium-term development of leadership capacities in healthcare.
TASKS: build on existing literature on leadership and change to audit current skills against requisite skills; continue to use leadership support programmes as part of QI activities; continue to develop leadership courses and training.

Sustaining change and aligning with the direction of change in the health system

Throughout our interactions with the project teams we prompted them to outline and discuss their plans for sustaining and spreading the benefits they achieved beyond the life of the EwQI funding. This involved, in one way or another, institutionalising changes through, for example, contributing to guidelines or standards, altering admissions procedures in hospitals, securing funding for future audits through subscriptions or other sources, influencing professional training, and so forth – effectively, aligning the QI activities to the wider resources of the healthcare system.

The projects all struggled to protect clinicians' and service users' time. We have seen the considerable efforts required to engage in QI, and as the NHS faces increasing pressures to deliver, these efforts do not appear to be sustainable. Clinicians face new priorities and interests, and service users become exhausted. However, if QI were to become embedded in the wider mechanisms through which health resources are allocated, then the opportunities for sustainable benefits would be considerably greater. This would require some practical steps. For example, understanding of QI could be improved and participation in QI incentivised through pre-clinical education, professional training and revalidation. Commissioning could be used to support long-term streams of QI and to ensure that lessons learned in one place are absorbed elsewhere. Clinical audit and the use of guidelines and standards could continue to be promoted and spread.

Achieving this would require many things, but in particular, it would need to involve management. Trust chief executives were required by a number of the projects to make a paper commitment, but this was rarely an active involvement. The engagement of managers was not a focus of this initiative, and perhaps, in retrospect, it should have been given more attention.

Our third recommendation is therefore that:


QI activities cannot easily swim against the tide of wider changes in the healthcare system. To provide sustainable benefits, QI activities should, where possible, be aligned with the mainstream allocation of resources in healthcare, supported through professional training, and through commissioning and regulation, and be integrated into the management of services. This alignment is also likely to include engagement with service users. Should all this not be possible, alternative and sustainable supports should be identified.
TARGET AUDIENCE: commissioners of care; managers in NHS bodies hosting QI activities; funders of QI; deliverers of QI.
TIMESCALE: immediate.
TASKS: QI projects should address sustainability at the outset rather than towards the end, and should identify how changes in the healthcare system can be harnessed to achieve sustainable improvements.

Supporting QI

We have seen that formally funded, large-scale QI projects are complex and time-consuming, and that they are unlikely to generate a quick step-change in patient outcomes. Equally, QI is just one of a number of approaches to improving healthcare, and if it is to earn its place in the mix of policies intended to make healthcare safe and effective, then advocates of QI need to be able to show long-term, sustainable benefits that justify the effort required. For these reasons we propose that the governance arrangements of QI be examined more carefully to ensure that QI activities that are inappropriate, insufficiently managed or lack traction with the wider NHS are not encouraged. However, this should be seen to apply specifically to large-scale and formally funded demonstration projects and not to local 'own account' initiatives that could provide fertile ground for new approaches, which could later be assessed more rigorously.

Consequently, our fourth recommendation is as follows:

A large-scale QI project should only be funded if the healthcare institution hosting that project has the necessary project management capacity, leadership, monitoring and evaluation skills to ensure that the project has the best chance of delivering and measuring improvements in the quality of healthcare, and of sharing positive results. However, a balance should be struck to ensure that this does not inhibit innovative approaches 'bubbling up' from below. Of particular importance is the support that service users, carers and their representatives can provide.
TARGET AUDIENCE: funders of QI; healthcare bodies hosting QI activities.
TIMESCALE: medium term.
TASKS: develop a 'capability check list' to be used before arriving at any decisions about funding large-scale QI.

In arriving at such protocols for good governance of QI, the royal colleges could play a key and leading role, should they choose to do so. More widely, it is clear from our evaluation that the engagement of the professions is both feasible and appropriate. Again, the royal colleges and professional bodies could, should they so wish, play a positive role in informing, supporting and legitimising QI. Therefore, our fifth recommendation is that:

Each royal college and professional body should consider how, if at all, it wishes to provide leadership, legitimacy, organisational support and professional training in relation to QI.
TARGET AUDIENCE: the royal colleges and professional bodies.
TIMESCALE: medium term.
TASKS: the royal colleges and professional bodies to use their inter-institutional networks to take forward the debate of what is possible and desirable in general and to develop an internal dialogue on what is appropriate for each institution. As guardians of professional standards, they could also solicit the views and expectations of service users and the wider public.

While the colleges and professional bodies might have a role to play in the understanding of QI through influencing clinical appraisal, revalidation and professional development, the universities teaching medicine could also incorporate this into their teaching and research. This would give QI greater status and equip clinicians to participate. The EwQI teams did not start with a well-developed skill set to deliver QI. Therefore, our sixth recommendation is that:

QI should be part of the education, training and appraisal of health professionals. This not only concerns ‘heroic’ leadership but also dispersed leadership and the ability to maintain effective dialogue with managers, service users and other clinicians.
TARGET AUDIENCE: educators.
TIMESCALE: medium to long term.
TASKS: review the ongoing changes to the current curriculum and propose inclusion of knowledge about QI and skills in its delivery.

Evaluating QI and strengthening learning

We are therefore not advocating an explosion of large-scale centrally funded QI projects throughout the NHS. Instead, we are suggesting that such approaches to QI should earn their place in the mix of policies designed to make healthcare safe, effective, fair and responsive. But if they are to do so, we need to build a more systematic evidence base by setting up projects that have not only good governance but also good evaluation. Then, over time, it would be possible to make a judgement about the opportunity cost of the resources that might be put into QI, and about the types of QI that would be most likely to produce certain benefits.

Evaluating QI projects like those assessed in this report is complex. It requires an ability to understand how projects unfold as new information becomes available, new skills evolve, relationships mature and the environment changes. Amid this emergence, effective evaluation requires an explicit theory of change capable of being tested with data that is reliable, and information systems that can monitor outcomes and costs. These requirements should be available to project teams delivering change.

As the EwQI unfolded, it became increasingly clear that the capacity of the projects to generate appropriate cost and outcome data was limited. There was also considerable difficulty in conceptualising the differences and similarities between research, evaluation, evidence for QI and monitoring. As a team, we were treated with great courtesy and good humour, but we were aware that collecting data on processes, costs and consequences was often regarded as a distraction from the ‘real’ task of delivering QI. However, in our understanding, high quality data are absolutely integral to QI and not an optional extra. This is not simply about the technical capacity to conduct evaluations. Rather, it is about a deeper understanding of the overall relationships between QI and producing flows of evaluative evidence, both to steer future actions and to demonstrate to others that an approach might be worth spreading. Our seventh and final recommendation is therefore:

Professionals, funders, QI practitioners and evaluators should strengthen learning about the effectiveness and cost effectiveness of QI by developing a better and more widely shared understanding of the requirements for evaluation, and of its benefits and limitations.
TARGET AUDIENCE: clinicians; QI planners; evaluators; funders.
TIMESCALE: medium term.


Appendix A

Engaging with Quality Initiative projects


For each project, the entries below give the lead organisation, project title and aim, study design, scope, scoped duration and funding extension, start and end dates (with the date the final report was received), and total funding amount.

Lead organisation: Imperial College and the Association of Coloproctologists of Great Britain and Ireland
Project title and aim: Colorectal Cancer – to improve the quality of care for patients with cancer of the large bowel
Study design: Audit and feedback; time series analysis of repeat audits
Scope: Building on an existing ongoing national audit, aiming for 100% participation; 105 contributing hospitals
Scoped duration: 3 years; funding extension: 9 months
Start date: May 2005; end date: Dec 2008 (final report received Apr 09)
Total funding: £273,374

Lead organisation: Royal College of Physicians of London
Project title and aim: NCROP – to improve the care of patients admitted to hospital with exacerbations of chronic obstructive pulmonary disease
Study design: A complex randomised controlled intervention with multi-professional paired peer review
Scope: Building on a previous one-off national audit of 94% of UK acute hospitals; aiming to recruit 100 participating sites
Scoped duration: 3 years; funding extension: 4 months
Start date: Oct 2005; end date: Feb 2009 (final report received June 09)
Total funding: £583,485 (including £45,000 supplement)

Lead organisation: Royal College of Physicians of London
Project title and aim: IBD Audit – to assess and improve services for people with inflammatory bowel diseases
Study design: Audit and feedback; comparing three approaches, time series but no control
Scope: Developing a national audit, aiming to recruit 80% of all (approx. 240) acute trusts
Scoped duration: 4 years; funding extension: –
Start date: Oct 2005; end date: Oct 2009 (final report received Sep 09)
Total funding: £536,033

Lead organisation: Royal College of Nursing
Project title and aim: POISE – to improve the care of adult patients undergoing surgery across the UK by implementing national clinical guidelines on peri-operative fasting
Study design: Randomised study of three modes of disseminating an educational package (passive, interactive web-based, PDSA); time series analysis
Scope: 19 participating trusts
Scoped duration: 3 years; funding extension: 6 months
Start date: Nov 2005; end date: May 2009 (final report received June 09)
Total funding: £383,785 (including £38,800 supplement)

Lead organisation: Royal College of Physicians of Edinburgh and Royal College of Physicians and Surgeons of Glasgow
Project title and aim: SNAP-CAP and EPI-SNAP – to improve the management of community acquired pneumonia (SNAP-CAP) and epilepsy (EPI-SNAP)
Study design: Double audit cycle with feedback; a two-armed project; implementation of a care bundle approach (SNAP-CAP)
Scope: For SNAP-CAP – half the Scottish health boards; for EPI-SNAP – over one-third of all Scottish practices and four clinics
Scoped duration: 4 years; funding extension: 1 month
Start date: May 2005; end date: June 2009 (final report received July 09)
Total funding: £337,346 (including £3,000 supplement)


Lead organisation: Royal College of Psychiatrists
Project title and aim: Self-harm – to improve services for people who have self-harmed
Study design: Time series analysis of repeat audits
Scope: 34 selected teams
Scoped duration: 4 years; funding extension: –
Start date: Apr 2005; end date: Apr 2009 (final report received May 09)
Total funding: £369,884

Lead organisation: Royal College of Psychiatrists
Project title and aim: POMH-UK – to improve prescribing practice for patients with severe mental illness
Study design: Audit and feedback, plus qualitative work informing interventions
Scope: 42 participating trusts and one private healthcare organisation; aiming to expand this number throughout the project
Scoped duration: 4 years; funding extension: –
Start date: Apr 2005; end date: Apr 2009 (final report received May 09)
Total funding: £456,288

Lead organisation: Royal College of Midwives
Project title and aim: PEARLS – to improve the quality of clinical care in the assessment, repair and the short and longer-term management of second-degree perineal trauma
Study design: Paired cluster randomised trial to establish effectiveness and persistence of training package
Scope: 10 paired units
Scoped duration: 4 years; funding extension: 8 months
Start date: Oct 2005; end date: June 2010 (final report n/a)
Total funding: £563,172 (including £90,000 supplement)


Appendix B

The projects’ logic models

Evaluation framework: Improved services for people who self-harm

18 July 2005

Inputs Outputs Outcomes

Problem
• Quality of care depends on quality of joint working between emergency departments and mental health services. But the nature of these relationships varies widely. There is variation in nature and quality of specialist mental health input to the assessment and care of people who self-harm and, in consequence, in the consistency and quality of the assessment and care of people who self-harm
• Service users believe they are not treated with same care and respect as other NHS patients, not properly involved in decisions about their care and not given adequate information.

Study population
• Patients from co-terminous mental health, acute and ambulance trusts
• All those over 8 who self-harm.

Priorities
• Reduce morbidity through more effective treatment of associated mental disease
• Reduce mortality through reduction of preventable suicides
• Improve users’ experience of self-harm services.

Resources

People and expertise
• Multidisciplinary steering group
• Experienced project team (involved in NICE guidelines)
• Experience of RCPsych in three previous QI initiatives.

Knowledge base

Existing audit tools
• NICE guidelines 2004
• RCPsych 1994.

Time/opportunity costs of those recruited into project (clinicians, project team, patients)

Three levels of intervention
(1) Locally driven audit + rapid feedback + use of local opinion leaders
(2) Regional collaborations: education and training, coordinated networking, workshops using PDSA
(3) National – findings of local audit and peer review used to inform programme of education and training.

• Multi-professional team recruited from co-terminous acute, mental health and ambulance trusts, and from local service users/user groups
• Set of data collection tools and methods tested
• Quality standards agreed locally (based on NICE 2004)
• Local reports on quality of care in participating trusts, identifying specific interventions with associated action plans
• Audit cycles completed in two waves of trusts, with feedback.

Anticipated outcomes

Engagement of professional/patients (in regional collaboratives)
Improved clinical care practice
• Identified improvements in patient care, evidentially linked to improved outcomes
• Improvements in users’ experience of services.

Developing a sustainable system of high quality care
• Self-sustaining, expanding networks (‘self-harm collaboratives’) with income from subscription replacing central funding
• Promotion of model through HCC, NICE and NIMHE.

Unanticipated outcomes
• To be completed as evidence emerges.


Evaluation framework: Use of a prescribing observatory to improve pharmacotherapy in specialist mental health services (topics 1 and 2 cover prescribing of antipsychotic drugs)

21 July 2005

Inputs Outputs Outcomes

Problem
• Majority of people with severe mental illness receiving care are prescribed one or more anti-psychotic drugs, often long-term
• Current prescribing practice varies greatly and frequently deviates from best practice guidance. 20% of inpatients prescribed dose above BNF recommended limits, 48% prescribed more than one drug concurrently. Most patients not told they have been given doses of medicine outside recommended range.
• Suboptimal prescribing results in: failure to relieve symptoms and prevent associated morbidity, unnecessary burden of disease and reduced quality of life, reduction in life expectancy in long-term patients, raised level of mortality (inc from overdoses) and in severe morbidity.

Study population
Patients of Observatory members.
• T1: patients currently occupying beds in nominated psychiatric in-patient wards in participating units (observatory members)
• T2: outpatients prescribed antipsychotic drugs.

Priorities
• Identify and quantify suboptimal prescribing practice, and allow comparison of practice
• Measure impact of interventions designed to change practice
• Monitor changes in prescribing practice over time
• Act as educational resource.

Resources

People and expertise
• Multi-professional steering group
• Multi-professional teams in each member trust
• Experience of RCPsych in managing QI initiatives.

Knowledge base
• Cochrane systematic reviews
• Partnership organisations.

Existing audit tools
• NICE guidelines, BNF consensus based recommendations.

Time/opportunity costs of those recruited into project (clinicians, project team, patients)

Primary intervention
• Measure baseline prescribing practice performances for pilot topics 1 and 2.

Secondary intervention
• Develop multifaceted interventions to address deficiencies (eg: rapid feedback, education activities)
• Measure effectiveness and efficiency of these interventions
• Disseminate results for each topic and promote application to other topics and at other sites.

• A list of agreed criteria for selecting future topics
• A cohort of identified member trusts and associated multi-professional teams
• Arrangements agreed for local data collection
• Feedback on the quantitative and qualitative findings of two completed audit cycles
• Lessons identified from the process (available to further recruits to the Observatory and for use in other topic areas).

Anticipated outcomes

Engagement of professionals/patients
• Engagement of multi-professional interest in quality improvement.

Improved clinical care practice
• Detailed information on prescribing as a process, evidentially linked to specific patient outcomes.

Develop a sustainable (flexible) system of high quality care
• Central hub linked to partner organisations and to clinical governance teams in member trusts.

Unanticipated outcomes
• To be completed as evidence emerges.


Evaluation framework: Web-based audit of evidence based medical interventions – epilepsy

20 July 2005

Inputs Outputs Outcomes

Problem
• Few well-established audits monitoring the quality of clinical care delivered by physicians
• Little nationally based benchmark data available to physicians in medical specialties – this constrains implementation of evidence-based guidelines and limits use of audit to facilitate continuous improvement in the quality of care
• Concerns about current care for epilepsy: patients treated by non-specialists (variable management); diagnostic process, treatment choices, giving of patient information, over-diagnosis.

Study population
• Epilepsy: population based sample of adult patients (>13 years) in community with chronic epilepsy.

Priorities
• Develop, test and implement a new generic approach to national multi-professional audit in the medical specialties
• Improve standards of care for people with epilepsy
• Support or influence the provision of clinical information systems to provide clinical data effectively to achieve national and international standards.

Resources

People and expertise
• Strategic Group
• Multi-professional steering group
• Expertise of Information Services (ISD) and RCPE/RCPSG fellows.

Knowledge base
• Close-knit healthcare community in Scotland
• Managed clinical networks
• eSCRIPTS.

Existing audit tools
• Existing SIGN guidelines: diagnosis and management of epilepsy in adults.

Time/opportunity costs of those recruited into project (clinicians, project team, patients)

Primary interventions
Double cycle audit of epilepsy using web based approach for data capture and feedback
• Anonymous comparative feedback to each participating clinical team
• Surveys of participants and patients (contributing to audit cycle and self evaluation).

Secondary interventions
• Implementation of secondary interventions (based on audit standards and depending on outcomes of first baseline audit) to address deficiencies in care; and additional evaluation.

Measurement of baseline performance
• Standards to be audited/agreed data definitions agreed, and sampling framework defined by steering group for each condition
• Liaison with user groups and broader Public Partnership groups
• Clinical teams recruited from urban and rural hospitals within Scotland
• Qualitative information collected from participants to inform data capture and other methodological issues in 2nd audit.

Baseline audit
• Baseline clinical data captured by local clinical teams and entered locally into web-based database. Results collated centrally and analysed by steering groups
• Individual results and anonymised comparative feedback published on web-based system for all clinician teams participating
• Quantitative and qualitative surveys undertaken of patients
• Web-based system modified (if appropriate).

Selection of secondary interventions

Second audit and additional evaluation
• Assessed effectiveness and efficiency of secondary interventions on clinical outcomes
• Survey of patient views, questionnaire to participating clinicians, and national meetings to discuss findings and value of audit approach.

Anticipated outcomes

Engagement of professional/patients

Improved clinical patient care
• Improving standards of care for people suffering from epilepsy
• Highlighting ways that quality of care can be improved.

Developing a sustainable system of high quality care
• Shared generic methodology and quality improvements with those responsible for clinical audit across the NHS to roll out across UK and extend to other disease areas
• Automated data collection to sustain regular audit and benchmarking of results (with support of planned NHS clinical information system improvements)
• Developing model for physician audit.

Building the knowledge base for quality and performance improvement

Unanticipated outcomes
• To be completed as evidence emerges.


Evaluation framework: Web-based audit of evidence based medical interventions – community acquired pneumonia (CAP)

20 July 2005

Inputs Outputs Outcomes

Problem
• Few well-established audits monitoring the quality of clinical care delivered by physicians
• Little nationally based benchmark data available to physicians in medical specialties – this constrains implementation of evidence-based guidelines and limits use of audit to facilitate continuous improvement in the quality of care
• Concerns about current care for CAP: considerable variation in practice, gaps in quality of care, eg: amount of O2 prescribed, poor case record and prescription chart documentation.

Study population
Patients who present with CAP anywhere in Scotland. Inclusion criteria: aged 16 years and over at time of diagnosis, normal residence in Scotland. Patients being treated for a diagnosis of CAP.

Priorities
• Develop, test and implement a new generic approach to national multi-professional audit in the medical specialties
• Improve standards of care for people with CAP
• Support or influence the provision of clinical information systems to provide clinical data effectively to achieve national and international standards.

Resources

People and expertise
• Strategic Group
• Multi-professional steering group
• Expertise of Information Services (ISD) and RCPE/RCPSG fellows.

Knowledge base
• Close-knit healthcare community in Scotland
• eSCRIPTS.

Existing audit tools
• British Thoracic Society’s (BTS) web based audit tools
• Existing SIGN guidelines: community management of lower respiratory tract infection in adults, and BTS guidelines on CAP.

Time/opportunity costs of those recruited into project (clinicians, project team, patients)

Primary interventions
Double cycle audit of epilepsy and CAP using web-based approach for data capture and feedback
• Anonymous comparative feedback to each participating clinical team
• Surveys of participants and patients (contributing to audit cycle and self evaluation).

Secondary interventions
• Implementation of secondary interventions (based on audit standards and depending on outcomes of first baseline audit) to address deficiencies in care; and additional evaluation.

Measurement of baseline performance
• Standards to be audited/agreed data definitions agreed, and sampling framework defined by steering group for each condition
• Liaison with user groups and broader Public Partnership groups
• Clinical teams recruited from urban and rural hospitals within Scotland
• Qualitative information collected from participants to inform data capture and other methodological issues in 2nd audit.

Baseline audit
• Baseline clinical data captured by local clinical teams and entered locally into web-based database. Results collated centrally and analysed by steering groups
• Individual results and anonymised comparative feedback published on web-based system for all clinician teams participating
• Quantitative and qualitative surveys undertaken of patients
• Web-based system modified (if appropriate).

Selection of secondary interventions

Second audit and additional evaluation
• Assessed effectiveness and efficiency of secondary interventions on clinical outcomes
• Survey of patient views, questionnaire to participating clinicians, and national meetings to discuss findings and value of audit approach.

Anticipated outcomes

Engagement of professional/patients

Improved clinical patient care
• Improving standards of care for people suffering from CAP
• Highlighting ways that quality of care can be improved.

Developing a sustainable system of high quality care
• Shared generic methodology and quality improvements with those responsible for clinical audit across the NHS to roll out across UK and extend to other disease areas
• Automated data collection to sustain regular audit and benchmarking of results (with support of planned NHS clinical information system improvements)
• Developing model for physician audit.

Building the knowledge base for quality and performance improvement

Unanticipated outcomes
• To be completed as evidence emerges.


Evaluation framework: Implementation of peri-operative fasting guideline

22 August 2005

Inputs Outputs Outcomes

Problem
A significant health issue among the 8 million people admitted for surgery per annum in UK. Currently wide variation in anaesthetists’ perception of acceptable practice
• 62% of depts. of anaesthetists follow traditional practice but many anaesthetists want to change practice
• For patients prolonged fasting causes discomfort, including hunger and dehydration, delayed recovery and prolonged waiting times.

Study population
• 10% stratified sample of acute hospital trusts in UK (n=30)
• Within trusts, adult patients in 6–8 general, orthopaedic and gynaecology surgical wards and 1–2 day case units.

Priorities
• Reduce practice variation
• Improve care of adult patients undergoing surgery through implementation of clinical guideline
• Add to evidence base about guideline dissemination and implementation.

Resources

People and expertise
• Multidisciplinary steering group
• Experienced project team (involved in guideline development etc).

Knowledge base
• On fasting protocols – Cochrane systematic review Brady (2003)
• On implementation – Grimshaw (2004).

Existing audit tools
• ASA recommendations 1999; AAGBI recommendations 2001 (Westby et al – guideline 2004).

Time/opportunity costs of those recruited into project (clinicians, project team, patients)

Primary interventions
(Three types – each delivered to 1/3 of participating trusts)
1. Passive dissemination via printed education package
2. Interactive web-based educational package
3. Plan-Do-Study-Act (PDSA).

Secondary interventions
• Implementation of secondary interventions (based on guideline recommendations and outcomes of baseline audit) to address deficiencies in care.

• Agreed audit criteria
• Agreed methods of data collection
• Participating trusts identified
• Clinical leads identified to manage PDSA in participating trusts
• Key members of clinical teams in participating trusts identified as opinion leaders to promote web-based education package
• Patient-based outcomes developed from patient interviews
• Baseline, post-intervention and evaluation data collected, measured and analysed (to assess transfer of guideline recommendation into practice and quality of care being delivered).

Anticipated outcomes

Engagement of professional/patients
• Change in staff attitudes and perceptions
• Networking of local participating teams.

Improved patient care
• Adherence to guideline recommendations re duration of fasting.

Development of a sustainable system of high quality care, including
• Educational packages on web
• National web-based audit.

Unanticipated outcomes
• To be completed as evidence emerges.


Evaluation framework: Improved services for people with Chronic Obstructive Pulmonary Disease (COPD)

30 September 2005

Inputs Outputs Outcomes

Problem
• COPD is a common chronic disorder affecting older people. It accounts for 23,500 deaths pa in UK. Cost to economy – £492 million
• Active management can improve QALYs
• But there is strong evidence of unacceptable variations in practice and outcomes – leading to increased number of emergency admissions and associated increased mortality, reduced life expectancy, and reduced quality of life.

Study population
• Intervention sites n=54 (27 pairs)
• Control sites n=46 (23 pairs).

Priorities
• Improve quality and effectiveness of hospital care for patients admitted with COPD
• Reduce unacceptable inequalities by bringing the experience of high achievers to those with lower achievements
• Identify mechanisms that spread good practice
• Generalise findings to wider healthcare system.

Resources

People and expertise
• Multi-disciplinary steering group (incl. patient rep).

Knowledge base (provides composite performance score of hospitals)
• 2003 national clinical audit
• HES outcome data
• Updated BTS data.

Existing audit tools
• NICE guidelines (2004)
• BTS guidelines (1997).

Time/opportunity costs of those recruited into project (clinicians, project team, patients)

Primary intervention (1)
(A system of peer review received by half of pairs of hospitals)
• Team from paired unit visits matched unit and reports
• Visited unit prepares action plan outlining 3 important changes
• Paired hospitals divided into 2 groups: 1. Action plan discussed with patient reps; 2. Patient reps not involved.

Secondary intervention
(Applies to half of pairs of hospitals)
• Strategy for change – based on unit report – implemented and monitored monthly.

Primary intervention (2)
(Applies to all hospitals)
• Repeat of national audit.

Achieved in intervention group
• Successful facilitation of exchange visit system
• Development of unit action plans, outlining 3 important changes
• Individual unit strategies for change shared across intervention group
• Change diaries – documenting service developments
• Analysis of factors facilitating and hindering change.

Generally
• Results of survey on organisational and process changes in participating trusts
• Results of national audit
• Comparison of outcomes between two arms of the study.

Anticipated outcomes

Engagement of professionals/managers/patients
• Improved clinical outcomes and processes of care (demonstrated in national audit)
• Development of a sustainable system of continued improvement in patient care applicable to other areas of medicine.

Unanticipated outcomes
• To be completed as evidence emerges.


Evaluation framework: Improved services for women with second degree perineal tears

8 July 2005

Inputs Outputs OutcomesSituation

Epidemiology• Incidence for 2nddegree tears unclear –90%+ (?) of 400,000 pa(85% of vaginaldeliveries).

Knowledge base• RCOG guidelines 2004• Bick et al 2002.

Current practice andtraining• Varies widely.

Priorities• Improve clinical care inline with evidence-basedguidelines• reduce immediate andlonger term post-partummorbidity• Improve women’sexperience of maternitycare (especially perinealcare) and enhance theirperception of their healthand well-being.

Resources• Multi-disciplinarysteering group• Experienced projectteam (involved indeveloping RCOGguidelines)• Local clinicians fromparticipating units to betraining facilitators incluster study.

Interventions• Electronic survey ofpractice• Electronic survey oftraining• Delphi survey ofpatients’ views onoutcomes• Paired cluster study(10 aired units) to assessimpact of trainingintervention, including3 staged prospectiveclinical audits and surveysof patients’ perceptions.

• Feedback fromelectronic surveys andDelphi process• Final list of clinical andpsychological outcomemeasures covering: painand the use of analgesia,incidence of woundinfection, uptake andduration of breastfeeding, well-being• Feedback from audits –providing data to supportthe development of thetraining intervention [and(not mentioned)organisational change?]• Tested audit tools• Tested trainingintervention.

Intended outcomes
• Engagement of clinicians, patient groups and other stakeholders in quality initiative
• Improved clinical care, reduction in morbidity and better patient experiences
• National standard for perineal assessment and management (a NICE guideline)
• Intervention package and audit tools on RCM/RCOG websites in an interactive learning format as basis for continued local audits
• Development of DipEx website to share women’s views.

Unintended outcomes
• To be completed as evidence emerges.



Evaluation framework: To improve the quality of care and outcomes of treatment for colorectal (large bowel) cancer

11 July 2005

Situation | Inputs | Outputs | Outcomes

Epidemiology
• Second commonest fatal malignancy in both sexes after lung cancer, lifetime risk 1 in 25, incidence rising
• 5 year survival approx. 40% in UK – lower than in rest of Europe
• Estimated cost approx. £200m pa.

Knowledge base
• NICE guidelines 2004.

Current practice
• Low colonoscopy completion rates
• Adequacy of surgery varies greatly
• Low use of adjuvant therapies.

Priorities
• Improve quality of care for patients presenting with newly diagnosed bowel cancer and improve outcomes at all stages of diagnosis and treatment.

Resources
• Multi-professional steering group
• ACPGBI validated database with web-based feedback
• Well established national audit with 105 contributing hospitals.

Interventions
• Improve access to appropriate services
• Increase use of multi-disciplinary teams to map patients from referral to treatment
• Improve accuracy of diagnosis, incl. 90% colonoscopy completion rate
• Improve surgery and histopathology
• Improve use of radiotherapy and chemotherapy
• Improve palliative care and treatment of advanced disease.

• Recruited hospitals
• Validated data set including data collected from new patients and existing patients
• Benchmarked and audited performance and impacts for units on six aspects of colorectal cancer management
• Audit model developed (and validated)
• Proposed interventions implemented and process of care measured
• High performing units identified and appropriate actions recommended at clinical and organisational levels
• Algorithms developed for individual patient risk estimation
• Interventions evaluated
• Dynamic models produced for each intervention
• Produce ACPGBI annual reports with feedback to units
• Procedures for handling outliers developed and agreed with units and clinicians
• Outliers identified and specific interventions recommended
• Rolled out electronic data collections
• On-line web-based real time feedback to units developed.

Intended outcomes
• Engagement of professionals/patients
• Improved clinical care and patient experience
• Continually improved standards of care through risk-adjusted audit
• Develop a sustainable system of high quality care
• All units to take part in on-going ACPGBI audit.

Unintended outcomes
• To be completed as evidence emerges.



Evaluation framework: Improved services for people with inflammatory bowel disease

15 July 2005

Situation | Inputs | Outputs | Outcomes

Situation
• IBD third most common cause of medical emergency admission
• 80% Crohn’s patients require surgery. Low quality of life
• Ignored by NSF
• Affects 1 per 100 of population and accounts for 0.3% of absences from work.

Priorities
Improve quality of care by:
• Earlier diagnosis
• Optimal treatment
• Improved monitoring
• Improved management of side effects
• Collecting better clinical audit data.

Resources
• Multi-disciplinary steering group
• Web-based tool for data collection.

Interventions
• 1/3 units receive only feedback; 1/3 develop local action plan; 1/3 develop augmented action plan.

• Recruited lead IBD clinicians from 80% of trusts
• Impact assessed through second audit
• Collected and analysed data on the relative successes of different approaches against benchmarked standards
• Identified evidence-based improvement plans which are accepted and taken up by stakeholders.

Intended outcomes
• Engagement of professionals/patients (inc. other stakeholders, eg: RCs)
• Improved clinical patient care
• Local plans lead to earlier and more accurate diagnosis, more timely and appropriate treatment, improved monitoring of patients, and better management of side effects
• Develop a sustainable system of high quality care
• Data able to support clinical and organisational decision making
• Development of action plans leading to organisational change.

Unintended outcomes
• To be completed as evidence emerges.


Appendix C

The evaluation protocol

(The protocol for the external evaluation of the EwQI was deliberately ‘emergent’: it was developed during the first year of the initiative, and finalised and agreed with the Health Foundation in March 2006)

An Evaluation of the Health Foundation’s Engaging with Quality Initiative

Evaluation Protocol

First Draft March 2005

Final Draft March 2006

Prepared for The Health Foundation

(Abridged for this final report Sep 2009)

Introduction

RAND Europe and the Health Economics Research Group at Brunel University (HERG) began work on the evaluation of the Health Foundation’s Engaging with Quality Initiative (EwQI) in July 2005. The contract stated that the final research protocol would be agreed after a round of familiarisation with the initiative in general, and the work of the projects in particular. The early stages of the evaluation were seen as ‘emergent’ in this sense. Having completed this first round, it is desirable to specify in more detail the research protocol to be used to structure the evaluation. The purpose of this protocol is to provide a focus for what has been learned during the first months, to define the steps to be taken through to the completion of the evaluation, and to meet contractual requirements.

As outlined in the original proposal, RAND Europe and HERG will undertake a mixed methodology evaluation. This includes a modified logic model method and realist evaluation designed to identify mechanisms (or interventions), contexts and outcomes for each project. This will get inside the ‘black box’ of the projects to achieve an understanding of clinical and organisational processes, and of users’ experiences. We will also use Delphi surveys, interviews and workshops. In addition, we will collate qualitative and quantitative data produced by the projects in their self-evaluations (the format of these is listed in annex 1).

We have achieved a preliminary understanding of the aims, processes and intended outcomes of each project through discussions with them (initially focused on logic models but ranging more widely). Working closely with the projects to support their self-evaluation plans has proved to be more important than anticipated, and this additional work has been resourced with extra money from the Health Foundation. We will continue to work closely with the projects to maintain a strong mutual understanding of our respective evaluation activities and to coordinate our effort to maximum benefit. We will also act as a broker, communicating quality lessons from the various projects to each other, identifying common information needs and engaging proactively. This will not mean doing the projects’ work for them, but providing expertise, information and legitimacy for their self-assessments. From an external viewpoint, but with inside access, we will provide objective evaluative assistance.

During the life of the initiative, we will also work with the projects to refine their logic models and encourage a reflexive approach through which evaluation contributes to learning and innovation. We have a clear sense of the activities of each project and, through discussions with the projects, we have become more aware of the need to identify the counterfactual (ie: to identify what would have happened had there been no EwQI) in order to measure the additional value contributed by each project. We are also more aware of the need to calculate the full cost of each project (including the opportunity costs of clinicians engaging with the project) if we are to undertake a meaningful cost-consequence analysis. Throughout the life of the initiative we will reflect on what we learn as evaluators, and communicate this to the projects, the Support Team and the Health Foundation itself. At the same time, we will capture lessons learned by the projects through working with the external Evaluation Team. This will both support learning during the initiative and provide a robust summative evaluation at the end.

The following sections summarise the key aims and methods, identify the context and background, the main tasks, the methods and the risks. They are shown in a linear manner for ease of presentation, but they overlap both conceptually and chronologically. The timing of these activities is also outlined.

Summary of key aims and methods

Aim 1: To work with award holders in developing and implementing their evaluation plans by helping projects to:

– Collect reliable and valid data and to identify mechanisms, contexts and outcomes, including overall costs and key measures of effect (including the presentation of a counterfactual)

– Overcome the practical and methodological difficulties associated with measuring outcomes, including clinical data, non-clinical measurable improvements, users’ views, and process improvements105, as agreed with the Health Foundation and projects.

Aim 2: To synthesise the data and findings from project level evaluations by:

– Supporting the projects to identify and analyse the evidence base for the impact of their inputs and processes on outputs and outcomes in a form that can be aggregated, where possible, at initiative level

– Analysing initiative-wide data to establish which improvement interventions, associated with which contexts, produce which improvements in clinical outcomes, which process improvements and which changes in users’ views of the care they receive.

Aim 3: To gauge increases in clinical engagement in clinical quality improvement and assess the consequences by:

– Determining the current state of clinical engagement in clinical quality improvement in each of the areas covered in two ways. First, by examining the documentary evidence (including the original proposal) made available to us by the projects. Second, by subsequently interviewing project team members and key informants. This will include consideration of current organisational culture.



– Assessing the change achieved during the life of the initiative through support in designing, implementing and analysing a survey of relevant participants towards the end of each project. This support will include guidance on content and on managing the survey itself. Some of these questions will be initiative-wide (and the same for all projects) and others will be project-specific. They will include questions on the role of the professional bodies, patient engagement and cultural change. They will be anonymised but will allow us to identify respondents by function and clinical area.

Aim 4: To measure the effectiveness of the award scheme (during its life) in leveraging external commitment to quality improvement by:

– Identifying project-based evidence of the influence of the EwQI on public policies and on professional bodies seeking to engage clinicians in quality improvement. This could mean, for example: standard setting (such as NICE guidelines and NSFs), development of quality measures, data collection and analysis, peer review and the evidence-based design of improvement strategies106. This evidence will be followed by a workshop identifying barriers, facilitators, processes and illustrations of externally supported, clinically led quality improvement that will require ongoing monitoring by the projects. We will also encourage the collection of vignettes and illustrations by the projects to add force and vitality to the final report.

Aim 5: To evaluate the increase in competency and infrastructure for quality improvement in the professional bodies involved in the EwQI by:

– Including questions in the end of project surveys (under aim 3) – alongside the outcomes of aim 4 – to establish how professional bodies have supported quality improvement. This will be aided by in-depth interviews with each of the relevant professional bodies focusing on their contribution to the quality agenda including standard setting, development of quality measures, data collection and analysis, peer review, and quality interventions.

Aim 6: To assess the influence and cost consequences of the initiative by:

– Assessing the likely legacy of the projects through an appraisal of the suitability, feasibility, sustainability and acceptability of the legacy plans, and through a wider assessment of their impact on the environment of quality improvement. This will lead to a summative assessment of the overall cost of the initiative and its consequences, and will necessarily include our interpretation and assessment of the projects’ self-evaluations. We will invite feedback from the projects for factual accuracy, but we will arrive at our own judgement about their interpretations.

Context and background
(updated and abridged from the proposal, repeated here for ease of reference)

Government policy explicitly acknowledges both the variability in the quality of healthcare and the role of professionals in leading improvement107. Current research also indicates a very wide range of quality improvement initiatives and a wide variation in their impact and success108. Research conducted jointly by RAND, UCL and the Harvard Medical School suggests that there are important organisational and cultural foundations to sustaining quality improvement in healthcare and that these are varied and complex109. The same research also suggests that there are macro-level supports within which micro-systems of quality improvement might flourish, and that they are mediated and managed by what might be termed, following House and colleagues110, a meso-paradigm for quality. We will assess in this evaluation the initiative’s likely claim that these micro-systems might be effectively supported by engaging professionals and their organisations at meso and macro levels. The potential impact of clinicians engaged with quality improvement is suggested by Leatherman and Sutherland (L&S) in The quest for quality in the NHS. Some of the relevant themes from their analysis are:



– clinicians work in the NHS as members of clinical teams, not as isolated individuals (pp 169, 177)

– the work of these teams is, in turn, strongly influenced by the (local) organisational culture (pp 172, 173)

– the royal colleges are important in supporting quality initiatives in the NHS (p 44)

– measuring cultural change is very difficult, especially when there are multiple cultures and sub-cultures, hence the inadequacy of the current evidence base and the need for rigorous evaluation (pp 177, 178)

– user involvement in clinical audit is important (see below)

– sustaining quality improvements is also important, hence participative rather than top-down approaches (p 178).

The richness and variety of this analysis reflects the multiple mechanisms available to improve the quality of healthcare – and the multiple barriers to delivering and evaluating such improvements. There has also been a growing interest in recent years in the cost effectiveness of different activities intended to improve clinical quality (such as guideline dissemination and implementation strategies), and these have been systematically reviewed by Grimshaw and colleagues (2004)111. Another important document is Principles for best practice in clinical audit, produced by NICE and others in 2002. This captures much previous work on assessment of audit and on the implementation of research. In its summary it notes:

A key finding is that the issues identified in relation to implementing change in audit, the organisation of audit projects and audit programmes, and the findings of the systematic reviews of studies of implementation methods indicate similar conclusions. The successful implementation of improvements in healthcare depends in large measure on a conducive environment within healthcare organisations that includes the promotion of positive attitudes and the provision of the time and resources required. Leadership and effective teamwork are important organisational attributes … There is adequate evidence about the methods of audit, including projects that should encourage greater involvement of users, the use of more systematic methods of selecting criteria and collecting data, and the use of a variety of approaches to suit the setting and the topic concerned112.

The HERG evaluation of the Implementation Methods Programme also explored many of these ideas, noting that researchers in this field are increasingly moving from studying single interventions aimed at individual clinicians, to looking at broader change strategies that pay more attention to structure, processes and culture113.

Other UK work relevant to this evaluation includes: the Directory of Clinical Databases, which has developed a set of criteria for assessing clinical audits114; papers which describe the limitations of current performance ratings in the NHS – providing gloomy news about the current availability and validity of routine data on clinical outcomes115; and the Healthcare Commission website – which reports on participation in audits in acute trusts.

Synthesising in-depth, holistic case studies of the sort proposed here means giving attention to the organisational context at both the macro level (for example, the level of the hospital) and the micro level (the unit). The literature suggests a very wide range of organisational settings within which a clinician-led micro-system of quality improvement might thrive. Professional bodies can actively contribute to this in a wide variety of ways depending on the local context. The table below indicates the key themes around ‘organising for quality’, examples of the relevant literature, and the related issues for evaluating the EwQI.

Summary of tasks

The evidence base for exploring the issues outlined above is challenging and often contestable. It focuses on certain practices, such as the use of audit, guidelines, pathways and protocols, quality improvement and quality assurance systems, clinical governance, service design, risk management, patient safety initiatives and capacities for collaboration/partnership. Professional bodies, should they so wish, have the capacity to influence all these in different ways.

Aims and activities

We are not, therefore, starting from a position where there are a small number of evidence-supported, testable hypotheses concerning the potential for engaging clinicians in delivering improvements in the quality of healthcare.



A variety of patterns of engagement are possible, facilitated or hindered by a range of factors, which correlate in different ways with improvements in the quality of healthcare in the UK.

Synthesising findings from the projects will therefore require attention to local variation, sensitivity to local contexts, and a conceptual framework capable of providing some order to the emerging evidence. Assessing the contribution of professional bodies to quality improvement concerns will require an understanding of their current capacities and of their contribution to local systems. We have balanced these requirements within the resource constraints of the evaluation in the ways laid out below.

Aim 1: To work with award holders in the development and implementation of their evaluation plans

Task 1

The first task has been to work with the project teams to support their self-evaluations, including data identification and validation. In the first six months we allocated considerable resources to this task. Discussions to date with the project teams have been based on their own proposals, and have involved:

– defining the objectives of the project self-evaluations and, therefore, identifying all the relevant data, including data related to the experiences of users (see annex 1 for the agreed end-of-project self-assessment format)


Evaluation issues concerning organisational settings raised by the EwQI116

Key themes ‘organising for quality’ | Examples of literature | Issues for EwQI

Organisational citizenship | Smith, Organ and Near (1983); Manville and Ober (2003) | Rights and obligations of organisational members

Organisational identity | Dutton and Dukerich (1991); Whetten and Godfrey (1998) | Content and understanding of, and commitment to, organisational identity

Relevance and fit of evidence with existing patterns of organisation and practice | Rycroft-Malone et al (2004); Dopson et al (1999) | Importance of organisational context as a mediator of successful implementation

Empowerment | Conger and Kanungo (1988); Conger (1989) | Capacity of individuals and groups to do what they think is right

Bottom-up organisational change; diffusion of innovations | Barley and Tolbert (1997); Strang and Soule (1998); Rogers (1995) | Spread of socio-technical activities through the organisation

Organisational learning, communities of practice | Argyris (1982); Levitt and March (1988); Wenger and Snyder (2000) | Learning from past behaviour, sharing knowledge

Mindfulness, cultural leadership, social commitment | Weick and Sutcliffe; Bate (1994, 2004) | Commitment to the public good

Socio-technical systems design | Trist (1981); Pugh and Hickson (1976) | Consequences of deliberately designed systems such as ICT

Cooperative communities, whole systems change | Barnard (1968); Williamson (1990) | Capacity to respond system-wide to systemic challenges

Distributed leadership | Denis, Lamothe and Langley (2001); Gronn (2002) | Role of leadership at every level of organisation


– encouraging systematic data collection on costs and on anticipated key effects

– establishing what project teams think are the context, mechanisms, and anticipated outcomes of their project

– working with each project team to help them develop, and agree with us, a ‘counterfactual’. The counterfactual would allow the teams to assess how much change during the life of the initiative was attributable to the initiative and how much to ‘secular’ activity

– supporting projects’ understanding of the broad conceptual model for building systemic capacity that was outlined in L&S117 and discussed with the project teams at the residential meeting in November 2005

– supporting projects’ understanding of the layers of organisational culture outlined in L&S118, which demonstrate what needs to be changed if quality is to be improved, ie: beliefs, values, behaviour, etc.

– ensuring projects’ understanding of factors associated with success as identified in the Health Foundation tender119

– discussing projects’ understanding of where professional bodies fit into the above.

The aim has been to support the projects in developing logic models for their improvement interventions.

Further activities include:
1. Continuing to work with project teams to identify inputs, processes, outputs and outcomes in order to specify more precisely which inputs, associated with which processes, and in which contexts, produced the intended outputs and outcomes. The model will be particularly sensitive to the role of professional bodies in influencing quality improvements through engaging professionals.

2. Continuing to discuss detailed data requirements with the projects.

3. Ensuring that the data collected by the projects can be effectively brought together in our final report and that all projects collect some categories of data (on costs, for example).

4. Supporting the project-level evaluations throughout the life of the initiative.

5. Maintaining a ‘diary’ showing what has been learned from the external Evaluation Team’s involvement with the projects.

Outputs

We have developed, and gained agreement for, an outline of the end-of-project self-assessments (see annex 1). The project teams were asked to produce self-evaluation plans by February 2006. We will produce updated logic models for each project every six months.

Task 2

‘The proposition that users should be involved in all stages of the audit process is clear … there is a growing number of studies to support particular methods of involving them in audit which result in success in achieving health gain120.’ All the projects are encouraging the involvement of users, both centrally and locally. We will assess the experiences of the users as ‘active partners’121 in the projects, seeking to establish, for example, their role in defining outcome measures, and their contribution to the design and implementation of improvement interventions and to governance arrangements. Several projects are planning surveys of users. Where this is the case we will discuss these surveys with the project team to ensure that they meet the requirements of both levels of evaluation.

Output

A paper on user involvement across the EwQI, covering users’ roles, responsibilities and perceptions, discussed with the project teams and produced at the end of the initiative.

Task 3

Discussions with project teams will also consider how the counterfactual can be addressed. In the context of other simultaneous efforts to improve quality in healthcare, we need as far as possible to identify the confounding effect of such developments on our data.
1. There is no single approach to this problem that is right for every healthcare context. One approach is to benchmark not just the work of clinicians to whom the EwQI improvement interventions apply (for example, those in receipt of specific training initiatives), but also the work of comparable groups outside the initiative. Another approach is to use existing historic trend data to support assessment of the impact of the intervention. We will explore planned approaches with each project team.



2. To set the context, we will also provide an ongoing list of key quality initiatives in the UK over the four years of the EwQI. We will ask the project teams to consider what impact, if any, each quality initiative has had on their project.

Outputs

– An agreed approach on addressing the counterfactual with each project team, developed as part of their work on their end-of-project self-assessments.

– A discussion paper on the counterfactual for the EwQI as a whole.

Timing

On balance, the tasks under aim 1 will occupy more time in the first nine months of the initiative, but a support function will continue to operate until the projects’ final reports are completed. The reflexive approach described above will continue throughout the evaluation.

Aim 2: To synthesise the data and findings from project level evaluations

Task

We will synthesise the data and findings from project level evaluations using a modified form of logic modelling122 within an overall framework informed by realist evaluation123, and develop a logic model for the initiative as a whole. This generic model will seek to illustrate how – at each level within the health system (which might be labelled macro, meso and micro), and within the broad context described above – initiatives such as the EwQI influence prior determinants such as beliefs, values and patterns of behaviour to produce changes in clinical and non-clinical outputs. This will be an iterative and reflexive process, developed collaboratively with the Health Foundation and the projects, and will provide an important tool for informing and influencing others. The Evaluation Team anticipates that the data generated by the projects will be sufficient and accurate enough to allow conclusions to be drawn. It is not able to quality-assure these data, nor can it provide a data-collecting function. Should the Evaluation Team become anxious about the extent or quality of the data, it will make the Health Foundation aware of this and discuss ways of addressing it. If data collection on the projects has slipped, we suggest a review, in or around June 2007, where we either push back the activities of years 3–4, or find some other way to ensure the availability and completeness of the evidence. June 2007 would also be an appropriate time to review the level of support to be made available to the projects. During the first nine months of the project the Evaluation Team provided more input than had originally been proposed, with consequences for that team’s budget. Given the benefits that came from this, a similar level of input might be desirable in year three, but this would need to be funded above the current budget.

Outputs

A regularly updated logic model (six monthly) for the EwQI as a whole, which will form the basis for work on subsequent aims and for later papers and reports.

Timing

More time will be taken on activities under aim 2 during the first year, as this involves working with the projects to ensure that data collected are relevant to the aims of the initiative and, where possible, collected in a way that facilitates comparison and contrast. However, as with aim 1, this function will continue, probably in a less time-consuming way, until the end of the projects.

Aim 3: To assess increases in clinical engagement in quality improvement

Task 1

Our first task is to gauge current clinical engagement through an examination of the documentary evidence, using the projects’ original proposals and other evidence made available to us by the projects.

Task 2

Following this we will conduct interviews with project team members and key informants, who will be identified following advice from the projects. Through these interviews we will explore the state of affairs in the quality improvement context of each project before it has a chance to influence that setting. This will include exploring the influence of factors such as organisational culture, team building, team support, organisational support, patient involvement, professional body involvement, and so forth, on clinical engagement in quality improvement. We envisage interviewing some two to three people with an understanding of the context of each project. Typically these should be selected from clinicians, royal colleges and patient groups, but might also include academic experts working in this area.

Task 3

We will assess the change achieved during the life of the initiative by supporting each project in designing, implementing and analysing a survey of relevant clinicians towards the end of the project. This will be sent to a population selected by each project to ensure that the views of all clinicians involved are represented. Our support for this survey will include guidance on content and on managing the survey itself. Some of these questions will be initiative-wide (and will be the same for all projects) and others will be project-specific. They will include questions on the role of the professional bodies, patient engagement, and cultural change. Although anonymised, the survey will allow us to identify respondents by function and clinical area. Both the initiative-wide and project-specific questions will attempt to identify how far credit for change can be attributed to the activities of the EwQI, as opposed to other pressures (in the medical profession in general, at their institution, or in their specialty/profession). This will take place around year 3 of the life of each project to allow the impact of the initiative to be felt.

Task 4

In the final year of the initiative, we will conduct a web-based Delphi survey to identify: how clinicians can best be engaged in quality improvement initiatives; what impact this is thought to have on clinical outcomes; and how this work best interfaces with the engagement of patients, other professionals and health services managers to leverage external commitment to clinical leadership of quality improvement. We will approach COREC for a view on whether this requires ethics approval and, if so, obtain the necessary approval. There appear to be no overwhelming problems with securing approval.
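To make the mechanics concrete, the sketch below illustrates the feedback step of a two-round Delphi in Python. The statements, panel ratings and 1–9 scale are hypothetical illustrations, not material from the EwQI Delphi.

```python
import statistics

# Minimal sketch of the feedback step in a two-round Delphi survey.
# Statements and ratings below are hypothetical, not EwQI data.
round1 = {
    "clinical champions drive engagement": [7, 8, 6, 9, 7, 8],
    "audit feedback alone changes practice": [3, 5, 4, 6, 2, 5],
}

def feedback(ratings):
    """Summarise each statement as a median and interquartile range;
    panellists see these summaries before re-rating in round two."""
    summary = {}
    for statement, scores in ratings.items():
        q1, _q2, q3 = statistics.quantiles(scores, n=4)
        summary[statement] = {
            "median": statistics.median(scores),
            "iqr": q3 - q1,  # a narrowing IQR between rounds suggests consensus
        }
    return summary

round1_summary = feedback(round1)
```

Comparing the interquartile ranges across rounds is one conventional way to judge whether the panel is converging.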

Outputs

To enhance the impact of any findings, these data will be presented in a series of before-and-after spidergrams showing our summary of the situation at the start of the initiative and the subjective views of clinicians in each project area at the end of the initiative. These are intended to facilitate communication of findings (rather than being an analytical tool to create findings).
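As an illustration of how such spidergrams can be constructed, the sketch below maps per-dimension scores onto the vertices of a radar-chart polygon. The dimension names and scores are hypothetical, not evaluation data.

```python
import math

# Illustrative only: the dimensions and 1-5 scores are hypothetical,
# not data from the EwQI evaluation.
DIMENSIONS = [
    "organisational culture",
    "team support",
    "patient involvement",
    "professional body involvement",
    "use of audit data",
]

def spidergram_points(scores):
    """Map one score per dimension onto the (x, y) vertices of a closed
    polygon with axes evenly spaced around a circle (a radar chart)."""
    n = len(scores)
    points = []
    for i, score in enumerate(scores):
        angle = 2 * math.pi * i / n  # axis i sits at angle 2*pi*i/n
        points.append((score * math.cos(angle), score * math.sin(angle)))
    points.append(points[0])  # repeat the first vertex to close the shape
    return points

before = [2, 3, 1, 2, 2]  # summary of the situation at the start
after = [4, 4, 3, 3, 4]   # clinicians' subjective views at the end

# Overlaying the two polygons on one set of axes gives the
# before-and-after spidergram.
before_poly = spidergram_points(before)
after_poly = spidergram_points(after)
```

Plotting the two closed polygons on shared axes, one per time point, is what produces the before-and-after comparison.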

We will also produce a short briefing paper toinform aim 4.

Timing

The documentary assessment and interviews will take place between February and July 2006. The surveys will take place in year 3 of each project and the Delphi in year 4.

Aim 4: To measure the effectiveness of the award scheme (during its life) in leveraging external commitment to clinical leadership of quality improvement

Task 1

The web-based Delphi survey described under aim 3 will be used to deepen our understanding of this question.

Task 2

The results of the Delphi, and the short briefing paper produced on the basis of the project surveys under aim 3, will be used in a workshop on leveraging external commitment. At this workshop, representatives from each project will identify barriers, facilitators, processes, outcomes and illustrations.

Output

The output of this workshop will be a paper on facilitators, barriers, processes, outcomes and illustrations drawing upon the experience of


project teams throughout the initiative. This output will directly feed into the delivery of aims 5 and 6, which consider the long-term sustainability of the aims of the EwQI and the initiative's contribution to the infrastructure for quality improvement in professional bodies.

Timing

The initial aspects of this aim will be delivered through the work on aims 2 and 3. The briefing paper and workshop will be produced in the final year of the initiative.

Aim 5: To evaluate the increase in competency and infrastructure for quality improvement in the professional bodies involved in the EwQI

'The key question here is not whether the royal colleges have a pivotal role in the Quality Agenda, but rather how to engage them most constructively in a set of critical tasks including standard setting, development of quality measures, data collection and analysis, peer review and the design, based on evidence, of interventions to predictably improve patient care.'124 It is against this set of tasks that increases in competency and changes in the infrastructure of the relevant professional bodies will be measured.

Task 1

In the process of achieving aim 4, we will find out which supports by professional bodies are considered the most relevant to clinician-led QI by the clinicians participating in the EwQI projects. In addition we will conduct in-depth interviews with each relevant professional body, focusing on the issues identified by L&S in the quotation above. Interviewees (one or two from each professional body) will be selected following discussions with the projects.

Task 2

We will also know how the projects think professional bodies might more effectively support clinician-led quality improvement. Here we intend to identify how changes in the competency and infrastructure of relevant professional bodies

during the course of the EwQI have enhanced clinician-led quality improvement. We therefore propose to look in detail at what the professional bodies involved in the EwQI have done. How effectively have they involved users? Have they promoted more effective use of audit and of audit data? We expect that the surveys and Delphi will also cast further light on this.

Output

A briefing paper to inform the appraisal workshop in aim 6.

Timing

The main activities under aim 5 will be carried out in the final 18 months of the initiative.

Aim 6: To assess the influence and cost consequences of the initiative

Task 1

Influence of the EwQI

1. We will systematically evaluate the projects' legacy plans, using the evidence collected during the evaluation to identify the acceptability, suitability, feasibility and sustainability of the plans. This would provide an opportunity both to evaluate likely impact and, in the reflexive spirit of both levels of evaluation, to enable the project teams to adjust their legacy plans for a more sustainable influence. ('Sustainability' refers to the extent to which the aims and objectives of the project are likely to be sustained into the future. The 'legacy plan' concerns the specific steps taken by each project to secure this. Thus, a legacy plan can be minimal because sustainability is being delivered through other means, such as changes in the royal colleges. Conversely, there might be very extensive legacy plans but limited sustainability.)

2. We will also ask the project teams to identify the impact of their work on the development and implementation of other quality initiatives, such as the development of a relevant NSF.

3. We will then take the finalised legacy plans and combine them with the key findings of the initiative in a brief report.

4. Using this as background and the workshop findings delivered under aim 4, we will run an


appraisal workshop with stakeholders (professional bodies, NHS Confederation, Healthcare Commission, audit bodies, etc) and with policy makers (Department of Health, HM Treasury, etc).

5. Conceptually, we intend to consider different levels of quality improvement and their interactions. These levels are: specialism, local/institutional, national and international.

Task 2

Cost consequences

1. We will work with the projects to explore what data they can provide to estimate costs. This will involve records/estimates of the time resources (by classes/levels of staff) devoted to the project (that would not otherwise have been incurred) by those most directly involved/affected. This takes time regardless of who is paying for it. Project teams will also need to set out all the (main) consequences: describing them, measuring them and valuing them where possible/easy. These consequences might include improved patient satisfaction (using an index); reduced serious events (estimated number, possibly costed); fewer formal complaints (number only); increased demands on specialist advice (frequency and numbers, possibly costed); and reduced risk of subsequent serious events (expressed as a reduction in a risk score).

2. We will provide further advice on these requirements to the project teams and, in particular, we will work in the early months of the EwQI to ensure that the projects establish mechanisms to collect suitable data.

3. We will also collect data throughout the EwQI on the 'central' costs of the initiative, ie: the costs to the Health Foundation, including the costs of the contracts with the Support Team and the external evaluators.
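A cost-consequence presentation of this kind keeps the cost total separate from the varied consequence measures rather than collapsing them into one ratio. The sketch below shows one possible layout; every figure and category name is a hypothetical illustration, not EwQI data.

```python
# Sketch of a cost-consequence layout: costs are summed, but each
# consequence is reported in its own natural unit rather than being
# collapsed into a single economic ratio. All figures are hypothetical.

def cost_consequence_table(costs, consequences):
    """Return rows pairing a total-cost line with per-consequence lines."""
    rows = [("Total cost (GBP)", sum(costs.values()))]
    rows.extend(consequences.items())
    return rows

costs = {
    "clinician time": 42_000,        # staff time not otherwise incurred
    "project management": 18_000,
    "central data collection": 9_500,
}
consequences = {
    "patient satisfaction index (change)": 0.4,
    "serious events (estimated reduction)": 12,
    "formal complaints (reduction, number only)": 7,
}

table = cost_consequence_table(costs, consequences)
```

Leaving the consequences in mixed units is deliberate: it provides the evidential base while leaving the overall value judgment to the reader, as the proposal intends.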

Outputs

The outputs of this aim will feed directly into the final report for wider dissemination. However, we would also like to reflect on the findings in a more academic setting (as yet to be determined).

Timing

Much of this work will be ongoing throughout the initiative.

The appraisal workshop will be planned at the end of the initiative.

The final reports and papers will be produced at the end of the initiative.

Evaluation of the support programme

The support programme is expected, among other things, to 'provide the opportunity to share learning among the award holders with national and international experts', and to build capacity and to strengthen project leaders' ability to influence policy. In the original proposal we envisaged doing this alongside the pursuit of the aims listed above. However, because it would be helpful for the Health Foundation to have information to help them form a view of the desirability of this approach before inviting more projects to join the initiative, we propose to evaluate this separately.

Dissemination

We will work actively with the Health Foundation and the projects to maximise the impact of the evaluation. In addition to publication in academic and practitioner journals, we will publicise findings through RAND Europe's own mechanisms and participate in wider activities in collaboration with the Health Foundation. We acknowledge that the dissemination strategy will be led by the Health Foundation and we will work to support this strategy.

Ethics approval

We are satisfied that the work of the Evaluation Team does not require separate ethical approval, with the possible exception of the web-based Delphi detailed in aim 3, task 4. We will seek advice from COREC on this and act on it. However, we identify the need for the projects to secure ethics approval as an important risk facing the initiative as a whole.


Quality assurance

RAND Europe has a strong and well-established quality assurance process. This starts with the assumption that responsibility for quality lies with individual researchers and their managers, but it is reinforced through an internal quality assurance process led by senior researchers within the organisation. Given the complexity of this evaluation, we propose to engage with quality assurance throughout the life of the evaluation (rather than the more typical quality assurance of the final report). We have identified this as eleven days' work throughout the project. More can be found about RAND's quality assurance system at www.rand.org/randeurope/about/quality.html

Methods

The proposed evaluation is methodologically pluralistic. There is disagreement in the literature concerning whether evaluation should have the primary purpose of proving that standards have been achieved or improved (Peryer125) or of improving delivery or policy (Weiss126). Our evaluation is concerned with both, so there will be a summative element intended to measure delivery (as far as possible) and a formative element intended to assist learning and improvement. In this section we clarify how we propose to use logic modelling, realist evaluation and appraisal workshops.

The methodological approach used to 'get inside the black box' in the projects combines a form of logic modelling127 within an overarching framework informed by 'realist evaluation'128. There are a

number of reasons for (and some limitations resulting from) this choice. Realist evaluation is particularly appropriate in this context for a number of reasons. First, it aims to establish clear relationships between the project and its outcomes. Secondly, it assumes that there is an underlying theory of change behind the programme explaining how it brought about the measured change. Finally, it is sensitive to the context in which the programme is to be delivered. These are persuasive claims on behalf of this approach and they immediately address some of the limitations of experimental and quasi-experimental methods (such as identifying control groups that are both cooperative and sufficiently similar, and understanding causal mechanisms). However, there are risks and limitations and we guard against these in our proposal.

First, the underlying theory, according to realist evaluation, is identified through the use of a series of Context-Mechanism-Outcome (CMO) configurations for each intervention. In improving clinical quality, the context might be higher than normal re-admission rates and the mechanism might be a new approach to professional training. Behind the apparent simplicity of this, however, there are methodological and practical difficulties. Any intervention could have many CMOs, each of which, in theory, could form the basis of a 'mini-experiment'. Logically, only when all of these experiments have been completed can absolutely unequivocal transferable lessons be learned.
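A CMO configuration can be represented as a simple record, as in the sketch below; the field values are hypothetical extensions of the re-admission example just given, not findings from the EwQI projects.

```python
from dataclasses import dataclass

# Minimal sketch of a Context-Mechanism-Outcome (CMO) configuration as
# used in realist evaluation. Field values are hypothetical.

@dataclass(frozen=True)
class CMO:
    context: str    # the setting in which the mechanism can fire
    mechanism: str  # the change process the intervention triggers
    outcome: str    # the result the theory predicts

readmissions_cmo = CMO(
    context="higher than normal re-admission rates",
    mechanism="a new approach to professional training",
    outcome="reduced re-admissions",
)

# One intervention typically generates many candidate configurations,
# each in principle a 'mini-experiment'; holding them as explicit
# records makes the list easy to narrow down with practitioner input.
candidate_cmos = [readmissions_cmo]
```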

At a methodological level, there are also difficulties in establishing how local and how global the CMOs should be. To address these limitations we


Indicative timing of evaluation activities (size of X reflects anticipated intensity of work)

Years 1–4 (June 2005 to June 2008), each divided into quarters 1–4:

Aim 1  X X X X X x X x x x x x x x x x
Aim 2  X X X X X X x x x x x x x x x
Aim 3  X X X
Aim 4  x x X x X X X
Aim 5  X x X X X X X X
Aim 6  x x x X x X X X X X


propose working with the projects and the Health Foundation to construct logic models where they can use their professional, tacit and formal knowledge to identify the inputs, processes, outputs and outcomes associated with particular interventions to improve the quality of clinical interventions. In effect we are narrowing down the possible range of CMOs by drawing upon practitioner and other expertise. Consequently, only a manageable number of mechanisms will be considered in each project after discussions with the project participants and the Health Foundation. This introduces a quasi-experimental element into the methodology and guards against the challenge that realist evaluation approaches can lead to a large and unmanageable number of CMOs. Secondly, it draws upon the skills and expertise of clinicians in understanding the logic connecting programmes with outcomes. This guards against the risk that any external researchers will have only a limited knowledge of the local context. In addition, it guards against the danger that realist evaluation might be unable to distinguish between a failed theory and a failed implementation; by focusing on the logic model, as we propose, it should be possible to identify and explain failures and successes more easily. Thirdly, it brings experienced clinical judgement into the data collection processes of the project.
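The inputs-processes-outputs-outcomes structure described above can be sketched as a simple data structure; all entries here are hypothetical illustrations, not drawn from any EwQI project.

```python
# Minimal sketch of a project logic model of the kind the evaluation
# proposes to build with each project and the Health Foundation.
# All entries are hypothetical.
logic_model = {
    "inputs": ["clinician time", "audit infrastructure", "award funding"],
    "processes": ["repeat audit cycles", "training sessions", "feedback to teams"],
    "outputs": ["audit reports", "trained staff", "revised care pathways"],
    "outcomes": ["improved compliance with standards", "better patient outcomes"],
}

def summarise(model):
    """Count the elements at each stage: a quick completeness check when
    agreeing the model with a project team."""
    return {stage: len(items) for stage, items in model.items()}

stage_counts = summarise(logic_model)
```

Making each stage explicit in this way is what allows the evaluators and project teams to agree a manageable set of mechanisms to test.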

The appraisal workshop builds on a process we have developed during recent years, particularly with work at the Medical Research Council, the Department of Health and Breakthrough Breast Cancer. It involves working with a group of informed people to identify suitability (ie: is it the right tool for the job?), acceptability (will key stakeholders support it?), feasibility (how easy is it to implement?) and sustainability (will it be more than a short-term solution?).

We will provide an evidence base to support judgments about the overall cost-consequence of the initiative. We do not propose to arrive at a single economic ratio but we will provide a strong evidential base to allow others to make a judgment.

Risk assessment

Data availability and time risk

There is a significant risk that projects will collect incomplete data, and/or that they will not be able to collect and analyse the completed data set to agreed timetables. Following from this, there is a risk that meaningful data will not be readily available to make comparisons across the EwQI as a whole. This risk would be managed by RAND Europe providing substantial early support to the projects as they devise their evaluations. Both through the expertise we have assembled, and in the time allocated, we have ensured that these risks will be minimised. We would also be aware of the quality of data being produced by the projects and we would alert the Health Foundation as soon as potential problems were identified. We will have a review meeting with the Health Foundation in or around June 2007 to review the accuracy and completeness of data coming from the projects. The Health Foundation would also have an important role in ensuring that the projects met their contractual obligations and, if necessary, responding flexibly to support failing projects.

Biases in information

There is a risk of a 'conspiracy of optimism', where all involved wish to make the initiative succeed and this may encourage a reporting bias. Similarly, there is the danger of a 'Hawthorne effect', where the act of measuring would itself create turbulence in the data. This risk will be minimised by relying wherever possible on objective data and by communicating the danger of this risk to the projects, thereby encouraging a reflexive management of the risk within the projects.

Non-cooperation by projects

Some of the data required to make initiative-wide comparisons will involve self-reporting by healthcare professionals. As busy people, they may not complete this or, perhaps under pressure of time, produce a less than accurate picture of their engagement with quality. This risk cannot be removed, but we can be aware of it and, where emerging findings differ radically from other projects, we may need to go back to the


projects concerned for further reassurance. We do not believe that the demands on the time of professionals and others are unreasonable and we will minimise this risk by ensuring at a very early stage that all involved are aware of the information needs of both levels of evaluation, and the data they are expected to provide. By March 2006, relationships between the projects and Evaluation Team were very good and this suggests that this risk will be manageable.

Ethics approval for projects

The Evaluation Team has indicated from the outset that the project teams need to apply for ethics approval at the earliest opportunity. Delays in this could significantly compromise the ability of the projects to carry out their work. This risk may need to be actively managed by the Health Foundation. Members of the Evaluation Team have worked hard to ensure that COREC understands, and is supportive of, the initiative and this has eased some of these risks.

Management

This is a complex project involving internal and external players and different disciplines from within RAND Europe and HERG. However, we have a long track record of working to tight timescales and in close collaboration. To manage the relationship with the projects, we have already spent time making ourselves known and accessible to the project teams. We have also sought to establish a close relationship with the Health Foundation, with Tom Ling as the key contact point. In addition, we have developed a good working relationship with the Support Team. The management of the project has been fully resourced.

Dissemination

Perhaps the greatest risk of all is that the EwQI has no impact or legacy. The proposed methods outlined above are intended to be engaging and, to some degree, the dissemination will be achieved through the evaluation. However, we would want to work with the Health Foundation early on to devise a dissemination strategy aimed at key policy makers, in the first instance, and then at the practitioner and professional community.


Annex 1

The projects’ self-evaluation returns

Table 33: Key questions to be answered in the EwQI self-evaluations

Q 1. Background
– Why was this project needed?
– Why did you think that your approach would be effective?
– Did you consider other approaches? If so, why were these rejected?
– What was the project team's understanding of the self-evaluation and its purpose? Did this change during the project?

Q 2. Process – what improvement intervention was introduced, to whom and how?
– What did the project team do?
– Who did it involve?
– How were these activities evaluated?

Q 3. Outputs
– What did these activities produce?
– How were these outputs evaluated?

Q 4. Who did what
– Who was involved in designing, implementing and evaluating the project?
– What was their contribution?

Q 5. Outcomes – did the project work?
– What did these activities achieve in terms of:
  - measurable improvements in patient care
  - increase in the levels of professional engagement in QI
  - increase in the capacity and infrastructure for QI in the professional bodies involved in the project
  - increase in the knowledge base
  - sustainable arrangements for improving quality of care in this field of medicine?
– How were these changes measured?

Q 6. What difference did the project make?
– The EwQI is only one of a number of initiatives currently addressing quality improvement in the UK health system generally, and in particular specialties. How much difference was really made by the project in the context of all this other work?

continued

Self-evaluation report (SER)

Project self-evaluations should cover all the objectives outlined in the Health Foundation brief.

The questions the end-of-project self-evaluation will need to address are identified below. This proforma was agreed with the project teams in November 2005.


Table 33: Key questions to be answered in the EwQI self-evaluations – continued

Q 7. What are the cost consequences of the project?
– Without attempting to provide a monetary value to the outcomes of the project, how much did the project cost in real terms and with what benefits? Could this have been achieved more easily in other ways?

Q 8. Why did the project work?
– List factors that helped/hindered.
– How were clinicians and patient groups engaged and with what consequences?
– What were the key ways of bringing about change (eg: repeat audit, training, information provision) and how well did these work?
– Could the project be seen to have worked for some people but not for others?

Q 9. What arrangements are in place to ensure the sustainability of the project's work?
– How might the result of the project 'fit' with wider changes (eg: in the professions, funding, training, organisational context)?

Record of significant events (RSE)

In addition, the project teams were asked to supplement the SERs with a record of significant events (RSE) in a (small) number of areas where the project had made significant achievements and/or faced particular difficulties. These brief accounts were to be presented in the form of a grid for each area covering 'achievements' and 'key factors', and 'challenges' and 'action taken', as below.

Achievements | Key factors
Challenges   | Action taken


Appendix D

Data collection for external evaluation

Data collection mechanism | Data collection period | Participants

– Meetings with eight projects to discuss self-evaluation reports and logic models | Summer 2005, Spring 2007, Winter 2008/09 | Eight EwQI projects
– Interviews with service users | Summer 2007 | Eight service users from 8 EwQI projects
– Interviews with the royal colleges and professional bodies | Spring 2009 | Eight key informants from participating royal colleges and professional bodies aware of the EwQI projects
– Interviews with eight projects to gauge status of quality improvement when the EwQI projects began | May–July 2006 | 17 key informants, including clinicians, project managers and researchers
– Two-round Delphi survey involving six EwQI projects | Round 1: Sep–Oct 2008; Round 2: Feb–Mar 2009 | Participating clinicians in EwQI (Round 1: n=150; Round 2: n=54)
– Submission of self-evaluation documents | July 2006, May 2007, Winter 2008/09 | Eight EwQI projects
– Evaluation of the support programme interviews | Summer 2005 | 13 interviews with EwQI projects, the Health Foundation and the Support Team
– 'Putting the evaluation findings into policy and practice: the implications of the EwQI', a roundtable discussion | Sep 2009 | 11 policy/decision makers


Appendix E

EwQI evidence base

This appendix outlines the data sources from the projects used for the external evaluation of the EwQI. Comments about the status of these data are shown in italics; any comments made by the teams are also shown. Sources of outcome data used in our analyses of patient outcomes per project are highlighted in red.

Colorectal Cancer

Self-evaluation report April 2009

RSE April 2009, covering:
– results of survey of trusts and relation between (clinical) processes and outcomes
– comparison of outcomes between those consultants who claimed to have read the Silver Book but had not participated in the audit versus those who had done both
– reasoning behind the original six categories of outcome
– recruitment
– publishing trust-identifiable data and the move to open reporting
– online data submission
– maintenance of patient confidentiality and data security
– MDT discussion
– specialist colorectal nurse involvement
– pre-operative staging
– pre-operative radiotherapy
– emergency surgery
– post-operative mortality
– laparoscopic surgery
– lymph node analysis
– abdomino-perineal excision rates (APER)
– circumferential resection margins in rectal cancer
– main recommendations for 2009 from the audit.

RSE May 2008, covering:
– development of a communication strategy
– recruitment of hospital trusts
– sustainability
– data collection and analysis
– feedback to service users.

Audit reports (NBOCAP 2004–07) – time series quantitative data on performance against standards.
Status: detailed yearly report but no data yet available beyond the 2007 report (which covered 2006). Team expressed considerable concerns about the accuracy of some of the data.

Questionnaire to trusts (2007) (sent to 171 trusts, 549 consultants; responses received from 117 trusts (66%) and from 159 consultants (29%)), covering:
– organisational structures of respondents' trusts
– feedback from clinicians about the National Bowel Cancer Audit (NBOCAP).
Status: relatively weak quantitative data which is suggestive only.


Self-harm

SER May 2009

RSE February 2006, covering:
– development of a communication strategy
– marketing the project and the recruitment of participating trusts
– development of quality standards
– finalising data collection tools
– service user involvement.

Key findings from two baseline audits, September 2007 (sent to 50 trusts; responses received from 38 UK emergency departments and their associated mental health and ambulance services)

Baseline data from:
– A case flow audit that records time of arrival, waiting times and patient outcome.
– A service user survey that invites respondents to reflect on each aspect of their journey, from arrival by ambulance, receiving physical treatment, psychosocial assessment and discharge from the emergency department. Of 682 service user questionnaires received, around 60% related to emergency departments that participated in the 'Better Services' programme. The remainder refer to non-participating hospitals.
– Staff surveys to elicit feedback about training, support and supervision relating to self-harm, as well as staff attitudes towards people who self-harm (2006–07):
  - the training needs of 562 emergency department staff (respondents: 55% qualified nursing staff, 30% doctors, 15% others)
  - the training, support and supervision needs of 152 ambulance staff (respondents: 64% paramedics, 26% technicians, 3% managers, 7% others)
  - the training, support and supervision needs of 436 mental health staff (respondents: 45% qualified mental health nurses, 13% mental health practitioners, 9% consultant psychiatrist/staff grade, 9% training grade doctors, 24% others).
– A policy checklist that assesses the working arrangements each team has in place.

Wave 1 follow-up data, summary report, July 2007
Status: some (limited) quantitative data on outcomes, but the team notes that even from Wave 1, the largest wave, the available data on outcomes is limited because of poor response rates to re-audit: 'even though there are positive signs, it is hard for us to say that with great confidence'.

Evaluation survey of Wave 1: 'Evaluating the benefits and challenges of the "Better Services for People who Self-Harm" Programme', October 2007

Between January 2006 and March 2006, 30 teams took part in the first audit cycle. Participants were asked to complete an evaluation survey on each aspect of the programme and reflect on what factors had helped or hindered quality improvement. Over two-thirds (22 teams) completed the survey.

Wave 3 baseline data, September–December 2007 (also includes Key Findings document above)

395 members of staff and 81 service users from 11 UK hospitals completed separate questionnaires relating to emergency care for people who self-harm. Service user respondents were invited to comment on all aspects of care, from initial contact with ambulance staff, through triage or initial assessment, physical treatment, psychosocial assessment and discharge. Staff were also asked their opinion on these aspects of care, as well as their views on the training and support they receive.

Respondents: Three-quarters of the service users responding to the survey were female and 90% were of white British origin. For 17% of respondents, this was the first use of emergency services following self-harm. Forty-nine per cent of staff respondents work in mental health, 35% in the emergency department, 7% in the ambulance service, and 9% work in other services.

Comment: no details of re-audit following theintervention.

POMH-UK

SER May 2009

This also contained sections on lessons learned, as follows:
– Q1: local support; communications
– Q2a: change interventions; developing interventions; barriers questionnaire

126 How do you get clinicians involved in quality improvement?


– Q2c: service user involvement
– Q3: timing
– Q5: clinical engagement; understanding variation in results; data collection, analysis and validation
– Q7: trust subscription, sustainability and spread; levers; transferability of model; cost consequences.

Accompanying appendices:
– A: list of topics
– B: launch event attendance
– C: feedback from the launch events
– D: trust presentations
– E: rationale for selecting and developing the interventions used in topic 1
– F: barriers to screening for metabolic side effects
– G: change interventions
– H: service user involvement – discussion document
– I: specialist advisers involved in each topic
– J: trust subscription over the length of the project
– K: number of cases entered for each topic
– L: summaries of results, topics 1–5
– M: list of publications

Status: detailed quantitative data on change between baseline and re-audit.

RSE January 2009 (very brief), covering:
– rapid data feedback
– ongoing interest in single topics
– increased portfolio of QI programmes
– self-financing
– generation of publications and conference presentations
– multidisciplinary nature of the steering group, topic groups, project team and most local project teams, including service users
– sharing good practice
– team capacity.

NCROP

SER June 2009

RSE June 2007, covering:
– recruitment of sites to participate in the NCROP
– the 'peer review' process
– patient involvement
– change diaries
– the project.

RSE 2009, covering:
– communications
– change diaries
– National COPD Audit.

The National Chronic Obstructive Pulmonary Disease Resources and Outcomes Project (NCROP) Final Report, June 2009
Status: detailed quantitative data on change between baseline (2007) and the audit following the intervention (2008).

Change diaries (reported on in the Final Report)

Ninety-three fully completed pairs of baseline and final change diaries were received. Forty-one of these (an 89% return rate) came from the control group and 52 (a 96% return rate) from the intervention units.
Status: qualitative data on participants' reflections on service change and the value of NCROP.

Audit reports on:
– Patient survey, December 2008
– Resources and organisation of care in acute NHS units across the UK, September 2008
– Survey of COPD care within UK general practices, December 2008
– UK primary care organisations: resources and organisation of care, November 2008
– Clinical audit of COPD exacerbations admitted to acute NHS units across the UK, November 2008.

Status: detailed qualitative and quantitative data.

The Final Report on the Qualitative Sub-study of the National COPD Resources and Outcomes Project, October 2008
Status: an external qualitative evaluation of the NCROP project undertaken by QMUL.

PoISE

SER May 2009 and supporting documentation, covering:
– duration of fasting
– patient experience questionnaires
– local investigator audit
– key contact interviews
– change agent interviews
– combined key contact and change agent interviews
– patient interviews
– learning organisation survey
– focus groups
– economic evaluation.
Status: detailed qualitative and quantitative analysis of the project.

RSE April 2009, covering:
– role of clinicians
– communications
– data collection
– PDSA
– opinion leader and website.

EVIDENCE CONTEXT FACILITATION Data Synthesis Report, June 2009
Status: quantitative data on change between baseline and re-audit, and detailed qualitative/quantitative analysis of other aspects of the project.

RCPE SNAP

Engaging with Quality Scottish National Audit Project (SNAP) Final Report, June 2009
Status: detailed qualitative account and comparison of both SNAP projects.

EPI-SNAP

SER June 2009

RSE June 2009, covering:
– the (changing) focus of the project and the development of the two main improvement interventions
– interactions with other agencies, particularly in relation to the development of the two main improvement interventions – the referral protocol and annual review.

Four audit reports from first seizure clinics, 2009
Status: quantitative data on change between baseline and re-audit.

SNAP-CAP

SER June 2009

RSE June 2009, covering:
– the (changing) focus of the project and the development of the two main improvement interventions
– interactions with other agencies in relation to spreading SNAP-CAP to at least one acute hospital in each health board in Scotland.

IBD (completing September 2009)

UK IBD Audit 2nd Round (2008) Report, March 2009

SER July 2009
Status: quantitative data on change between baseline and re-audit.

Evaluation of the UK IBD Audit Action Planning Visits, September 2009
Status: quantitative data on change between baseline and re-audit.

UK IBD Audit Action Planning Visits – Site Feedback, September 2009

PEARLS (not yet complete)

SER (interim) June 2009

RSE June 2009, covering:
– communications strategy
– survey of midwives, obstetricians and educationalists
– ethics and governance processes
– Delphi study.

Draft self-evaluation report, June 2009


Appendix F

Delphi survey analysis

Introduction

We used the Delphi method to conduct a two-round web-based survey of clinicians participating in the EwQI. The survey enabled us to understand how best to engage clinicians in a process of quality improvement and what consequences this engagement has had.

The Delphi method was developed at RAND in the late 1950s as a way to collect and synthesise expert judgements.129 The Delphi method differs from a conventional survey in that participants are invited to reassess (in several rounds) their initial judgements in the light of the overall pattern of results, including the average or median of responses and the reasons participants give for holding extreme positions.130 By keeping the process of surveys and feedback anonymous, Delphi is intended to avoid undesirable group effects (for example, socially desirable answers, or assertive individuals leading the discussion).131 Although the process tends to move towards consensus, this is not necessarily the objective of the Delphi method.

Delphi survey

A conventional Delphi was designed to collect opinions from clinicians involved in the EwQI regarding:
– barriers and facilitators to engaging clinicians in a process of quality improvement (QI)
– consequences of engaging clinicians in QI for clinical outcomes
– use of external influences to leverage clinical engagement in QI at trust level.

To design the survey questions, we used:
– results from surveys or questions that EwQI projects have asked their clinicians about their role in QI
– a previous RAND survey on a similar topic and the results from interviews undertaken with the projects over the course of the evaluation
– a literature review on QI.

The draft survey was reviewed by all members of the Evaluation Team. The survey was piloted via cognitive interviewing with four clinicians, selected by the project manager of the Self-harm project. It was sent to 12 participating clinicians in this project to ensure that it functioned technically, as well as to confirm that the questions were clear and relevant to the task. The final web-based survey took around 10 to 12 minutes to complete.

The second-round survey differed from the first-round survey in that we asked the respondents to the first round to fill in the survey again, taking into account average scores. This means that for each question we showed the respondent's initial score and the average score of all respondents associated with the same EwQI project.

Identifying and approaching participants

There is no agreement on the panel size for Delphi studies, and there is no recommended definition of a small or large sample.132 Many published Delphi studies use panels consisting of 10 to 100 or more panellists. These are often convenience samples, dependent on the availability of experts and resources.
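The round-2 feedback mechanism described above (each respondent sees their own round-1 score alongside the average score of respondents from the same EwQI project) can be sketched as follows. This is a minimal illustration with hypothetical respondents and scores, not the survey software actually used:

```python
from statistics import mean

# Hypothetical round-1 data: respondent id -> (project, {item: score on a 1-5 scale})
round1 = {
    "r1": ("NCROP", {"B2": 5, "B3": 4}),
    "r2": ("NCROP", {"B2": 4, "B3": 4}),
    "r3": ("IBD",   {"B2": 3, "B3": 5}),
}

def project_averages(ratings):
    """Average each item's score across respondents from the same project."""
    scores_by_key = {}  # (project, item) -> list of scores
    for project, scores in ratings.values():
        for item, score in scores.items():
            scores_by_key.setdefault((project, item), []).append(score)
    return {key: round(mean(vals), 2) for key, vals in scores_by_key.items()}

def round2_feedback(ratings):
    """Pair each respondent's initial answer with their project's average score."""
    avgs = project_averages(ratings)
    return {
        rid: {item: {"your score": score, "project average": avgs[(project, item)]}
              for item, score in scores.items()}
        for rid, (project, scores) in ratings.items()
    }

feedback = round2_feedback(round1)
print(feedback["r1"]["B2"])  # {'your score': 5, 'project average': 4.5}
```

Respondents then re-rate each item with this feedback in view, and any change from the initial answer is accompanied by a stated reason.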


In this study, the population to which the survey was targeted was the clinicians involved in the EwQI. To help improve the response rate, each project manager of the eight EwQI projects was asked to forward by email a letter to participating clinicians inviting them to complete the online survey. Two projects (PoISE and Colorectal Cancer) decided not to be involved.

In April 2008, we asked the project managers of each project to give an indication of the potential number of respondents, which was as follows:
– IBD: 210 clinicians
– PEARLS: about 40 facilitators in 24 units; each unit probably has about 30–40 staff involved in their project
– NCROP: 100 study sites participating in the NCROP audit
– EPI-SNAP and SNAP-CAP: about 40–50 in total (combined); however, these would be multidisciplinary teams (consultants, junior doctors, pharmacists, nurses, etc)
– Self-harm: includes previous and participating teams; there were 30 participating emergency departments and associated trusts
– POMH-UK: probably 40 current trusts (80–240 people) and 48 previous participating trusts (96–288 people).

Overall, project managers were happy to forward the survey to their participating clinicians, but two important issues came up: (1) facilitators might not know all their staff's email addresses and (2) not all staff in the NHS have a trust email address. We discussed the possibility of offering hard copies of the Delphi too, but decided not to proceed with this due to the workload of computing scores during the two-round exercise. In most cases, project managers forwarded the survey to clinical leads, who then forwarded it to participating staff.

The survey was in the field for one month (17 September–17 October 2008 for round 1, and 28 January–1 March 2009 for round 2). During this time we sent two reminders, the first one week after the survey went live and the second after two weeks. From previous surveys we have found that sending reminders increases the response rate.

Response rate of the survey (rounds 1 and 2)

By 17 October 2008 (the closing date of the first-round survey), a total of 150 responses were recorded. Those respondents who provided us with an email address and who could be linked to one of the eight EwQI projects received an invitation to fill in the second-round survey (n=97). This panel also received two reminder emails, resulting in a final total of 54 responses (see Table G1).

Data analysis

Data analysis comprised the following activities:
– collecting the survey data, including narrative comments, for the first-round survey
– analysing the data. The first-round data was analysed using descriptive statistics (average scores). In the second round, we presented to each respondent (n=97) their initial answer and the average score of all respondents involved in the same quality improvement project. If a respondent decided to change their initial answer, we also asked them to provide the reason(s) for doing so
– asking participants to fill out the rating forms again (round 2) and collecting the survey data in a similar way to the first round (by an online survey).

We included partially completed surveys in the analysis. This means that the composition of respondents' groups may vary slightly from question to question. The numbers of respondents per question are provided in the final analysis. Please note that the number of respondents per EwQI project is too small to perform a statistically meaningful analysis for all projects.

Table G1: Delphi survey respondents

Project                            First round    Second round
NCROP                              24             12
SNAP                               6              5
IBD                                26             12
PEARLS                             18             12
POMH-UK                            13             7
Self-harm                          10             6
Total                              97             54
Not linked to the EwQI project133  53             –

Findings

The following sub-sections and tables summarise the views of project participants on improving the quality of care through engagement of clinicians. We present the overall results, the mean importance ranking per project (used to determine the most important issues), and the standard deviation as an indicator of the variability of importance rankings among participants. The activities are ranked according to the highest mean scores and the smallest standard deviations in round 2.
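The ranking rule used throughout these tables (highest round-2 mean, with the mean divided by the standard deviation reported in the Mean/SD column) can be computed as in the following sketch. The scores here are hypothetical, not the survey data:

```python
from statistics import mean, stdev

# Hypothetical round-2 ratings (1-5 scale) for three survey items
ratings = {
    "B2": [5, 5, 4, 5, 4],
    "B3": [5, 4, 5, 4, 4],
    "B6": [2, 5, 3, 4, 1],
}

def summarise(scores):
    """Mean, standard deviation, and mean/SD ratio for one item's scores."""
    m, sd = mean(scores), stdev(scores)
    return {"mean": round(m, 2), "sd": round(sd, 2), "mean_sd": round(m / sd, 2)}

summary = {item: summarise(scores) for item, scores in ratings.items()}

# Rank by highest mean, breaking ties by highest mean/SD
# (ie: the smallest spread of opinion relative to the mean)
ranked = sorted(summary, key=lambda i: (summary[i]["mean"], summary[i]["mean_sd"]),
                reverse=True)
for item in ranked:
    print(item, summary[item])
```

A high mean/SD value indicates an item that was rated both highly and consistently; a widely rated item such as B6 above scores poorly on this measure even when some respondents rate it 5.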

Activities to engage clinicians in quality improvement

Clinicians were provided with a list of activities to engage clinicians in quality improvement that were identified in the literature. They were asked to express their views on how important each activity is in improving quality through their involvement in their EwQI project, using a 1 to 5 scale (where 1=unimportant, 2=fairly unimportant, 3=neither unimportant nor important, 4=fairly important, and 5=very important).

Overall results

The top activities in improving quality through clinicians' involvement in their EwQI project are providing training for clinicians and managers (B2) and keeping up to date with clinical practice guidelines (B3).

Project results

The table below presents the three most important activities for engaging clinicians within the projects. The activities are ranked according to the highest mean scores and the smallest standard deviations in round 2. Activities that lay outside the top six of the overall activities are shaded. The majority of the activities of the Self-harm project (in round 2, n=6) did not reflect the overall activities for engaging clinicians. This is likely due to a small sample size. PEARLS, with a sample twice the size of the Self-harm project's, unanimously identified B2 and B3 as being very important (x=5.0).

Clinicians were invited to comment on the average scores on activities to engage clinicians in QI. Eleven respondents provided comments. Three respondents changed their scores, admitting they had used the scoring system incorrectly in the first round. Another respondent changed their opinion because they felt differently at the time of completing the survey. Two respondents said they changed their scoring to reflect the average view, since they had no experience of the QI activity concerned. Two respondents said they were unlikely to change their opinion. Another respondent perceived the QI activity concerned (performing peer review of practice with the aim of improving quality) as more important than the 'average', since this was being used in their EwQI project.

Effective ways of supporting clinical engagement in quality improvement

Clinicians were provided with a list of ways of offering support for clinical engagement in QI which were identified in the literature. They were asked to rate the effectiveness of these methods on a scale of 1 to 5 (where 1=very ineffective, 2=fairly ineffective, 3=neither effective nor ineffective, 4=fairly effective, and 5=very effective).


Table G2: Overall results – activities to engage clinicians in quality improvement
(Listed in overall rank order; for each activity: mean, round 1 (n=97); mean, round 2 (n=54); standard deviation; mean/SD, round 2.)

1. B2. Providing training for clinicians and managers (eg: continuous medical education) – 4.33; 4.58; 0.71; 6.45
2. B3. Keeping up to date with clinical practice guidelines – 4.30; 4.52; 0.71; 6.36
3. B1. Undertaking clinical audit – 4.14; 4.35; 0.73; 5.95
4. B10. Keeping up to date with providing best care to each patient (eg: reading journals) – 4.06; 4.29; 0.74; 5.79
5. B7. Performing peer review of practice with the aim of improving quality – 3.87; 4.19; 0.82; 5.10
6. B13. Helping patients and service users to participate in improving healthcare quality – 3.89; 4.15; 0.83; 5.00
7. B4. Taking part in regular formal discussions with colleagues about improving healthcare quality (eg: gaining formal feedback and advice from colleagues or attending clinical review meetings) – 4.06; 4.25; 0.96; 4.42
8. B11. Using appropriate IT support systems to support healthcare quality improvements – 3.83; 3.88; 0.89; 4.35
9. B5. Taking part in regular informal discussions with colleagues about improving healthcare quality (eg: discussing how patient plans can be improved) – 4.00; 4.29; 1.01; 4.24
10. B8. Participating in clinical networks – 3.68; 3.85; 0.97; 3.96
11. B12. Writing about how to improve healthcare quality (in peer or non-peer reviewed literature) – 3.39; 3.54; 0.90; 3.93
12. B9. Being a member of clinical governance committee(s) – 3.23; 3.35; 0.97; 3.45
13. B6. Doing rapid learning cycles (eg: Plan-Do-Study-Act) – 3.46; 3.36; 1.19; 2.82

Table G3: Overall results – priorities within projects – activities to engage clinicians in quality improvement

Project (round 2)    1st          2nd            3rd
NCROP (n=12)         B2, B3, B4
SNAP (n=5)           B4           B1             B2
IBD (n=12)           B1           B10            B3
PEARLS (n=12)        B2, B3       B4
POMH-UK (n=7)        B10          B2, B1, B13
Self-harm (n=6)      B10          B9             B12


Overall results

The results indicate that, of all the ways of providing support for clinical engagement in QI, securing good inter-professional relationships (C3) and communicating candidly and often about quality improvement (C7) are the most effective (4.0<x<5.0). The standard deviation increased for six of the items in round 2, which is slightly concerning for the overall results. This is likely due to the small sample size of the total population. We have taken the wide disparity of views into account in the rank order; however, the divergence in opinion should be noted.

Project results

The table below presents the three most effective ways of supporting clinical engagement within the projects. These were ranked according to the highest mean scores and the smallest standard deviations in round 2. Priorities that lay outside the top six of the overall priorities are shaded. The priorities varied widely between projects and within projects. Two-thirds of the SNAP and Self-harm projects' results were not in the top six overall priorities. However, this is likely due to these two projects' small sample sizes (round 2: n=5 SNAP; n=6 Self-harm).

The standard deviation within the project results increased in round 2 in this section. The IBD project (round 2, n=12) had an increase in the variability of scoring for all but three questions. It is unclear why this is the case, since this project had a relatively high sample size (compared with other projects). The Self-harm project (round 2, n=6) had an increase in the variability of scoring in round 2 for all but one question. This indicates a wide spread of views on the priorities and is most likely due to the low sample size in the Self-harm project. Every project except POMH-UK increased the variability of scoring in round 2 on question C9 (applying reward systems).

Respondents were asked to comment on the average scores on effective ways of supporting clinical engagement in QI. Seven respondents provided comments. One respondent changed their answers owing to having misread the scoring system in the first round. Another respondent changed their answers to reflect the average view, on account of having no personal experience in the particular area concerned. One respondent was averse to 'applying reward systems' as an effective way of supporting clinical engagement, disagreeing with payment by results compared with other approaches, such as career promotion and career development.

Table G4: Overall results – effective ways of supporting clinical engagement in quality improvement
(Listed in rank order; for each item: mean, round 1 (n=97); mean, round 2 (n=54); standard deviation; mean/SD, round 2.)

1. C3. Securing good inter-professional relationships – 4.46; 4.46; 0.77; 5.79
2. C7. Communicating candidly and often about quality improvement – 4.33; 4.26; 0.83; 5.13
3. C2. Involving patient organisations – 3.83; 3.98; 0.87; 4.57
4. C5. Allocating time to quality improvement activities – 4.39; 4.31; 0.99; 4.35
5. C8. Securing interest of trust/board – 4.40; 4.57; 1.06; 4.31
6. C10. Committing the trust/board to engaging healthcare professionals to improve the quality of healthcare – 4.29; 4.39; 1.15; 3.81
7. C4. Allocating budget to QI activities – 4.42; 4.87; 1.44; 3.38
8. C6. Availability of champions (ie: leaders in quality improvement) – 4.10; 3.98; 1.18; 3.37
9. C1. Involving the royal colleges – 3.90; 4.15; 1.38; 3.00
10. C9. Applying reward systems – 3.57; 4.13; 2.07; 1.99

Barriers to engaging clinicians in QI

Clinicians were provided with a list of factors that may serve as obstacles to engaging clinicians in QI, as described in the literature. They were asked to rate the extent to which each factor is an obstacle to improving quality in their EwQI project, using a scale of 1 to 5 (where 1=not an obstacle, 2=minimal obstacle, 3=small obstacle, 4=considerable obstacle, and 5=large obstacle).

Overall results

The top barrier to engaging clinicians in QI is the limited number of staff available for quality improvement (D1). This was considered to be a considerable obstacle (x=4.0). Other 'small' obstacles (3.0<x<4.0) include: lack of widely shared knowledge (D3), lack of leadership (D2), poor handover from other staff (D4), lack of continuity of the care pathway (D9) and lack of patient or service user involvement (D10). Financial rewards, performance targets and protocols were ranked as minimal obstacles to engaging clinicians.

Respondents were asked to comment on the average scores on the barriers to engaging clinicians in quality improvement. Eight respondents provided comments. One respondent amended their scores, having misread them in the first round. Another respondent changed their response to reflect the 'average', on account of having no experience of the barrier concerned. Another respondent was not sure why they had rated 'lack of financial rewards' so highly in the first round. One respondent explained why scoring 'lack of performance targets' was difficult: it was hard to engage organisations in improvement work unless it was linked to national targets; however, there are too many national targets that deal with the detail of processes of care, which prevents clinical staff or organisations from engaging in wider improvement work.

Project results

The table below presents the three highest barriers to engaging clinicians in quality improvement within the projects. The results are ranked according to the highest mean scores and the smallest standard deviations in round 2. Priorities that lay outside the top six of the overall priorities are shaded.

NCROP, in particular, prioritised D7 and D6, which were not in the top six project priorities. All projects, with the exception of SNAP, indicated that D1 was a barrier. Finally, all projects, with the exception of NCROP, indicated that D2 was a barrier.

Table G5: Overall results – priorities within projects – effective ways of supporting clinical engagement in QI

Project (round 2)    1st          2nd        3rd
NCROP (n=12)         C8, C10      C3
SNAP (n=5)           C4, C6, C8
IBD (n=12)           C3, C13      C2, C12
PEARLS (n=12)        C8           C5         C2, C12
POMH-UK (n=7)        C8           C5         C3, C10, C13
Self-harm (n=6)      C1, C11      C2, C12


Consequences of engaging clinicians

Clinicians were provided with a list of possible consequences of engaging clinicians in quality improvement that were identified in the literature. Clinicians were asked to rate on a scale of 1 to 5 (where 1=very unlikely, 2=fairly unlikely, 3=neither unlikely nor likely, 4=fairly likely, and 5=very likely) the extent to which each consequence results from their EwQI project.

Overall results

The top consequences of engaging clinicians in quality improvement (all ranking above fairly likely (x>4.0)) are: greater standardisation of professional practice (E2), more equitable care (E4), greater quality control (E6) and improved patient satisfaction (E1). Patient waiting times and costs were seen as very unlikely to be affected by engaging clinicians in QI.

Table G6: Overall results – barriers to engaging clinicians in quality improvement
(Listed in rank order; for each barrier: mean, round 1 (n=97); mean, round 2 (n=54); standard deviation; mean/SD, round 2.)

1. D1. Limited number of staff available for quality improvement – 4.07; 4.00; 0.86; 4.65
2. D3. Lack of widely shared knowledge (eg: access to performance data) – 3.74; 3.55; 1.02; 3.48
3. D2. Lack of leadership – 3.64; 3.76; 1.13; 3.33
4. D4. Poor handover from other staff – 3.74; 3.31; 1.00; 3.31
5. D9. Lack of continuity of the care pathway – 3.74; 3.27; 1.04; 3.14
6. D10. Lack of patient or service user involvement – 3.74; 3.26; 1.07; 3.05
7. D7. Lack of non-financial rewards – 3.74; 2.94; 1.03; 2.85
8. D8. Lack of performance targets – 3.74; 2.88; 1.06; 2.72
9. D6. Use of financial sanctions – 3.74; 3.07; 1.24; 2.48
10. D11. Poor protocols – 3.74; 3.02; 1.23; 2.46
11. D5. Lack of financial rewards – 3.74; 2.47; 1.12; 2.21

Table G7: Overall results – priorities within projects – barriers to engaging clinicians in quality improvement

Project (round 2)    1st          2nd        3rd
NCROP (n=12)         D1, D7       D6
SNAP (n=5)           D2           D3         D4, D11
IBD (n=12)           D1           D10        D2
PEARLS (n=12)        D1           D4         D2
POMH-UK (n=7)        D2           D1         D9
Self-harm (n=6)      D1, D4       D2


Project results

The table below presents the three highest-ranked consequences of engaging clinicians in quality improvement within the projects. The results are ranked according to the highest mean scores and the smallest standard deviations in round 2. Priorities that lay outside the top six of the overall priorities are shaded.

E2 is ranked as a consequence of engaging clinicians in every project. E4 is ranked as a priority in every project except NCROP; however, this is likely partly due to the high standard deviation in the scores of the NCROP project. Within projects, SNAP and PEARLS also identified E7 as a consequence of engaging clinicians. It should be noted that there was a high standard deviation in round 2 for every project, but especially in the SNAP project. In this particular project, the standard deviation increased in round 2 for all questions except one (E3). This is likely due to the small sample size (n=5).

Respondents were invited to comment on their answers on the consequences of engaging clinicians, or on the average scores. One respondent commented that they did not think there had been any improvements: the things that had worked well were in place beforehand. Another respondent felt there had been short-term costs but long-term savings.

Table G8: Overall results – consequences of engaging clinicians in quality improvement
(Listed in rank order; for each consequence: mean, round 1 (n=97); mean, round 2 (n=54); standard deviation; mean/SD, round 2.)

1. E2. Greater standardisation of professional practice – 4.18; 4.13; 0.67; 6.16
2. E4. More equitable care – 4.13; 4.17; 0.78; 5.35
3. E6. Greater quality control (ie: safe care) – 4.27; 4.19; 0.80; 5.24
4. E1. Improved patient satisfaction/experience – 4.29; 4.28; 0.85; 5.04
5. E5. Uniform patient reports (eg: standardised discharge letter) – 4.19; 3.87; 0.99; 3.91
6. E3. Cost-effective services – 3.70; 3.62; 1.07; 3.38
7. E11. Cost savings for the organisation – 3.30; 3.26; 1.00; 3.26
8. E7. Improved rules, regulations and legislation – 3.50; 3.25; 1.02; 3.19
9. E8. Decreased patient waiting times – 3.22; 3.26; 1.07; 3.05
10. E10. Increase in costs to the organisation – 2.87; 2.68; 1.03; 2.60
11. E9. Increased patient waiting times – 2.38; 2.22; 0.94; 2.36

Table G9: Overall results – priorities within projects – consequences of engaging clinicians in QI

Project (round 2)    1st        2nd        3rd
NCROP (n=12)         E2         E1         E6
SNAP (n=5)           E2         E4         E7
IBD (n=12)           E4         E6         E2
PEARLS (n=12)        E7         E6         E2
POMH-UK (n=7)        E1         E4, E2
Self-harm (n=6)      E1         E4         E6


Attitudes towards the value of engaging clinicians in quality improvement

Clinicians were asked to list the three most important activities they view as quality improvement (F1). In total, clinicians listed 64 activities, which are shown in Box G1. The four activities perceived by clinicians as the most important were clinical audit (cited 58 times); engaging with patients/service users (cited 23 times); communication (cited 21 times) and continuing education (cited 18 times).
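Frequency counts like those in Box G1 can be produced directly from the listed answers. This is a minimal sketch using hypothetical, already-normalised responses; in practice the free-text variants would first need to be grouped under a single label:

```python
from collections import Counter

# Hypothetical answers to "list the three most important QI activities",
# normalised to a controlled vocabulary
responses = [
    ["clinical audit", "communication", "continuing education"],
    ["clinical audit", "engaging with patients/service users", "leadership"],
    ["clinical audit", "communication", "training"],
]

# Tally how many times each activity is cited across all respondents
tally = Counter(activity for answer in responses for activity in answer)

# Activities ordered by citation count, as in Box G1
for activity, times_cited in tally.most_common():
    print(f"{activity}: {times_cited}")
```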

Box G1. The most important activities viewed as QI among participating clinicians in the EwQI

No. times Activity cited

Clinical audit 58

Engaging with patients/service users 23

Communication 21

Continuing education 18

Clinical engagement 14

Guidelines/standards 14

Training 14

Allocated money and/or time for QI 8

Leadership 8

Team working 8

Care pathways 7

Protocols 7

Clinical governance 6

Continuity/delivery/equality of care 6

Feedback 5

Administrative systems 4

Change management 4

Clinical outcomes 4

Engaging managers 4

Engaging trust 4

Keeping up to date with knowledge/literature 4

Awareness 3

Clinical research 3

Management and clinical commitment 3

Meetings 3

Peer review 3

Remove system inefficiencies 3

Review and management of clinical risk 3

Service development 3

Evaluation 2

Evidence-based practice 2

PDSA 2

No. times Activity cited

Reduction in waiting times 2

Actioning results 1

Applauding success 1

Benchmarking 1

Better educational standards 1

Clinical networking 1

Clinician attitudes 1

Consensus 1

Culture 1

Dissemination 1

Home care 1

Hygiene/minimise patient movement 1

Increased breastfeeding rates 1

Interventions 1

Involvement of primary care/nursing teams 1

Learning 1

Letting clinicians lead management decisions 1

Maintaining high standard 1

Midwifery supervision 1

More control for clinicians 1

Ownership of improvement 1

Peer support 1

Pilot 1

Preventing risk 1

Process mapping 1

Quality measures 1

Reflective practice 1

Resource efficiency 1

Royal colleges 1

Shared vision 1

Sharing good practice 1

Workforce 1
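The tallies in Box G1 are simple frequency counts over clinicians' coded free-text answers. One way such coded responses might be tallied is sketched below; the response lists here are illustrative, not the actual survey data:

```python
from collections import Counter

# Illustrative coded responses: each clinician listed up to three
# activities, which are flattened into one stream and counted.
responses = [
    ["clinical audit", "communication", "training"],
    ["clinical audit", "engaging with patients/service users", "continuing education"],
    ["clinical audit", "guidelines/standards", "communication"],
]

tally = Counter(activity for answer in responses for activity in answer)
print(tally.most_common(2))  # [('clinical audit', 3), ('communication', 2)]
```

Ranking by `most_common` reproduces the Box G1 ordering: activities cited most often appear first.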


Perceived engagement of clinicians in QI

Clinicians were asked to rate on a scale of 1 to 5 how they perceive engagement of clinicians in QI (where 1=very unimportant, 2=fairly unimportant, 3=neither unimportant nor important, 4=fairly important, and 5=very important).

Clinicians in all the projects perceived clinician engagement in QI as at least fairly important (more than 4.0), trending towards very important.

The following table summarises the means for how each project perceives engagement of clinicians in QI.

Respondents were invited to elaborate on their answers and comment on the average score on how they perceive engagement of clinicians in QI. One respondent felt that engaging clinicians in QI was important, but was not necessarily happening. Another respondent felt that the implementation of new processes and recognition of the need to change were vital – otherwise, an attitude of 'business as usual' or 'passive resistance' was likely. Another respondent felt that financial problems or restructuring of regional services can act as a barrier for clinicians striving to improve services and, in some cases, to provide high quality care. Another respondent felt that while clinicians would like to improve quality, often they feel they cannot do so owing to time pressures and low staffing ratios. Another respondent felt that involving clinicians helps support and continue good practice and helps secure the backing and support of further studies trying to change and improve practice.

Success of the EwQI in engaging clinicians in QI

Clinicians were asked to rate on a scale of 1 to 5 how successfully the EwQI project had engaged them in QI (where 1=very unsuccessfully, 2=fairly unsuccessfully, 3=neither unsuccessfully nor successfully, 4=fairly successfully, and 5=very successfully).

The average rating was between 3.0 and 4.0, that is, between 'neither unsuccessfully nor successfully' and 'fairly successfully'.

The following table summarises the project means for the above question. Within the projects, only Self-harm clinicians indicated that they felt 'fairly successfully' engaged in quality improvement with the EwQI in both round 1 and round 2 (x>4.0).

Perceived importance of engaging clinicians in QI:

Mean, round 1 (n=97): 4.50; standard deviation: 1.06
Mean, round 2 (n=54): 4.77; standard deviation: 0.43

Mean, round 2, by project:
NCROP (n=12): 4.73
SNAP (n=5): 4.80
IBD (n=12): 4.75
PEARLS (n=12): 4.82
POMH-UK (n=7): 4.71
Self-harm (n=6): 4.83

Success of the EwQI project in engaging clinicians in QI:

Mean, round 1 (n=97): 3.74; standard deviation: 1.11
Mean, round 2 (n=54): 3.75; standard deviation: 0.94

Mean, round 2, by project:
NCROP (n=12): 3.64
SNAP (n=5): 4.00
IBD (n=12): 3.50
PEARLS (n=12): 3.83
POMH-UK (n=7): 3.71
Self-harm (n=6): 4.17
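The overall round 2 mean for this question can be checked by weighting each project's mean by its number of respondents. A minimal sketch in Python, using the round 2 figures reported for each project (any small discrepancy would be down to rounding in the published means):

```python
# Round 2 per-project means and respondent counts for the
# "success of the EwQI in engaging clinicians" question.
project_means = {
    "NCROP": (3.64, 12),
    "SNAP": (4.00, 5),
    "IBD": (3.50, 12),
    "PEARLS": (3.83, 12),
    "POMH-UK": (3.71, 7),
    "Self-harm": (4.17, 6),
}

def pooled_mean(groups):
    """Sample-size-weighted mean across groups of (mean, n) pairs."""
    total = sum(mean * n for mean, n in groups.values())
    n_total = sum(n for _, n in groups.values())
    return total / n_total

print(round(pooled_mean(project_means), 2))  # 3.75
```

The pooled value of 3.75 over the 54 round 2 respondents matches the reported overall round 2 mean.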


Respondents were invited to elaborate on their answers on whether they felt their EwQI project had successfully engaged them in quality improvement, or to comment on the average score. One respondent felt that they were already engaged in a number of projects and did not find the question valid. Another respondent felt that an excellent service was already being provided before the EwQI project. Another respondent stated 'the passage of time has reduced my feeling of engagement in this quality improvement programme.' Another respondent stated:

The EwQI has been a steep learning curve, and leading the project had probably been the hardest project that I have ever undertaken – but also the most rewarding. I have 30 years of experience in research and have been a principal investigator on laboratory and clinical projects, including RCTs. The challenges presented by a quality improvement project are quite different and, I think, much greater. It is therefore baffling that quality improvement projects still have such low regard in academic circles.

Change of attitudes towards the value of engaging clinicians in QI

Clinicians were asked to rate on a scale of 1 to 5 whether their attitude towards the value of engaging clinicians in QI had changed due to their involvement in their EwQI project (where 1=not at all, 2=a little, 3=moderately, 4=considerably, and 5=extremely).

The average rating was between 2.0 and 3.0, that is, between 'a little' and 'moderately'.

The following table summarises the project means for the above question. Within the projects, the most significant change was in the SNAP project, with a mean of 3.20, or 'moderately'.

Respondents were invited to elaborate on their answers on whether their attitude towards the value of engaging clinicians in QI had changed owing to their involvement in their EwQI project, or to comment on the average score. One respondent said they had always valued involving clinicians and so, from that point of view, their attitude had become slightly more positive. Another respondent felt that the EwQI project had not changed their view on the importance of engaging clinicians, but it had 'certainly provided solid evidence' to support their pre-existing beliefs. One respondent said they did not understand the question.

Conclusions

In order to explore the views of clinicians participating in the EwQI on professional involvement in QI, we undertook a Delphi survey. This covered a small sample of clinicians from six EwQI projects (n=97 in round 1 and n=53 in round 2). In summary, we found that:
– Overall, clinicians in all six projects perceived the role of clinician engagement in successful QI to be fairly important, tending towards very important.
– The top activities identified as improving quality through clinicians' involvement were: providing training for clinicians and managers, and keeping clinicians up to date through the development and promulgation of clinical practice guidelines. Participants in three out of the six projects ranked taking part in regular formal discussions with colleagues as one of the three most important activities for engaging clinicians, but this was not one of the six top priorities of the overall population.

– With regard to providing support for clinical engagement in QI, the three most effective ways

Change of attitude towards the value of engaging clinicians in QI:

Mean, round 1 (n=97): 2.37; standard deviation: 1.28
Mean, round 2 (n=54): 2.36; standard deviation: 1.06

Mean, round 2, by project:
NCROP (n=12): 1.91
SNAP (n=5): 3.20
IBD (n=12): 2.58
PEARLS (n=12): 2.17
POMH-UK (n=7): 2.71
Self-harm (n=6): 2.00


were seen to be: securing good inter-professional relationships, communicating candidly and often about QI, and involving patient organisations. But there was also wide divergence of opinion. On the question of how best to support clinical engagement, most participants identified at least one effective approach which was not among those ranked as the six highest by the overall population.

– The top barrier to engaging clinicians in QI was identified as the limited number of staff available for QI. Other 'small' obstacles included the lack of widely shared knowledge and leadership. Lack of financial rewards, lack of performance targets, use of financial sanctions and poor protocols were ranked as 'minimal' obstacles to engaging clinicians.

– Greater standardisation of professional practice, more equitable care, greater quality control and improved patient satisfaction were perceived as the most important consequences of engaging clinicians in QI.

– The Delphi also sought clinicians' views about their attitudes towards the value of engaging clinicians in QI. Clinicians were asked to list the three most important activities they viewed as QI. In total, clinicians listed 64 activities. The four activities that were perceived as the most important were clinical audit (cited 58 times), engaging with patients/service users (cited 23 times), communication (cited 21 times) and continuing medical education (cited 18 times).

– Clinicians in all six projects perceived clinicians' engagement as at least 'fairly important', trending towards 'very important'.

– Clinicians also rated the success of their EwQI project in engaging the respondent in QI on a five-point scale. The average rating was between 'neither unsuccessfully nor successfully' and 'fairly successfully'. Various reasons for this were reported by respondents, including already being engaged in QI projects, and that an excellent service was already being provided before the EwQI. However, one respondent said, 'EwQI has been a steep learning curve, and leading the project had probably been the hardest project that I have ever undertaken – but also the most rewarding'.

– Attitudes to the value of engaging clinicians in QI did not change dramatically as a result of involvement in the EwQI. Within the projects, the most significant change was in SNAP, where the mean score was 'moderately'. When respondents were invited to elaborate on their answers, one respondent pointed out that they had always valued involving clinicians and that, from that point of view, their positive attitude had only increased slightly.

Appendix G

Measures of clinical quality

It was a requirement of the Health Foundation funding that the teams:

Identify a clinical problem or deficiency in care, for which there is a scientific evidence base and/or consensual professional guidelines. The clinical area of interest must have reliable data, as well as objective and credible measures of clinical process and/or outcome. The guidelines or standards may be selected from an authoritative national or international source – for example, a royal college, specialist society, National Service Framework, the National Institute for Clinical Excellence (NICE) or the Scottish Intercollegiate Guidelines Network (SIGN) – or the clinical/research literature.

This appendix describes the specific bases on which the measures of clinical quality used in the EwQI projects were selected by each project team. The appendix also explains how these were used by the teams to develop the standards used in each project. It supports tables 7–17 in chapter 3, which summarise each project's achievements in terms of measurable patient outcomes.

Colorectal cancer

Evidence base: 2004 NICE Guidelines for cancer services on improving outcomes in colorectal cancer.

What the proposal said: The original proposal (2004) noted that these guidelines would be used to benchmark performance across institutions on six aspects of colorectal cancer management, endpoints of the process of care or outcomes:
– access to appropriate services
– evidence that all patients found to have colorectal cancer are referred to the colorectal cancer multidisciplinary team meeting (MDT)
– accuracy of diagnosis
– surgery and histopathology
– palliative therapy and advanced disease.

Subsequent thoughts and developments: In 2006, the team noted that:

The major difficulty with cancer surgery is that the real endpoints of interest are long-term: namely 5-year overall, cancer-specific, and disease-free survival. A newly appointed consultant surgeon would, quite evidently, have to wait at least five years before being able to quote any 5-year survival figures, but when considering procedure-specific results, a surgeon performing 20 rectal cancer excisions per year would have to wait 10 years before reporting long-term outcomes on a sample of 100 patients ... As yet there is no mechanism by which high quality data from national audit can be linked to long-term outcomes, and this, combined with the significant time-lag involved, means that proxy, or surrogate, measures of surgical outcomes are required to drive and monitor quality improvement in the short to medium term. Such markers have previously been identified by agencies such as NICE (National Institute of Clinical Excellence), and many have been previously evaluated by NBOCAP. Markers include those with an obvious short-term impact, such as 30-day mortality rates, or effects on patients' quality of life (including permanent colostomy rates following rectal cancer excision), while others are more closely linked to disease recurrence and long-term survival, such as circumferential resection margin involvement (CRMI) rates following rectal cancer excision.134

This list coincides with the measures in table 7, which all fall within the fourth of the broad categories identified above, that is, surgery and histopathology. The team told us that they had found it hard to obtain robust data in the other areas.

Self-harm


Evidence base: 2004 guidelines Self harm: The short-term physical and psychological management and secondary prevention of self harm in primary and secondary care, produced by NICE and the National Collaborating Centre for Mental Health (managed by the Royal College of Psychiatrists' Research Unit, in which the Self-harm project was also based).

What the proposal said: The original proposal (2004) noted that these guidelines ('the most rigorously developed body of evidence-based knowledge about the care of people who self-harm') would be the main source of quality standards for the project, and that they included standards related to:
– factors that impact directly on the user experience of services
– service planning and provision: standards in this area are supplemented by the recommendations of the Royal College of Psychiatrists and the British Association for Accident and Emergency Medicine (Royal College of Psychiatrists, 1994)
– staff training
– the immediate medical management of self-poisoning and self-cutting by both ambulance and emergency department staff
– the assessment of people who self-harm, including triage assessment in the emergency department, risk assessment and psychosocial assessment
– referral, admission and discharge following self-harm
– special issues relating to young people and older people who have self-harmed.

The project team noted that it is the testimony of service users that provides the most compelling evidence for the deficit in the quality of services for people who self-harm, and the guidelines concluded that improving staff knowledge and attitudes is the key to better services.

Subsequent thoughts and developments: In 2006, the team produced a set of quality standards for health professionals135 that drew on a wide range of sources in addition to the NICE guidelines. These sources included:
– a review of documents from relevant professional bodies, such as the Royal College of Nursing, the Royal College of Psychiatrists, the Faculty of Accident and Emergency Medicine and the Joint Royal Colleges Ambulance Liaison Committee
– a review of Department of Health policy and recommendations, including the Emergency Care Checklists
– a written consultation exercise with key stakeholder groups (these included healthcare professionals from emergency care, mental health and ambulance services, service users, voluntary organisations and other experts in the field)
– the ideas and discussions of a teleconference on standards held with service users, researchers and carers
– consultation with experts from other QI programmes.

POMH

Evidence base/what the proposal said: The original proposal noted that there were a number of sources of authoritative guidance about the prescribing of psychotropic medication, including:
– the British National Formulary, which lists the indications for use of these drugs, recommends dose ranges and precautions to be taken when prescribing – including their use in pregnancy – and lists adverse interactions with other drugs
– systematic reviews of the effectiveness of psychotropic medications – more than 100 reviews of pharmacotherapy of psychiatric disorders completed by the Cochrane Collaboration
– national evidence-based clinical guidelines – six published or draft NICE guidelines and four technology appraisal guidance documents containing recommendations about prescribing of psychotropic medication
– consensus-based recommendations produced by professional bodies in the UK and other English-speaking countries.

Subsequent thoughts and developments: Current practice is described on the POMH website, which points out that observatory members can propose topics for consideration by the POMH-UK steering group, who then use the eight criteria below (in no particular order) to prioritise nominated topics. Those shortlisted are then voted on by members for further development:
– relevant to implementation of particular NICE guideline(s)


– fulfils criterion of high cost, high volume or high risk
– seen as a clinical priority for trusts nationally by clinicians
– seen as a clinical priority for trusts nationally by service users
– likely variation in practice across UK trusts
– clear standards can be formulated that relate to prescribing practice
– practical and feasible to collect relevant audit data
– change in practice that achieves the standards is likely to have a positive impact on clinical care and clinical outcomes.

NCROP

Evidence base: 2004 NICE guidelines Management of chronic obstructive pulmonary disease in adults in primary and secondary care.

What the proposal said: The original proposal (2004) noted that these guidelines would be used as the basis for the standards used in the project.

Subsequent thoughts and developments: In early 2007, the online NCROP News reported that:

When the NCROP was initially conceived, it was agreed that the study would focus on three key indicators of quality in the provision of services for people with COPD – namely: non-invasive ventilation, pulmonary rehabilitation and early discharge schemes. However, anecdotal evidence, along with lessons learned during the pilot phase of the study, suggests that many hospitals are now providing these services and, in order to measure changes in practice, a fourth indicator could be added. After much debate within the NCROP Implementation Group, the provision of oxygen services was identified as the extra indicator and, in addition, the project team will collect information about the provision of palliative care services for people with COPD. As a number of guidelines already exist in relation to the four indicators, these have been used, along with expert advice from the NCROP Steering Group members, to develop the standards by which practice will be measured under the auspices of the NCROP.

The 2009 NCROP final report described this process as follows:

The Steering Group selected four particular aspects of COPD care to be examined by each review, based on the strength of the literature, the variability shown in the 2003 audit and the likely importance to chronic disease management (group consensus).

These were:
– non-invasive ventilation (NIV)
– oxygen provision out of hospital
– early discharge schemes
– pulmonary rehabilitation.

PoISE

Evidence base:
– The Association of Anaesthetists of Great Britain and Ireland adopted the American Society of Anaesthesiologists' recommendations (1999) and published a brief chapter on fasting policies (2001), recommending that each trust should develop its own written policies.
– A Cochrane systematic review on 'Pre-operative fasting for adults to prevent peri-operative complications' was published in 2003. This used accepted NICE methodology, framed around AGREE principles, and was supported by a multidisciplinary peri-operative fasting guideline group, with representation from the Royal Colleges of Anaesthetists, Midwifery, and Paediatrics and Child Health.

What the proposal said: The proposal identified seven outcomes to be evaluated. The main measure was:
– duration of fasting pre- and post-operatively. The key recommendations in the guideline related to reducing the time without eating and drinking pre- and post-surgery.

The other six related to:
– guideline recommendations
– patients
– practitioners
– process
– organisational issues
– economic issues.


RCPE (EPI-SNAP and SNAP-CAP)

Evidence base:
– Scottish Intercollegiate Guidelines Network (2003). Diagnosis and management of epilepsy in adults. www.sign.ac.uk/guidelines/fulltext/70/index.html
– Scottish Intercollegiate Guidelines Network (2002). Community management of lower respiratory tract infection in adults. www.sign.ac.uk/guidelines/fulltext/59/index.html
– British Thoracic Society (2001). 'Guidelines for the management of community-acquired pneumonia'. Thorax 56 (suppl IV). 2004 update: www.brit-thoracic.org.uk/docs/MACAPrevisedApr04.pdf

What the proposal said: The original proposal (2004) noted that a multi-professional steering group for each condition (epilepsy and CAP) would be set up to confirm the standards to be audited, agree data definitions and define the sampling framework. Audit standards developed by the steering groups would be presented for agreement to two national meetings, one for each condition.

EPI-SNAP

What the proposal said: The original proposal said that the standards would include:
– time from first seizure to first secondary care appointment
– proportion of patients seeing a neurologist or other recognised epilepsy specialist at their first secondary care appointment
– time from first seizure to establishing a working diagnosis
– time from first seizure to decision on long-term treatment
– proportion of patients undergoing cranial MR imaging (segregated data for primary generalised and focal epilepsies)
– provision of patient information while waiting to see a specialist and on diagnosis.

Subsequent thoughts and developments: A July 2007 report to the Health Foundation confirmed that EPI-SNAP began with the following main aims:
– Reduce waiting times for first seizure clinics. Under this aim there would be one main indicator – giving driving advice:
- The project team aimed to improve diagnosis in order to reduce waiting times.
- The team identified giving driving advice as the means to encourage better and more targeted referral, and therefore more considered diagnosis. DVLA advice was that if a driver is referred to a first seizure clinic and the diagnosis was epilepsy, then the advice should be not to drive.
- Doctors were unwilling to do this owing to the impact on the lives of their patients; driving advice was rarely given, so there was plenty of room for improvement.
– Introduce standardised elements of the GP-led annual review. Under this aim, the main indicator would be the provision of information generally:
- The project team aimed to audit the provision of information using a group of information domains from the SIGN guidelines and an online package that allowed GPs to conduct reviews more efficiently.

SNAP-CAP

What the proposal said: The original proposal said that the standards would include:
– the time between admission and the administration of the first dose of antibiotics (the guidelines suggest that the first dose should be administered within four hours of admission)
– assessment of severity using the CURB-65 score (confusion, blood urea, respiratory rate, low diastolic and/or systolic blood pressure, and age). (This relates to outcome, and the guidelines have suggested a need for increased intensity of therapy and monitoring in patients with higher severity scores.)
– the time to senior review and whether a decision on further treatment (specifically, ITU referral/care) in the event of deterioration has been made prospectively
– use of antibiotic regimens according to BTS guidelines and the use of IV antibiotics for severe CAP
– oxygen usage
– provision of patient information on the process of care.
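As a rough illustration, the CURB-65 severity assessment mentioned above assigns one point per criterion met. A minimal sketch in Python, using the commonly published thresholds (urea > 7 mmol/L, respiratory rate ≥ 30/min, systolic BP < 90 or diastolic BP ≤ 60 mmHg, age ≥ 65); the project's exact operational definitions may have differed:

```python
def curb65(confusion, urea_mmol_l, resp_rate, systolic_bp, diastolic_bp, age):
    """CURB-65 severity score: one point each for Confusion,
    Urea > 7 mmol/L, Respiratory rate >= 30/min, low Blood pressure
    (systolic < 90 or diastolic <= 60 mmHg), and age >= 65."""
    return sum([
        confusion,
        urea_mmol_l > 7,
        resp_rate >= 30,
        systolic_bp < 90 or diastolic_bp <= 60,
        age >= 65,
    ])

# A mild presentation scores low; a severe one scores high.
print(curb65(False, 5.0, 18, 120, 80, 40))  # 0
print(curb65(True, 9.0, 32, 85, 55, 78))    # 5
```

Higher scores indicate greater severity; as the proposal notes, the guidelines suggested more intensive therapy and monitoring for patients with higher scores.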


Subsequent thoughts and developments: A July 2007 report to the Health Foundation confirmed that the SNAP-CAP steering group had formalised the following aims and planned to deliver them using a care-bundle approach:
– to increase the survival rate of patients diagnosed with CAP
– to equalise mortality rates between weekday and weekend admissions
– to reduce related illness/infection
– to increase equality of service from centre to centre across Scotland
– to ensure timely and accurate treatment (as defined by BTS guidelines)
– to reduce the number of days spent in hospital by CAP patients
– to provide appropriate information to patients and family/carers at the appropriate time
– to 'join up' care between primary and acute care
– to improve the appropriateness of antibiotic prescription.

There was extensive debate within the project steering group and with pilot sites about the scope of the care bundle and the standards to be adopted. The latter finally included:
– oxygenation to be assessed during the first four hours of care
– CURB-65 score to be derived and measured
– mild cases to be treated at home with antibiotics
– severe cases to be admitted
– antibiotics to be given within four hours of admission.

IBD

Evidence base: The British Society of Gastroenterology produced national evidence-based guidelines covering all the clinical aspects of management of IBD in 2004. This was accompanied (from the same source) by a 'Service and Standards of Care' document that set out the requirements that should be in place to deliver a first-class service.

What the proposal said: The project team noted that the latter standards were largely consensus-based because, as for most chronic conditions (including those covered by NICE guidelines), the evidence on care delivery was scanty.

The original proposal noted that:

The management of IBD is complex and there are many opportunities for errors that can result in increased morbidity or even mortality. These include delayed diagnosis – eg a median of 5–12 weeks for Crohn's disease, with 5% taking more than two years – delay to institution of optimal high dose corticosteroid therapy in severe colitis, delay to colectomy in non-responding severe colitis, failure to recognise presence of serious intra-abdominal sepsis in steroid-treated patients with Crohn's disease, failure to monitor blood count in patients treated with the immunosuppressives azathioprine or mercaptopurine, and failure to monitor bone density (osteoporosis) in steroid-treated patients. There is evidence that practice falls short of both clinical and organisational standards. ... Issues that are likely to be amenable to intervention include delays in medical treatment and colectomy in ulcerative colitis (UC), and delayed surgical treatment and post-op mortality in UC, both of which are associated with increased mortality in younger patients. The availability of an appropriate multidisciplinary team that is working effectively would be a potential intervention. The audit would hope to highlight those patients on inappropriate steroid therapy and encourage strategies to stop the steroids. It will also address monitoring of and prevention of iatrogenic complications such as osteoporosis and immunosuppression-related bone-marrow toxicity. Whereas the audit of ulcerative colitis will be confined to issues relevant to in-patient episodes, the audit of Crohn's disease patients will also include assessment of outcomes and process relating to previous out-patient management. For both conditions, death within 30 days of admission will be a primary outcome measure, but various secondary outcome measures, 20 for ulcerative colitis and 24 for Crohn's disease, will be used to cover a range of issues, including provision of service and standard of care. The reproducibility of these measures will be assessed during the first six months of the audit and those shown to have good reproducibility will be used for the main audit. This would be by some margin the largest study of IBD and it is hoped that it would shed useful additional observational insights into the causation and treatment options.

Since there were limited local audit data and no history of large national audit, the original intention was that the first stage of the project would define the elements of the organisation and clinical care to be measured:
– The clinical indicators for the process and outcome of care for people with IBD would be defined from the published guidelines, augmented by consensus methods.
– The key 'organisational' indicators for assessing the IBD service would be defined from:


- the recently published BSG guidelines and statement on service provision
- experience from previous RCP national audits.

PEARLS

Evidence base:
– Royal College of Obstetricians and Gynaecologists guidance 2004 (written by the applicants)
– a Cochrane systematic review on the use of absorbable synthetic material (1999)
– a Cochrane systematic review on the use of continuous subcuticular techniques for suturing (1998).

What the proposal said: 'Studies of maternal morbidity have identified several outcomes associated with perineal trauma, some of which will be used to measure improvements in maternal health following implementation of the intervention. However, to ensure outcomes of the proposed study also reflect women's perspectives on improvements in the quality and experience of their care, consumer representatives will be surveyed using a Delphi process.'

Subsequent thoughts and developments: The project team reported to the evaluation team in April 2007 that a Delphi study of patients undertaken in 2006 identified the following outcomes:
– fear of infection
– failure of wound healing
– pain as a result of perineal tearing and repair (although this was expected)
– the importance of being free from pain three months after birth.

These results were later confirmed by further Delphi studies in the UK and in Brazil.


References

1 Berwick D (2008). ‘The Science of Improvement’. JAMA, vol 299, no 10, pp 1182–84.

2 Various lists of QI approaches have been given in the literature. See, for example: Spencer E and Walshe K (2009). ‘National quality improvement policies and strategies in European healthcare systems’. Qual Saf Health Care, vol 18, suppl 1, pp i22–i27; and USAID (June 2008). ‘The Improvement Collaborative: An Approach to Rapidly Improve Health Care and Scale Up Quality Services’. USAID Health Care Improvement Project, available at: http://pdf.usaid.gov/pdf_docs/PNADM495.pdf, accessed 4 December 2009.

3 Department of Health (2000). The NHS Plan: a Plan for Investment, a Plan for Reform. London: The Stationery Office.

4 Department of Health (2001). Assuring the Quality of Medical Practice: Implementing Supporting Doctors Protecting Patients. London: Department of Health.

5 Department of Health (2004). The NHS Improvement Plan: Putting People at the Heart of Public Services. London: The Stationery Office.

6 Professor the Lord Darzi of Denham KBE (June 2008). High quality care for all: NHS Next Stage Review final report. London: Department of Health.

7 The Scottish Executive (2006). Delivering a Healthy Scotland: Meeting the Challenge. Edinburgh: The Scottish Executive.

8 Department of Health (2006). The future regulation of health and adult social care in England. London: Crown Copyright.

9 General Medical Council (2009). Tomorrow’s Doctors: Outcomes and standards for undergraduate medical education. General Medical Council, available at: www.gmc-uk.org/education/documents/GMC_TD_2009.pdf, accessed 14 September 2009.

10 Department of Health (2007). World class commissioning: vision. Department of Health, available at: http://www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/documents/digitalasset/dh_080952.pdf, accessed 14 September 2009.

11 Bunker JP (2001). ‘The role of medical care in contributing to health improvements within societies’. International Journal of Epidemiology, vol 30, pp 1260–63.

12 McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A et al (2003). ‘The quality of healthcare delivered to adults in the United States’. New England Journal of Medicine, vol 348, pp 2635–45.

13 Seddon ME, Marshall MN, Campbell SM and Roland MO (2001). ‘Systematic review of studies of quality of clinical care in general practice in the UK, Australia and New Zealand’. Qual Health Care, vol 10, pp 152–58.

14 Steel N, Bachmann M, Maisey S, Shekelle P, Breeze E, Marmot M, Melzer D (2009). ‘Self-reported receipt of care consistent with 32 quality indicators: national population survey of adults aged 50 or more in England’. British Medical Journal, vol 337, a957.

15 Institute of Medicine (2001). Crossing the Quality Chasm: A New Health System for the 21st Century. Washington DC: National Academy Press, available at: www.iom.edu/Object.File/Master/27/184/Chasm-8pager.pdf, accessed 24 August 2009.

16 Institute of Medicine website, at: www.iom.edu/?id=19174, accessed 21 August 2007.

17 Bate SP, Robert G and McLeod H (2002). Report on the ‘breakthrough’ collaborative approach to quality and service improvement within four regions of the NHS. A research-based investigation of the orthopaedic services collaborative within the Eastern, South and West, South East and Trent Regions. Birmingham: Health Services Management Centre, University of Birmingham.

18 Bate P, Mendel P, Robert G (2008). Organizing for Quality: The improvement journeys of leading hospitals in Europe and the United States. Oxford: Radcliffe Publishing; and Mendel P (2008). ‘Organizing for Quality: Inside the “black box” of healthcare improvement in Europe and the United States’. RAND Research Brief, available at: http://www.rand.org/pubs/research_briefs/2008/RAND_RB9329.pdf, accessed 24 February 2010.

19 As cited in ‘An evaluation of the Health Foundation’s Engaging with Quality Initiative: joint proposal from RAND Europe and HERG, Brunel University’. Prepared for the Health Foundation (2005). (See also appendix C of this report.)

20 Soper B and Hanney S (2007). ‘Lessons from the evaluation of the UK’s NHS R&D Implementation Methods Programme’. Implementation Science, vol 2, no 7, pp 2–7.

21 Nutley SM and Davies H. ‘Making a reality of evidence-based practice’ in Davies HTO, Nutley SM and Smith PC, eds (2000). What works? Evidence-based policy and practice in public services, ch 15, pp 317–50. Bristol: The Policy Press.

22 Grimshaw JM, Thomas R, MacLennan G, Fraser C, Ramsay C and Vale L et al (2004). ‘Effectiveness and efficiency of guideline dissemination and implementation strategies’. Health Technology Assessment, vol 8, no 6.


23 Grimshaw JM, Shirran L, Thomas R, Mowatt G, Fraser C and Bero L et al (2001). ‘Changing provider behaviour. An overview of systematic reviews of interventions’. Medical Care, vol 39, no 8, suppl 2.

24 Leatherman S and Sutherland K (2003). The Quest for Quality in the NHS: A midterm evaluation of the ten year quality agenda. The Nuffield Trust, London: The Stationery Office.

25 Pawson R and Tilley N (1997). Realistic Evaluation. London: Sage Publications.

26 Berwick DM (2008). ‘The Science of Improvement’. JAMA, vol 299, no 10, pp 1182–84.

27 Berwick DM (2005). ‘Broadening the view of evidence-based medicine’. Quality and Safety in Health Care, vol 14, pp 315–16.

28 Soper B, Buxton M, Hanney S, Oortwijn W, Scoggins A, Steel N and Ling T (2008). ‘Developing the protocol for the evaluation of the Health Foundation’s Engaging with Quality Initiative – an emergent approach’. Implementation Science, vol 3, no 46.

29 van Bokhoven MA, Kok G and van der Weijden T (2003). ‘Designing a quality improvement intervention: a systematic approach’. Qual Saf Health Care, vol 12, pp 215–20.

30 Byng R, Norman I and Redfern S (2005). ‘Using realistic evaluation to evaluate a practice-level intervention to improve primary healthcare for patients with long-term mental illness’. Evaluation, vol 11, pp 69–93.

31 Hurteau M, Houle S and Mongiat S (2009). ‘How legitimate and justified are judgements in programme evaluation?’ Evaluation, vol 15, no 3, pp 307–19.

32 Weiss CH. ‘Nothing as practical as a good theory: exploring theory-based evaluation for comprehensive community initiatives for children and families’ in Connell J, Kubisch AC, Schorr LB and Weiss CH, eds (1995). New Approaches to Evaluating Community Initiatives: Concepts, Methods and Contexts. Washington DC: The Aspen Institute, pp 66–67.

33 Alderson P (1998). ‘Theories in healthcare and research: the importance of theories in health care’. BMJ, vol 317, pp 1007–10.

34 Davies P, Walker A, Grimshaw G (2003). ‘Theories of behaviour change in studies of guideline implementation’. Annual Meeting of the International Society of Technology Assessment in Health Care, vol 19, abstract no 310.

35 Greenhalgh T, Humphreys C, Hughes J, Macfarlane F, Butler C, Pawson R (2009). ‘How do you modernise a health service? A realist evaluation of whole-scale change in London’. Milbank Quarterly, vol 87, no 2, pp 391–416.

36 Ryan P (2009). ‘Integrated theory of health behaviour change: background and intervention development’. Clinical Nurse Specialist, vol 23, no 3, pp 161–70; quiz 171–72.

37 Mayne J (2008). ‘Contribution analysis: an approach to exploring cause and effect’. International Laboratory Accreditation Corporation (ILAC) Brief 16, available at: www.cgiar-ilac.org/files/publications/briefs/ILAC_Brief16_Contribution_Analysis.pdf, accessed September 2009.

38 Mayne J (2001). ‘Addressing attribution through contribution analysis: using performance measures sensibly’. The Canadian Journal of Program Evaluation, vol 16, no 1, pp 1–24.

39 Langley GJ et al (1996). The Improvement Guide: a practical approach to enhancing organisational performance. San Francisco, California: Jossey-Bass.

40 Colorectal Cancer SER (2009). All the quotes in this and subsequent chapters are from the relevant project’s final self-evaluation report (SER) unless otherwise indicated.

41 Brady M, Kinn S, Stuart P (2003). ‘Preoperative fasting for adults to prevent peri-operative complications’. Cochrane Database of Systematic Reviews, issue 4.

42 Kitson A, Rycroft-Malone J, Harvey G, McCormack B, Seers K and Titchen A (2008). ‘Evaluating the successful implementation of evidence into practice using the PARiHS framework: theoretical and practical challenges’. Implementation Science, vol 3, no 1.

43 Institute for Healthcare Improvement (2009). ‘IHI quality improvement resources: a model for accelerating improvement’. Available at: www.healthychild.ucla.edu/First5CAReadiness/materials/siteInfra/IHIQualityImprovementResources.pdf, accessed 23 September 2009.

44 Hibbard J (2003). ‘Engaging healthcare consumers to improve the quality of care’. Medical Care, vol 41, no 1, pp 61–70.

45 Clark M, Glasby J, Lester H (2004). ‘Case for change: user involvement in mental health services and research’. Research Policy and Planning, vol 22, no 2.

46 Kazandjian V (2002). ‘Can the sum of projects end up in a program? The strategies that shape quality of care research’. Quality and Safety in Health Care, vol 11, no 3, pp 212–13.

47 Freeman C, Todd C, Camilleri-Ferrante C, Laxton C, Murrell P, Palmer CR, Parker M, Payne B and Rushton N (2002). ‘Quality improvement for patients with hip fracture: experience from a multi-site audit’. Quality and Safety in Health Care, vol 11, no 3, pp 239–45.

48 Alderson P (1998). ‘Theories in health care and research: the importance of theories in healthcare’. BMJ, vol 317, no 7164, pp 1007–10.

49 Campbell NC, Murray E, Darbyshire J, Emery J et al (2007). ‘Designing and evaluating complex interventions to improve health care’. BMJ, vol 334, pp 455–59.

50 We use outcomes in a broad sense to include desired changes in process of care and not simply changes in patients’ health state.

51 The Health Foundation (2004). Engaging with Quality Initiative – call for outline proposals.

52 ‘It is also well-recognised that there are wide year-on-year variations in results, particularly the post-operative mortality, and that it is therefore unsafe to make judgements on the quality of care provided by clinical units on the basis of a single year’s results. Safe and reliable measurement of units’ safety and the quality they are providing will only emerge over 3–4 years of accurate high quality data collection, both from individual units and across the whole Audit to establish truly national standards.’ (NBOCAP Report, 2007)

53 Roberts CM, Buckingham RJ, Pursey NA, Stone RA and Lowe D (2009). The National Chronic Obstructive Pulmonary Disease Resources and Outcomes Project (NCROP) Final Report. Clinical Effectiveness and Evaluation Unit, Royal College of Physicians of London, British Thoracic Society and British Lung Foundation.

54 For this comparison we believe that the Mann-Whitney tests used by the project team are the correct ones.


55 PoISE (May 2009). Duration of fasting findings report, pp 1–2.

56 The PoISE data synthesis report provides these findings in detail.

57 SNAP (2009). Scottish National Audit Project: final report. Royal College of Physicians of Edinburgh.

58 EPI-SNAP SER.

59 Evaluation of the UK IBD audit action planning visits (September 2009).

60 SNAP (June 2009). Final report.

61 The Colorectal Cancer project was a continuation of an existing audit, and it therefore had an existing publication record before the Health Foundation funding. Only papers published after the Health Foundation project started are included here.

62 Available at: www.rcplondon.ac.uk/clinical-standards/ceeu/Current-work/ncrop/Pages/Overview.aspx, accessed September 2009.

63 PoISE (June 2005). Data Synthesis Report, p 105.

64 But the extent of these changes is not quantified in the project SER, nor is their significance discussed.

65 Tekkis P, Smith J, Constantinides V, Thompson M and Stamatakis J (2005). Report of the National Bowel Cancer Audit Project ‘Knowing your results’. The Association of Coloproctology of Great Britain and Ireland.

66 The National Bowel Cancer Audit Project (2007). The Association of Coloproctology of Great Britain and Ireland.

67 Bero LA, Grilli R and Grimshaw JM et al (1998). ‘Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings’. BMJ, vol 317, pp 465–68.

68 Walshe K and Freeman T (2002). ‘Effectiveness of quality improvement: learning from evaluations’. Quality and Safety in Health Care, vol 11, pp 85–87.

69 Powell AE, Davies HTO and Thomson RG (2003). ‘Using routine comparative data to assess the quality of health care: understanding and avoiding common pitfalls’. Quality and Safety in Health Care, vol 12, pp 122–28.

70 Brown SES, Chin MH and Huang ES (2007). ‘Estimating costs of quality improvement for outpatient healthcare organisations: a practical methodology’. Quality and Safety in Health Care, vol 16, pp 248–51.

71 Ovretveit J (2009). Does improving quality save money? London: The Health Foundation.

72 Wilson T, Berwick DM, Cleary P (2003). ‘What do collaborative projects do? Experience from seven countries’. Jt Comm J Qual Saf, vol 29, no 2, pp 85–93.

73 Davies H, Powell A and Rushmer R (2007). Healthcare professionals’ views on clinician engagement in quality improvement: a literature review. London: The Health Foundation.

74 Plsek P and Bevan H (2003). IHI Forum 04. Available at: www.directedcreativity.com/pages/SpiderDiagram.pdf, accessed 4 September 2009.

75 The three issues listed here are taken from a survey of GPs designed by the REST project in the Health Foundation-funded EWQPC scheme.

76 Rivas C, Taylor S, Clarke A (2008). The final report on the qualitative sub-study of the national COPD resources and outcomes project. Barts and the London.

77 See also section 2.2.4.

78 The Health Foundation (February 2005). ITT for the external EwQI evaluation.

79 Coulter A, Ellins J (2006). Patient-focused interventions: a review of the evidence. QQUIP, Picker Institute Europe, available at: www.health.org.uk/publications/research_reports/patientfocused.html, accessed 22 September 2009.

80 Nilsen E, Myrhaug H, Johansen M, Oliver S and Oxman A (2006). ‘Methods of consumer involvement in developing healthcare policy and research, clinical practice guidelines and patient information material’. Cochrane Database of Systematic Reviews, issue 3.

81 Schunemann H, Fretheim A and Oxman A (2006). ‘Improving the use of research evidence in guideline development: integrating values and consumer involvement’. Health Research Policy and Systems, vol 4, no 22.

82 Davies E and Cleary PD (2005). ‘Hearing the patient’s voice? Factors affecting the use of patient survey data in quality improvement’. Qual Saf Health Care, vol 14, pp 428–32.

83 Williamson C (2007). ‘How do we find the right patients to consult?’ Quality in Primary Care, vol 15, pp 195–99.

84 Barnard A et al (2006). The PC11 report summary: an evaluation of consumer involvement in the London Primary Care Studies Programme. Peninsula Medical School, available at: www.invo.org.uk/pdfs/Summary_of_PC11Report1.pdf, accessed 27 September 2009.

85 Tarpey M (2006). Why people get involved in health and social care research. Involve Support Unit, Department of Health, available at: www.invo.org.uk/pdfs/whypeoplegetinvolvedinresearch_August2006.pdf, accessed 27 September 2009.

86 INVOLVE (2006). Guide to Reimbursing and Paying Members of the Public Actively Involved in Research. Department of Health, available at: www.invo.org.uk/pdfs/Payment_Guidefinal240806.pdf, accessed 27 September 2009.

87 Equally, in schemes such as the EwQI, which involved several projects with the potential for sharing views across the scheme, service users need to have an adequate and appropriate understanding of the wider scheme if they are to share their expertise and experiences effectively with others.

88 See, for example, the training developed as part of the Royal College of Psychiatrists’ QI programmes.

89 For example, on shared decision making between doctors and patients, Coulter and Ellins say that ‘Communication skills training should be the main mechanism by which clinicians learn about and gain competencies in the principles and practice of shared decision making, but the extent to which it is explicitly included in medical curricula is not known. There is evidence that such training can be effective in improving communication skills.’ Also that ‘Coaching for patients in communication skills and question prompts can have a beneficial effect on knowledge and information recall. These interventions also empower patients to become more involved in decisions…’ (Op cit).

90 EwQI Self-harm project (2007). Service user handbook.

91 INVOLVE (2007). Good practice in active public involvement in research. Available at: www.invo.org.uk/pdfs/GoodPracticeD3.pdf, accessed 27 September 2009.


92 Telford R, Boote J and Cooper C (2004). ‘What does it mean to involve consumers successfully in NHS research?’ Health Expectations, vol 7, no 3, pp 209–20.

93 This clarity about the role of service users is fundamental not only to effective public involvement, but also to its evaluation. For example, Coulter and Ellins note: ‘There is very little reliable evidence about the effectiveness of public involvement methods, for which the lack of an agreed evaluation framework is a major factor. Before developing a coherent framework for the assessment of outcomes, the intended aims of public involvement must be specified and defined.’ (Op cit).

94 Spencer E, Walshe K (2009). ‘National quality improvement policies and strategies in European healthcare systems’. Qual Saf Health Care, vol 18, suppl 1, pp i22–i27.

95 Walmsley J and Miller K (2008). A review of the Health Foundation’s Leadership Programme 2003–07. London: The Health Foundation.

96 Lucas W (2005). ‘Understanding more about the impact of those leadership interventions in the health services which are supported by the Health Foundation’. Journal of Leadership in Public Services, vol 2, issue 1.

97 Ferlie EB, Shortell SM (2001). ‘Improving the quality of health care in the United Kingdom and the United States: a framework for change’. Milbank Quarterly, vol 79, no 2, pp 281–315.

98 Ovretveit J (2009). Leading improvement effectively: review of research. London: The Health Foundation, p xi.

99 www.nres.npsa.nhs.uk/applications/guidance/#researchoraudit

100 Cornwell J and Jakubowska D (2006). Clinical quality improvement: a report on the potential of professional colleges, societies and associations to build improvement capability in healthcare. Working paper. London: The Health Foundation.

101 The project SERs also discuss this set of outcomes (see table 18, chapter 3).

102 Neuhauser D. ‘The Heroes and Martyrs series: job descriptions for health care quality improvement professionals?’ Quality and Safety in Health Care, vol 14, p 230.

103 Department of Health (2008). Call for proposals: evaluation of the partnerships between universities and NHS organisations: learning from the NIHR collaborations for leadership in applied health research and care (CLAHRC).

104 The exception was SNAP-CAP, in which there was continuous data collection.

105 NICE et al (2002). Principles for best practice in clinical audit. Oxon: Radcliffe Medical Press Ltd. (Note, pp 142–43 argue that process improvement and users’ views of the care they receive are appropriate measures of audit.)

106 These were identified as ways in which royal colleges could use their influence by Leatherman and Sutherland in The Quest for Quality in the NHS, p 44.

107 See, for example, Department of Health (2001). Assuring the quality of medical practice: implementing supporting doctors protecting patients. London: Department of Health.

108 Bate SP, Robert G and McLeod H (2002). Report on the ‘Breakthrough’ collaborative approach to quality and service improvement within four regions of the NHS. London: University College London.

109 Research briefing: ‘Organising for quality: journeys of improvement at leading hospitals and healthcare systems in the US, UK and Netherlands’. University College London, RAND Corporation, Harvard Medical School.

110 House R et al (1995). ‘The meso paradigm: a framework for the integration of micro and macro organisational behaviour’. Research in Organisational Behaviour, vol 17, pp 71–114.

111 Grimshaw JM et al (2004). ‘Effectiveness and efficiency of guideline dissemination and implementation strategies’. Health Technology Assessment, vol 8, issue 6.

112 NICE et al (2002). Principles for best practice in clinical audit. Oxon: Radcliffe Medical Press Ltd, p 164.

113 Soper B and Hanney S (2007). ‘Lessons from the evaluation of the UK’s NHS R&D Implementation Methods Programme’. Implementation Science, vol 2, no 7, pp 2–7.

114 Available at: www.lshtm.ac.uk/docdat/page.php?t=index

115 Rowan K et al (2004). ‘Ratings are determined by a small number of process measures; outcome measures play only a small role and are based on scanty poor quality data, which do not adequately account for case mix’. BMJ, vol 328, pp 924–25.

116 Based on the research briefing Organising for quality: journeys of improvement at leading hospitals and healthcare systems in the US, UK, and Netherlands. University College London, RAND Corporation, Harvard Medical School.

117 Leatherman S and Sutherland K (2003). The quest for quality in the NHS: A midterm evaluation of the ten year quality agenda. The Nuffield Trust, London: The Stationery Office, pp 26 & 28 (hereafter L&S).

118 L&S, p 170.

119 The Health Foundation (2004). Evaluation of the Engaging with Quality Initiative: call for proposals, p 9.

120 NICE et al (2002). Principles for best practice in clinical audit. Oxon: Radcliffe Medical Press Ltd, p 147.

121 L&S, p 174.

122 Available at: www.wkkf.org/Programming/ResourceOverview.aspx

123 Pawson R and Tilley N (1997). Realistic evaluation. London: Sage Publications.

124 L&S, pp 44, 45 and 270–71.

125 Peryer A (1997). Get going: a guide to the evaluation of training. Milton Keynes: Joint Initiative for Community Care. London: Department of Health.

126 Weiss CH (1997). ‘How can theory-based evaluations make greater headway?’ Evaluation Review, vol 21, issue 4, pp 501–24.

127 Available at: www.wkkf.org/Programming/ResourceOverview.aspx

128 Pawson R and Tilley N (1997). Realistic evaluation. London: Sage Publications.

129 Gordon T, Pease A (2006). ‘RT Delphi: an efficient “round-less” almost real time Delphi method’. Technological Forecasting and Social Change, vol 73, pp 321–33.


130 Methodology of the FISTERA Delphi (2005). FISTERA thematic network on foresight on information society technologies in the European research area: IST-2001-37627 FISTERA Delphi Report, 2005. Available at: http://fistera.jrc.es/docs/RP_The_FISTERA_Delphi.pdf, accessed February 2008.

131 Delphi Method. Available at: www2.chass.ncsu.edu/garson/PA765/delpi.htm, accessed February 2008.

132 Atkins RB, Tolson H, Cole BR (2005). ‘Stability of response characteristics of a Delphi panel: application of bootstrap data expansion’. BMC Medical Research Methodology, 5:37.

133 These respondents could either not be linked to an EwQI project or could not be invited to complete the second round because their email address was undeliverable.

134 The ACPGBI/NBOCAP report 2006.

135 Available at: www.rcpsych.ac.uk/cru/auditSelfHarm.htm


The Health Foundation
90 Long Acre
London WC2E 9RA
T 020 7257 8000
F 020 7257 8001
E [email protected]

Registered charity number: 286967
Registered company number: 1714937

www.health.org.uk

The Health Foundation is an independent charity working to continuously improve the quality of healthcare in the UK.

We want the UK to have a healthcare system of the highest possible quality – safe, effective, person-centred, timely, efficient and equitable. We believe that in order to achieve this, health services need to continually improve the way they work.

We are here to inspire and create the space for people, teams, organisations and systems to make lasting improvements to health services.

Working at every level of the healthcare system, we aim to develop the technical skills, leadership, capacity, knowledge and the will for change that are essential for real and lasting improvement.

Identify Innovate Demonstrate Encourage