The impact of a pilot education programme on Queensland radiographer abnormality description of adult appendicular musculo-skeletal trauma

Jonathan McConnell a,*,f, Carron Devaney b, Matthew Gordon b, Mark Goodwin c, Rodney Strahan d, Marilyn Baird e

a School of Health Sciences, Robert Gordon University, Room H321, Garthdee Road, Aberdeen, UK
b Royal Brisbane and Women's Hospital, Australia
c Austin Health, Melbourne, Australia
d Southern Health, Melbourne, Australia
e Monash University, Melbourne, Australia

Radiography 18 (2012) 184–190. doi:10.1016/j.radi.2012.04.005
1078-8174 © 2012 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.

Article history: Received 1 March 2012; Received in revised form 23 April 2012; Accepted 25 April 2012; Available online 17 May 2012

Keywords: Australian healthcare; Radiographer interpretation; Image abnormality; Emergency image interpretation; Role development

Abstract

Introduction: Interpretation of trauma images by radiographers is a task substitution that has been debated for many years in Australia and enacted in various forms internationally since the 1980s. This paper describes the standardised test portion of a pilot project (which also had a clinical component) that investigated the potential for radiographers to describe abnormalities as a change to models of healthcare delivery being adopted in Queensland.

Method: Randomly selected appendicular musculo-skeletal trauma images were reported by four radiologists to confirm image content. 102 images, matched for population injury incidence and body area proportionality, served as a standardised image test. Ten radiographers described images before, immediately after and 8–10 weeks following an education programme. Receiver operating characteristic curves and kappa statistics were calculated to evaluate radiographer descriptive performance relative to the radiologist reports.

Results: Using the Friedman and Wilcoxon signed ranks tests there was statistically significant improvement of sensitivity and accuracy of radiographer performance by the third standardised test, with values: sensitivity (p = 0.023/0.012), accuracy (p = 0.012/0.021); specificity demonstrated no, or very close to, statistically significant change (0.118/0.058). Kappa values (Cohen p = 0.019/0.011, Gwet 0.025/0.007 and Byrt et al. 0.021/0.047) demonstrated statistically significant change across the test sequence. Positive and negative predictive values with positive likelihood ratios were also calculated.

Discussion: Most (9/10) radiographers demonstrated a high level of agreement of description accuracy with the radiologists used to create the standardised test.

Conclusion: With appropriate education radiographers can match radiologist descriptions of appendicular musculo-skeletal trauma.

Introduction

Debate about task substitution of medical roles to the non-medically educated within Australia continues. Several authors1–3 have expressed the potential for radiographers to safely relieve pressure on the emergency healthcare system through preliminary abnormality description of radiograph content. They assert this would enhance radiographer job satisfaction whilst enhancing the effectiveness of the multi-professional team. A submission by the Australian Institute of Radiography4 to the Garling inquiry5 suggested that radiology reporting turnaround times are not meeting expected targets, and abnormalities seen by radiographers have been missed or dismissed by the emergency department, resulting in management delays. As far back as 19996 it was indicated that at Alice Springs hospital up to 1.5% of examinations received no report. Despite efforts to provide services through 'nighthawk' providers (radiologists reporting in a time zone different to the country requiring the service), there are still issues in radiological servicing of emergency departments in some areas of Australia, as suggested by Garling.5

* Corresponding author. Tel.: +44 (0) 1224 262936. E-mail addresses: [email protected] (J. McConnell), [email protected] (C. Devaney), [email protected] (M. Gordon).
f Previous address: Medical Imaging and Radiation Sciences, School of Biomedical Sciences, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne.



Duckett,3 Gibbon2 and McConnell and Smith7 have indicated where the bottlenecks occur in the prevention of radiographer input into the interpretation process (detailed below), while enhancing the diagnostic pathway between radiology and the emergency department. Despite international evidence that radiographer interpretation with appropriate radiologist report verification safety nets is possible, the movement towards this process has not been forthcoming in Australia. In fact there has been open rejection by radiologists, often with minimal evidence to support this position.8,9 The current Medicare rebate system, which only allows certified medical practitioners to provide services in radiology, acts as a barrier to improving the efficiency of public healthcare delivery through the use of radiographers.1,7 Although postgraduate study in image interpretation for radiographers is available in several Australian universities, the benefit to emergency departments has failed to be realised. The existence of a professional silo mindset and political nervousness by the federal government has delayed evaluation of the potential to change current working practices. Currently, radiographers are expected by their professional body to verbally indicate to the referrer of the radiographic examination when they believe an abnormality is present on the images they produce;10 this takes the form of tracking the referrer and discussing image appearances with them. This position is also endorsed by state and territory registration boards. The initial developments in the U.K. supported the same premise,11 leading to the eventual removal of the 'infamous conduct' statement from registration documentation,12 thus enabling radiographer interpretation. Change of practice through wider radiographer use in U.K. radiology departments was raised by the NHS Audit Commission report of 1995.13 This was supported by university courses to masters' level with significant radiologist input.

Through developments by the colleges of radiologists and radiographers, U.K. practice has now moved toward an expectation that all radiographers will provide descriptions of abnormality location and characterisation on radiographs. As such, interpretation by radiographers in emergency settings has become a core skill,14 even though some barriers exist. U.K. research has also demonstrated that radiographers are the best alternative professional group to perform a trauma image description function, despite the thrust to encourage the use of the emergency nurse practitioner15,16 as a replacement doctor for minor injuries.

There is a relative scarcity of research into radiographer interpretation/description in Australia, with only five related papers identified between 1997 and 2009.6,17–20 This paper will describe a pilot project performed in Queensland to develop an education programme (EP) delivered in two stages, create a worksheet response system similar to one initially suggested by Smith and Younger18 and measure the descriptive performance impact that the EP has had on radiographer ability.

Method

An image test library was created from emergency department images of the appendicular skeleton, selected at random from a five year old radiographic film based archive that had been previously radiologist reported. These images were scanned at high resolution (600 dpi) to enable computer based viewing with no image degradation and full image manipulation possible with the software available within a standard Microsoft® Office package. The patient age, gender and history were provided to three radiologists to re-report the radiographs and add to the initial report so that agreement about image content could be assured. From 435 cases generating 650 examinations as the image library, 102 examinations investigating trauma to the adult appendicular skeleton, including the hip and shoulder joint, were chosen to form an image test. Body areas were created in the image test library to include images of normal and abnormal appearance, defined by consensus of report content of at least three of the four radiologists. Radiographic images, at body area rates evident from the initial selection of cases from the archive, were randomly selected and added to the radiographers' test. From this selection process a 30.3% injury rate was revealed, resulting in 35 abnormal examinations being included in the test, constructed to reflect the range and proportions of examinations noted within the image library.

All 102 images were viewed by each participant, covering the areas described above, each time the test was performed. A worksheet was developed that used a tick box approach and freehand descriptions for the participants to give responses about their impressions of image content (see Fig. 1). The patient age, gender and a short history were provided on each worksheet, which was associated with a 'jpeg' image file for each radiographic study. Microsoft® 'Picture Manager' was used to view the images so that magnification and image brightness/contrast could be adjusted as would be possible in the clinical setting. This software was chosen as future testing using this image library was to be performed on a standard 1 megapixel monitor, which is ubiquitously available on home computers or radiographer workstations. Earlier research has shown that this resolution is acceptable for emergency images.21 Tick boxes on the worksheet included indications of no abnormality, fracture, dislocation, soft tissue sign for fracture and foreign body presence. Additional tick boxes asked whether the radiographer believed further imaging was necessary in the form of more radiographic projections, Computed Tomography, Ultrasound, Magnetic Resonance Imaging, or Nuclear Medicine. A free comment response section was also available for participants to add details about the appearances seen in the images. This allowed confirmation of appropriate tick box selection by the marker. Finally, a confidence level tick box section was also included to establish the level of certainty the radiographer had in his/her description, so that a receiver operating characteristic (ROC) curve could be calculated using Eng's22,23 internet calculator. Empirical (raw) ROC and fitted ROC curves were constructed by inputting the confidence information into the tool. Participants knew that normal and abnormal images would be present but were not aware of the relative proportions. No time limit was placed on the participants for completion of the test and all were asked not to discuss the content of the test between sittings. Repeat tests were performed in the same departments, using the same conditions and monitors.
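The rating-method ROC analysis described above can also be reproduced without the online calculator: each response's confidence level is treated as an ordinal score, and the area under the empirical ROC curve equals the Mann–Whitney probability that a randomly chosen abnormal image receives a higher score than a randomly chosen normal one. A minimal sketch, using an illustrative confidence scale and invented responses rather than the study's data:

```python
# Empirical ROC AUC from ordinal confidence ratings (Mann-Whitney form).
# labels: 1 = abnormal per radiologist consensus, 0 = normal.
# scores: radiographer confidence that the image is abnormal (higher = more confident).

def empirical_auc(labels, scores):
    """Area under the empirical ROC curve; tied scores contribute 0.5."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        raise ValueError("need both abnormal and normal cases")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical 10-image test with a 1-5 confidence scale:
labels = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
scores = [5, 4, 3, 2, 4, 3, 2, 1, 1, 2]
print(round(empirical_auc(labels, scores), 3))  # → 0.792
```

The fitted (binormal) curves reported in the study smooth this empirical estimate; the empirical value is the one recoverable directly from the worksheet responses.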

Ten radiographers of mixed clinical experience, in two groups of five from two Brisbane tertiary hospitals, volunteered to participate in the research. Ethics approval from the Human Research Ethics Committee at The Royal Brisbane and Women's Hospital was received in February 2010. Participants, who received identifiers to maintain anonymity from the test marker, were requested to describe image content before the first of the two education sessions in the EP. This test was administered by two research co-ordinators who were radiographers based in one of the two Brisbane hospitals, and who also had a responsibility to regularly report progress to the funding bodies. The standardised test images were re-read within two weeks of the first education session, then for a third time within three weeks of a second weekend study period. No feedback was given between readings, and image numbers were sufficient to reduce memorising of case presentations. The Kappa statistic was used to compare inter-rater variability, with the variants proposed by Cohen, Gwet and Byrt et al.24–26 being calculated to address issues linked with this statistic and provide the potential range of performance that was evident.

The EP was developed and delivered by a senior lecturer in medical imaging with U.K. radiographer reporting training and a clinical research/publication background in the field. Box 1 details the curriculum and timeframe of the EP and radiographer performance measurement sequencing.

Figure 1. Radiographer Description Worksheet.

Results

Apart from one participant, all radiographers demonstrated significant improvements in their performances across the three standardised image tests, measured as sensitivity, specificity, accuracy, positive and negative predictive values, and positive likelihood ratio. These are presented in Table 1 as mean, median, maximum and minimum values. The empirical and fitted ROC and three Kappa statistic variants are shown in the same detail in Table 2. The Kappa statistic variants were used to show relative performance between the individual radiographers and the radiologists' group response. Statistical inference for the sensitivity, specificity, accuracy and Kappa results was performed and is demonstrated in Table 1 and Table 2 respectively. The Friedman test shows whether the distribution of results across all three image tests was the same, and the Wilcoxon signed ranks test was employed to show whether the median of the differences between first and last tests varied. This acted as a confirmation of the validity of the Friedman test calculations.
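The Friedman test for three related sittings can be computed directly from within-radiographer ranks; with k = 3 sittings the statistic has 2 degrees of freedom, for which the chi-square tail probability is exactly exp(−χ²/2). The sketch below uses invented per-radiographer accuracy values, not the study's data:

```python
import math

# Friedman test across k related samples (one score per radiographer per
# test sitting). No tie correction; the illustrative data below has no ties.
def friedman_test(*samples):
    k = len(samples)                 # number of sittings
    n = len(samples[0])              # number of radiographers
    rank_sums = [0.0] * k
    for subject in zip(*samples):    # rank each radiographer's k scores
        order = sorted(range(k), key=lambda j: subject[j])
        for rank0, j in enumerate(order):
            rank_sums[j] += rank0 + 1
    chi2 = 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)
    # Chi-square survival function is exactly exp(-x/2) for df = 2 (k = 3 only).
    p = math.exp(-chi2 / 2.0)
    return chi2, p

# Hypothetical accuracies (%) for ten radiographers at sittings 1-3:
test1 = [74, 76, 78, 80, 82, 83, 85, 86, 88, 89]
test2 = [76, 78, 80, 82, 84, 85, 87, 88, 90, 91]
test3 = [79, 81, 83, 85, 87, 88, 90, 91, 93, 94]
chi2, p = friedman_test(test1, test2, test3)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")
```

In practice a library routine such as `scipy.stats.friedmanchisquare`, with `scipy.stats.wilcoxon` for the first-versus-third comparison, would replace this hand computation; the paper used SPSS 17.0 for both.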

It is apparent from the results that radiographer mean sensitivity increased significantly across the three tests, with statistical evidence from the Friedman test indicating that the distribution between participants is not the same. This is supported by the Wilcoxon signed ranks test, which indicates the median of the differences of participant performances between the first and last tests is not the same. As accuracy is a function of sensitivity and specificity it would be expected that this would also improve. This analysis shows the variation between standardised image tests with no patient contact is statistically significant. Gradual increases are evident across the predictive values and the likelihood ratio of the radiographers. Kappa scores were calculated to show performance relative to the consensus radiologist reports. A consistently low Kappa value of 0.484 indicated that a relatively cautious approach taken by one individual persisted, resulting in high false positive responses, which contrasted with the remaining participants. According to the definitions of Kappa values by Landis and Koch27 (who applied their definitions to Cohen's Kappa; see Fig. 2) there is a general shift from moderate to substantial agreement with the radiologists, with some participants achieving almost perfect agreement as their values exceed 0.81. The spread of Kappa variant results demonstrated 6/9 substantially agreeing and 3/9 reaching almost perfect agreement by test 3. Statistically significant change in performance across the Kappa statistic variations is also evident when calculated with Friedman and Wilcoxon signed ranks tests of the first and third tests using SPSS 17.0®.

Box 1. The Education Programme.

Table 1
Standardised image test results, n = 10 (mean; median; min/max).

Test  Sensitivity %           Specificity %           Accuracy %              PPV                     NPV                     LR +ve
1     87.3; 90.6; 73.0/97.3   78.9; 80.8; 64.6/90.8   82.0; 82.4; 73.5/89.2   0.71; 0.72; 0.59/0.84   0.92; 0.93; 0.84/0.98   4.73; 4.45; 2.52/9.40
2     90.8; 89.2; 83.8/100.0  76.0; 70.8; 64.6/93.8   81.4; 78.9; 73.5/91.2   0.70; 0.64; 0.59/0.89   0.94; 0.93; 0.88/1.00   5.23; 3.21; 2.52/13.95
3     93.5; 94.6; 89.2/100.0  82.9; 86.2; 58.5/96.9   86.8; 88.7; 72.5/95.1   0.77; 0.80; 0.57/0.94   0.96; 0.96; 0.92/1.00   8.57; 6.80; 2.34/29.65

Friedman test p =               0.023                 0.118                   0.030
Wilcoxon signed ranks test p =  0.012                 0.058                   0.021
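The metrics in Table 1 all derive from each radiographer's 2 × 2 confusion matrix against the radiologist consensus. A minimal sketch, with hypothetical counts (not taken from the study's data):

```python
# Diagnostic performance metrics from a 2x2 confusion matrix.
# tp/fn: abnormal images called abnormal/normal; tn/fp: normal images
# called normal/abnormal. Counts below are hypothetical.

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, accuracy, PPV, NPV and LR+ as in Table 1."""
    sensitivity = tp / (tp + fn)              # abnormal images correctly flagged
    specificity = tn / (tn + fp)              # normal images correctly cleared
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    ppv = tp / (tp + fp)                      # positive predictive value
    npv = tn / (tn + fn)                      # negative predictive value
    lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
    return sensitivity, specificity, accuracy, ppv, npv, lr_pos

# e.g. a 102-image test with 35 abnormal cases, 33 of them detected:
sens, spec, acc, ppv, npv, lr = diagnostic_metrics(tp=33, fp=13, fn=2, tn=54)
print(f"sens={sens:.3f} spec={spec:.3f} acc={acc:.3f}")  # → sens=0.943 spec=0.806 acc=0.853
```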


ROC values demonstrate a strong performance in both the empirical (raw calculation) and fitted versions. Radiographer performance measured as the area under the curve increased: raw calculations demonstrated mean values between 87% and 90.4% and median values of 88.4%–92.1%. Fitted ROCs show an approximate 3% increase, with means rising from 89.4% to 92.6% and median values increasing from 91.1% to 94.3%. This is supported by generally increased results of sensitivity, specificity and accuracy across the three tests, with gradually greater numbers of radiographers performing above 85% accuracy at test 3 (6/10) compared with test 1 (3/10).

Discussion

Improving sensitivity and specificity (Table 1) shows that even though false positive scores were higher to begin with, this decreased over the period of testing. A pattern of areas where participants were incorrect with this range of images is not shown, as this changed across the testing sequence. Overall the results obtained around the delivery of the EP are encouraging. Employing a worksheet with defined criteria, while at the same time giving the radiographer the opportunity to express opinion by freehand comment, has the potential to provide helpful information to the referring emergency doctor. Often management of patients is begun in a protective sense, driven by concerns over litigation. Excessive treatment before radiological reporting can be obtained may result in recall to hospital, with associated concerns and costs for the patient. Timely information that is available and reliable may help reduce costs caused by excessive treatment or prevent an occasional missed injury. Furthermore, descriptions provided by radiographers would be less expensive to the healthcare system.

Table 2
Standardised image test: Kappa variations and ROC scores, n = 10.

Kappa variation (test number)  Mean   Median  Min/Max
Cohen 1                        0.589  0.594   0.468/0.756
Gwet 1                         0.594  0.628   0.452/0.698
Byrt et al. 1                  0.728  0.775   0.202/0.999
Cohen 2                        0.545  0.539   0.427/0.768
Gwet 2                         0.542  0.548   0.295/0.798
Byrt et al. 2                  0.724  0.715   0.390/0.943
Cohen 3                        0.674  0.710   0.452/0.795
Gwet 3                         0.683  0.724   0.412/0.813
Byrt et al. 3                  0.847  0.860   0.657/0.999

Friedman test p =: Cohen 0.019; Gwet 0.025; Byrt et al. 0.021
Wilcoxon signed ranks test p =: Cohen 0.011; Gwet 0.007; Byrt et al. 0.047

Test number  ROC empirical (raw): Mean; Median; Min/Max  ROC fitted: Mean; Median; Min/Max
1            0.870; 0.884; 0.704/0.925                   0.894; 0.911; 0.731/0.940
2            0.880; 0.886; 0.814/0.934                   0.912; 0.919; 0.829/0.965
3            0.904; 0.921; 0.704/0.974                   0.926; 0.943; 0.731/0.980

The use of the various measurement approaches acts in a 'belt and braces' way to demonstrate how reporting of just one method can be misleading. Although Cohen's Kappa is meant to correct for chance in the decision making between raters/readers of a test, use of the multiple Kappa methods accounts for the failure to account for bias and prevalence in the choices made in a binary measuring system such as Cohen's testing method. Byrt et al.26 created an adjustment for bias (the proportion of yes choices by a reader compared to the original) and prevalence (the probability of a 'Yes' or 'No' choice compared with the original). They state that if bias increases then the chance agreement expectation falls, causing Kappa to be artificially raised. Likewise, if prevalence rises so too does the chance agreement expectation, which artificially reduces the Kappa value. Gwet24 also argues that the assumptions used to calculate the level of chance assumed within Cohen's Kappa are flawed. He asserts that the probability of chance agreement should not exceed 0.5 in value if both observers have randomly come to a decision that they both agree on. Following this argument enables the Kappa statistic to be calculated taking into account variation in the marginal values that are necessarily generated in a 2 × 2 square.
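For a 2 × 2 agreement table the three variants reduce to short formulas: Cohen's Kappa derives chance agreement from the two raters' marginal proportions, Byrt et al.'s prevalence- and bias-adjusted Kappa (PABAK) simplifies to 2p_o − 1, and Gwet's AC1 derives chance agreement from the mean proportion of 'abnormal' calls, capping it at 0.5. A sketch with a hypothetical table (the counts are illustrative, not the study's):

```python
# Three kappa variants for a 2x2 agreement table between one radiographer
# and the radiologist consensus: a = both call abnormal, d = both call
# normal, b = radiographer-only abnormal, c = consensus-only abnormal.

def kappa_variants(a, b, c, d):
    n = a + b + c + d
    po = (a + d) / n                                  # observed agreement
    # Cohen: chance agreement from the product of marginal proportions.
    pe_cohen = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    cohen = (po - pe_cohen) / (1 - pe_cohen)
    # Byrt et al.: prevalence- and bias-adjusted kappa (PABAK).
    pabak = 2 * po - 1
    # Gwet AC1: chance agreement from the mean 'abnormal' proportion q,
    # 2q(1-q), which never exceeds 0.5.
    q = ((a + b) + (a + c)) / (2 * n)
    pe_gwet = 2 * q * (1 - q)
    ac1 = (po - pe_gwet) / (1 - pe_gwet)
    return cohen, pabak, ac1

# Hypothetical counts for one radiographer over 102 images:
cohen, pabak, ac1 = kappa_variants(a=33, b=13, c=2, d=54)
print(f"Cohen={cohen:.3f} PABAK={pabak:.3f} AC1={ac1:.3f}")  # → Cohen=0.697 PABAK=0.706 AC1=0.718
```

Reporting all three, as in Table 2, brackets the plausible range of agreement rather than committing to one chance-correction model.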

The descriptive ability of this group of radiographers for this range of radiographs shows gradual improvement that starts at a high level. There is a trend towards matching radiologist performance and there appears to be statistical significance overall. Specificity however shows least improvement, with responses tending towards excessive false positive reporting. The ROC curves give the highest values of performance, but as Eng22 points out these values should not just be taken at face value. When reviewing ROC curves plotted from the figures generated by each individual, fitted variations smooth out the performance and tend to increase reported ability. However 'hooks' may be present in the empirically plotted graphs, which are a more truthful representation, and as such the empirical value is more representative of true performance. Acknowledging this, these radiographers demonstrated mean upper performance accuracy values ranging approximately between 90 and 92% and medians of 91–94%.

Kappa statistic value  Strength of agreement
<0.00                  Poor
0.00–0.20              Slight
0.21–0.40              Fair
0.41–0.60              Moderate
0.61–0.80              Substantial
0.81–1.00              Almost perfect

Figure 2. Landis and Koch's agreement definitions for Cohen's Kappa.

There are some weaknesses in this study that should be considered. The size of the study and the use of only adult appendicular trauma images limit it to the status of a pilot. Those participants who volunteered were clearly interested in enhancing their work, thus leading to the potential for selection bias amongst radiographers. The experience of radiographers was not differentiated or controlled; length of time since qualification varied, and prior education in image interpretation from Australian or U.K. universities and the impact of other imaging modality experience were not accounted for. Multiple use of the same test, with a short time between sittings, using one examination type that aimed to detect the presence of limited appearances to measure change, is open to criticism.28 The size of this study and the mix of appearances, with no feedback given until the end of the investigation, could account for this limitation; closer analysis of individual performance showed variation in descriptions between tests, suggesting that memorising by individuals was minimised. The safety of this approach has been discussed.29 A lack of feedback can cause uncertainty, and an investigation of intra observer variation would be helpful to analyse this.

Despite some limitations, this study attempted to account for several weaknesses in the Australian work performed between 1997 and 2009.6,17–20 A clinical radiographer description pilot (performed after this component and to be reported elsewhere) may reinforce the perception that patient presence influences decision making and descriptive content. This could be a notable context to consider when the non-medically educated are compared against medical staff in the emergency department. This may mean there is a greater degree of safety in radiographer abnormality descriptions when radiographers assess images for signs of injury with the advantage of the patient being present.

The assertion that same day reporting is not required in well supervised emergency departments30 may hold concerns for the accreditation of radiology departments in Australia, especially when the work of Hallas and Ellingsen31 on an international basis is also considered. They suggest up to 3.1% of fractures are missed, thus supporting this research, which identifies that immediate radiographer input could be valuable. This study, which follows the outcomes of a condensed education programme, has shown radiographers are capable of closely matching the radiologist in their descriptions of image appearances in adult musculo-skeletal trauma. Where delays in emergency trauma reporting by radiologists are noted,4,5 radiographer contributions could ensure radiology department input is provided in a timely fashion. This could also satisfy performance indicators for Australian departmental accreditation processes, whereby departmental performance is measured for accreditation to provide services within Medicare, one indicator of which is the speed of radiologist report turnaround.

Conclusion

It would appear that there is potential, with the right educational support and on-going audit of performance, for Australian radiographers to make a useful contribution to an aspect of practice that may be understaffed or relies on relatively junior medical personnel. In the regional and rural setting, where attracting staff is problematic, radiographer contributions could be more valuable both in terms of service provision and attracting staff to non-metropolitan working. A larger study with wider coverage of radiographic examinations should be performed to fully establish where radiographers can be appropriately employed, and in what ways they make mistakes that could impact on overall imaging management. By this it is meant that the scope of practice within radiographer descriptions can be defined clearly, so that an acceptable self-regulating framework can be developed, applied and recognised across Australian radiology practice. Emergency medical staff should be invited to perform the standardised test to establish their relative performance compared with radiographers; it may also be possible to establish whether clinical contact impacts on emergency department doctors' performance. Radiologist colleagues should welcome the opportunity to confirm or refute any findings, as improved radiological teamwork to meet demand is the key outcome of this research, as suggested by Gibbon.2 Enhancement of the multi-professional team, rather than pressurising a smaller number of non-medically/specialist educated groups to take on other roles for which they are inappropriately prepared, is the correct way to move forward. Radiographers have received a degree based education across the whole of Australia for more than 20 years and are not being utilised to their highest potential. This is a waste of resources when the healthcare system faces challenges that could be addressed by use of the right person at the right time in the right place.

Funding

The project was primarily funded by the Queensland Health Allied Health Workforce Advice and Co-ordination Unit (AHWACU) to investigate the potential for radiographer abnormality description as a change of healthcare model of practice. This body acted as the state government interest and supported the steering group for the project, to which Carron Devaney and Matthew Gordon reported frequently. The Medical Radiation Technologists Board of Queensland also supported the research financially to extend the period of the investigation.

Competing interests

Jonathan McConnell received financial support from Queensland Health to enable travel to facilitate the project.

Carron Devaney – Competing interests: none identified.
Matthew Gordon – Competing interests: none identified.
Mark Goodwin – Competing interests: none identified.
Rodney Strahan – Competing interests: none identified.
Marilyn Baird – Competing interests: none identified.

Acknowledgements

The authors would like to thank Dr Parm Naidoo, Clinical Director, Dandenong Hospital, Southern Health, Melbourne; the study participants; and the numerous colleagues at Royal Brisbane and Women's Hospital and The Prince Charles Hospital, Brisbane, who made the project possible.

References

1. Smith TN, Baird M. Radiographers' role in radiological reporting: a model to support future demand. Med J Aust 2007;186:629–31.

2. Duckett SJ. Health workforce design for the 21st century. Aust Health Rev 2005;29:201.

3. Gibbon WW. Workforce models for a healthier Australia: a productivity commission submission. Queensland Health; 2006.

4. Australian Institute of Radiography Board of Directors. Submission to the special commission of inquiry into acute care services in NSW public hospitals: medical imaging services in acute care. Collingwood, Victoria; 2008.

5. Garling P. Final report of the special commission of inquiry: acute care services in NSW hospitals overview. NSW Government; 2008.

6. Hall R, Jane S, Egan I. The Red Dot system: the outback experience. The Radiographer 1999;46:11–5.


J. McConnell et al. / Radiography 18 (2012) 184–190

7. McConnell J, Smith T. Submission to the National Health and Hospitals Reform Commission: redesigning the medical imaging workforce in Australia. Commonwealth of Australia, http://www.health.gov.au/internet/nhhrc/publishing.nsf/Content/081-fmnhs; 2007 [accessed March 2011].

8. Kenny LM, Andrews MW. Addressing radiology workforce issues. Med J Aust 2007;186:615–6.

9. The Royal Australian and New Zealand College of Radiologists. Role evolution in diagnostic imaging: RANZCR response to the QUDI discussion paper on role evolution. Sydney; August 2006.

10. Australian Institute of Radiography. Guidelines for the professional conduct of radiographers, radiation therapists and sonographers. AIR, Melbourne, Victoria, http://www.air.asn.au/files/01_TheAir/07_Ethics/Professional_Conduct.pdf; 2007 [accessed December 2011].

11. Berman L, de Lacey G, Twomey E, Twomey B, Welch T, Eban R. Reducing errors in the accident department: a simple method using radiographers. BMJ 1985;290:412.

12. Audit Commission. Improving your image. London: HMSO; 1995.

13. Council for the Professions Supplementary to Medicine. Infamous conduct amended statement. London: CPSM; March 1995.

14. Society and College of Radiographers. Medical image interpretation and clinical reporting by non-radiologists: the role of the radiographer. London: SCoR; October 2006.

15. Coleman L, Piper K. Radiographic interpretation of the appendicular skeleton: a comparison between casualty officers, nurse practitioners and radiographers. Radiography 2009;15:196–202.

16. Piper K, Paterson A. Initial interpretation of appendicular skeletal radiographs: a comparison between nurses and radiographers. Radiography 2009;15:40–8.

17. Orames C. Emergency department X-ray diagnosis – how do radiographers compare? The Radiographer 1997;44:52–5.

18. Smith T, Younger C. Accident and emergency radiological interpretation using the radiographer opinion form. The Radiographer 2002;49:27–31.

19. Cook AP, Oliver T, Ramsay L. Radiographer reporting: discussion and Australian workplace trial. The Radiographer 2004;51:61–6.

20. Smith TN, Traise P, Cook A. The influence of a continuing education program on the image interpretation accuracy of rural radiographers. Article 1145. Rural and Remote Health 2008;9(2), http://www.rrh.org.au/articles/subviewnew.asp?ArticleID=1145 [accessed March 2011].

21. Spigos DG, Tzalonikou MT, Bennett WF, Mueller CF, Terrell J. Accuracy of digital imaging interpretation on a 1K × 1K PC-based workstation in the emergency department. Emerg Radiol 1999;6:272–5.

22. Eng J. Receiver operating characteristic analysis: a primer. Acad Radiol 2005;12:909–16.

23. Eng J. ROC analysis: a web-based calculator for ROC curves, http://www.rad.jhmi.edu/jeng/javarad/roc/JROCFITi.html; 2007 [accessed December 2011].

24. Cohen J. A coefficient of agreement for nominal scales. Educ Psych Meas 1960;20:37–46.

25. Gwet K. Kappa statistic is not satisfactory for assessing the extent of agreement between raters. Gaithersburg, Maryland, USA: STATAXIS Consulting; April 2002, http://agreestat.com/research_papers/kappa_statistic_is_not_staisfactory.pdf [accessed 22/08/11].

26. Byrt T, Bishop J, Carlin JB. Bias, prevalence and kappa. J Clin Epidem 1993;46(5):423–9.

27. Landis RJ, Koch GG. The measurement of observer agreement for categorical data. Biometrics 1977;33:159–74.

28. Ryan JT, Haygood TM, Yamal J-M, Evanoff M, O'Sullivan P, McEntee M, et al. The "Memory Effect" for repeated radiological observation. AJR Am J Roentgenol 2011;197(6):W985–91.

29. Brealey S, Scally AJ. Bias in plain film reading performance studies. BJR 2001;74:307–16.

30. Sprivulis P, Frazer A, Waring A. Same-day X-ray reporting is not needed in a well-supervised emergency department. Emerg Med 2001;13:194–7.

31. Hallas P, Ellingsen T. Errors in fracture diagnoses in the emergency department – characteristics of patients and diurnal variation. BMC Emerg Med 2006;6(4), http://www.biomedcentral.com/content/pdf/1471-227X-6-4.pdf [accessed December 2011].