

Original Article
http://mjiri.iums.ac.ir
Medical Journal of the Islamic Republic of Iran (MJIRI)

Iran University of Medical Sciences

____________________________________________________________________________________________________________________
1. PhD Candidate in Medical Education, Faculty of Medicine, Tehran University of Medical Sciences, Tehran, Iran. [email protected]
2. Professor, Department of Clinical Science and Education, Södersjukhuset, Karolinska Institutet, Stockholm, Sweden. [email protected]
3. Assistant Professor, Department of Internal Medicine, Tehran University of Medical Sciences, Tehran, Iran. [email protected]
4. Associate Professor, Department of Emergency Medicine, Imam Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran. [email protected]
5. Assistant Professor, Department of Internal Medicine, Oncology Department, Tehran University of Medical Sciences, Tehran, Iran. [email protected]
6. Assistant Professor, Department of Emergency Medicine, Imam Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran. [email protected]
7. Professor, Department of Internal Medicine, Department of Medical Education, Iran University of Medical Sciences, Tehran, Iran. [email protected]
8. (Corresponding author) Post-doctoral Researcher, Department of Clinical Science and Education, Södersjukhuset, Karolinska Institutet, Stockholm, Sweden, & Associate Professor, Educational Development Center, Department of Medical Education, Tehran University of Medical Sciences, Tehran, Iran. [email protected]
9. Associate Professor, Department of Clinical Science and Education, Södersjukhuset, Karolinska Institutet, Stockholm, Sweden. [email protected]

Contextualization and validation of the interprofessional collaborator assessment rubric (ICAR) through simulation: Pilot investigation

Fatemeh Keshmiri1, Sari Ponzer2, AmirAli Sohrabpour3, Shervin Farahmand4, Farhad Shahi5, Shahram Bagheri-Hariri6, Kamran Soltani-Arabshahi7, Mandana Shirazi*8, Italo Masiello9

Received: 30 September 2015 Accepted: 27 January 2016 Published: 1 August 2016

Abstract
Background: Simulation can be used for educating, evaluating and assessing the psychometric properties of an instrument. The aim of this study was to contextualize and assess the validity and reliability of the Interprofessional Collaborator Assessment Rubric (ICAR) in an Iranian context using simulation.

Methods: In this descriptive study, the ICAR was contextualized through several steps. First, validity was established through expert panels and Delphi rounds. Second, reliability was assessed by producing a simulation video and evaluating reproducibility/test-retest (ICC), internal consistency (Cronbach's alpha) and inter-rater reliability (kappa). The participants included 26 experts, 27 students and 6 staff of the Standardized Simulation Office of Tehran University of Medical Sciences.

Results: The contextualization and validity of the ICAR were confirmed in an Iranian context. The internal consistency of the tool was 0.71 according to Cronbach's alpha, and the test-retest reliability was 0.76.

Conclusion: The Iranian ICAR can be a useful tool for evaluating interprofessional collaborative competencies, and developing the instrument through a simulation scenario is a promising approach for researchers.

Keywords: Interprofessional Collaboration, Interprofessional Education, Assessment, Validation.

Cite this article as: Keshmiri F, Ponzer S, Sohrabpour AA, Farahmand Sh, Shahi F, Bagheri-Hariri Sh, Soltani-Arabshahi K, Shirazi M, Masiello I. Contextualization and validation of the interprofessional collaborator assessment rubric (ICAR) through simulation: Pilot investigation. Med J Islam Repub Iran 2016 (1 August). Vol. 30:403.

Introduction
Learning to work interprofessionally is fundamental to patient safety and a patient-centered approach (1). According to the World Health Organization's definition, interprofessional education (IPE) is the process by which a group of students or health care providers of different professions learn with, from and about each other (1).
In IPE programs, a competency-based approach is often used to assess the achievements and quality of interprofessional work (2-3). The components of a


competency-based assessment instrument should include the objectives of assessment and its guidelines (4). Some instruments designed to assess interprofessional competences have been developed and validated (5-8). In his study, Oates states that the Interprofessional Collaborator Assessment Rubric (ICAR) can be used to assess knowledge/skills and behavioral acquisition (8). The aim of this study was to validate the ICAR (6,9-10) as an interprofessional performance assessment tool in an Iranian context.

The properties of the ICAR can be favorable to examiners, instructors and learners because they can increase the quality of instruction and serve as a guideline with respect to learning expectations, fostering critical reflection on performance (6). However, to assess interprofessional competencies, the researcher has to assess teamwork skills and communication in interaction with other professional team members. This remains difficult, since those assessed have to show effective collaboration and information exchange (11,12). Hence, several issues can affect the assessment, validity and reliability of a tool. Therefore, the quality assurance of a tool should be run meticulously and adapted to different contexts.

The validation process is usually conducted in a real educational context (6) and is seldom applied in a simulation setting (13-17). Considering the challenges of assessing the reliability of interprofessional collaboration (10,13-14), the aim of this study was to contextualize and assess the validity of the ICAR in an Iranian context, and to estimate the reliability of the ICAR (reproducibility, internal consistency and inter-rater reliability) (6) through simulation.

Methods
In this descriptive study, the ICAR was contextualized and validated from December 2011 to January 2013 (Fig. 1). The original instrument, which is based on a 4-point Likert scale, includes 31 items within six domains: Communication, Collaboration, Roles and Responsibilities, Collaborative Patient/Client-Family Centered Approach, Team Functioning and Conflict Resolution/Management. Scores indicate how often specific interprofessional competencies occur; i.e., a score of 1 indicates minimal, a score of 2 indicates developing (occasionally), a score of 3 designates competent (frequently) and a score of 4 represents mastery (consistently) (6).
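As a minimal illustration only (not part of the study), the 4-point rating scheme described above can be sketched in code; the level labels follow the text, while the function name and sample item scores are invented:

```python
# Hypothetical sketch of ICAR-style item scoring: each item in a domain is
# rated on the 4-point scale described in the text. Sample scores invented.

LEVELS = {
    1: "minimal",
    2: "developing (occasionally)",
    3: "competent (frequently)",
    4: "mastery (consistently)",
}

def domain_score(item_scores):
    """Sum the 1-4 item ratings within one domain."""
    if not all(s in LEVELS for s in item_scores):
        raise ValueError("each item must be rated 1-4")
    return sum(item_scores)

print(domain_score([3, 4, 2, 3]))  # → 12
```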

Study Area
This study was conducted at Tehran University of Medical Sciences (TUMS) in Iran, a developing country in the Middle East. In Iran, the medical education system is centralized and medical curricula are developed by the Ministry of Health; however, interprofessional education programs are not included in the curriculum. To the authors' knowledge, interprofessional education and collaboration are not included in the curricula of Middle Eastern countries. In Iran, running IPE is not an easy task due to the cultural boundaries of the doctor-centered discipline, and it requires some preparation of the context; therefore, this study, which contextualized tools and developed educational materials, may serve as a foundation for establishing a new IPE system in Iran. This study was conducted to prepare the infrastructure for educating and evaluating interprofessional education in Iran.

Participants
The total number of participants in this study during the contextualization and validation phases was 59: 26 experts, 27 students, two simulation educators, two standardized patients and two film producers. In this study, experts were individuals who had at least 10 years of experience in clinical education and non-technical skills training (such as communication and teamwork) and were familiar with IPE.

The process is described in Figure 1.


Study Design
Contextualization of the instrument: The process was based on the "Toolkit on Translating and Adapting Instruments" (18). The instrument was adapted to the Iranian context through the following process:
Validity Assessment of the ICAR, Part 1: English to Farsi: The expert panel confirmed the external validity of the tool following the translation from English to Farsi and through applying the guidelines of the Human Services Research Institute (HSRI) (18). Content and face validity were assessed through two Delphi rounds. In the first round, the translated tool was sent to the experts (response rate: 80%). The results of the first round were gathered after seven days. Then, the experts' opinions were analyzed through a content analysis approach. Two items, related to problem solving and providing feedback, were suggested to be added in the domains of collaboration and challenge management.
The suggested items were approved in the second Delphi round. The agreement between the experts was higher than 90%, and no further suggestions were made. Hence, in the final version, the total number of items was 33 within six domains. The total maximum score was 99, and the results were reviewed and confirmed by an expert panel in the Iranian context.

Fig. 1. Contextualization and Validation of ICAR in Iranian Context: Process
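The 90% retention rule used in the Delphi rounds amounts to a simple percent-agreement check. A minimal sketch, with invented vote counts (the study does not report per-item votes):

```python
# Hypothetical sketch of a Delphi agreement check: an item is retained when
# the share of experts accepting it exceeds 90% (threshold from the text;
# the votes themselves are invented).

def agreement_pct(accept_votes):
    """accept_votes: list of booleans, one per responding expert."""
    return 100 * sum(accept_votes) / len(accept_votes)

votes = [True] * 11 + [False]      # e.g. 11 of 12 experts accept the item
pct = agreement_pct(votes)
print(round(pct, 1), pct > 90)     # → 91.7 True
```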

Simulated Scenario Development and Video Production: The reliability of the translated instrument was confirmed through a simulation scenario:
1. The simulated scenario was developed based on the objectives of the ICAR and the core competencies for interprofessional collaborative practice (3,6,19). The scenario was about a case of multiple trauma in an emergency ward, with emphasis on interprofessional collaborative competency in the health care team. The scenario was developed during 18 sessions (36 hours).
2. The expert panel members confirmed the validity of the scenario to increase its realism.
3. The Standardized Simulation Office at TUMS conducted the simulated team training sessions to produce a video. The simulation training sessions were conducted, and all the simulated team members rehearsed their role-play. Simulation educators watched the final rehearsal and confirmed the quality of the role-play.
4. The simulation of the emergency ward was conducted at the TUMS skill lab.
5. The simulated scenario was recorded and a video was produced. The duration of the video was 30 minutes, and its production took 12 hours of teamwork.

6. The validity of the video and the authenticity of the simulation were approved in terms of interprofessional collaboration skills.
Reliability Assessment of the ICAR: In this study, the reliability of the tool was assessed by performing reproducibility (test-retest), internal consistency and inter-rater reliability analyses. To do so, two rater-training sessions were provided. In the first session, raters gained familiarity with the reliability assessment process, the ICAR and its guideline. Then, the raters watched the video produced in the earlier phase and completed the Iranian ICAR. Afterwards, they received feedback from the main investigator of the research (MSH).
The test-retest approach was carried out by the raters filling out the ICAR with a one-week interval between the first and second occasions (20). Internal consistency was calculated per domain through Cronbach's alpha. The agreement between the 23 raters' and the experts' scores was assessed through correlation (21).
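Cronbach's alpha, as used here for internal consistency, is computed from a raters-by-items score matrix. The sketch below is illustrative only (the ratings are invented; the study itself used SPSS):

```python
# Cronbach's alpha from a score matrix: rows are raters, columns are the
# items of one domain. The ratings below are invented for illustration.

def cronbach_alpha(ratings):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = len(ratings[0])                      # number of items
    def pvar(xs):                            # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [pvar([row[i] for row in ratings]) for i in range(k)]
    total_var = pvar([sum(row) for row in ratings])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

ratings = [
    [3, 3, 4, 3],
    [2, 3, 3, 3],
    [4, 4, 4, 3],
    [3, 2, 3, 2],
]
print(round(cronbach_alpha(ratings), 2))  # → 0.81
```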

Inter-rater reliability was established by assessing the correlation between four expert raters after watching the video, applying non-parametric analysis tests (the kappa coefficient) (22,23).
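For one pair of raters, Cohen's kappa corrects the observed agreement for agreement expected by chance. A minimal sketch, with invented item-level scores (the paper reports only the resulting coefficients):

```python
# Cohen's kappa for two raters scoring the same items on a 4-point scale.
# The score vectors below are invented for illustration.

from collections import Counter

def cohens_kappa(a, b):
    """kappa = (p_o - p_e) / (1 - p_e) for two equal-length label lists."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n       # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[c] * cb[c] for c in ca) / (n * n)    # chance agreement
    return (p_o - p_e) / (1 - p_e)

rater1 = [3, 2, 4, 3, 3, 2, 4, 1]
rater2 = [3, 2, 3, 3, 4, 2, 4, 1]
print(round(cohens_kappa(rater1, rater2), 2))  # → 0.65
```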

External Validity Assessment, Part 2: Farsi to English: After demonstrating the validity and reliability of the ICAR in an Iranian context, the tool was back-translated separately from Farsi to English. The translated version was sent to the main developer, Professor Curran, who confirmed the consistency of the final ICAR. The reproducibility of the ICAR was assessed by ICC analysis, the most appropriate and commonly utilized test for assessing parametric data (21,24,25). In this study, the test-retest approach was measured by the ICC, internal consistency was measured by Cronbach's alpha and inter-rater reliability was assessed by the kappa coefficient. The agreement between the raters' and the experts' scores was assessed using descriptive statistical analysis. SPSS Version 16 was used



for data analysis.

Ethical Considerations
The Ethical Committee of Tehran University of Medical Sciences (ID: 91-01-30-17052) approved this study. Prof. Curran gave permission to use the ICAR instrument. Informed consent was obtained from all the raters who participated in this study.

Results
Demographic Characteristics of the Study Participants
The mean±SD age of the expert panel members was 43.5±6.5 years, and their mean±SD work experience was 10±8.5 years. In this study, 50% (n=6) of the participants of the Delphi rounds were men. The participants' mean±SD age was 41.5±8.5 years, and their mean±SD work experience was 12±2.5 years. The mean±SD age of the video production committee was 37.5±9.6 years, and their mean±SD work experience was 10±8.5 years. The mean±SD age of the four raters in the inter-rater reliability assessment was 33.2±6.9 years. The majority of the raters in the reproducibility assessment were female (52%), and their mean±SD age was 28.4±2 years.

Validation of the ICAR
The content and face validity of the Iranian ICAR were approved through two Delphi rounds. The validation of the scenario and video production was approved through consensus.

Psychometric Properties
The internal consistency according to Cronbach's alpha was 0.71, and the test-retest reliability was 0.76 (Table 1).
The highest test-retest reliability was in the team function domain (ICC=0.83), and the lowest was in the communication domain (ICC=0.73). The agreement between the raters' and the experts' scores ranged from 67.8% to 84.3% (Table 1). The inter-rater reliability, calculated by the kappa coefficient, was K=0.7 (Table 2).

Table 1. Psychometric Properties of the Interprofessional Collaborator Assessment Rubric (ICAR)

ICAR domain                                Agreement between raters'    Internal consistency    Test-retest
                                           (n=23) and experts'          (Cronbach's alpha)      (ICC)
                                           scores (%)
Communication                              75.12                        0.65                    0.73
Roles and responsibilities                 72.65                        0.71                    0.80
Collaboration                              84.34                        0.74                    0.75
Collaborative patient centered approach    77.17                        0.71                    0.75
Team function                              74.78                        0.75                    0.83
Conflict management                        67.80                        0.75                    0.75
All ICAR domains                           75.31                        0.71                    0.76

Table 2. Inter-rater Reliability between the Raters

Raters           Statistic
1 vs 2           Kappa=0.70, p<0.001
1 vs 3           Kappa=0.71, p<0.001
1 vs 4           Kappa=0.72, p<0.001
2 vs 3           Kappa=0.72, p<0.001
2 vs 4           Kappa=0.71, p<0.001
3 vs 4           Kappa=0.78, p<0.001
1, 2, 3 and 4    ICC=0.87, p<0.001

Discussion
Measuring interprofessional collaboration



competencies is still a challenge; therefore, applying a valid and reliable instrument remains a significant issue. The findings of this study confirmed the innovative validation process by endorsing the reliability of the ICAR within a simulated setting. In this study, several professionals were engaged in the research, in line with the assumption of the multidisciplinary attribution of IPE.

In our study, the tool's validity was strongly agreed upon by applying the accepted HSRI guideline (18), Delphi rounds, an expert panel, and receiving feedback from the instrument's main author (6), which was in line with the research procedures of other studies (26,27). We followed the research process by Curran et al. (6) and applied two rounds to compile the instrument in a rigorous fashion, just as other researchers have done (28).

We used simulation to assess the psychometric properties (reliability) of the instrument in the interprofessional education domain. The controlled setting allowed the researchers to assess the interprofessional skills by controlling for confounding factors and the possible Hawthorne effect that may occur in a real setting. In their study, Hayward et al. (10) assessed the reliability of another interprofessional performance tool in a real context, but despite the extended observation, they were not able to assess all the domains of their instrument. In this study, to avoid these challenges, reproducibility was assessed within a simulated situation. Shirazi et al. (29) conducted a study to evaluate the reliability of standardized patients (SPs) in assessing communication skills as opposed to standard raters. Despite the limited number of assessors, which was a limitation, the researchers found an acceptable correlation among all raters (29). In this study, to increase the statistical power and obviate potential biases, the number of raters was increased to 23.

The highest internal consistency was in the domains of team function and challenge management. The lowest internal consistency was computed in the communication domain. This could be interpreted as an inconsistency between items because of different issues such as 'respect', which are more subjective and more dependent on raters' perceptions and experience. However, in accordance with this latest finding, studies by Shayne et al. (30) and Najafi et al. (31) found similarly low internal consistency with the TeamSTEPPS Teamwork Attitudes Questionnaire (T-TAQ) and the SDOT, respectively. In contrast with our findings, the Hayward et al. study (10) found communication to be the domain with the highest internal consistency. This may be due to its use of a multi-source feedback (MSF) process to assess collaboration competencies (10).

The highest agreement between the 23 raters' and the experts' scores was in the field of collaboration (84.3%), which could be related to the raters' familiarity with these concepts and their experience in group work. Moreover, the items in this domain were simple, clear and relevant for the raters. The lowest accordance was in the domain of conflict management (67.8%). These findings may be due to the possibly limited involvement of the 4th-year medical students in health care management issues or a lack of training. However, this information could not be confirmed.

The correlation between the experts' ratings was confirmed by non-parametric statistical tests. The kappa coefficient (32) demonstrated appropriate agreement between each pair of raters and between all four raters' scores. In line with our study, findings from Schmitz demonstrated acceptable or highly acceptable coefficients (33). However, in contrast to our findings, Boulet et al. found a low correlation among their raters (0.09 and 0.29), which can be due to the lack of training or of an assessment guide in their study (34). In this study, to inhibit possible low inter-rater reliability among the raters, the research group compiled a guideline for reliability assessment and conducted training sessions.



Limitations
The validity assessment of an interprofessional assessment tool in a real setting is a difficult task, as there are many confounders. Moreover, in Iran, we do not have interprofessional wards where we could run such studies. Therefore, in this study, we used a video of a simulated setting to assess the ICAR's psychometric properties. Although this was a psychometric study, one limitation was the reproducibility assessment of the instrument by medical students, due to the restriction on implementing interprofessional meetings. Limited generalizability across institutions and countries, and even within the setting of an institution (i.e., beyond the trauma team), might be another limitation.

Conclusion
Assessing IPE is a complex task because of the difficulty of assessing team competencies. This study confirmed the validity and reliability of the modified ICAR for evaluating interprofessional collaborative competencies in an Iranian context, using a novel process based on a simulated scenario. IPE in an Iranian context is a new endeavor, and we recommend conducting more research on this topic.

Acknowledgements
Sincere thanks go to Professor Curran for his comments and kind contribution, and also to the medical students at TUMS who took part in this research.

References
1. WHO: World Health Organization. Framework for action on interprofessional education & collaborative practice. Geneva: World Health Organization; 2010. Available from: http://whqlibdoc.who.int (accessed April 2012).
2. Davis M, Harden R. Competency-based assessment: making it a reality. Med Teach 2003;25(6):565-568.
3. Canadian Interprofessional Health Collaborative (CIHC). A National Interprofessional Competency Framework. May 2010. Available from: www.cihc.ca
4. Harden M. Outcome-Based Education: the future is today. Med Teach 2007;29(7):625-9.
5. Canadian Interprofessional Health Collaborative (CIHC). An Inventory of Quantitative Tools Measuring Interprofessional Education and Collaborative Practice Outcomes. May 2012. Available from: www.cihc.ca
6. Curran V, Hollett A, Casimiro L, Mccarthy P, Banfield V, Hall P, et al. Development and validation of the interprofessional collaborator assessment rubric (ICAR). J Interprof Care 2011;25:339-344.
7. Thannhauser J, Russell-Mayhew S, Scott C. Measures of interprofessional education and collaboration. J Interprof Care 2010;24(4):336-349.
8. Oates M, Davidson M. A critical appraisal of instruments to measure outcomes of interprofessional education. Med Educ 2015;49:386-398.
9. Curran V, Casimiro L, Banfield V, Hall P, Lackie K, Simmons B, et al. Research for Interprofessional Competency-Based Evaluation (RICE). J Interprof Care 2009;23(3):297-300.
10. Hayward MF, Curran V, Curtis B, Schulz H, Murphy S. Reliability of the Interprofessional Collaborator Assessment Rubric (ICAR) in Multi Source Feedback (MSF) with post-graduate medical residents. BMC Med Educ 2014;14:1-9.
11. Joshi R, Ling FW, Jaeger J. Assessment of a 360-degree instrument to evaluate residents' competency in interpersonal and communication skills. Acad Med 2004;79(5):458-463.
12. Musial JL, Rubinfeld IS, Parker AO, Reickert CA, Adams SA, Rao S, et al. Developing a scoring rubric for resident research presentations: a pilot study. J Surg Res 2007;142(2):304-307.
13. McEvoy M, Hand W, Furse C, Field LC, Clark CA, Moitra VK, et al. Validity and reliability assessment of detailed scoring checklists for use during perioperative emergency simulation training. Simul Healthc 2014;9(5):295-303.
14. Wright M, Segall N, Hobbs G, Phillips-Bute B, Maynard L, Taekman JM. Standardized assessment for evaluation of team skills: validity and feasibility. Simul Healthc 2013;8(5):292-303.
15. Rethans JJ, Gorter S, Bokken L, Morrison L. Unannounced standardised patients in real practice: a systematic literature review. Med Educ 2007;41(6):537-549.
16. Cox M, Irby DM, Epstein RM. Assessment in medical education. N Engl J Med 2007;356(4):387-396.
17. Gorter S, Rethans JJ, Scherpbier A, Désirée H, Houben H, Vleuten C, et al. Developing case-specific checklists for standardized-patient-based assessments in internal medicine: a review of the literature. Acad Med 2000;75(11):1130-1137.
18. Chávez L, Canino G. Toolkit on Translating and Adapting Instruments. Human Services Research Institute. May 2012. Available from: www.tecathsri.org



19. Interprofessional Education Collaborative (IPEC). Core competencies for interprofessional collaborative practice: Report of an expert panel. Washington, D.C. May 2011. Available from: www.aacn.nche.edu
20. Norman GR, van der Vleuten CPM, Newble DI. Psychometric methods. In: J.A. Shea and S. Fortnag (eds), International Handbook of Research in Medical Education Series (Vol. 7). Springer International Handbooks of Education; 2002. p. 99.
21. Terwee C, Bot S, Boer M, van der Windt DA, Knol DL, Dekker J, et al. Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol 2007;60(1):34-42.
22. Keen AJA, Klein S, Alexander D. Assessing the communication skills of doctors in training: reliability and sources of error. Adv Health Sci Educ 2003;8(1):5-16.
23. Xu S, Lorber MF. Interrater agreement statistics with skewed data: evaluation of alternatives to Cohen's kappa. J Consult Clin Psychol 2014;82(6):1219-27.
24. Zaki R, Bulgiba A, Nordin N, Ismail NA. A systematic review of statistical methods used to test for reliability of medical instruments measuring continuous variables. Iran J Basic Med Sci 2013;16(6):803-807.
25. Al-shair K, Kolsum U, Berry P, Smith J, Caress A, Singh D. Development, dimensions, reliability and validity of the novel Manchester COPD fatigue scale. Int J Chron Obstruct Pulmon 2009;64:950-955.
26. Tomasik T. Reliability and validity of the Delphi method in guideline development for family physicians. Qual Prim Care 2010;18(5):317-26.
27. Schlegel C, Woermann U, Rethans J. Validity evidence and reliability of a simulated patient feedback instrument. BMC Med Educ 2012;12:1-5.
28. Dizon JM, Grimmer-Somers K, Kumar S. The physical therapy profile questionnaire (PTPQ): development, validation and pilot testing. BMC Res Notes 2011;19(4):362-368.
29. Shirazi M, Labaf A, Monjazbi F, Jalili M, Mirzazadeh M, Ponzer S, et al. Assessing medical students' communication skills by the use of standardized patients: emphasizing standardized patients' quality assurance. Acad Psych 2014;38(3):354-360.
30. Shayne P, Gallahue F, Rinnert T. Reliability of a core competency checklist assessment in the emergency department: the Standardized Direct Observation Assessment Tool. Acad Emerg Med 2006;13(7):727-732.
31. Najafi M, Keshmiri F, Najafi M, Shirazi M. Assessment of validity and reliability of the TeamSTEPPS Teamwork Attitudes Questionnaire (T-TAQ) in Iran. Payavard Salamat 2013;7(5):389-398.
32. Rothman AI, Cusimano M. A comparison of physician examiners', standardized patients', and communication experts' ratings of international medical graduates' English proficiency. Acad Med 2000;75(12):1206-11.
33. Schmitz CC, Chipman J, Luxenberg M, Beilman GJ. Professionalism and communication in the intensive care unit: reliability and validity of a simulated family conference. Simul Healthc 2008;3(4):224-38.
34. Boulet JR, van Zanten M, de Champlain A, Hawkins RE, Peitzman SJ. Checklist content on a standardized patient assessment: an ex post facto review. Adv Health Sci Educ 2008;13(1):59-69.
