© 2014 John Wiley & Sons Ltd. THE CLINICAL TEACHER 2014; 11: 381–386
Assessment in practice
Electronic management of practice assessment data. Brenda Stutsky, Faculty of Medicine, University of Manitoba, Winnipeg, Canada
SUMMARY Background: The assessment of a practising physician's performance may be conducted for various reasons, including licensure. In response to a request from the College of Physicians and Surgeons of Manitoba (CPSM), the Division of Continuing Professional Development in the Faculty of Medicine, University of Manitoba, has established a practice-based assessment programme, the Manitoba Practice Assessment Program (MPAP), as the College needed a method to evaluate the competence and performance of physicians on the conditional register.
Context: Using a multifaceted approach and CanMEDS as a guiding framework, a variety of practice-based assessment surveys and tools were developed and piloted. Because of the challenge of collating data, the MPAP team needed a computerised solution to manage the data and the assessment process. Innovation: Over a 2-year period, a customised web-based forms and information management system was designed, developed, tested and implemented. The secure and robust system allows the MPAP team to create assessment surveys and tools in which each item is mapped to Canadian Medical Education Directives for Specialists (CanMEDS) roles and competencies. Reports can be auto-generated, summarising a physician's performance on specific competencies and roles. Overall, the system allows the MPAP team to effectively manage all aspects of the assessment programme. Implications: Throughout all stages, from design to implementation, a variety of lessons were learned that can be shared with those considering building their own customised web-based system. The key to success is active involvement in all stages of the process!
The assessment of a practising physician's performance may be conducted for a variety of reasons, including identifying suboptimal practices, as a continuing professional development activity or for validating performance for a stakeholder.1 The College of Physicians and Surgeons of Manitoba (CPSM) required a method to validate the competence and performance of physicians on its conditional register, and enlisted the Faculty of Medicine at the University of Manitoba to develop a high-stakes mandatory practice-based assessment programme.
In 2010, work began on an assessment programme designed for physicians who have completed postgraduate training, have practised for at least 2 years and, for a variety of reasons, have not achieved Royal College of Physicians and Surgeons of Canada (RCPSC) or The College of Family Physicians of Canada (CFPC) certification. The outcome of the assessment enables the CPSM to determine whether to grant full registration to these physicians or not allow them to continue to practise in the province. Based on best practices, it was determined that a multifaceted approach would be needed to assess all components of a physician's practice, using the Canadian Medical Education Directives for Specialists (CanMEDS) roles and competencies as the guiding framework.1–6 The established programme, called the Manitoba Practice Assessment Program (MPAP), was piloted in 2011 and involves five main components: (1) self-assessment; (2) multisource feedback or 360-degree feedback; (3) chart audit/chart-stimulated recall; (4) interviews; and (5) direct observation. A team of two physicians and one non-physician health care provider conducts the on-site assessment, which includes the last three components.
The self-assessment component involves the completion of two surveys. First, physician candidates provide their educational and practice history, and details of their current scope of practice. Next, via an 87-item reflective practice survey, physicians self-assess their performance on a five-point Likert scale (i.e. 5, among the best; 4, top half; 3, average; 2, bottom half; 1, among the worst; 'unable to assess' is not scored) and write a reflective note for each of the CanMEDS roles. Specialists also complete a clinical skills checklist.
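The scoring rule above, where 'unable to assess' responses are excluded rather than counted as zero, can be sketched as follows. This is a hypothetical illustration only, not the MPAP system's actual code:

```python
# Hypothetical sketch: score a five-point Likert survey in which
# "unable to assess" (recorded here as None) is not scored.
def mean_score(responses):
    """Average the numeric responses, ignoring 'unable to assess'."""
    scored = [r for r in responses if r is not None]
    if not scored:
        return None  # rater could not assess any item
    return round(sum(scored) / len(scored), 1)

# Example: five items, one of which the rater could not assess.
print(mean_score([5, 4, None, 3, 4]))  # → 4.0
```

Excluding unscorable items keeps the denominator honest: a rater who skips half the items does not drag the candidate's average down.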
Multisource feedback is obtained from physician colleagues, interprofessional colleagues and from patients. Colleagues rate the physician candidate on the same five-point scale used by candidates in the self-assessment (i.e. from 'among the best' to 'among the worst'; Figure 1), whereas patients use a 'strongly agree' to 'strongly disagree' scale. Multisource feedback surveys range in length from 36 to 57 items, and every question on these and other surveys is mapped to one or more CanMEDS competencies.
Data collected during the chart audit and chart-stimulated recall session are recorded using a satisfactory/unsatisfactory scale. The main evaluation criteria include legibility, record keeping, clinical assessment, diagnosis, investigation, referral, treatment and management, and follow-up.
Interviews are conducted with one or more medical colleagues, usually a supervisor, and medical students and residents, if applicable. Interview guides have been created, and responses are recorded on the guides.
During the on-site assessment, flow sheets are used to record patient interactions, treatments and procedures, and diagnostic imaging interpretations. Flow sheets range in length from 11 to 25 items, using a satisfactory/unsatisfactory scale. An additional 36-item final report tool is used to record
overall performance on each of the competencies, taking into consideration assessment data from all tools.
Based on the pilot test, we identified that the collation of data to produce various reports based on the CanMEDS roles and competencies was a significant challenge, given the number and length of the surveys and tools. The solution was to computerise the entire process, convert all tools into web-based forms and auto-generate the reports.
A variety of regulatory authorities and organisations offer physicians access to online systems designed to track continuing professional development activities, or for recertification or revalidation purposes;7–10 however, the purpose and functionalities of those systems differ from what was required. In reviewing the literature and completing an environmental scan, we did not find an appropriate system or an assessment team that was electronically managing high-stakes data that included mapping to competencies. Therefore, we hired a local computer programming company to work with us to build a customised software system to electronically manage our assessment data.
Our forms and information management system is separated into five main sections: (1) form management; (2) user management; (3) document management; (4) reporting; and (5) system settings. In terms of form management, we are able to create custom forms by manipulating the layout of the forms and incorporating a variety of responses, such as check boxes, radio buttons, drop-down lists or text boxes, to name a few (Figure 2). A key component of the forms, which is unique to this system, is the ability to map each item to one or more of the seven CanMEDS roles and the 154 RCPSC or 189 CFPC competencies (see Figure 3).
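The item-to-competency mapping described above can be pictured as a simple data structure. The following is an illustrative sketch only; the field names and competency codes are invented for illustration and are not the MPAP system's actual schema:

```python
# Illustrative sketch of mapping form items to CanMEDS roles and
# competencies; field names and codes here are invented examples.
from dataclasses import dataclass, field

@dataclass
class FormItem:
    text: str                       # the question shown to the rater
    response_type: str              # e.g. "radio", "checkbox", "text"
    roles: list = field(default_factory=list)         # CanMEDS roles
    competencies: list = field(default_factory=list)  # competency codes

item = FormItem(
    text="Communicates treatment plans clearly to patients",
    response_type="radio",
    roles=["Communicator"],
    competencies=["COM-2.1"],       # hypothetical competency code
)

# Grouping items by role is what makes role-level reports possible.
def items_for_role(items, role):
    return [i for i in items if role in i.roles]

print(len(items_for_role([item], "Communicator")))  # → 1
```

Storing the mapping on each item, rather than hard-coding it into the reports, is what lets other standards be substituted later without reprogramming, as the article notes under system settings.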
User categories include administrators, assessors, candidates, colleagues and patients. Administrators have full access to all sections of the system, including the ability to log in as any other user. In preparation for an assessment, assessors can access self-assessment data entered directly into the system by the candidate, eliminating the need to copy and courier documents to assessors. For the on-site assessment, assessors are provided with laptops and mobile Internet sticks so that they can input assessment data. Assessors are also provided with hardcopy forms as a backup, and depending on the situation and location, assessors may record observations on the hardcopy forms and then later input the data into the system. Physician and interprofessional colleagues complete the 360-degree surveys online; however, patients are given hardcopy surveys that are returned to the MPAP office for data input.
The document management section of the system allows for the uploading of PDF documents, and access to each document can be assigned to one or more groups or individual users.
A variety of reports can easily be generated from the system, including a multisource feedback report (see Figure 4), a breakdown of the chart audit, a summary of the on-site assessment, and a comprehensive report that includes competencies and an average score from the tools measuring that competency (see Figure 5). General MPAP reports include the monitoring of candidate flow, progress at a glance and details of the assessors.
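How such a comprehensive report might aggregate scores can be sketched as follows: each competency's average is taken over every scored item, across all tools, that is mapped to it. This is a hypothetical illustration with invented competency codes, not the MPAP system's actual code:

```python
# Hypothetical sketch: average each competency's score across all tools.
# Input: per-tool lists of (competency_code, score) pairs; items rated
# "unable to assess" are assumed to have been dropped beforehand.
from collections import defaultdict

def competency_averages(tools):
    totals = defaultdict(list)
    for tool_items in tools:
        for competency, score in tool_items:
            totals[competency].append(score)
    return {c: round(sum(s) / len(s), 1) for c, s in totals.items()}

self_assessment = [("COM-2.1", 4), ("ME-1.3", 5)]   # invented codes
multisource     = [("COM-2.1", 5), ("ME-1.3", 4)]
print(competency_averages([self_assessment, multisource]))
# → {'COM-2.1': 4.5, 'ME-1.3': 4.5}
```

This is the collation step that proved too difficult to do by hand during the pilot: once every item carries its competency codes, the cross-tool averages fall out mechanically.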
Another unique feature is that, through the system settings, any standards or competencies (not only CanMEDS) can be entered and mapped to items without any additional programming. For each stage of the process, emails can be sent automatically: for example, when a physician candidate completes all self-assessment forms, an email is automatically sent to an MPAP coordinator.
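A stage-completion trigger like the one described can be sketched as a check that runs after every form submission. Again this is a hypothetical illustration; the function names and the coordinator address are invented:

```python
# Hypothetical sketch: notify a coordinator once every form in a
# candidate's self-assessment stage has been completed.
def on_form_submitted(candidate, stage_forms, send_email):
    """Call after each submission; sends one email when the stage is done."""
    if all(form["completed"] for form in stage_forms):
        send_email(
            to="mpap.coordinator@example.org",   # invented address
            subject=f"Self-assessment complete: {candidate}",
        )

# Usage: collect the outgoing messages instead of really sending them.
sent = []
forms = [{"completed": True}, {"completed": True}]
on_form_submitted("Dr A", forms, lambda **kw: sent.append(kw))
print(len(sent))  # → 1
```

Keeping the trigger data-driven (a completeness check over the stage's forms) rather than wired to individual forms is one way such per-stage notifications can be configured without extra programming.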
Overall, the forms and information management system has proven to be a robust
Figure 1. Example of interprofessional colleague survey
platform that allows us to effectively manage all aspects of our MPAP. Our end-users verbally report satisfaction with general ease of use, and we rarely receive requests for assistance. As discovered during our pilot, it was too difficult to generate reports manually, given the volume of information and number of competencies; however, we can now generate multiple reports with just a couple of clicks. The system has worked so well that it has been duplicated and will be used by another type of assessment programme in our faculty.
Throughout our 2-year design, development, testing and implementation period, we learned many lessons that can be shared with those who are considering building their own customised web-based system.
Envision what you want the end product to look like, even if you do not build all options
at one time. It is much easier for computer programmers to design the system as a total package as opposed to adding functionalities later.
Select a reputable company, get references and ensure that the company can deliver the final product within the estimated budget. In our case, the company underestimated the time required to build the system, but still honoured the original quote of CAN$60,000.
Figure 2. View of form-building page
Figure 3. View of competency selection drop-downs for each form field
Ensure all ownership, licensing, copyright and intellectual property issues are addressed prior to development.
Work out hosting details with your computer programmers, as you may host the system in one place during development and then switch to another server at a later time. Clearly outline the security and privacy mechanisms, along with the data backup plan, which may include backup in another province, state or country.
Have your entire process clearly outlined for the computer programmers. In our case, we had a binder of all forms and process steps that had been piloted. A bad process will not improve just because it becomes electronic.
Establish clear timelines for each phase of the project and meet with the computer programmers regularly. Include a demonstration of work to date, so that feedback and direction can be provided
throughout the development process. Expect development time to take approximately 1 year.
Ensure proper beta testing before going live. With any customised system, expect the debugging process to take up to 1 year.
Despite knowing the computer literacy level of your end-users, plan on developing step-by-step user guides with screen captures and clear instructions for all end-users.
Figure 4. Example of multisource feedback report

360 Degree Survey Scores (higher scores reflect more positive ratings; each column is one rater group)

CanMEDS role / rater group:    1      2      3      4      5
Medical Expert               3.9    4.2    4.8    3.9    3.8
Communicator                 3.8    3.9    4.6    3.8    3.8
Collaborator                 4.0    4.2    4.7    3.7    4.3
Manager                      3.4    4.0    4.1    4.4    3.3
Health Advocate              3.6    4.1    4.4    3.7    4.1
Scholar                      3.5    3.9    4.3    2.4    3.3
Professional                 4.1    4.1    4.9    4.1    3.7
Averages                     3.8    4.1    4.5    3.7    3.8
Respondents                    7      9     43     43      1

Scales: (1) X = unable to assess, 1 = among the worst, 2 = bottom half, 3 = average, 4 = top half, 5 = among the best; (2) 1 = strongly disagree, 2 = disagree, 3 = neither agree nor disagree, 4 = agree, 5 = strongly agree; (3) 1 = almost never, 2 = once in a while, 3 = fairly often, 4 = very frequently, 5 = almost always; (4) 1 = among the worst, 2 = bottom half, 3 = average, 4 = top half, 5 = among the best.
Figure 5 . Example of full competency report
User support is a continuing requirement for both the administrative and end-user sides of the system, so plan and budget for continuing support and potential changes...