Assessment & Evaluation in Higher Education, Vol. 35, No. 7, December 2010, 799–818

ISSN 0260-2938 print/ISSN 1469-297X online © 2010 Taylor & Francis
DOI: 10.1080/02602930902981139
http://www.informaworld.com

Factors affecting the adoption of an e-assessment system

Ann L. McCann*

Texas A&M Health Science Center, Baylor College of Dentistry, Dallas, Texas, USA

*Email: [email protected]

A case study was conducted in 2006–07 to explore how one US campus implemented a centralised e-assessment system. The study specifically measured the extent of adoption by faculty members, identified their reasons for adoption and evaluated the impact on teaching and learning. The purposes of the system, entitled researching learning (REAL, a pseudonym), were to measure student learning, meet accountability requirements and assist the university in becoming a learning organisation. An electronic database included programme assessment plans, results and sample instruments. The system also included a peer review process to improve plans by an assessment committee of faculty and administrators.

Keywords: change; innovation; e-assessment systems; faculty culture; learning organisation; adoption

Literature review

Theoretical foundation

Two theories were used to explore adoption of an e-assessment system, Rogers' (2003) Diffusion of Innovations and Eckel and Kezar's (2003) Mobile Model for Transformational Change. Rogers' diffusion theory, based on over 5000 research studies, explains how and why new ideas and practices are adopted. According to Rogers, five attributes explain 49–87% of the variance in the rate of adoption of innovations: relative advantage, compatibility, trialability, observability and complexity. Relative advantage is the degree to which an innovation is viewed as better than the idea it supersedes (Rogers 2003). The greater the perceived advantage, the faster is the rate of adoption. Incentives and mandates to adopt increase the relative advantage of an innovation. Compatibility is the degree to which the innovation is seen 'as consistent with the existing values, past experiences and needs of potential adopters' (2003, 240). The rate of adoption increases as compatibility increases. Complexity is the degree to which an innovation is viewed as difficult to understand and use. Adoption increases as complexity decreases. Trialability is the degree to which an innovation can be experimented with on a limited basis. An innovation that can be tried creates less uncertainty for the potential adopter and so speeds up the adoption rate. Observability is the degree to which results of an innovation can be seen by potential adopters. Adoption increases if adopters can see the results of the innovation.

Kezar and Eckel (2002a) studied six higher education institutions that had undergone a comprehensive and sustained change in teaching and learning.

They identified five core strategies that explained change across the institutions: (1) senior administrative support (show value, provide resources and add new administrative structures); (2) collaborative leadership; (3) flexible vision (a clear and desirable picture of the future that does not foreclose opportunities); (4) staff development; and (5) visible actions. Another finding was that the core strategies allowed people to make new meaning (i.e. change the way they perceived their roles, skills and approaches/philosophies). Kezar and Eckel (2002b) also found that change strategies were more successful when they were compatible with the dominant culture. They proposed a mobile model of transformational change which includes an outer ring of 'culture', an inner mesh of the five change strategies and 15 substrategies and a solid band running down the middle entitled 'thinking differently' (Eckel and Kezar 2003).

Factors affecting adoption

Virginia Commonwealth University used a commercial software product to create an e-assessment system. The product helped them accomplish the project and identify areas for improvement, such as recognising that students could not discern the quality of resources for doing research. Although faculty and staff members were 'required' to use the e-assessment system, probably less than one-half actually did, according to the associate director of institutional assessment (Carnevale 2007). He stated, 'Faculty members do not like to be forced to do anything, especially when it involves administrators looking over their shoulders … It's an assessment coordinator's dream and a faculty member's nightmare' (2007, 3).

The adoption of computer-assisted assessment (CAA) within UK higher education has lagged behind expectations, according to Warburton (2008). To determine the reasons for this delay, he conducted a national survey of CAA practice (n = 166) in 2004 and interviewed CAA experts and practitioners. Factors identified as inhibiting adoption included fear of CAA failure, ineffective dissemination of good CAA practice, difficulty using the systems, resources withheld by senior management and concerns about the 'dumbing down' effect of assessment systems. Factors identified as facilitating adoption included informal discussion about CAA practice among departmental colleagues, achieving a critical mass of CAA use and administrative support and resources. He believed that campus culture factors were more significant to adoption than operational ones. Metrics of successful adoption identified by Warburton included width of practice (range of items and uses of the system), level of integration with corporate management information systems, embeddedness (CAA taken for granted and becoming invisible) and width of application (system adopted by other institutions).

Bull et al. (2002) presented four mini case studies to illustrate the ways in which CAA was maturing from being peripheral to a central part of the teaching and learning process in the UK. Case 1, the evaluation of the UK CAA Centre and its impact on integrating CAA with teaching and learning, is the most relevant to this discussion. Barriers to using CAA appropriately included limited functionality, the hidden cost of staff time and lack of equipment, resources and training. Another cultural barrier was the conflict between faculty time spent on research versus teaching; pedagogical research (assessment) was viewed as secondary to mainstream research. Factors that facilitated using CAA included peer and institutional support and project champions.

Two other UK reports identified similar factors affecting CAA adoption. Whitelock (2006) presented the barriers as lack of policy integration, suitable infrastructure, funding for pilot projects, sufficient time for faculty to develop e-assessment, confidence in the system and training. She emphasised that a culture shift needs to take place within higher education to improve institutional attitudes and to create a fit with the new e-assessment practices. In her key lessons from the CAA movement, Bull (2001) stressed that cultural and political barriers impede adoption. Organisational and pedagogical issues, especially not understanding assessment, often outweigh the technical limitations of the software and hardware.

Summary

The two theories were used as a lens to view the adoption of REAL, an electronic innovation, on a university campus. Much effort goes into developing the technology for an efficient and easy-to-use electronic assessment system. However, successful implementation depends as much, or even more, on understanding campus culture and using appropriate change strategies. What often happens is that an assessment system gets implemented 'just in time' for an accreditation site visit and then is not sustained. This study was conducted to identify the factors that would enhance adoption of e-assessment systems and foster sustainment on university campuses.

Methods

A mixed methods approach was used in this case study, with an electronic survey of faculty and telephone interviews with adopters. This allowed data triangulation as well as using the survey data to inform the interview process (Creswell 2003). An e-assessment system was defined as an electronic system for managing assessment plans and outcomes rather than one for delivering assessment methods and instruments.

Electronic survey

Instrument

The electronic survey was created with Inquisite® software. The first section measured respondent demographics. The second section measured whether respondents were adopters of REAL, how they had used REAL and whether they had received training and incentives. The third section measured the impact of REAL on teaching, learning and campus practices. The fourth section measured opinions about the adoption attributes of REAL using a 25-item instrument with the following eight scales: (1) voluntariness, (2) relative advantage, (3) compatibility, (4) image, (5) ease of use, (6) result demonstrability, (7) visibility, and (8) trialability (Moore and Benbasat 1991). This instrument split Rogers' (2003) observability into result demonstrability and visibility. Voluntariness was defined as 'the degree to which the use of the innovation is perceived as being voluntary or of free will', and image was defined as 'the degree to which use of an innovation is perceived to enhance one's status in one's social system' (2003, 195). The respondents reported their opinions on a seven-point agreement scale, 7 = strongly agree and 1 = strongly disagree. One open-ended question at the end asked, 'What would increase or improve the use of REAL?'

Sampling strategy

The survey sample of 568 faculty members comprised 95 known users of REAL and 473 randomly chosen faculty members. The known users routinely used REAL and/or were responsible for entering programme data. The group of 473 represented the sample size needed for a faculty population of 1076 with a sampling error of 5% and an expected response rate of 60% (Dillman 2000). This group was further stratified to have proportionate distribution among the eight colleges at the university.
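As a rough cross-check on those numbers, the sketch below applies a standard finite-population sample-size formula of the kind Dillman describes and then inflates the result for the expected 60% response rate. The exact formula, confidence level and rounding used in the study are assumptions here, not details reported by the author.

```python
# Illustrative back-of-envelope sample size check (not the study's own calculation).
# Assumes a 50/50 split, 5% sampling error and a 95% confidence level (z = 1.96).
import math

def required_sample(population: int, error: float = 0.05, z: float = 1.96,
                    p: float = 0.5) -> float:
    """Finite-population sample size for estimating a proportion."""
    return (population * p * (1 - p)) / (
        (population - 1) * (error / z) ** 2 + p * (1 - p))

completed_needed = required_sample(1076)      # roughly 283 completed surveys
invited = completed_needed / 0.60             # inflate for a 60% expected response
print(round(completed_needed), math.ceil(invited))   # ~283 needed, ~473 invited
```

Under these assumptions the calculation lands very close to the 473 randomly sampled faculty reported in the study.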

Procedures

The survey was pilot tested by five faculty members who were users of REAL. It was then administered online through a link in an email message with five points of contact (Dillman 2000), including an advance notice email letter several days before the survey, a first email request, and second, third and fourth email requests to all non-responders. All correspondence and tracking of survey returns were managed through the Inquisite® system.

Data analysis

Descriptive statistics were used to determine who used REAL, how much and how it was used, whether training and incentives were offered and whether REAL was perceived to possess the adoption attributes. To specifically test the adoption attributes, the perceptions of adopters were compared with those of non-adopters using the Mann–Whitney U-test. To test whether the innovation attributes, as well as other survey variables, predicted adoption of REAL, logistic regression was used for the categorical outcome data. SPSS software was used for data analysis.
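The analyses were run in SPSS. For readers who want to reproduce the general approach on their own data, a minimal Python sketch follows; the file name, column names and the 0/1 adopter coding are assumptions rather than the study's actual data layout.

```python
# Hypothetical sketch of the survey analyses (the study itself used SPSS).
# Assumes a CSV with a 0/1 'adopter' column and one column per attribute scale score.
import pandas as pd
from scipy.stats import mannwhitneyu
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")   # assumed file and layout
scales = ["voluntariness", "relative_advantage", "compatibility", "image",
          "ease_of_use", "result_demonstrability", "visibility", "trialability"]

# Mann-Whitney U-test: compare adopters and non-adopters on each attribute scale.
for scale in scales:
    adopters = df.loc[df["adopter"] == 1, scale].dropna()
    non_adopters = df.loc[df["adopter"] == 0, scale].dropna()
    u_stat, p_value = mannwhitneyu(adopters, non_adopters, alternative="two-sided")
    print(f"{scale}: U = {u_stat:.1f}, p = {p_value:.3f}")

# Logistic regression: do the eight attribute scores predict adoption?
X = sm.add_constant(df[scales])
logit_model = sm.Logit(df["adopter"], X, missing="drop").fit()
print(logit_model.summary())   # includes McFadden's pseudo R-squared
```

Note that SPSS reports Nagelkerke R-squared and a classification table for such models, whereas the statsmodels summary reports McFadden's pseudo R-squared, so the 'explains %' figures reported later in Table 9 are only loosely comparable to this sketch's output.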

Interviews

Instrument

The telephone interviews were conducted to expand on the reasons for adopting the assessment system, obtain examples of the change strategies used, and evaluate the impact of the assessment system. The interview questions were: (1) How was REAL introduced to you? (2) Do you have a clear understanding of the purpose of REAL? (3) What support did you get to implement REAL? (4) Was collaboration involved in implementing REAL? (5) Why did you decide to use REAL? (6) How have you used REAL? (7) Do you know how REAL has affected learning across the campus? (8) What could have been, or could be, done to improve the introduction of REAL on the campus? (9) What is the history of REAL (director of assessment only)?

Sampling

Purposeful sampling was used to identify experienced users of REAL. Ten interviews were conducted with nine individuals. Three participants were primarily administrators (one being the director of assessment), two were academic deans and four were faculty members.

Procedures

Prior to the interviews, an email letter was sent to the potential participants with the study purpose and the consent form. They were asked to read the form and return an email message indicating their consent to be interviewed. Following the return of the consent, appointments were made for telephone interviews. The interviews were digitally recorded with the Digital Logger® and then transcribed.

Data analysis

The researcher coded the data in Excel, beginning with 11 categories that explain the adoption of innovations. The definitions used for the coding are provided in Table 1. These coding categories were then broken down into 77 secondary-level categories. Following this analysis, themes emerged that were greatly influenced by the lens from the foundational theories.

Human subject protection

Approval for the study was obtained from the Institutional Review Board at the University of Nebraska. The survey consent statement assured the respondents that their answers would be confidential and only released as summaries in which no individual's answers could be identified. Interview participants were assured that their names, titles and years of experience would not be revealed (except the director of assessment), as requested by the university under study.

Table 1. Definitions of interview coding categories.

• Voluntariness: ways in which REAL was either required or not required.
• Vision for REAL: descriptions of what REAL achieved or would achieve for the campus.
• Senior administrative support: types of support provided by senior administrators to encourage adoption of REAL, or lack thereof.
• Collaborative leadership: types of collaboration that occurred among campus leaders to facilitate the adoption of REAL, or did not.
• Staff development: education and training efforts to assist campus personnel in learning the purpose of REAL and how to use it, or lack thereof.
• Visible actions: ways in which REAL and its impact could be observed or could not (includes visibility and result demonstrability).
• Relative advantage: ways in which REAL was better than prior practices or was not.
• Compatibility with campus culture: ways in which REAL was compatible with how faculty work, faculty culture and campus practices or was not.
• Ease of use: ways in which using REAL was easy or hard to do.
• Image: ways in which using REAL improved the image of programs/the campus or did not.

Note: Trialability was eliminated due to the lack of associated data.

Results

Survey

Demographics

The electronic survey yielded a 42% response rate (239/568). The sample was representative of the total campus population, with the proportion of individuals from each of the university's eight colleges about the same as that of the faculty population. The sample was also representative of the entire faculty in terms of gender and academic rank.

Knowledge and use of REAL

Table 2 indicates that REAL was not well known to the faculty. Sixty per cent reported hearing of REAL. Only 35% had ever viewed REAL's documents online or on paper, and only 26% had ever entered data into REAL. Table 3 illustrates that faculty did not often use REAL. The most common response for how often they had used REAL was 'never' (63.7%), with the next most common being one to five times (16.9%). Table 4 illustrates the different ways that faculty had used REAL. The most frequent responses were programme assessment (n = 72), programme review (n = 52), department planning (n = 30) and accreditation (n = 24).

Although the majority (85.6%) of faculty indicated that REAL had not helped their department improve teaching or learning, those who responded positively provided the improvements indicated in Table 5. Department goal setting, course content, student evaluation methods and student learning were the most frequently reported improvements.

Table 2. Knowledge of REAL. Values are % (n); No / Yes / Total.

• Have you heard about REAL? 40% (95) / 60% (143) / 100% (238)
• Have you gone online and viewed REAL? 65% (154) / 35% (84) / 100% (238)
• Have you seen paper plans or reports? 65% (155) / 35% (83) / 100% (238)
• Have you entered information/data? 74% (177) / 26% (61) / 100% (238)

Table 3. Frequency of using REAL. Values are % (n).

• Never: 63.7 (151)
• 1–5 times: 16.9 (40)
• 6–10 times: 5.5 (13)
• 7–11 times: 3.0 (7)
• 12–16 times: 2.1 (5)
• 17–20 times: 2.1 (5)
• More than 20 times: 6.8 (16)
• Total: 100.0 (237)

Training and incentives

Approximately one-quarter of the respondents were either shown how to use REAL (26.6%; n = 63) or were offered training (25.3%; n = 60). Only 17.7% (n = 42) had participated in formal training. Only two faculty members reported receiving any incentive for working with REAL, one attending a conference and the other some release time.

Profile of adopters

Table 6 profiles the characteristics of REAL's adopters (n = 86): they served on assessment committees (83%), entered information into REAL (67%) and were assigned to enter data into REAL (52%). Eighty per cent had used REAL for programme assessment and 61% for programme review. While the most reported frequency of use was again only 1–5 times (46%), 19% had used REAL more than 20 times. They had also been shown how to use REAL (67%). Only 33% believed that REAL had improved teaching and learning.

Table 4. How faculty used REAL (number of responses).

• Program assessment: 72
• Program review: 52
• Department planning: 30
• Accreditation: 24
• Curriculum development: 19
• Classroom teaching: 7
• Research about student learning: 3
• Documentation for promotion and tenure: 0
• Not at all: 153
• Total responses: 360*

*Respondents could provide multiple answers, so the total is larger than the size of the sample.

Table 5. Teaching and learning improvements through REAL (n = 34).

• Department goal setting: 20
• Course content: 17
• Student evaluation methods: 14
• Student learning in courses: 10
• Department student learning results: 10
• Teaching methods: 6
• Facility: 3
• Total: 80*

*Respondents could provide multiple answers, so the total is larger than the number of respondents.

Adoption attributes

This section, based on Moore and Benbasat's (1991) instrument, was very reliable, with all eight scales highly correlated and Cronbach alpha levels >.80. Regarding validity, seven of the eight scales emerged distinctly using principal components analysis with VARIMAX rotation, with items loading highly on only one factor. The one exception was a relative advantage question that loaded more highly on the compatibility scale, a result similar to that of Moore and Benbasat.
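For readers who want to run a comparable reliability check on their own item-level data, here is a minimal sketch of Cronbach's alpha; the item names and example responses are invented for illustration and are not the study's data. The principal components step with VARIMAX rotation is not reproduced here.

```python
# Illustrative sketch (not the author's code): Cronbach's alpha for one scale,
# assuming 'items' is a DataFrame whose columns hold the 1-7 responses to the
# questions making up that scale (e.g. the three compatibility items).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example with made-up responses for a three-item scale
demo = pd.DataFrame({"compat_1": [2, 3, 5, 1, 4],
                     "compat_2": [3, 3, 6, 2, 4],
                     "compat_3": [2, 4, 5, 1, 5]})
print(f"Cronbach's alpha: {cronbach_alpha(demo):.2f}")
```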

The respondents reported REAL was generally lacking in the attributes that support adoption of innovations. As indicated by the scale scores in Table 7, they did NOT think REAL: (1) had a relative advantage over previous practices (mean 3.15 = somewhat disagree), (2) was compatible with their work (mean 2.88 = somewhat disagree), (3) was easy to use (mean 3.27 = disagree), (4) could be tried out prior to using it (mean 3.49 = somewhat disagree), (5) was visible on campus (mean 2.26 = disagree), or (6) was a status symbol on campus (mean 1.97 = disagree). They were 'neutral' (mean = 3.71) as to whether the results of REAL were well demonstrated and 'somewhat agreed' (mean = 4.98) that REAL was a voluntary activity for them.

Moore and Benbasat (1991) compared the scores of adopters and non-adopters and found that the adopters reported significantly stronger agreement on every scale except voluntariness. In this study, the two groups differed in agreement on only three attributes: 'voluntariness', 'visibility' and 'result demonstrability' (Table 8). The adopters agreed that the decision to use REAL was a less voluntary one than the non-adopters did. This can probably be explained by the fact that the majority of adopters were assigned to enter data into REAL (52%), while the non-adopters were not (6%).

Table 6. Profile of REAL adopters and non-adopters. Values are % (n) for adopters (n = 86) and non-adopters (n = 151).

• Heard of REAL: adopters 100% (86); non-adopters 37% (56)
• Viewed REAL online: adopters 88% (76); non-adopters 5% (8)
• Served on assessment committee: adopters 83% (71); non-adopters 33% (50)
• Seen paper plans/reports: adopters 76% (65); non-adopters 11% (17)
• Entered data into REAL: adopters 67% (58); non-adopters 2% (3)
• Assigned to enter plans/results into REAL: adopters 52% (44); non-adopters 6% (9)
• Have used REAL for program assessment: adopters 80% (69); non-adopters 1% (2)
• Have used REAL for program review: adopters 61% (52); non-adopters 0% (0)
• Have used REAL for department planning: adopters 33% (28); non-adopters 0.7% (1)
• Have used REAL for accreditation: adopters 27% (23); non-adopters 0.7% (1)
• Have used REAL for curriculum development: adopters 20% (17); non-adopters 1% (2)
• Used REAL 1–5 times: adopters 46% (40); non-adopters 0% (0)
• Used REAL 6–20 times: adopters 35% (30); non-adopters 0% (0)
• Used REAL more than 20 times: adopters 19% (16); non-adopters 0% (0)
• Shown how to use REAL: adopters 67% (58); non-adopters 3% (5)
• REAL has helped improve teaching and learning: adopters 33% (28); non-adopters 4% (6)

Table 7. Descriptive statistics for perceptions of REAL regarding adoption attributes. Values are mean (SD) on the seven-point scale.(a)

Voluntariness scale score (somewhat agree)(b): 4.98 (2.35)
• My chair and/or Dean does NOT require me to use REAL. 4.92 (2.47)
• REAL is NOT compulsory in my work. 5.42 (2.32)

Relative advantage scale score (somewhat disagree): 3.15 (1.57)
• REAL improves the quality of my work. 3.33 (1.79)
• REAL makes it easier to do my work. 2.82 (1.59)
• REAL improves my work performance. 3.03 (1.72)
• REAL enhances the effectiveness of my work. 3.17 (1.73)
• REAL gives me greater control over my work. 2.70 (1.54)
• REAL helps me plan and improve student learning. 3.38 (1.83)
• REAL provides me with new ideas for assessing and improving student learning. 3.60 (1.90)

Compatibility scale score (somewhat disagree): 2.88 (1.63)
• Using REAL is compatible with all aspects of my work. 2.80 (1.70)
• Using REAL fits well with the way I like to work. 2.98 (1.75)
• REAL fits into my work style. 2.92 (1.71)

Image scale score (disagree): 2.00 (1.15)
• People at MSU who use REAL have more prestige than those who do not. 1.97 (1.20)
• People at MSU who use REAL have a high profile. 2.20 (1.36)
• Using REAL is a status symbol at MSU. 1.75 (1.06)

Ease of use scale score (somewhat disagree): 3.27 (1.40)
• Using REAL is NOT frustrating. (positive version)(c) 2.64 (1.45)
• It is easy to get REAL to do what I want it to do. 3.20 (1.60)
• Learning to use REAL is (would be) easy for me. 4.01 (1.58)
• Overall, I believe that REAL is easy to use. 3.32 (1.74)

Result demonstrability scale score (neutral): 3.71 (1.69)
• I would have no difficulty telling others about the results of using REAL. 3.83 (1.96)
• I believe I could communicate to others the consequences of using REAL. 3.92 (1.98)
• The results of using REAL are apparent to me. 3.12 (1.97)
• I would NOT have difficulty explaining why using REAL may or may not be beneficial. (positive version)(c) 3.87 (2.11)

Visibility scale score (disagree): 2.26 (1.38)
• At MSU, one sees many people using REAL. 2.14 (1.40)
• REAL is very visible at MSU. (positive version)(c) 2.35 (1.50)

Trialability scale score (somewhat disagree): 3.49 (1.58)
• I know where I can go to satisfactorily try out various uses of REAL. 2.70 (1.83)
• Before deciding whether to use any REAL applications, I was able to properly try them out. 3.58 (1.83)
• I was permitted to use REAL on a trial basis long enough to see what it could/can do. 3.21 (1.78)

(a) 7 = strongly agree, 6 = agree, 5 = somewhat agree, 4 = neutral, 3 = somewhat disagree, 2 = disagree and 1 = strongly disagree.
(b) The overall score for each scale was calculated by adding up all the responses for the questions in that scale for each individual.
(c) Respondents answered the question in the 'negative version', which calls for disagreement with the statement. The responses were reversed in these calculations so that the means all reflect the same concept of agreement.
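Notes (b) and (c) describe a small but important computational detail: negatively worded items are reverse-coded before a scale's items are combined. A minimal sketch of that step follows; the item names are invented for illustration and are not the instrument's actual variable names.

```python
# Hypothetical sketch of the scoring described in the table notes: reverse-code the
# negatively worded items on the seven-point scale, then combine a scale's items.
import pandas as pd

REVERSE_CODED = {"ease_1", "result_4", "visibility_2"}   # assumed item names

def score_scale(responses: pd.DataFrame, item_columns: list) -> pd.Series:
    scored = responses[item_columns].copy()
    for col in item_columns:
        if col in REVERSE_CODED:
            scored[col] = 8 - scored[col]   # maps 7 -> 1 and 1 -> 7
    # The table note describes summing the items; dividing by the item count
    # (i.e. taking the mean, as done here) keeps the score on the original 1-7 scale.
    return scored.mean(axis=1)

# Example call: ease_of_use = score_scale(df, ["ease_1", "ease_2", "ease_3", "ease_4"])
```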

The adopters also agreed REAL was more visible on campus and that they were more knowledgeable about the results of REAL.

Prediction of adoption

Table 9 illustrates the factors that significantly predicted adoption of REAL. Of the adoption attributes, 'voluntariness' was the most consistent predictor of REAL use, with 'compatibility' significant for one of the outcome variables. When all eight attribute scale scores were used to predict 'viewed online', the logistic regression model was significant, although only 'voluntariness' was identified as a significant variable (p = .035). With 'entered data' as the outcome variable, the eight-scale model was again significant (p = .001), with both 'voluntariness' (p = .002) and 'compatibility' (p = .014) significant. With 'used or not' as the outcome, voluntariness was again a significant variable (p = .024), though the overall model was not. Testing another model with logistic regression, 'serving on an assessment committee' and 'being shown how to use REAL' significantly predicted use for all the outcome variables (p < .001), and both variables were significant (p < .001).

What would increase or improve use of REAL?

For the above open-ended question, Table 10 illustrates the coding categories used and the responses. The need for staff development was the most frequently reported category, closely followed by incompatibility of REAL with faculty work, lack of relative advantage, need for senior administrative support and ease of use problems with REAL.

The primary issues in compatibility were that REAL did not relate to the work of faculty and they did not have the time to spend on it.

Table 8. Comparison between scale scores of adopters (n = 86) and non-adopters (n = 151). Values are mean (SD) with the verbal anchor of the mean, followed by the Mann–Whitney U-test p value.*

• Voluntariness: adopters 3.56 (2.40) 'neutral'; non-adopters 6.26 (1.37) 'agree'; p < .001**
• Relative advantage: adopters 2.41 (1.23) 'disagree'; non-adopters 2.53 (1.23) 'somewhat disagree'; p = .727
• Compatibility: adopters 2.75 (1.68) 'somewhat disagree'; non-adopters 3.18 (1.47) 'somewhat disagree'; p = .159
• Image: adopters 2.03 (1.19) 'disagree'; non-adopters 1.97 (1.10) 'disagree'; p = .800
• Trialability: adopters 3.64 (1.59) 'neutral'; non-adopters 3.09 (1.51) 'somewhat disagree'; p = .169
• Visibility: adopters 2.78 (1.53) 'somewhat disagree'; non-adopters 1.81 (1.06) 'disagree'; p < .001**
• Result demonstrability: adopters 4.01 (1.70) 'neutral'; non-adopters 3.09 (1.48) 'somewhat disagree'; p = .009**
• Ease of use: adopters 3.23 (1.45) 'somewhat disagree'; non-adopters 3.46 (1.29) 'somewhat disagree'; p = .398

*Due to multiple testing, the more stringent probability value of <.01 was used rather than <.05.
**Significant differences.

Table 9. Models that predicted using REAL with logistic regression, by outcome variable.

Model 1: eight adoption attributes
• Viewed online: model significant, p = .001; explains 58%; 85.7% correctly classified. Significant variable: voluntariness (p = .035).
• Entered data: model significant, p = .001; explains 51%; 78.6% correctly classified. Significant variables: voluntariness (p = .002) and compatibility (p = .014).
• Used or not: model not significant; voluntariness was still a significant variable (p = .024).

Model 2: serving on an assessment committee/shown how to use REAL
• Viewed online: model significant, p < .001; explains 60%; 86.1% correctly classified.
• Entered data: model significant, p < .001; explains 51%; 84.8% correctly classified.
• Used or not: model significant, p < .001; explains 62%; 86.1% correctly classified.
• Both variables, serving on an assessment committee and being shown how to use REAL, were significant (p < .001) for all three outcomes.

Table 10. Results from survey question 'What would increase or improve the use of REAL?' (n = 58 respondents). Category (frequency) with representative responses.

• Need for staff development (43): need more information, more training or to know what REAL was.
• Incompatible with faculty work (25): needs to relate to teaching/research; no time/overburdened; need for integration with other processes; too critical of assessment plans.
• Lack of relative advantage (25): no advantage; need to know advantage; designed for administrators, not faculty.
• Need for senior administrative support (20): need incentives, support, REAL work to have value, communication about REAL.
• Ease of use problems (15): do not like REAL interface; too complicated; restricted access; need support.
• Voluntariness (13): already required; would need to be required.
• Trialability (6): need to try out the system.
• Visible actions (5): would adopt if peers thought it mattered or the results mattered.
• Flexible vision (1): storage of results over time for longer-term reviews.
• Total responses: 153

As one professor reported:

If my department head forced me to be the assessment person for the department, I would have to learn how to use it. Otherwise, this programme and its utility are not even on my radar screen…. This mandatory programme does not affect the way we do things at all. We just do it because the University administration forces us to. …We do not actually use the output from REAL for anything except pleasing the administrators. They can say to the President and even legislators that we 'have an ongoing process for assessing the quality of what we do'…

Another professor addressed the compatibility issue as well as lack of communication about REAL:

Keep in mind that I am an assistant professor chasing tenure, keeping up with coursework, and participating in university service. I also have a family. It really becomes difficult to take on new programmes/software/initiatives unless we are forced to, and then it only begrudgingly happens. It's just the nature of the beast. Something like REAL is just one of dozens of initiatives swirling around in the storm of issues we have to deal with. It probably seems profoundly relevant to those promoting it, but I think they should look closely at the actual working experience of faculty before making assumptions about how it will be adopted.

Another incompatibility was that REAL was not valued like research. As one respondent reported, 'Teaching and advising outcomes are not highly rewarded in the university structure, so there is little incentive to adopt new techniques like REAL. In fact, emphasis on research and grants creates a disincentive for using REAL'.

Senior administrative support comments were about the need for incentives, for other types of support such as leadership and communication, and for REAL work to be valued by the leadership. As one faculty member said, I would use it 'if it counted in some way'. Another said he/she would use it 'if I really thought that anyone in this university was paying any attention to the results, other than the REAL director, who really tries his best'.

Interviews

The nine interview participants who had developed and/or used REAL extensively were more positive than the survey respondents about REAL. They believed REAL was having an impact on campus programmes, although they were not able to identify examples within their own programmes. They were frustrated by a lack of buy-in within their colleges, by both faculty and deans. Although they found it difficult to link REAL with greater student learning, they presented 16 different ways that they had used REAL (Table 11). Accreditation, programme review and organisational learning were the most commonly reported. The director of assessment unequivocally stated that he could measure the impact of REAL by tracking the number of improvements in assessment plans as well as the number of performance targets that had been raised over time. Five themes emerged from these interviews: (1) the dream of the director of assessment, (2) incompatibility with campus culture, (3) not thinking differently, (4) problems with REAL infrastructure, and (5) the need for a mandate.

Theme 1. The dream

The dream was the director of assessment's vision for REAL and how it could change the university. One participant dubbed REAL the director's 'dream child'.

Several participants made reference to his vision. According to the director, REAL would provide evidence of campus learning by capturing student performance with numerical rubrics. This evidence would be used to make learning 'transparent' to the public and to facilitate peer review and accountability. REAL, as an organisational learning system, would facilitate: (1) systematic planning and assessment, (2) collective responsibility for improving the curriculum and the university, and (3) campus research and collaboration. He believed that having an organisational learning system like REAL was vital to the future of the institution:

I cannot see how a major research university can continue to compete internationally and nationally as the search for quality continues to accelerate. Forget state accountability and regional accreditation. We are now in a global higher education market. If – does not systematically improve itself, if departments do not systematically increase their visibility in the discipline, other peer institutions will.

The director brought his dream for REAL to the campus and sold it to the administration. He clearly conveyed the dream to the interview participants, who were significant players in the development of REAL. They understood the importance of having an assessment system in the current environment and believed REAL was the vehicle to make it happen. The responsibility for REAL primarily rested on the shoulders of the director. As one participant stated, he carried 'the banner for assessment'. He created the infrastructure, including the software design and the campus-wide assessment committee of faculty and administrators, and supervised most of the training.

Table 11. Uses for REAL reported by interview participants (n = 9), with the number of individuals reporting each use.

• Accreditation (5)
• Program review (5)
• Organizational learning (3): improving faculty and student learning, curriculum revision, changing culture so faculty collectively own the curriculum, faculty research projects and measuring the impact of research and service
• For administrators (2)
• Assessment plans and reports (2)
• Marketing on public site (2)
• Reporting to agencies (2)
• Peer review of assessment plans by AAC (1)
• Best practices (list of examples of good assessment) (1)
• Professional development for faculty (1)
• Classification system for program quality (1)
• Transparency for public (1)
• Teaching tool (1)
• Strategic planning (1)
• Development of policy (1)
• Model for other institutions (1)

Theme 2. Incompatibility with campus culture

One participant described the role assessment should play in the campus culture. Systematic assessment:

… is something that needs to become engrained into the culture of the campus and the organisation, that we expect it's a part of … doing business, that we are evaluating what we do and have metrics or standards or target goals and some means of assessing whether we're accomplishing our purposes and our mission.

Unfortunately, participants indicated that a different culture existed that was not compatible with systematic assessment.

Participants indicated trends come and go on campus, and REAL was viewed as another educational fad. They believed REAL would fade away with a change in administration or after a successful accreditation site visit. Another culture issue was that REAL work did not matter to the faculty. At best, REAL's work was viewed as a nuisance.

I think faculty fundamentally want to do the right thing. They want to be accountable but they don't want to do the hard work of figuring out all the … dreary details … that are needed in order to really be accountable.

At its worst, REAL was viewed as wasting time that should be spent on scholarly activity. 'All this is doing is forcing us to … do bureaucratic paperwork instead of being in the trenches teaching or performing or doing whatever we're actually supposed to be doing'. As another stated,

When it comes to assessment, … it's the lowest on the totem pole. Maybe it shouldn't be that way, but I think it's because it's not valued. If it's not valued in the rewards system, then people are going to put their effort where they know they can get rewards.

Another aspect of faculty culture was academic freedom. Faculty felt they should be able to decide how to teach students in their classrooms without interference, so having to evaluate differently and report results conflicted with this belief. They also believed they were already doing a good job of evaluating student learning. One interviewee said, 'The whole thing to me seems like it's superimposed. It's an extra layer of stuff that seems superfluous, that we already do in some way'. They also viewed assessment as an evaluation of their own teaching, which was 'an invasion of their rights'. Another participant agreed that 'there's a lot of faculty out there … that think it's a threatening process, because somehow it's going to link to their evaluation'.

Using REAL to systematically improve learning would require collective action by faculty. This runs counter to the culture of working in discipline silos. The director said:

I think the challenge for developing faculty culture is to get them to begin thinking about their department as an organisation, not themselves alone. Not how many articles they can publish to get tenure, but how do they fit into an organisation or a community and how can they perform better as a community and how can they research how well they're doing and systematically improve ….

Theme 3. Not thinking differently

According to Eckel and Kezar (2003), a campus change will result in new ways of thinking.

In this case, the faculty would value the role of REAL in improving learning and would actively involve themselves in assessment dialogue and activities. However, the participants did not believe faculty viewed assessment as a means for improvement or that there was any disadvantage to not engaging in assessment. Assessment was something they did for the administration, not for improvement. 'They identify it as something on the checklist the administrators expect us to do, so we'll make enough effort to get an "atta boy" that we've got it accomplished and then we're on to doing our own thing'.

Theme 4. Problems with REAL infrastructure

Although REAL's electronic medium was viewed as advantageous over a paper system, learning to use it was repeatedly described as overwhelming and frustrating. One interviewee described trying to get his dean to use REAL on multiple occasions, but she disliked the appearance of the screens so much that she would not use it. One participant described her frustration with using REAL for a programme review:

Just looking at it as an average liberal arts kind of professor, it looks like a lot of data crunching and a lot of bureaucratese kind of thing … I got to the point towards the end of this program review thing where I would just be sending him [director of assessment] stuff every day. I would say, 'Would you please translate this into everyday English?'

Getting access to REAL was also a problem. Other than the public site where example assessment plans could be viewed by anyone, a password was needed to access REAL materials. The password was only available from the director and the assessment committee member of a given college. Even with this password, faculty could only view plans from their own college.

Theme 5. Mandate for REAL

The participants clearly indicated that REAL was mandated, whether for accreditation, programme review, strategic planning or other reporting requirements. Rogers (2003) posits that adoption happens most quickly when it is mandated, but individuals can find ways to circumvent the innovation. This appears to have been the case at this university. One hundred and sixty-nine assessment plans were posted on REAL in a three-month period when accreditation was looming. However, the process languished after a successful site visit. The administration was continuing to implement policies that mandated using REAL, particularly programme review. According to the director, this was the strategy they were going to use to ensure that REAL was widely adopted.

Discussion

This study was conducted to identify factors that would enhance adoption and sustainment of a centralised e-assessment system on a university campus. A low rate of adoption was reported by the faculty, with just a little over one-third ever using REAL. The respondents also reported that REAL generally lacked the attributes that support adoption. Of the eight attributes measured, only three demonstrated stronger agreement by the adopters: voluntariness, visibility and result demonstrability. Adopters were usually assigned to enter information into REAL.

Adopters also believed that REAL was more visible on campus and that they were more knowledgeable about the results of REAL. As illustrated in Figure 1, four variables significantly predicted using REAL: (1) feeling required to use it, (2) whether it was compatible with how they liked to work, (3) serving on an assessment committee, and (4) being shown how to use REAL.

Additional factors from the study illustrated the incompatibility of REAL with faculty culture that could possibly explain the low rate of adoption. These included:

● REAL not viewed as relevant to teaching or a tool for improvement;
● REAL work not valued by the academy like research;
● faculty too busy to engage in REAL work;
● faculty uncomfortable with being evaluated;
● faculty believed they were already doing a good job of assessment; and
● faculty did not feel responsible for departmental, college or campus assessment efforts.

Another obstacle to REAL adoption was the difficulty or complexity of using the electronic system. Even experienced users described it as a frustrating or even overwhelming exercise. They had problems accessing the system, as well as entering data and generating reports. They even complained about the negative appearance of the screens and how that kept administrators from using it. The adopters of REAL, who were generally required to use REAL and received training, were able to overcome the difficulties of the software.

Faculty reported that REAL had little impact on learning and campus practices. Even among the adopters, only a small proportion believed that REAL had helped improve teaching or learning. Interview participants were more positive and believed that: (1) improvements were happening, (2) REAL was not 'going to go away', and (3) REAL would become part of the culture. The director of assessment had evidence that assessment plans and practices had greatly improved. Also, REAL had been integrated into campus policies, which assured the continued use of the system.

Figure 1. Significant predictors of adopting REAL (N = 86 users and 151 non-users).

These findings align with the work of other e-assessment system researchers. They generally report a slower rate of adoption than expected. Bull et al. (2002) stressed the importance of training for adoption. Both Warburton (2008) and Bull identified difficulty with using e-systems as an impediment to adoption. Carnevale (2007), Warburton (2008), Whitelock (2006) and Bull (2002, 2001) identified culture factors as primary impediments to adoption, with generally greater influence than infrastructure. These culture factors included assessment research not being valued by the academy and faculty not having enough time for assessment. Regarding the effort to integrate assessment with campus policy and practices, both Warburton and Whitelock emphasised this concept. Warburton identified 'embeddedness' as a primary metric for successful adoption of e-assessment systems.

One important area for future research would be the development of an instrument that better explains adoption of assessment systems in higher education. In previous studies, Moore and Benbasat's (1991) instrument predicted adoption of personal computers in the workplace, bedside terminals in hospitals (Hebert and Benbasat 1994), a smart card-based electronic payment system (Plouffe, Hulland, and Vandenbosch 2001) and e-government initiatives (Carter and Belanger 2005). If such an instrument could be developed for assessment systems, institutions could use pilot data to plan implementation or determine why a system was not being widely adopted.

Another area for research would be identifying ways to improve the compatibility of centralised assessment systems with faculty culture. Assessment needs to be understood and valued by college professors in order to get them beyond the lip service they currently pay to it. Doctoral education and faculty development efforts could provide a more thorough grounding in assessment theory and practice so that professors understand the rationale for assessment, what good assessment practices are and how to incorporate them into their teaching.

Pathway for changing campus culture

This study suggests that certain steps should be followed to implement centralised assessment systems. The vision would not only be widespread use of the system but a change in campus thinking so that assessment is viewed as valuable and 'how we do things around here'. Becoming a learning organisation would also be in this vision, where assessment 'data are routinely talked about and acted upon by a community ever dedicated to improving its own performance' (Barr and Tagg 1995, 20).

Choose the leadership

One leader who has sole responsibility for assessment is necessary. Other senior leaders need to be on board to provide focus on assessment so that it is viewed as valuable and a high priority in faculty work.

Define the dream

A vision should be developed for the system that shapes the assessment process and the infrastructure of the electronic system. Again, this vision will provide continued focus on assessment work.

Start with pilot projects

Use the positive results of pilot projects to get faculty buy-in prior to implementation. Have a few volunteers conduct small projects and demonstrate that planning and assessment with the system results in improved student or departmental learning.

Educate faculty with hands-on training

Hands-on training will help faculty understand the importance of using the system, what they and the institution stand to gain and exactly how to use it.

Communicate about the system, especially results and impact

To value assessment, faculty members need to know the vision and see that it is being achieved. Since improvement happens over time, they need to be regularly informed about how the institution and its programmes are changing due to assessment.

Monitor the changes

Continuously monitoring the impact of an assessment system is vital for collecting evidence of campus improvement. How has the assessment system affected the curricula, teaching practices, campus policies, budgets and institutional structures? How has it affected the ways that campus personnel converse and interact and the language they use? How has it affected relationships with university stakeholders? This is the type of dialogue in which a learning organisation engages, and the answers would provide a comprehensive picture of how and to what extent the assessment system has changed the institution and its culture.

Supportive campus practices for changing the culture

This study also suggests campus practices to support the implementation of a centralised assessment system.

● Mandate faculty engagement through REAL data collection, reporting and discussions in departmental planning sessions.
● Require faculty service on campus assessment committees.
● Integrate the assessment system into campus policies and practices, such as strategic planning, programme review and annual evaluation of colleges and individuals.
● Make the software accessible and user-friendly.
● Use collaborative leadership through peer review of assessment plans, inter-department and college assessment research projects and involvement of upper-level administrators.
● Give research credit for assessment work so it is viewed as scholarly activity.
● Develop awards and incentives such as the faculty member with the 'most important student learning project', assessment grants, release time, travel and special fellowships.
● Give the OK to not be perfect so faculty will take the risk of trying something new.

● Make assessment dialogues a routine in the life of professors. Conversations in faculty gatherings should be about assessment, how it can improve the institution, especially the curriculum, and how evidence can be collected to give direction for improvement.

Conclusions

REAL was adopted at the university studied by a small proportion of faculty members who were required to use it, received hands-on training and served on assessment committees. A major contributor to this lack of adoption was the incompatibility of the centralised e-assessment system with the faculty culture of autonomy. Faculty members preferred spending their time in research and teaching activities that were most valued by the academy. This culture flies in the face of the cooperation that is needed for a centralised assessment system. On this campus, the change strategies and support systems to change the culture were not sufficiently in place.

The culture clash between assessment mandates and faculty autonomy is not unique to this campus. Other campuses experience similar resistance to assessment. However, federal agencies, state legislatures and accrediting bodies will continue to demand evidence of student learning, which will require the infrastructure of a centralised e-assessment system. The recommendations offered in this report may assist other institutions in the successful adoption of centralised e-assessment systems.

Notes on contributor

Ann McCann is currently the assessment specialist on two National Institute of Dental and Craniofacial Research grants, as well as one from the American Dental Education Association. She has been the director of planning and assessment at the Texas A&M Health Science Center Baylor College of Dentistry since 1994. She was formerly chair of the Dental Hygiene Department at the University of Detroit Mercy. In 1992 and 1996 she was awarded federal grants to create assessment methods and instruments for the health professions.

References

Barr, R.B., and J. Tagg. 1995. From teaching to learning – A new paradigm for undergraduate education. Change 27, no. 6: 13–19.

Bull, J. 2001. TLTP85 Implementation and evaluation of computer-assisted assessment: Final report. http://www.caacentre.ac.uk/dldocs/final_report.pdf (accessed February 27, 2009).

Bull, J., G. Conole, H. Davis, S. White, M. Danson, and N. Sclater. 2002. Rethinking assessment through learning technologies. http://www.ascilite.org.au/conferences/auckland02/proceedings/papers/056.pdf (accessed February 27, 2009).

Carnevale, D. 2007. New software aids in assessment. Chronicle of Higher Education 53, no. 30: 1–4. http://chronicle.com/weekly/v53/130/30a0371.htm (accessed March 30, 2007).

Carter, L., and F. Belanger. 2005. The influence of perceived characteristics of innovating on e-government adoption [Electronic version]. EJEG (Electronic Journal of e-Government) 3: 1–13. http://www.ejeg.com/volume-2/volume2-issue-1/v2-i1-art2.htm (accessed January 23, 2006).

Creswell, J.W. 2003. Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks: Sage.

Dillman, D.A. 2000. Mail and internet surveys: The tailored design method. New York: John Wiley & Sons.

Eckel, P.D., and A. Kezar. 2003. Taking the reins: Institutional transformation in higher education. Westport, CT: Praeger.

Hebert, M., and I. Benbasat. 1994. Factors influencing the adoption of 'bedside terminals' by nursing staff: A field study of the diffusion of innovations in hospitals. Hospital and Health Services Administration 39, no. 3: 369–83.

Kezar, A., and P.D. Eckel. 2002a. Examining the institutional transformation process: The importance of sensemaking, interrelated strategies, and balance. Research in Higher Education 43, no. 3: 295–328.

Kezar, A., and P.D. Eckel. 2002b. The effect of institutional culture on change strategies in higher education. Journal of Higher Education 73, no. 4: 435–60.

Moore, G.C., and I. Benbasat. 1991. Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research 2, no. 3: 192–222.

Plouffe, C.R., J.S. Hulland, and M. Vandenbosch. 2001. Research report: Richness versus parsimony in modeling technology adoption decisions – understanding merchant adoption of a smart card-based payment system. Information Systems Research 12, no. 2: 208–22.

Rogers, E.M. 2003. Diffusion of innovations. 5th ed. New York: Free Press.

Warburton, B. 2008. Quick win or slow burn? Modelling UK HE CAA uptake. Assessment & Evaluation in Higher Education. http://www.informaworld.com/smpp/content∼content=a793501863∼db=all (accessed February 27, 2009).

Whitelock, D. 2006. Report for JISC roadmap for e-assessment. http://www.jisc.ac.uk/uploaded_documents/RM%20Final%20report%20Vfv.doc (accessed February 27, 2009).
