Transcript

International Journal of Medical Informatics (2005) 74, 908—916

Evaluation frameworks for nursing informatics

Leanne M. Currie a,b,∗

a School of Nursing, Columbia University, New York, NY, USA
b New York Presbyterian Hospital, New York, NY, USA

KEYWORDS: Informatics; Evaluation; Informatics research frameworks

Summary  Rigorous evaluation of informatics applications in healthcare is important so that the impact of such systems can be understood. Although many quantitative methods have been employed to evaluate informatics systems, there is a growing trend to utilize qualitative methods during both the formative and summative phases of research. Several evaluation frameworks that have been proposed highlight the need for both qualitative and quantitative evaluation methods in the development and post-implementation phases of informatics systems development. Recommendations regarding the timing, type and use of qualitative methods differ for each of these frameworks. This paper examines the strengths and weaknesses of the published evaluation frameworks and enumerates the qualitative research methods in use in each of these frameworks.

© 2005 Elsevier Ireland Ltd. All rights reserved.

1. Informatics research methods

Rigorous evaluation of informatics applications is increasingly important in the current healthcare environment. Effective evaluation can guide healthcare Information Technology (IT) decision-making related to system development and implementation. In addition, effective iterative evaluation during the development process has the potential to avert system failure and thus save human and financial investments. Toward the goal of identifying appropriate selection and use of evaluation methods, researchers have put forward several frameworks to guide the evaluation process [1-5,8-14]. Each of these frameworks offers a slightly different perspective from which to perform informatics evaluation.

∗ Present address: Mail Code 6, 630 West 168th Street, New York, NY 10032, USA.

E-mail address: [email protected].

Research frameworks published in the early 1990s describe the timing for quantitative and qualitative methods to be applied within the system development life cycle (SDLC), with a primary focus on quantitative methods. More recently, however, evaluation frameworks accentuate the need for qualitative research to be a critical portion of, or even the only method used in, the evaluation process.

A recent review of informatics studies by Ammenwerth and Keizer [15] describes the maturation of healthcare informatics research from small individual studies to a more mature state manifested by an increase in the number of systematic reviews and publication in informatics domain-specific journals. The increased number of systematic reviews demonstrates that there is sufficient research in certain areas to conduct such reviews. The increase in publications in informatics domain-specific journals indicates that the field is sufficiently large to warrant several international informatics publications. The Ammenwerth and Keizer review also noted an increase in the numbers of qualitative studies, particularly those examining organizational or socio-cultural factors [15], an indication of a shift in focus of informatics research.

Other work of note by Ammenwerth et al. reports on a European workshop geared toward identification of strategies to improve health information system evaluation. This workshop culminated in the Declaration of Innsbruck in 2003 [16], which provides definitions of the two key components of informatics research, the system and evaluation. The authors define the system as "a set of components..., together with their attributes and relationships, which as a whole is needed to accomplish an objective" [16]. The second key component, evaluation, is defined as "the act of measuring...the properties of a health information system..., the result of which informs a decision to be made concerning that system in a specific context" [16]. In addition, the Declaration of Innsbruck lists twelve recommendations, including the need for research to be grounded in scientific theory, for guidelines for good evaluation practice to be available, and for evaluation methods to be part of informatics curricula [16]. These valuable recommendations reflect the general interest in advancing methods in informatics research and demonstrate the maturation of the field of informatics.

2. Informatics evaluation

Friedman and Wyatt define evaluation as the study of the "impact or effects [of software] or [its] effects on users and the wider world" [3]. Thus, evaluation frameworks need to describe methodologies that capture the processes integral to applications, the users and the world in which the users function. Evaluation can be characterized as formative or summative. Formative evaluation is carried out during the development phase of the research, whereas summative evaluation is carried out upon completion of a project [17]. During formative evaluation, processes are evaluated in an iterative manner which provides feedback for improvements before the final product is put forth. During summative research, the impact or the outcomes associated with the use of the system are examined [17]. These terms tend to be used in the context of program evaluation and are thus applicable to computer system evaluation because informatics evaluation takes place both during and after system development.

The randomized controlled trial (RCT) is the current gold standard for informatics and biomedical evaluation. The RCT process demonstrates a high level of scientific rigor because it maintains stringent objectivity and controls for extraneous effects. The RCT process involves the explicit definition of quantitative variables and random assignment of groups. An intervention is delivered to the experimental group(s), which provides for rigorous comparison between control and experimental groups. As with summative evaluation, the RCT process is typically carried out at the conclusion of the study [13]. Critics of the RCT for informatics applications claim that the iterative development process requires evaluators to perform continuous evaluation during the development process rather than post hoc. Grant et al. [13] claim that the RCT is suitable for the evaluation of a single intervention, such as a drug, only because a drug does not change during the evaluation process. They argue that software system development is by nature an evolutionary process which demands an alternative to the RCT evaluation methodology [6,13]. An increase in the use of qualitative methods for informatics evaluation reflects the inability of the RCT to capture information required for effective system development during the formative development process. Instead, an RCT can and should be used for summative evaluation.

3. Qualitative evaluation methods

Qualitative methods have been effectively used for clinical informatics evaluation [2,3]. The move to qualitative methods was largely in response to early informatics work which demonstrated that system failures were related to developers' limited understanding of human factors and system processes [18]. The phenomenon of system failure is not unique to healthcare informatics and has since spawned the use of qualitative methods throughout the software development industry. In fact, iterative processes and user-centered design are now industry standards, with ethnographic processes being one of the primary tools for needs analysis [19]. User-centered design refers to "a multidisciplinary design approach based on active involvement of the user to improve the understanding of user and task requirements" [20]. According to Germain, ethnography "is the systematic description, analysis and interpretation of cultures of subcultural groups" [21]. Qualitative methods, such as ethnography and focus groups, have the ability to capture experiences, emotions, and human interaction processes through the inductive collection of and subsequent rigorous analysis of information from the individuals' perspective [22]. As such, these methods are appropriate for use during the formative stage of system development.

Friedman and Wyatt, in their seminal text Evaluation Methods in Medical Informatics, distinguish between 'objectivist' and 'subjectivist' approaches to evaluation by using an archetypal framework first put forth by House in 1980 [3,23]. In their work they use the term subjectivist to mean 'qualitative.' However, subjectivism is also a philosophical theory that states that knowledge is generated as a subjective experience without any connection to reality or objectivity [24]. As such, the term subjectivism cannot be equated with the term 'qualitative research' which, though emanating from a subjective perspective, is not 'subjectivism' per se. Moehr criticizes Friedman and Wyatt's apologetic introduction of 'subjectivist' research methodologies in their text, as they describe subjectivist methods as a "different set of premises...that may be...discomforting to some readers" [25]. These cautionary words emphasize the perspective that qualitative research methods are inferior research methods. This perspective has been perpetuated because of the value placed on the RCT, and as such the perception that qualitative research is inferior because it lacks rigor. However, it is increasingly evident that this is not the case.

3.1. Reliability and validity in informatics evaluation

Research is considered rigorous if it is both valid and reliable. To quantitative researchers, reliability refers to "the extent to which the results of measurement are consistent" [26]. Sandelowski points out that the quest for reliability can hinder qualitative validity. She posits that by assuming that "reality is...singular and tangible," rather than "multiple and constructed," the researcher might obscure validity [27]. In other words, if one adheres to rigid measurements, the inherent limitations of predefined concepts might obscure the truth.

Qualitative researchers, aware of the perceived inferiority of their methods, have developed strict processes to ensure rigor. Lincoln and Guba put forth qualitative concepts to reflect the rigor of the qualitative process. These terms include credibility, transferability, dependability and confirmability. Table 1 lists the concepts and the methodological processes associated with maintaining qualitative rigor. These processes include maintaining accurate audit trails, frequent revisits with the subjects to ensure validity, and methodological triangulation [28]. Although these processes are neither objective nor quantifiable, they are arguably equally rigorous [29]. Despite the potential distinction between the words 'subjectivist' and 'qualitative,' and the continued use of the term 'subjectivist' by Wyatt and Wyatt [30], this paper will refer to all qualitative and psychological methods as qualitative methods.

Table 1  Terms used to describe rigor in qualitative paradigms

Qualitative concept | Process to maintain rigor | Quantitative concept
Credibility | Vivid and faithful description of the phenomenon; prolonged engagement; persistent observation; peer debriefing; triangulation; member checks | Internal validity
Transferability | How well the findings fit into another context; theoretical purposive sampling; thick, descriptive data | External validity
Dependability | Ability of another researcher to follow the decision trail at every stage of analysis; audit trail; external audit | Reliability
Confirmability | Data, interpretational confirmability; triangulation; practicing reflexivity | Objectivity

According to Waltz and associates, reliability can exist without validity, but validity cannot exist without reliability. "Validity refers to whether or not a device or method...measures what it purports to measure" [26]. In other words, validity is the proximity to the 'truth' of a measurement. Qualitative methodologies, though historically considered inferior to quantitative methodologies, are now being seen as methods that might generate a closer approximation of the 'truth.' Qualitative processes bring researchers closer to the truth of a domain via the subsequent rich and detailed analysis of the human experience [27].
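The reliability/validity distinction above can be made concrete with a small numerical illustration. The following sketch is not from the original article; the instrument names and readings are invented for demonstration only. It shows how a measurement process can be highly consistent (reliable) yet systematically off target (not valid), which is the asymmetry Waltz and associates describe.

```python
import statistics

TRUE_VALUE = 10.0  # the hypothetical 'truth' being measured

# Hypothetical instrument A: reliable but not valid.
# Readings cluster tightly, but around the wrong value.
instrument_a = [12.1, 12.0, 11.9, 12.0, 12.1]

# Hypothetical instrument B: not reliable, so it cannot be valid either.
# Readings scatter widely, even though their mean happens to sit near the truth.
instrument_b = [6.0, 14.5, 9.0, 13.0, 7.5]

for name, readings in [("A (precise, biased)", instrument_a),
                       ("B (scattered)", instrument_b)]:
    spread = statistics.stdev(readings)                  # low spread ~ high reliability
    bias = abs(statistics.mean(readings) - TRUE_VALUE)   # low bias ~ validity on average
    print(f"Instrument {name}: spread={spread:.2f}, distance from truth={bias:.2f}")
```

Instrument A illustrates reliability without validity; instrument B illustrates why validity presupposes reliability, since any single reading from a widely scattered instrument may fall far from the truth even when its long-run average does not.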

4. Evaluation frameworks for informatics

The evaluation frameworks reviewed in this paper were developed in an attempt to provide rigorous evaluation processes that can be tailored to the unique needs of informatics, which requires evaluation both during and after system development.

4.1. Identification of evaluation frameworks

The aim of this review was to identify and critique evaluation frameworks relevant to clinical informatics research and to explore the types of and recommended timing for qualitative methods during the evaluation process. The relative increase in research that uses qualitative research methods reflects the need to understand the impact of clinical information systems from a human perspective. As such, the frameworks were deemed applicable if they described an evaluation framework or a series of evaluation methods that might be used as a framework for the evaluation of clinical informatics applications for use by humans. Towards the identification of published frameworks, healthcare, social science and computer science databases including PubMed, CINAHL, PsychInfo, Engineering Village 2 and the ACM digital library were searched for the terms 'informatics evaluation', 'computer evaluation', 'evaluation frameworks', and 'informatics evaluation frameworks.' The reference lists from relevant articles were then searched for other relevant articles not identified via the database search. In addition, informatics text books were examined for evaluation frameworks. Articles, chapters or books were excluded if they pertained solely to informatics applications with which a human did not interact, or if the evaluation methods did not describe a formal framework. Twelve frameworks that fit these criteria were identified.

The evaluation frameworks identified highlight the need for the use of qualitative methods in clinical informatics either in conjunction with or in the place of traditional quantitative methods [25]. The qualitative methods described in this paper target the identification and evaluation of human processes integral to clinical informatics, such as man-machine interaction, organizational influences on users, and domain- and context-specific uses of technology. Recommendations regarding the timing and use of qualitative methods differ for each of these frameworks. The qualitative research methods described in the following evaluation frameworks include both the traditional social science methods familiar to nurses, such as ethnography, as well as methods derived from psychological research, such as cognitive task analysis.

4.2. Critique of evaluation frameworks

Qualitative methods are by definition user-, context-, and functionality-centric, thus systems that are developed using qualitative methods effectively reflect the needs of users; systems that are developed using the appropriate qualitative methods may be more successful than those that do not reflect the user's perspective [31]. The identified frameworks were analyzed based on the following criteria: (1) user-centricity; (2) context-centricity; (3) functionality-centricity; (4) recognition of the system development process; and (5) presence or absence of a theoretical foundation [3,16,25]. User-centricity refers to the extent to which the evaluation framework considers the end-user's perspective. Context-centricity refers to the extent to which the evaluation framework can capture the situation in which the evaluation is being carried out. Functionality-centricity is related to the degree to which the framework can measure the functions that are in use by the end-user. Frameworks that acknowledge the system development process may be more suitable for use than those that do not. Finally, frameworks that are based on a theoretical foundation are likely to be more rigorous than frameworks that are not based on sound theory. Theory-based methods might advance the science in a more systematic manner, as put forth in the recommendations contained in the Declaration of Innsbruck [16]. Table 2 identifies the presence or absence of each of these criteria for each of the frameworks.

4.3. Evaluation frameworks

The following 12 frameworks have been organized into four general groups as follows: (1) generic; (2) behavioral-focused; (3) social-organizational focused; (4) system development life cycle focused. In the following sections each framework is critiqued based on the five criteria listed above.

4.3.1. Generic frameworks

Three frameworks were identified as generic frameworks, that is, frameworks that are based on general research principles with no indication of the use of behavioral, social or software development life cycle principles as a guide.

Table 2  Characteristics of frameworks

Author(s) | Framework | Context-centric | User-centric | Functionality-centric | Recognizes SDLC | Theory based?

Generic evaluation frameworks:
Stead et al. [2] | Development evaluation matrix | Yes | Yes | Yes | No | No
Friedman and Wyatt [3] | House's eight approaches to research | No | Yes | Yes | Yes | No
Shaw [4] | CHEATS | Yes | Yes | Yes | Yes | No

Frameworks that focus on human behavior:
Patel et al. [7] | Cognitive psychology | Yes | Yes | Yes | Yes | Yes
Dixon [1] | Behavioral psychology | Yes | Yes | Yes | Yes | Yes

Frameworks that focus on social/organizational relationships:
Kaplan [6,8] | Social interactionism | Yes | Yes | No | Yes | No
Berg [9] | Sociotechnical approach | Yes | Yes | Yes | Yes | Yes
Effken [10] | Carper's four ways of knowing | Yes | Yes | Yes | Yes | Yes
Anderson [11] | Social network analysis | Yes | Yes | Yes | Yes | Yes
Westbrook et al. [12] | Multi-disciplinary, multi-method framework | Yes | Yes | No | Yes | No

Frameworks that focus on software life-cycle:
Grant et al. [13] | TEAM methodology | Yes | Yes | Yes | Yes | Yes
Kushniruk [14] | Systems development life cycle | Yes | Yes | Yes | Yes | Yes

Stead put forth the first generic informatics evaluation framework in 1994, which uses a matrix to describe the relationship between specific system development stages and levels of evaluation [2]. In this matrix the system development stages are: (1) specification; (2) component development; (3) components into system; (4) system into environment; (5) routine use. The evaluation methodologies are: (1) definition; (2) bench; (3) field; (4) validity; (5) efficacy. This framework describes the need for 'extensive qualitative studies' and expert reviews to capture the users' needs. Although not based on a theoretical foundation, the framework is clearly described and therefore may be useful to guide research.

Friedman and Wyatt, 1997, use House's generic research methods framework. House's framework separates research methods into eight approaches within two categories: objectivist and subjectivist (qualitative). The categories are situated on a continuum, ranging from the most objective (comparison-based) to the most subjective (responsive/illuminative). The four objectivist approaches are as follows: the comparison-based approach; the objectives-based approach; the decision facilitation approach; and the goal-free approach. The four qualitative approaches are as follows: the quasi-legal approach; the art criticism approach; the professional review approach; and the responsive/illuminative approach. This framework is helpful for delineating research methodologies, but does little to guide research. Friedman identifies two important time periods in the software life cycle for the qualitative methods: "as part of the design process" and "after a...resource is mature and has been tested in laboratories" [3]. This framework, then, provides guidelines for use of qualitative methods during the software development process, but only provides a weak theoretical foundation.

A generic framework put forth by Shaw, 'CHEATS', provides a list of aspects to include in the evaluation of informatics applications. These are as follows: (1) clinical; (2) human and organizational; (3) educational; (4) administrative; (5) technical; (6) social. Shaw describes the use of both qualitative and quantitative methods for the post hoc evaluation of an informatics application. The qualitative data is to be analyzed "to show thematic constructs common to subjects' experiences and perceptions" [32]. The limitations of this framework lie in its very generic nature. Although the components of context, users, functionality, and to a lesser extent, software processes are addressed, the framework fails to provide adequate theoretical support for use or to reflect sensitivity for early evaluation. Neither does it present the information with clarity or depth; instead, it is superficial, limiting its usefulness.

4.3.2. Frameworks based on human behavior

Two of the identified frameworks focus on individual human behavioral processes associated with human computer interaction.

Patel's methodology focuses on understanding the mental processes associated with memory and comprehension in relation to the words and designs that are used in clinical information systems. Patel et al. use propositional analysis as well as semantic analysis to develop a deep understanding of the users' mental processes prior to building or implementing an application and through the implementation process [5], thus meeting all of the criteria.

The information technology adoption model (ITAM), by Dixon, focuses on the users' skill level, the user's perception of usefulness of the system, and the user's perception of ease of use of the system. These factors are then related to the actual functional capabilities of the system [1]. Dixon suggests that the ITAM model would include user interviews throughout the system life cycle, indicating his understanding of the iterative process of software development. This methodology also meets all of the specified criteria.

4.3.3. Frameworks based on social relationships

Five of the more recent frameworks focus on social relationships.

Kaplan describes the application of social interactionist theory, which characterizes users as 'active participants' in the changes that occur during the implementation of an informatics application. Her framework is derived from an extensive exploration of the literature related to social, organizational, and professional issues surrounding informatics applications in the healthcare environment. The resulting framework is based on communication, control, care and context; Kaplan's 4C's [8]. Kaplan reports using ethnographic interviews and observations to develop an understanding of all issues related to the specified informatics application. Although this is a theoretically sound framework with sensitivity to context, users and functionality, the data collection will occur only during post hoc analysis, which indicates insensitivity to the iterative nature of software development.

Berg describes the use of socio-technical theory in the iterative and user-centered development of healthcare information systems [9]. This theory posits that systems function in a 'tightly interwoven' manner in which the disruption of one factor can have serious ramifications. Thus, the work environment must be truly understood before implementing a system, and end-users 'must' participate in the design and evaluation throughout the development process. This framework is based in a strong theoretical foundation, addresses context, users and functionality, and describes the iterative process of system development; therefore this framework meets all of the criteria.

Effken highlights the importance of developing systems that are aesthetically appealing to ensure system usage post-implementation. The intersection of Carper's four ways of knowing with cognitive work analysis processes creates a matrix that provides a detailed description of work task related processes and relationships [10]. As such, this methodology also meets all of the criteria.

Anderson uses social network analysis to examine the communication patterns of clinicians pre- and post-implementation of a clinical information system [11]. Social network analysis is a process in which the relationships between entities are examined, entered into a matrix and subsequently quantified. The data collection may or may not be qualitative, in the form of interviews or ethnographic observations of interactions. This comprehensive process was demonstrated to be effective in evaluating communication patterns in the small clinical environment, and meets the previously defined criteria.
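The matrix step that Anderson's approach relies on can be sketched in a few lines of code. The following is a minimal illustration only, not taken from the paper: the event data, names and helper functions are hypothetical, and a real analysis would apply richer network measures to the resulting matrix.

```python
from collections import defaultdict

def build_adjacency(events):
    """Tally observed communication events between pairs of clinicians.

    `events` is a list of (sender, receiver) tuples, e.g. drawn from
    interviews, ethnographic observation, or e-mail logs.
    """
    matrix = defaultdict(lambda: defaultdict(int))
    for sender, receiver in events:
        matrix[sender][receiver] += 1
    return matrix

def out_degree(matrix, person):
    """A simple quantification: how many distinct people this person contacts."""
    return len(matrix[person])

# Hypothetical pre-implementation observations
observed = [("nurse_a", "md_b"), ("nurse_a", "pharm_c"), ("md_b", "nurse_a")]
adjacency = build_adjacency(observed)
print(out_degree(adjacency, "nurse_a"))  # -> 2
```

Comparing such matrices collected before and after implementation is one way the changes in communication patterns described above could be quantified.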

Westbrook et al. describe a multi-disciplinary and multi-method framework anchored in socio-technical theory [12]. This framework uses qualitative and quantitative methods for evaluating the impact of a system. This framework is currently being tested and, although promising, only characterizes post hoc evaluation, thus not characterizing the system development cycle or identification of functional specifications.

4.3.4. Frameworks based on the system development life cycle

Two of the identified frameworks consider the system development life cycle.

Grant developed the total evaluation and acceptance (TEAM) methodology, which functions in relation to the dimensions of role, time, and structure. The focus of the TEAM methodology is on integrating evaluation throughout the iterative process of software development [13]. Measurement methodologies include questionnaires and videotaping. This theoretical methodology is thorough, although the actual description of the measurement processes is vague, limiting its usefulness.

Table 3  Type and timing of qualitative methods as recommended by framework

Author(s) | Framework | Qualitative approach(es) | Time point(s) during SDLC

Generic evaluation frameworks:
Stead [2] | Development evaluation matrix | Expert reviews and 'qualitative studies' | Pre-implementation; during implementation; post-implementation
Friedman and Wyatt [3] | House's eight approaches to research | Observation, interviews, document and artifact analysis | Pre-implementation; during implementation; post-implementation
Shaw [4] | CHEATS | Thematic analysis of subjects' experiences, perceptions | Post-implementation

Frameworks that focus on human behavior:
Patel et al. [7] | Cognitive psychology | Propositional analysis, semantic analysis | Pre-implementation; during implementation; post-implementation
Dixon [1] | Behavioral psychology | Interviews (no analysis process specified) | Pre-implementation; during implementation; post-implementation

Frameworks that focus on social/organizational relationships:
Kaplan [6,8] | Social interactionism | Ethnographic interviews and observations | Post-implementation
Berg [9] | Socio-technical approach | Participant observation, interviews, user participation in development | Pre-implementation; during implementation; post-implementation
Effken [10] | Carper's four ways of knowing | Cognitive work analysis | Pre-implementation; during implementation; post-implementation
Anderson [11] | Social network analysis | E-mail communication | Pre-implementation; post-implementation
Westbrook et al. [12] | Multi-disciplinary, multi-method framework | Interviews, focus groups, ethnography | Post-implementation

Frameworks that focus on software life-cycle:
Grant et al. [13] | TEAM methodology | Questionnaire, video-taping | Pre-implementation; during implementation; post-implementation
Kushniruk [14] | Systems development life cycle | Cognitive task analysis, focus groups, usability testing | Pre-implementation; during implementation

Kushniruk looks to usability engineering and the systems development life cycle (SDLC) for this framework [33,34]. He identifies the types of qualitative and other research methodologies that can, or should, be used at different points in the system life cycle development process, including cognitive task analysis, focus groups, and usability testing. This framework captures the importance of all characteristics in a clear and effective manner.

4.3.5. Timing of qualitative research methods

Table 3 outlines the differences in qualitative approaches and timing of evaluation. Pre-implementation, during implementation, and post-implementation timing is specified. Early use of qualitative methods has the greatest potential effect in the clinical applications development process; therefore, those methods advocating only post hoc analysis may be less robust.

5. Discussion

The 12 frameworks have, for the most part, solid theoretical foundations and well-organized processes describing their qualitative methods. The tendency to formally integrate qualitative evaluation processes into the clinical system development cycle can be viewed as a paradigm shift in informatics research evaluation.

Although cost was not addressed in the individual analyses, it is important to note that the potential cost related to using these methods must be considered. Qualitative evaluations may appear less costly due to low equipment costs; however, the cost related to the time and expertise for adequate data analysis could be substantial and must be considered.

None of these frameworks have been thoroughly tested, but several have been used for pilot evaluations. The success of these methodologies, even on a small scale, provides a solid foundation for the further development of research methodologies using qualitative approaches.

The informatics evaluation frameworks discussed in this paper are aimed at improving the reliability (i.e., consistency) in informatics research. From this perspective, the development and use of a framework to guide the evaluation process will improve reliability, and result in an increased rigor in informatics research.

6. Implications for nursing informatics

The frameworks that have been discussed in this paper provide options for informatics nurse researchers to address the ongoing work required to fulfill a nursing informatics research agenda. Nursing informatics research would benefit from large multi-center evaluation processes. Exploiting natural experiments, such as those created when a new system is being developed and implemented, is a tremendous opportunity for rigorous and relevant nursing informatics research.

Several different qualitative methods are available for use in informatics applications development and evaluation. These methods ensure that the healthcare information technologies will reflect an understanding of context, of the users, of the functionality of the system (and thus the users' needs) and of the software development process. Nurses need to be involved in system development and evaluation processes at the earliest possible time to ensure representation of nursing needs in the system. Developing the skills required to perform both qualitative and quantitative methodologies will ensure nursing presence at the development table.

7. Conclusion

Evaluation frameworks that use both quantitative and qualitative approaches are rapidly being developed and tested for clinical informatics system development. Frameworks that use qualitative methods early in the system development cycle have the potential to enhance user acceptance and ideally will prevent system failure. Nurses have the opportunity to participate in the development and use of these frameworks. This will ensure that nursing needs will be captured during system development.

Acknowledgment

This project was supported by NLM Medical Informatics Training Grant LM07079-11.

References

[1] D.R. Dixon, The behavioral side of information technology, Int. J. Med. Inform. 56 (1-3) (1999) 117-123.

[2] W.W. Stead, R.B. Haynes, S. Fuller, C.P. Friedman, L.E. Travis, J.R. Beck, et al., Designing medical informatics research and library-resource projects to increase what is learned, J. Am. Med. Inform. Assoc. 1 (1) (1994) 28-33.

[3] C.P. Friedman, J.C. Wyatt, Evaluation Methods in Medical Informatics, Springer-Verlag, New York, 1997.

[4] N.T. Shaw, CHEATS: a generic information communication technology (ICT) evaluation framework, Comput. Biol. Med. 32 (3) (2002) 209-220.

[5] V.L. Patel, J.F. Arocha, M. Diermeier, R.A. Greenes, E.H. Shortliffe, Methods of cognitive analysis to support the design and evaluation of biomedical systems: the case of clinical practice guidelines, J. Biomed. Inform. 34 (1) (2001) 52-66.

[6] B.J. Kaplan, Addressing organizational issues into the evaluation of medical systems, J. Am. Med. Inform. Assoc. 4 (2) (1997) 94-101.

[7] V.L. Patel, D.R. Kaufman, V.G. Allen, E.H. Shortliffe, J.J. Cimino, R.A. Greenes, Toward a framework for computer-mediated collaborative design in medical informatics, Methods Inform. Med. 38 (3) (1999) 158-176.

[8] B. Kaplan, Evaluating informatics applications - some alternative approaches: theory, social interactionism, and call for methodological pluralism, Int. J. Med. Inform. 64 (1) (2001) 39-56.

[9] M. Berg, Patient care information systems and health care work: a sociotechnical approach, Int. J. Med. Inform. 55 (2) (1999) 87-101.

[10] J.A. Effken, Different lenses, improved outcomes: a new approach to the analysis and design of healthcare information systems, Int. J. Med. Inform. 65 (1) (2002) 59-74.

[11] J.G. Anderson, Evaluation in health informatics: social network analysis, Comput. Biol. Med. 32 (3) (2002) 179-193.

[12] J.I. Westbrook, J. Braithwaite, R. Iedema, E.W. Coiera, Evaluating the impact of information communication technologies on complex organizational systems: a multi-disciplinary, multi-method framework, Medinfo 11 (2004) 1323-1327.

[13] A. Grant, I. Plante, F. Leblanc, The TEAM methodology for the evaluation of information systems in biomedicine, Comput. Biol. Med. 32 (3) (2002) 195-207.

[14] A.W. Kushniruk, Evaluation in the design of health information systems: application of approaches emerging from usability engineering, Comput. Biol. Med. 32 (3) (2002) 141-149.

[15] E. Ammenwerth, N. de Keizer, An inventory of evaluation studies of information technology in healthcare: trends in evaluation research 1982-2002, Medinfo 11 (2004) 1289-1294.

[16] E. Ammenwerth, J. Brender, P. Nykanen, H.-U. Prokosch, M. Rigby, J. Talmon, Visions and strategies to improve evaluation of health information systems: reflections and lessons based on the HIS-EVAL workshop in Innsbruck, Int. J. Med. Inform. 73 (6) (2004) 479-491.

[17] M. Scriven, Beyond formative and summative evaluation, in: M.W. McLaughlin, E.C. Phillips (Eds.), Evaluation and Education: A Quarter Century, University of Chicago Press, Chicago, 1997.

[18] G. Wiederhold, E.H. Shortliffe, System design and engineering, in: E.H. Shortliffe, L.E. Perreault (Eds.), Medical Informatics: Computer Applications in Health Care and Biomedicine, 2nd ed., Springer, New York, 2001, pp. 180-211.

[19] B. Schneiderman, Designing the User Interface: Strategies for Effective Human Computer Interaction, Addison-Wesley Computer and Engineering Publishing Group, Reading, UK, 1997.

[20] J.-Y. Mao, K. Vredenburg, P.W. Smith, T. Carey, The state of user-centered design practice, Commun. ACM 48 (3) (2005).

[21] C.P. Germain, Ethnography, the method, in: P. Munhall (Ed.), Nursing Research: A Qualitative Perspective, Jones and Bartlett Publishers, Sudbury, MA, 2001.

[22] C.O. Boyd, Philosophical foundations of qualitative research, in: P. Munhall (Ed.), Nursing Research: A Qualitative Perspective, 3rd ed., Jones and Bartlett Publishers Inc., Sudbury, MA, 2001, p. 672.

[23] C.P. Friedman, D.K. Owens, J.C. Wyatt, Evaluation and technology assessment, in: E.H. Shortliffe, L.E. Perreault (Eds.), Medical Informatics: Computer Applications in Health Care and Biomedicine, Springer, New York, 2001.

[24] Subjectivism, in: Oxford English Dictionary, 2000.

[25] J. Moehr, Evaluation: salvation or nemesis of medical informatics, Comput. Biol. Med. 32 (3) (2002) 113-125.

[26] C.F. Waltz, O.L. Strickland, E.R. Lenz, Measurement in Nursing Research, 2nd ed., F.A. Davis, Philadelphia, 1991.

[27] M. Sandelowski, Rigor or rigor mortis: the problem of rigor in qualitative research revisited, Adv. Nurs. Sci. 16 (2) (1993) 1-8.

[28] Y. Lincoln, E. Guba, Naturalistic Inquiry, Sage Publications, Thousand Oaks, CA, 1985.

[29] A.C.I. Smith, Design and conduct of subjectivist studies, in: C.P. Friedman, J.C. Wyatt (Eds.), Evaluation Methods in Medical Informatics, 1st ed., Springer-Verlag, New York, 1997, pp. 223-253.

[30] J.C. Wyatt, S.M. Wyatt, When and how to evaluate health information systems, Int. J. Med. Inform. 69 (2/3) (2003) 251-259.

[31] E.G. Arias, H. Eden, G. Fisher, A. Gorman, E. Scharff, Transcending the individual human mind: creating shared understanding through collaborative design, in: J.M. Carroll (Ed.), Human-Computer Interaction in the New Millennium, ACM Press, Addison-Wesley, New York, 2002, pp. 347-372.

[32] N.T. Shaw, CHEATS: a generic information communication technology (ICT) evaluation framework, Comput. Biol. Med. 32 (3) (2002) 209-220.

[33] A. Kushniruk, Evaluation in the design of health information systems: application of approaches emerging from usability engineering, Comput. Biol. Med. 32 (3) (2002) 141-149.

[34] A.W. Kushniruk, V.L. Patel, Cognitive and usability engineering methods for the evaluation of clinical information systems, J. Biomed. Inform. 37 (1) (2004) 56-76.

