PERSPECTIVE ON PRACTICE

Training evaluation: an analysis of the stakeholders’ evaluation needs

Marco Guerci
Department of Management, Economics and Industrial Engineering, Politecnico di Milano, Milan, Italy, and

Marco Vinante
Milan, Italy

Abstract

Purpose – In recent years, the literature on program evaluation has examined multi-stakeholder evaluation, but training evaluation models and practices have not generally taken this problem into account. The aim of this paper is to fill this gap.

Design/methodology/approach – This study identifies intersections between methodologies and approaches of participatory evaluation, and techniques and evaluation tools typically used for training. The study focuses on understanding the evaluation needs of the stakeholder groups typically involved in training programs. A training program financed by the European Social Fund in Italy is studied, using both qualitative and quantitative methodologies (in-depth interviews and survey research).

Findings – The findings are as follows: first, identification of evaluation dimensions not taken into account in the return on investment (ROI) model of training evaluation, but which are important for satisfying stakeholders’ evaluation needs; second, identification of convergences/divergences between stakeholder groups’ evaluation needs; and third, identification of latent variables and convergences/divergences in the attribution of importance to them among stakeholder groups.

Research limitations/implications – The main limitations of the research are the following: first, the analysis was based on a single training program; second, the study focused only on the pre-conditions for designing a stakeholder-based evaluation plan; and third, the analysis considered the attribution of importance by the stakeholders without considering the development of consistent and reliable indicators.

Practical implications – These results suggest that different stakeholder groups have different evaluation needs and, in operational terms, make explicit the convergences and divergences between those needs.

Originality/value – The results of the research are useful in identifying: first, the evaluation elements that all stakeholder groups consider important; second, evaluation elements considered important by one or more stakeholder groups, but not by all of them; and third, latent variables which orient stakeholder groups in training evaluation.

Keywords Training evaluation, Stakeholder analysis

Paper type Research paper

1. Introduction

The market – that is, buying skills and services from training providers – was once the approach used by company training systems to establish relationships with groups

The current issue and full text archive of this journal is available at

www.emeraldinsight.com/0309-0590.htm

The authors are grateful to Dr Brian Bloch for his comprehensive editing of the manuscript.

Training evaluation

385

Received 2 March 2010
Revised 10 June 2010

Accepted 19 July 2010

Journal of European Industrial Training

Vol. 35 No. 4, 2011
pp. 385-410

© Emerald Group Publishing Limited
0309-0590

DOI 10.1108/03090591111128342


outside the firm. Presently, however, companies also try to establish such relationships through participation in public programs typically financed by public bodies and intended to encourage and stimulate continuous training, which is considered to be a “collective good”.

In training processes delivered in such contexts, many actors are required to make decisions which may have an impact on the performance of the training initiative. Typically, these stakeholders have different institutional missions, and their training interests and objectives may be different as well: their inclusion in the evaluation process creates and maintains diversity within the participating stakeholder group (Wills, 1993; Mathie and Greene, 1997). Furthermore, “stakeholders can be particularly helpful when reviewing evaluators’ recommendations for program revisions. Recommendations to program personnel are commonly expected in evaluation reports” (Brandon, 1999, p. 363).

This study focuses on training evaluation in such multi-stakeholder contexts, and the aim is to identify intersections between two different disciplines. The first is program evaluation, a formalized approach to the study of the goals, processes, and impacts of projects, policies and programs implemented in public and private sectors. The second is training and development management, and in particular, the literature on the evaluation models and tools used to evaluate training within companies.

This study focuses on a continuous training project financed by an Italian public authority, in order to highlight the evaluation needs of the stakeholder groups typically involved in this kind of training process.

In the research reported here, stakeholders were contacted after the planning and delivery of the program’s training modules, using qualitative and quantitative methods. In particular, the research process consisted of an initial qualitative phase involving key informants belonging to the different stakeholder groups, followed by quantitative research on the entire population.

The results of the research can usefully be applied to various purposes. First, the study identifies convergences and divergences between the evaluation needs of the different stakeholder groups. Second, it identifies the “guidelines” which orient stakeholder groups in training evaluation.

2. Literature analysis

The literature analysis focuses on three issues. First, the theory on training evaluation is analyzed, indicating the theoretical reasons why stakeholder-based evaluation applied to training can be considered important. The findings in the literature on stakeholder-based training evaluation are then presented. The final part of the survey concerns the contexts in which company training systems operate, and demonstrates the practical importance of stakeholder-based training evaluation. Overall, this section highlights knowledge gaps and defines specific research questions.

2.1 The theoretical background to training evaluation: why a stakeholder-based training evaluation?

Training and education are an investment from which the organisation expects a positive return; that is, a return on investment (ROI) from training and education.

For this reason, starting from the hierarchical evaluation model of Kirkpatrick (1998), Phillips (1996) proposes a ROI training evaluation model which comprises five levels, each investigating different elements:

JEIT35,4

386

Page 3: Training Evaluation an Analysis of the Stakeholders’ Evaluation Needs, 2011

. Level 1. Reactions: measures programme participant satisfaction.

. Level 2. Learning: focuses on what participants have learned during the programme.

. Level 3. Application and implementation: determines whether participants apply what they learned on the job.

. Level 4. Business impact: focuses on the actual results achieved by the programme participants, as they successfully apply what they have learned.

. Level 5. ROI: compares the monetary benefits from the programme with the programme’s costs.
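Level 5 reduces to a comparison of measured monetary benefits with programme costs, expressed as a percentage of costs. A minimal sketch (the figures below are hypothetical, for illustration only):

```python
def roi_percent(monetary_benefits: float, programme_costs: float) -> float:
    """Level-5 ROI: net programme benefits as a percentage of programme costs."""
    if programme_costs <= 0:
        raise ValueError("programme costs must be positive")
    return (monetary_benefits - programme_costs) / programme_costs * 100.0

# Hypothetical figures: 150,000 of measured benefits against 100,000 of costs
print(roi_percent(150_000, 100_000))  # -> 50.0
```

A programme that exactly recoups its costs therefore scores 0 per cent, and any positive score indicates net benefit.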

This model has made valuable contributions to training evaluation theory and practice, because it stresses the importance of thinking about and assessing training within a “business perspective”. Nevertheless, the model has at least three limitations.

First, the model concentrates on a restricted set of variables. In fact, the five levels of evaluation which it proposes are based on an extremely simplified view of training effectiveness. In particular, they do not consider a wide range of organisational, individual, training-design and delivery factors that may influence training effectiveness (Wills, 1993; Bramley and Kitson, 1994; Cannon-Bowers et al., 1995; Ford and Kraiger, 1995; Salas and Cannon-Bowers, 2001; Tannenbaum and Yukl, 1992; Kontoghiorghes, 2001). The second criticism concerns the causal linkages among training outcomes at different levels: the model assumes that it is not possible to achieve positive results at the top levels unless this has occurred at the lower levels as well. Research in the field (Alliger and Janak, 1989; Talbot, 1992; Alliger et al., 1997) has largely failed to confirm such causal linkages.

A third weakness of the hierarchical model of evaluation is that it lacks a multi-actor perspective. In fact, the point of view assumed by the model is that of the company’s shareholders. Indeed, the model assumes that each level of evaluation provides data that is more informative than the last (Alliger and Janak, 1989). This assumption has generated “the perception among training evaluators that establishing level four results will provide the most useful information about training program effectiveness” (Bates, 2004, p. 342). As a consequence, the evaluation needs of the stakeholders involved in the training process are neglected, and this is particularly restrictive in contexts characterized by the presence of a plurality of actors.

Applying stakeholder-based evaluation to training may be useful in dealing with this final criticism by including the different points of view of the stakeholder groups in the evaluation program’s design and implementation (Bramley and Kitson, 1994; Mathie and Greene, 1997; Mark et al., 2000; Holte-McKenzie et al., 2006). This could also impact on the first criticism, because designing the evaluation program on the basis of stakeholder evaluation needs entails extending the set of variables considered by the ROI training evaluation model.

2.2 Studies on stakeholder-based training evaluation

For some years, the literature on program evaluation has dealt with the topic of multi-stakeholder evaluation (Gregory, 2000; Mark et al., 2000), although reflection on the issue and practical evaluation in the training field have been less evident (Lewis, 1996). In fact, the best-known model of training evaluation is based almost exclusively on measuring results from the perspective of one single actor. This actor corresponds


largely to the company’s shareholders, considered as the subjects that fund training programs. This inevitably induces the evaluation system to focus on the impact, in financial or operational terms, of training on company performance, without considering the effects on other stakeholders:

Stakeholder-based evaluation is an approach that identifies, and is informed by, particular individuals or groups. Stakeholders are the distinct groups interested in the results of an evaluation, either because they are directly affected by (or involved in) program activities, or because they must make a decision about the program or about a similar program (Michalski and Cousins, 2000, p. 213).

The literature on stakeholder-based evaluation states that if evaluation is to improve program performance, it has an instrumental use and must be structured as a system which supports actions and, even more so, decision-making processes (Flynn, 1992). For this reason, it is necessary to know the evaluation needs of the actors involved in the program whose evaluation system has to be designed:

Instrumental use, perhaps the earliest type of use examined in literature, refers to using evaluation findings as a basis for action [. . .] Examples of instrumental use include eliminating a program shown to be ineffective, modifying a program based on an evaluation, targeting a program to new audiences, allocating new budget outlay for a program and changing the structure of the organization in which a program operates (Burke Johnson, 1998, p. 94).

It is consequently important to activate a stakeholder-based evaluation process that involves the actors. According to the theory on participatory evaluation (Cousins and Whitmore, 1998; Michalski and Cousins, 2000), such inclusion can be practical, when its purpose is to improve the program’s performance, or transformative, when it aims to emancipate the disadvantaged social/cultural groups at which the program is targeted. This classification is consistent with the more general theories of stakeholder management, which are:

. instrumental theory of stakeholder management, grounded on the assumption that organisations which establish relationships with stakeholders based on trust and collaboration will have competitive advantages compared with companies which do not establish such relationships. The competitive advantages derive from the fact that relationships based on mutual trust and cooperation facilitate efficient agreements which minimize transaction costs (Friedman and Miles, 2006); and

. ethical-normative theory of stakeholder management, which argues that the normative base of the theory, including the “identification of moral or philosophical guidelines for the operation and management of the corporation”, is the core of the stakeholder theory (Donaldson and Preston, 1995, p. 71).

Based on such considerations, various studies have discussed the topic of stakeholder-based training evaluation by adopting the concept of practical participatory evaluation, which itself is based on the more general, instrumental theory of stakeholder management. Hence, this research strand defines a stakeholder as a subject able to influence the performance of a training process, because she/he is requested to make decisions during the process. It also conceives the evaluation system as an “instrument” for providing the stakeholders with the information necessary to validate the decisions they are requested to make.


The studies which delve further into stakeholder-based evaluation applied to training fall into two broad categories. Studies in the first category provide a theoretical view of the topic and define which evaluation process should be used for stakeholder-based training evaluation (Reineke, 1991; Talbot, 1992; Wills, 1993; Brandon, 1998; Brandon, 1999; Bates, 2004; Nickols, 2005; Shridaran et al., 2006). These studies adopt the instrumental theory of stakeholder management and consider the participation in evaluation as practical. In fact, the assumption is that, in order to maximize the return on training investment, there has to be a balance between the contributions the training process receives from stakeholders and the incentives that they receive in return (Nickols, 2005). For instance, managers that finance the training program invest resources in order to exert a positive impact on the organisation’s business performance or on the individual performance of participants. The trainees, in turn, participate with their efforts, attention and time in the hope of acquiring new knowledge and learning concepts, methods, and tools that are useful for their careers. The various stakeholder groups must perceive a value in this exchange: that is, the incentives must have a value equal to, or greater than, the contributions. The evaluation plan therefore enables the stakeholder groups involved in the program to monitor the added value of this exchange. On the basis of the above-mentioned approach, this research strand defines the evaluation process to be implemented for a stakeholder-based training evaluation (Figure 1).
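The contribution-incentive balance described above can be made concrete with a small sketch. All group names and perceived values below are hypothetical illustrations, not data from the study:

```python
# Hypothetical perceived values (arbitrary units) of what each stakeholder
# group contributes to, and receives back from, the training process.
exchange = {
    "managers": {"contribution": 8, "incentive": 9},   # funding vs. performance impact
    "trainees": {"contribution": 7, "incentive": 7},   # time/effort vs. new skills
    "providers": {"contribution": 6, "incentive": 5},  # delivery effort vs. fees/reputation
}

def perceives_value(group: str) -> bool:
    """A group perceives value when incentives are at least equal to contributions."""
    e = exchange[group]
    return e["incentive"] >= e["contribution"]

# Groups for which the exchange is unbalanced, i.e. the evaluation plan should
# flag that incentives fall short of contributions
unbalanced = [g for g in exchange if not perceives_value(g)]
print(unbalanced)  # -> ['providers']
```

An evaluation plan built on this logic would monitor each group's side of the exchange over time rather than only the funder's financial return.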

The second category of studies concerning stakeholder-based training evaluation analyzes the evaluation needs of the stakeholder groups typically involved in a training program. That is, they deal with the elements of evaluation these groups consider useful for monitoring the balance between contributions and incentives. Such studies have demonstrated the existence of significant differences between the evaluation needs of stakeholder groups. In particular, they have focused on:

. the evaluation needs of the stakeholder groups within the company, that is, managers, training experts and participants (Brown, 1994; Michalski and Cousins, 2000); and

. the evaluation needs of the stakeholder groups outside the company, that is, external training providers, public training schools and trade unions (Garavan, 1995).

2.3 How company training systems operate: the importance of stakeholder-based training evaluation

Many companies have established training systems dedicated expressly to providing the training support necessary to implement corporate strategy. Such systems interact constantly with the external environment, with which they exchange practices, resources and competencies. In particular, an analysis of the relationship frames that these training systems form with actors outside the company shows that there are two different options that companies may pursue.

The first option entails the use of the company’s resources to purchase services directly from external training providers. The second option consists of taking part in public policy programs typically financed by third parties. These programs have the specific purpose of producing collective goods (Ostrom, 1990), such as an increase in the employment rate, a greater competitiveness of small and medium-sized businesses, and innovation[1].


On selecting one or the other option, a company chooses different regulation systems distinguished by different principles and rules used for the allocation of resources among actors (Polanyi, 1944).

The “public-policy” regulation system has the following basic characteristics (Meny and Thoenig, 1989): it is a response to collective demands and requirements; it is extremely complex from both decisional and implementation perspectives; its purpose is to encourage changes in specific populations; and it uses ad hoc instruments and procedures, combined with incentives to achieve the desired behaviour. In terms of resource allocation, the most important characteristic of this regulation system is the presence of a public authority that defines the principles of resource allocation in different areas (for instance training, work, health, etc.) and in regard to different subjects (individuals, families, workers, companies, etc.).

The other regulation system is the “market”. This is based on the interaction between training demand and supply. The operating model for this system, therefore, requires no regulation processes governed by any third party, but is determined mainly by prices, which act as self-regulating mechanisms (Polanyi, 1944). What is relevant for the purposes of this paper is the fact that the public-policy regulation system may be considered a privileged context for the application of stakeholder-based training evaluation because:

Figure 1. Stakeholder-based evaluation process

Macro-phase 1: Evaluation program design
. Identification of the significant stakeholder classes within the training program
. Identification, per class, of the program performances able to provide the expected benefits and (where necessary) identification of a scale of priorities per class and per performance
. Definition (where necessary) of a weight for each stakeholder class in relation to its capacity to influence the program’s performance
. Design of an evaluation program for the performances identified for each class, and design of data reporting

Macro-phase 2: Evaluation program implementation
. Collection, processing and data presentation to the stakeholder classes according to the above-defined times and conditions
. Periodic assessment of the stakeholders’ satisfaction with the evaluation system

Source: Adapted from Nickols (2005)
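The design macro-phase of Figure 1 can be sketched as simple data structures: stakeholder classes, their influence weights, and prioritized performances per class. Every concrete name and number below is a hypothetical illustration, not taken from the study:

```python
from dataclasses import dataclass, field

@dataclass
class StakeholderClass:
    name: str
    weight: float  # capacity to influence the program's performance
    # (performance, within-class priority) pairs, lower number = higher priority
    performances: list = field(default_factory=list)

# Macro-phase 1, steps 1-3: identify classes, performances, priorities, weights
classes = [
    StakeholderClass("decision makers", 0.40, [("employment-rate impact", 1)]),
    StakeholderClass("participants", 0.35, [("skills acquired", 1), ("career usefulness", 2)]),
    StakeholderClass("providers", 0.25, [("trainee satisfaction", 1)]),
]

# Step 4: derive an evaluation plan with one entry per class and performance,
# ordered by class weight (descending) and within-class priority (ascending)
plan = sorted(
    ((c.name, perf, prio, c.weight) for c in classes for perf, prio in c.performances),
    key=lambda row: (-row[3], row[2]),
)
for row in plan:
    print(row)
```

The implementation macro-phase would then periodically collect data for each plan entry and report it back to the corresponding class.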


. a number of actors (stakeholders) are requested to make decisions that have an impact on the program’s performance; and

. these actors may have institutional missions, and consequently interests, objectives and evaluation needs which may not be entirely convergent.

3. Knowledge gaps and research questions

The above survey of the literature has shown that a stakeholder-based approach to training evaluation is useful for two reasons. First, it enables the design of evaluation programs which the actors involved in the training program can actually use to support their decisional processes. Second, it expands the range of the variables considered by the ROI training evaluation model, normally focused on the evaluation needs of the company’s shareholders (Phillips, 1996; Kirkpatrick, 1998; Ross, 2008).

These two advantages are significant because:

(1) company training systems are structured as open systems which participate in networks outside the company, and which involve a number of actors that make decisions with an impact on training program performance; and

(2) such networks are often part of the public-policy regulation system governed and financed by a third party. Consequently, they include actors with different institutional missions, and, therefore, specific evaluation needs.

This study belongs to the second strand of research on stakeholder-based training evaluation, because its aim is to identify the evaluation needs of the stakeholder groups typically involved in a training project within the public-policy regulation system.

The results of the study are useful in identifying:

. convergences, that is, the evaluation elements that all stakeholder groups consider important;

. divergences, that is, evaluation elements considered important by one or more stakeholder groups, but not by all of them; and

. latent variables, identified through a factor analysis, which orient stakeholder groups in training evaluation, and convergences/divergences among stakeholder groups in the attribution of importance to these variables.
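Latent variables of the kind listed above are typically extracted from importance ratings by factor analysis. A minimal principal-components sketch on synthetic data (the ratings are simulated, not the study’s data): dominant eigenvalues of the items’ correlation matrix signal how many latent factors orient the responses:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic importance ratings: 60 respondents x 6 evaluation items, where
# items 0-2 load on one latent factor and items 3-5 on another.
f1, f2 = rng.normal(size=(60, 1)), rng.normal(size=(60, 1))
ratings = np.hstack([f1 + 0.3 * rng.normal(size=(60, 3)),
                     f2 + 0.3 * rng.normal(size=(60, 3))])

# Principal components of the correlation matrix approximate the latent factors:
# eigenvalues well above 1 indicate retained factors (Kaiser criterion)
corr = np.corrcoef(ratings, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
print(np.sort(eigvals)[::-1].round(2)[:3])
```

With two planted factors, two eigenvalues stand well above 1 and the rest fall below it; the eigenvectors indicate which items load on which latent variable.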

4. Methodology and research process

The research reported by this study selected a training program financed by the Italian Lombardy Region (European Social Fund, D1). The program focused on the “promotion of a competent, qualified and adaptable workforce”, and its objective was to implement training interventions which enhance the competitiveness of local manufacturing, with particular reference to small and medium-sized businesses. Enterprise associations could submit their training proposals to the public authority, which then selected the relevant training providers.

This case was chosen because it was being implemented by means of cooperation and co-planning among enterprise associations, training providers and companies. The training program was, therefore, considered a privileged context for the application of stakeholder-based training evaluation.

The research process was divided into two phases: the first was based on qualitative techniques, the second on the survey research method.


4.1 First phase

The first research phase consisted of in-depth interviews, intended to identify the evaluation dimensions important for the various stakeholder groups involved in the training program.

First, subjects able to supply information useful for the exploratory purposes of this phase were selected from each stakeholder group. The interviewee selection process identified key informants able to provide items helpful for reconstructing the evaluation needs of each stakeholder group.

In light of the theory on stakeholder management in evaluation programs (Rossi et al., 1999), the involvement of the following stakeholder groups was considered essential:

. Target participants. those at which the training program is aimed.

. Decision makers. the actors who activated and financed the training program; they were also responsible for monitoring it.

. Program staff. the actors who carried out or supported the activities included in the program.

. Program managers. the actors who supervised and managed the program.

. Contextual stakeholders. the actors operating in the environment surrounding the program and who also had to make decisions which might influence the results.

Table I shows the actors interviewed for each stakeholder group.

The interview structure can be illustrated by referring to the concepts of “principle”, “dimension” (or, in the case of evaluation research, “result dimension”) and “indicator”. Figure 2 shows the logical-formal relationships among these concepts.

A principle is a general viewpoint which helps to orientate the evaluation to defined areas. Examples of principles are the effectiveness of the program, its efficiency, fairness, and so on. A dimension is the first breakdown level of a principle: while a principle is, by nature, general and partly uncontextualised, dimensions are more

Table I. Stakeholder groups, actors identified, subjects interviewed

Stakeholder groups        Actors identified within the training program              Number of subjects interviewed
Target participants       Training participants                                      3
Decision makers           Manager of the public body financing the program           1
Program staff             Trainers on the program conducted by the training
                          providers                                                  3
Program managers          Program manager and project coordinators of the
                          program by the training providers                          2
Contextual stakeholders   Training managers of the companies involved in the
                          program; manager of the enterprise association
                          responsible for training activities                        4
Total                                                                                12

Figure 2. Principles-dimensions-indicators: a principle branches into dimensions, and each dimension into one or more indicators.


specific and concern the object of evaluation. These dimensions can also be related to result dimensions, which are the specific, actual results pursued by the program. Finally, the indicator is a tool used to classify, categorize, and/or measure a dimension (Lazarsfeld and Rosenberg, 1955; Scriven, 1993).
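The principle-dimension-indicator hierarchy of Figure 2 can be sketched as a small tree. All concrete dimension and indicator names below are hypothetical illustrations, not items from the study:

```python
# One evaluation principle broken down into dimensions, each carrying
# the indicators used to classify, categorize and/or measure it.
efficacy = {
    "principle": "efficacy",
    "dimensions": [
        {"name": "learning achieved",
         "indicators": ["pre/post test score delta", "trainer assessment"]},
        {"name": "needs coverage",
         "indicators": ["share of declared needs addressed"]},
    ],
}

def indicators_of(principle: dict) -> list:
    """Flatten the tree down to the measurable indicators of a principle."""
    return [i for d in principle["dimensions"] for i in d["indicators"]]

print(indicators_of(efficacy))
```

In the study’s terms, each stakeholder group would populate its own dimensions under the shared principles, yielding the 35 result dimensions described below.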

According to the objective of this phase, the most suitable principles provided by the literature (Rossi et al., 1999) were selected and adapted. This yielded seven principles, as follows:

(1) Efficacy. whether the training intervention is able to achieve its aims consistently with the needs expressed by the actors involved.

(2) Efficiency. the results compared to the resources invested.

(3) Accessibility. whether the training initiative discriminates against certain groups in gaining its benefits.

(4) Image. positive effects on the image (internal and/or external) due to the organization’s realization of/participation in the training program.

(5) Multiplier/transferability effect. the intervention’s capacity to generate positiveeffects; more specifically, reproducibility, or transferability, indicates whetheran intervention can be repeated/used in other, similar contexts.

(6) Innovation. the program’s ability to diffuse previously unused practices withinits context.

(7) Synergy. the program’s ability to maximize its results by interacting incoordination with other similar programs.

Using these general principles adapted from the literature, it was hypothesized that each stakeholder group conceived the principles in a specific way and translated them into different result dimensions. Hence, each interview included the following questions relative to each principle:

. From your point of view, with reference to the training program being evaluated, is this principle useful (for the evaluation)?

. If yes, what dimensions would you use to assess this principle?

Starting from the seven principles explained above, 35 result dimensions were identified; some of them were common to two or more stakeholder groups, others were specific to just one stakeholder group.

4.2 Second phase
In order to determine the evaluation needs of the stakeholder groups in more detail, the second phase of the research examined the importance attributed by each stakeholder group to the result dimensions identified in the previous phase. A survey was carried out, using a structured questionnaire consisting of the following question for each result dimension (35 items): how important is it for you? (measured on a cardinal scale from 1 to 10).

Before the questionnaire was administered extensively, it was tested by two subjects from each stakeholder group (12 subjects in total): this pre-test assessed comprehension of the questionnaire, the time required to complete it, and the functional and discrimination capacity of the measurement scale.


Once the tool had been created and the pre-test phase completed, the questionnaire was sent by e-mail to all subjects belonging to the stakeholder groups involved in the training program. The population was composed as follows: 146 participants, 48 companies, 26 trainers, 16 training providers, three representatives of the company association, and two representatives of the public authority financing the training program.

After the questionnaires had been dispatched, procedures were established to send reminders, using a personalized recall system, first by e-mail and subsequently by phone. At the end of the field phase, the subjects who had returned correctly completed questionnaires were the following: 38 participants, 21 companies, 22 trainers, nine training providers, three representatives of the company association, and two representatives of the public authority financing the training program. The response rate differed across stakeholder groups; the rate per group is summarised in Table II.
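The response rates in Table II follow directly from these population and respondent counts; as a quick sketch of the arithmetic (the group labels are shortened here for readability):

```python
# Recomputing the per-group response rates reported in Table II from the
# population and respondent counts given in the text.
population = {
    "Participants": 146, "Companies": 48, "Trainers": 26,
    "Training providers": 16, "Enterprise association": 3, "Public authority": 2,
}
respondents = {
    "Participants": 38, "Companies": 21, "Trainers": 22,
    "Training providers": 9, "Enterprise association": 3, "Public authority": 2,
}

rates = {g: 100 * respondents[g] / population[g] for g in population}
for group, rate in rates.items():
    print(f"{group}: {rate:.2f}%")

total_rate = 100 * sum(respondents.values()) / sum(population.values())
print(f"Overall: {total_rate:.1f}%")  # 95 of 241, i.e. ~39.4 per cent
```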

5. Findings
The main findings can be summarised under the following headings:

(1) the levels of importance attributed in total to the items by the sample of stakeholders;

(2) differences among the stakeholder groups regarding the importance attributed to the items;

(3) latent variables underlying the 35 items; and

(4) differences among the stakeholder groups regarding the importance attributed to the latent variables.

These findings are presented in greater detail below.

5.1 Levels of importance attributed in total by the sample of stakeholders to items
Considering the items achieving the highest values on the scales (Table III[2]), four refer to the ROI training evaluation model (Phillips, 1996): Item 1 (satisfaction with didactics/training methods), Item 3 (quality and amount of knowledge and skills acquired by the participants), Item 10 (utility of acquired knowledge and skills for the participants) and Item 11 (satisfaction level of companies with the training program).

The additional items, which complement the ROI training evaluation model, essentially refer to:

Table II. Response rates

Stakeholder group                                  Response rate (%)
Participants                                       26.03
Companies                                          43.75
Trainers                                           84.61
Training providers                                 56.25
Representatives of the enterprise association      100.00
Representatives of the public authority            100.00

Note: The total number of responders amounted to 95 (a 39.4 per cent response rate)


. the training resources (efficient use of resources by training providers, increase in public resources to be invested in continuous training, increased investment in training by companies);

. access to training opportunities (accessibility to the training program for workers);

. consistency between the training supplied and the requirements of companies and participants (more knowledge by training providers about companies' training needs, value of knowledge and skills acquired for the careers of participants, alignment between the program's training level and the level of participants); and

. alignment between training demand and supply (creation of a network among training providers, financer and companies).

The other dimensions – not included in the ROI training evaluation model – are considered important by all the stakeholder groups. Consequently, they must be considered in the evaluation design if the evaluation plan is to satisfy stakeholder evaluation needs.

5.2 Differences among stakeholder groups regarding the importance attributed to the result dimensions
This section analyses the attribution of importance by stakeholder groups: the differences highlighted refer to the attribution of importance for subpopulations corresponding to the stakeholder groups defined in Table I. The statistically significant differences among groups are:

(1) Satisfaction level of companies with the training program, which is more important for the enterprise association and the public authority than for companies.

(2) Possibility to define training financing procedures with the public authorities, which is important for the training providers (delegated to manage resources and account for them) and the enterprise association.

(3) Improvement in the training providers' image among companies, which is important for training providers and trainers, but less important for participants.

Table III. Result dimensions considered most important by all stakeholder groups

Dimension                                                                                      Mean
1. Satisfaction with didactics/training methods                                                8.80
2. Increase in training investment by companies                                                8.64
3. Quality and amount of knowledge and skills acquired by participants                         8.44
4. Increase in public resources to be invested in continuous vocational training               8.29
5. More knowledge by training providers about company training needs                           8.28
6. Utility of acquired knowledge and skills for the careers of participants                    8.21
7. Alignment between level of training program and level (of knowledge/skill) of participants  8.09
8. Efficient use of the resources by training providers                                        8.06
9. Creation of a network among training providers, financer and companies                      8.05
10. Utility of acquired knowledge and skills for participants and their work
    on a short-term basis                                                                      7.89
11. Satisfaction level of companies purchasing the training program                            7.88
12. Accessibility to training program (for workers/employees)                                  7.84


(4) Transparency of the mechanism controlling access to financed training services, which is important for the enterprise association, the training providers and the public authority, but considered less important by participants and trainers, who are probably more focused on micro-dimensions related to interaction processes in training settings.

(5) Number of bureaucratic procedures imposed on participating companies, which is important for the enterprise association, the public authority, training providers and companies, whilst it is less important for participants and least of all for trainers; the latter, as for the previous item, seem more interested in the micro-dimensions associated with training processes.

(6) Impact of the training program on company results, which is more important for the public authority, the enterprise association and companies, and less important for participants and for the training supply system (trainers and training providers).

(7) Quality and level of knowledge and skills acquired by participants, which is more important for trainers and training providers, even more so than for participants.

The post hoc analysis of variance (Scheffé procedure)[3] was useful to refine the analyses, because it highlighted – for six of the seven dimensions listed above – which stakeholder groups differed in their attribution of importance. The following list presents these dimensions and the different attributions[4]:

. The quality and level of knowledge and skills acquired by participants is more important for trainers than for participants.

. The number of bureaucratic procedures imposed on participating companies is more important for training providers than for trainers.

. The impact of the training program on company results is more important for companies than for trainers, but is important even for participants.

. The transparency of the mechanism controlling access to financed training services is more important for training providers and companies than for participants.

. The improvement in the training providers' image among companies is more important for training providers and trainers than for participants.

. The possibility to define training financing procedures with the public authorities is more important for training providers than for trainers and participants.

Table IV shows the differences among stakeholder groups with regard to the importance attributed to dimensions.
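The group comparisons above rest on a one-way ANOVA followed by Scheffé's post hoc criterion (see note 3). As an illustrative sketch only – the three group names and the toy importance scores below are hypothetical, not the study's data – the procedure can be reproduced as follows:

```python
# One-way ANOVA plus Scheffé pairwise comparisons, as used in Section 5.2.
# The groups and scores are made-up stand-ins for the questionnaire data.
import numpy as np
from itertools import combinations
from scipy import stats

groups = {
    "participants": [6, 7, 6, 7, 8, 6, 7, 6],
    "trainers":     [9, 9, 8, 10, 9, 8, 9, 10],
    "providers":    [9, 8, 9, 10, 9, 9, 8, 10],
}

samples = list(groups.values())
k = len(samples)                          # number of groups
N = sum(len(s) for s in samples)          # total observations

f_stat, p_val = stats.f_oneway(*samples)  # omnibus test across all groups

# Mean square within groups (error term), needed for the Scheffé criterion
ss_within = sum(((np.asarray(s) - np.mean(s)) ** 2).sum() for s in samples)
ms_within = ss_within / (N - k)
f_crit = stats.f.ppf(0.95, k - 1, N - k)

for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    diff = np.mean(a) - np.mean(b)
    # Scheffé: a pair differs significantly if the scaled squared mean
    # difference exceeds (k - 1) times the critical F value
    scheffe = diff ** 2 / (ms_within * (1 / len(a) + 1 / len(b)))
    print(f"{name_a} vs {name_b}: diff={diff:+.2f}, "
          f"significant={scheffe > (k - 1) * f_crit}")
```

Scheffé's test is conservative for all pairwise contrasts, which fits the paper's caveat (note 3) that results are affected by the small size of some stakeholder classes.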

5.3 Latent variables underlying the dimensions
The aim of the factor analysis was to extract, starting from the 35 dimensions, latent macro-variables representing a linear combination of the original variables and which were independent of each other[5].

The exploratory factor analysis was conducted on 26 items[6] and yielded five factors explaining 65.5 per cent of the total variance (Appendix 3).
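This extraction is an exploratory factor analysis with varimax rotation (note 5) and a 0.5 loading cut-off (note 6). A sketch under assumptions – synthetic responses stand in for the real questionnaire data, and scikit-learn's FactorAnalysis is used since the paper does not name its software:

```python
# Exploratory factor analysis with varimax rotation on synthetic item scores
# shaped like the study's data (95 respondents, 26 retained items, 5 factors).
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_respondents, n_items, n_factors = 95, 26, 5

# Synthetic item scores driven by 5 latent variables plus noise
latent = rng.normal(size=(n_respondents, n_factors))
loadings_true = rng.normal(scale=0.8, size=(n_factors, n_items))
X = latent @ loadings_true + rng.normal(scale=0.5, size=(n_respondents, n_items))

X_std = StandardScaler().fit_transform(X)
fa = FactorAnalysis(n_components=n_factors, rotation="varimax", random_state=0)
fa.fit(X_std)

# Share of total (standardized) variance explained per factor:
# sum of squared loadings of each factor divided by the number of items
loadings = fa.components_                       # shape (5, 26)
explained = (loadings ** 2).sum(axis=1) / n_items
print("per-factor share:", np.round(explained, 3))
print("total explained:", round(float(explained.sum()), 3))

# Items whose largest absolute loading falls below 0.5 would be dropped,
# mirroring the elimination criterion described in note 6
weak = np.where(np.abs(loadings).max(axis=0) < 0.5)[0]
print("items below the 0.5 loading threshold:", weak)
```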


The factor analysis yielded the following latent variables:

(1) Support for the competitiveness of companies and human resources (Factor 1, Cronbach's alpha 0.866): training is considered (and evaluated) as a means available to companies and workers to improve performance and enhance the competitiveness of the economic and productive system. This factor refers to both company competitiveness and professional worker/employee development.

(2) Promotion of fairness and image (Factor 2, Cronbach's alpha 0.805): training is considered (and evaluated) as promoting social equity – training must also be made accessible to "disadvantaged" subjects in the system, including workers and companies – and consolidating the image of actors in the external environment.

Table IV. Differences among stakeholder groups regarding the importance attributed to dimensions

                                         Particip.  Providers  Trainers  Assoc.  Authority  Companies  Total
Satisfaction level of companies
  with training program                  6.60       8.66       8.50      9.66    10.00      7.57       7.88
Impact of training program on
  company results                        6.78**     8.55       7.95**    8.33    10.00      8.33**     7.68
Possibility to define the financing
  procedures for training with
  public authorities                     7.00**     9.66**     7.00**    9.66    7.50       8.33       7.64
Improvement in training providers'
  image among companies                  6.60**     9.11**     8.59**    7.33    7.50       7.28       7.49
Transparency of mechanism controlling
  access to financed training services   6.68**     9.11**     6.81      9.33    9.00       8.47**     7.47
Number of bureaucratic procedures
  imposed on participating companies     7.26       8.88**     6.09**    9.66    9.00       8.28       7.22
Quality and amount of knowledge and
  skills acquired by participants        7.63**     8.88       9.36**    8.66    8.50       8.71       8.44

Notes: *The one-way analysis of variance (ANOVA) test shows that the dimensions listed in the table have significant differences between means for groups of respondents (sig. 0.01); **ANOVA post hoc test (Scheffé), mean differences between groups statistically significant (sig. 0.05)


(3) Network stabilization (Factor 3, Cronbach's alpha 0.844): training is considered (and evaluated) as a stable system of relationships that are financed by public and private resources, managed jointly by the actors, and programmed for knowledge transfer.

(4) Training services offer (Factor 4, Cronbach's alpha 0.807): training is considered (and evaluated) as a provision of services for knowledge transfer; this supply chain must be efficient – in order to reduce the costs to companies of accessing it – and effective – in order to achieve the training objectives.

(5) Learner care (Factor 5, Cronbach's alpha 0.487[7]): training is considered (and evaluated) as a service which, therefore, considers mainly the variability and individual specificities within the training process.
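Cronbach's alpha, reported for each factor scale above, measures internal consistency as the ratio of shared item variance to total scale variance. A minimal sketch with made-up item scores:

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/var(scale total))
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of scale item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of scale totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Three correlated items give an alpha well above the 0.6 threshold mentioned
# in note 7; three independent noise items give an alpha near zero.
rng = np.random.default_rng(1)
base = rng.normal(size=200)
coherent = np.column_stack([base + rng.normal(scale=0.4, size=200)
                            for _ in range(3)])
noise = rng.normal(size=(200, 3))

print(round(cronbach_alpha(coherent), 2))
print(round(cronbach_alpha(noise), 2))
```

Low alpha on a short scale, as with the three-item "learner care" factor, is exactly the situation note 7 discusses.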

5.4 Differences among stakeholder groups regarding the importance attributed to the latent variables
The variables making up each of the five above-described factors were used to create the respective measurement scales (indexes). The indexes (Table V) with the highest attributions of importance were "learner care" (8.20) and "network stabilization" (8.18), while the index with the lowest attribution was "promotion of fairness and image" (7.33).

The one-way analysis of variance (ANOVA) highlighted which indexes had significant mean differences between stakeholder groups: they were "competitiveness of companies and human resources", "network stabilization" and "training services offer". The "learner care" index lay just above the acceptable limit of significance, whilst the "promotion of fairness and image" index did not have an adequate significance value (sig. > 0.05).

The post hoc test (Scheffé procedure) showed that, for the index "competitiveness of companies and human resources", two stakeholder groups were significantly differentiated from each other in the attribution of importance; in particular, trainers and companies considered this factor more important than participants did.

Table V shows the differences among stakeholder groups in the importance attributed to latent variables.

6. Conclusions
As reported in the literature review, the studies on stakeholder-based evaluation applied to training divide into two broad categories: those that provide a theoretical view of the topic and define which evaluation process should be used for stakeholder-based training evaluation; and studies that analyze the evaluation needs of the stakeholder groups typically involved in a training program, that is, the elements of evaluation that they consider useful for monitoring the balance between contributions and incentives. This paper is in the second category, as it identifies the evaluation needs of the stakeholder groups typically involved in a training project. The research focused on an intervention in a public-policy regulation system (governed and financed by a third authority), because it includes actors with different institutional missions, and therefore, specific evaluation needs.

The outputs of this research are as follows: identification of evaluation dimensions not taken into account by the ROI training evaluation model, but important for satisfying stakeholder evaluation needs (LeBaron Wallace, 2008); identification of convergences/divergences among stakeholder group evaluation needs; and identification of latent variables and convergences/divergences in the attribution of importance to them among stakeholder groups (Michalski and Cousins, 2000).


Table V. Differences among stakeholder groups regarding the importance attributed to latent variables

                                          Trainers  Providers  Authority  Companies  Assoc.  Particip.  TOT   ANOVA one-way (sig < 0.05)
1. Competitiveness of companies and HR    8.45*     8.44       9.43       8.22*      8.14    7.24*      7.93  0.000
2. Promotion of fairness and image        7.27      8.02       8.33       7.61       8.06    6.93       7.33  0.128
3. Network stabilization                  7.96      9.24       8.50       8.46       9.13    7.82       8.18  0.019
4. Training services offer                6.91      8.16       9.20       7.89       8.00    7.15       7.42  0.023
5. Learner care                           8.48      8.63       8.33       8.38       7.89    7.84       8.20  0.053

Note: *ANOVA test, Scheffé post hoc test, statistically significant mean group differences (sig. 0.05)


Considering the results of the studies on stakeholder-based evaluation included in the first category (studies about the evaluation process to be used in participatory evaluation and, in particular, Nickols, 2005), the results of this paper are useful for designing – before the training delivery – an evaluation system for training programs. In particular, the results might be useful in the following phases of the stakeholder-based evaluation process:

. "Identification of the significant stakeholder groups within the training program", as the paper suggests the stakeholder groups that are part of the decision-making process for a training program included in a public-policy regulation system.

. "Identification, per group, of the program performances able to provide the expected benefits and (where necessary) identify a scale of priorities per group and per performance", as the paper identifies the evaluation needs of the stakeholder groups in supporting their decision-making processes in the training program.

The main limitations of the study can be summarised as follows:

. the analysis was based on a single training program, which reduces the possibility of generalisation;

. the study focused on the pre-conditions for designing a stakeholder-based evaluation plan, not on the operational evaluation process; and

. the analysis considered the attribution of importance by the stakeholders, without addressing the problem of how dimensions or latent variables can be "translated" into a set of essential and consistent indicators.

In the light of these limitations, possible developments for further research could be:

. The added value of stakeholder-based evaluation and the correlated levels of increase in training-process performances. This could be useful in identifying the applicability conditions for stakeholder-based evaluation and privileged contexts of application.

. The impact of regulation systems (market and public policy) on the quality and quantity of stakeholder groups to be involved in the evaluation, their specific evaluation needs, and the evaluation process to be implemented. This potential development of the research should take into account the formalized evaluation systems based on quality and standards that are becoming more and more important, both in the market regulation system and the public-policy regulation system (e.g. the European Credit Transfer and Accumulation System).

. Methods for "producing" consensus among stakeholders and the process of creating a shared evaluation program.

From a methodological point of view, research should integrate both qualitative methods (Talbot, 1992; Wills, 1993; Maxwell, 1996; Miles and Huberman, 1994) and survey research methods (Hinkin, 1998; Miller, 1994); in particular, for the analysis of consensus-building processes, it should be collaborative (Bramley and Kitson, 1994; Shani et al., 2007).


Notes

1. The European Union, for instance, has established a specific action programme (Decision No. 1720/2006/EC of the European Parliament) in the field of life-long learning. The program – included in the general policy "Education and Training 2010" – has the aim of contributing to the community's development as an advanced knowledge society, in accordance with the Lisbon strategy objectives.

2. Table III illustrates the mean values of the responding sample, with the distribution in mean values >= 7.80; the mean values for all 35 items are presented in Appendix 1.

3. After identifying the existence of differences between the mean values, the post hoc interval test and comparisons of multiple couples make it possible to assess which mean differs from the others. The multiple interval tests enabled us to identify the homogeneous subclasses of means that did not differ from each other. By means of this multiple couple comparison, it was possible to identify the difference between each couple of means and obtain a matrix which highlighted the means of the groups with significant differences (sig. 0.05), as in Scheffé's post hoc test used to analyze variances. It should be pointed out, however, that the test results used were affected by the low numbers in some of the stakeholder classes.

4. See Appendix 2 for a complete overview of the significant differences between the mean values per stakeholder group.

5. In this case, in order to improve the interpretation of factors in the exploratory analysis, we decided to use a "varimax" orthogonal rotation and therefore build independent factors.

6. The variables with factor loading < 0.5 were eliminated from the analysis (eight variables), as well as the variables whose elimination improved the Cronbach's alpha value on the respective measurement scales, making the same scale more coherent (one variable).

7. This – the "learner care" scale – was built using three variables which "loaded" on the corresponding factor: although the Cronbach's alpha value was lower than the acceptable value (0.6), even for scales consisting of a reduced number of items, we decided to retain the corresponding analysis for its theoretical significance, as it corresponds to the first level of evaluation of the ROI model.

References

Alliger, G.M. and Janak, E.A. (1989), "Kirkpatrick's levels of training criteria: thirty years later", Personnel Psychology, No. 42, pp. 331-42.

Alliger, G.M., Tannenbaum, S.I., Bennett, W., Traver, H. and Shotland, A. (1997), "A meta-analysis of the relations among training criteria", Personnel Psychology, No. 50, pp. 341-58.

Bates, R.A. (2004), "A critical analysis of evaluation practice: the Kirkpatrick model and the principle of beneficence", Evaluation and Program Planning, No. 27, pp. 341-7.

Bramley, P. and Kitson, B. (1994), "Evaluating training against business criteria", Journal of European Industrial Training, No. 1, pp. 10-14.

Brandon, P.R. (1998), "Stakeholder participation for the purpose of helping ensure evaluation validity: bridging the gap between collaborative and non-collaborative evaluations", American Journal of Evaluation, No. 19, pp. 325-37.

Brandon, P.R. (1999), "Involving program stakeholders in reviews of evaluators' recommendations for program revisions", Evaluation and Program Planning, No. 22, pp. 363-72.

Brown, D.C. (1994), "How managers and training professionals attribute causality for results: implications for training evaluation", unpublished doctoral dissertation, College of Education, University of Illinois, Urbana-Champaign, IL.


Cannon-Bowers, J.A., Salas, E., Tannenbaum, S.I. and Mathieu, J.E. (1995), "Toward theoretically based principles of training effectiveness: a model and initial empirical investigation", Military Psychology, No. 7, pp. 141-64.

Cousins, B. and Whitmore, E. (1998), "Framing participatory evaluation", New Directions for Evaluation, No. 80, pp. 5-23.

Donaldson, T. and Preston, L.E. (1995), "The stakeholder theory of the corporation: concepts, evidence, and implications", Academy of Management Review, Vol. 20 No. 1, pp. 65-91.

Flynn, D.J. (1992), Information Systems Requirements: Determination and Analysis, McGraw-Hill, London.

Ford, J.K. and Kraiger, K. (1995), "The application of cognitive constructs and principles to the instructional systems design model of training: implications for needs assessment, design, and transfer", International Review of Industrial and Organizational Psychology, Wiley, Chichester, No. 10, pp. 1-48.

Friedman, A.L. and Miles, S. (2006), Stakeholder: Theory and Practice, Oxford University Press, Oxford.

Garavan, T.N. (1995), "HRD stakeholders: their philosophies, values, expectations and evaluation criteria", Journal of European Industrial Training, Vol. 19 No. 10, pp. 17-30.

Gregory, A. (2000), "Problematizing participation", Evaluation, Vol. 6 No. 2, pp. 179-99.

Hinkin, T.K. (1998), "A brief tutorial on the development of measures for use in survey questionnaires", Organizational Research Methods, Vol. 1, pp. 104-21.

Holte-McKenzie, M., Forde, S. and Theobald, S. (2006), "Development of a participatory monitoring and evaluation strategy", Evaluation and Program Planning, No. 29, pp. 365-76.

Kirkpatrick, D.L. (1998), Evaluating Training Programs: The Four Levels, Berrett-Koehler, San Francisco, CA.

Kontoghiorghes, C. (2001), "Factors affecting training effectiveness in the context of the introduction of a new technology – a US case study", International Journal of Training and Development, Vol. 5 No. 4, pp. 248-60.

Lazarsfeld, P.F. and Rosenberg, M. (1955), The Language of Social Research, The Free Press, New York, NY.

LeBaron Wallace, T. (2008), "Integrating participatory elements into an effectiveness evaluation", Studies in Educational Evaluation, No. 34, pp. 201-7.

Lewis, T. (1996), "A model for thinking about the evaluation of training", Performance Improvement Quarterly, Vol. 9 No. 1, pp. 3-22.

Mark, M.M., Henry, G.T. and Julnes, G. (2000), Evaluation: An Integrated Framework for Understanding, Guiding, and Improving Policies and Programs, Jossey Bass, San Francisco, CA.

Mathie, A. and Greene, J.C. (1997), "Stakeholder participation in evaluation: how important is diversity?", Evaluation and Program Planning, No. 20, pp. 279-85.

Maxwell, J.A. (1996), Qualitative Research Design: An Interactive Approach, Sage, Thousand Oaks, CA.

Meny, Y. and Thoenig, J.C. (1989), Politiques Publiques, PUF, Paris.

Michalski, G.V. and Cousins, J.B. (2000), "Differences in stakeholder perceptions about training evaluation: a concept mapping/pattern matching investigation", Evaluation and Program Planning, No. 23, pp. 211-30.

Miles, M.B. and Huberman, A.M. (1994), Qualitative Data Analysis, 2nd ed., Sage, Thousand Oaks, CA.


Miller, T.I. (1994), "Designing and conducting surveys", in Wholey, J.S., Hatry, H.P. and Newcomer, K.E. (Eds), Handbook of Practical Program Evaluation, Jossey Bass, San Francisco, CA, pp. 271-92.

Nickols, F.W. (2005), "Why a stakeholder approach to evaluating training", Advances in Developing Human Resources, Vol. 7 No. 1, pp. 121-34.

Ostrom, E. (1990), Governing the Commons: The Evolution of Institutions for Collective Action, Cambridge University Press, Cambridge.

Phillips, J.J. (1996), "ROI: the search for best practices", Training and Development, No. 50, pp. 42-7.

Polanyi, K. (1944), The Great Transformation, Holt, Rinehart & Winston, New York, NY.

Reineke, R. (1991), "Stakeholder involvement in evaluation: suggestions for practice", American Journal of Evaluation, Vol. 12 No. 39, pp. 39-44.

Ross, J.A. (2008), "Cost-utility analysis in educational needs assessment", Evaluation and Program Planning, No. 31, pp. 356-67.

Rossi, P., Freeman, H.E. and Lipsey, M.W. (1999), Evaluation: A Systematic Approach, 6th ed., Sage, Thousand Oaks, CA.

Salas, E. and Cannon-Bowers, J.A. (2001), "The science of training: a decade of progress", Annual Review of Psychology, No. 52, pp. 471-97.

Scriven, M. (1993), Evaluation Thesaurus, Sage, London.

Shani, A.B., Mohrman, S.A., Pasmore, W.A., Stymne, B. and Adler, N. (2007), Handbook of Collaborative Research, Sage, London.

Talbot, C. (1992), "Evaluation and validation: a mixed approach", Journal of European Industrial Training, Vol. 16 No. 5, pp. 26-32.

Tannenbaum, S.I. and Yukl, G. (1992), "Training and development in work organizations", Annual Review of Psychology, No. 43, pp. 399-441.

Wills, S. (1993), "Evaluation concerns: a systematic response", Journal of European Industrial Training, Vol. 17 No. 10, pp. 10-14.

Further reading

Abernathy, D.J. (1999), "Thinking outside the evaluation box", Training & Development, Vol. 53 No. 2, pp. 19-23.

Alkin, M.C., Hofstetter, C.H. and Ai, X. (1998), "Stakeholder concepts in program evaluation", in Reynolds, A. and Walberg, H. (Eds), Advances in Educational Productivity, No. 7, JAI Press, Greenwich, CT, pp. 87-113.

Bassi, L., Benson, G. and Cheney, S. (1996), "The top ten trends", Training & Development, No. 50, pp. 29-33.

Bates, R.A., Holton, E.F. III, Seyler, D.A. and Carvalho, M.A. (2000), "The role of interpersonal factors in the application of computer-based training in an industrial setting", Human Resource Development International, Vol. 3, pp. 19-43.

Bryk, A.S. (1983), Stakeholder-based Evaluation: New Directions for Program Evaluation, Jossey Bass, San Francisco, CA.

Burke Johnson, R. (1991), "Toward a theoretical model of evaluation utilisation", Evaluation and Program Planning, No. 21, pp. 93-110.

Cook, T.D., Leviton, L.C. and Shadish, W.R. (1985), "Program evaluation", in Lindzey, G. and Aronson, E. (Eds), Handbook of Social Psychology, 3rd ed., Random House, New York, NY, pp. 699-777.


Cronbach, L.J., Ambron, S.R., Dornbusch, S.M., Hess, R.D., Hornik, R.C., Phillips, D.C., Walker, D.F. and Weiner, S.S. (1982), Toward Reform of Program Evaluation, Jossey Bass, San Francisco, CA.

Fitzenz, J. (1988), "Proving the value of training", Personnel, March, pp. 17-23.

Ford, J.K., Quinones, M., Sego, D. and Sorra, J. (1992), "Factors affecting the opportunity to use trained skills on the job", Personnel Psychology, No. 45, pp. 511-27.

Garaway, G.B. (1995), "Participatory evaluation", Studies in Educational Evaluation, Vol. 21 No. 1, pp. 85-102.

Geber, B. (1995), "Does your training make a difference? Prove it!", Training, No. 3, pp. 27-34.

Greene, J.C. (1988), "Stakeholder participation and utilization in program evaluation", Evaluation Review, No. 12, pp. 91-116.

Guba, E.G. and Lincoln, Y.S. (1981), Effective Evaluation: Improving the Usefulness of Evaluation Results Through Responsive and Naturalistic Approaches, Jossey Bass, London.

Guba, E.G. and Lincoln, Y.S. (1989), Fourth Generation Evaluation, Sage, Newbury Park, CA.

Holton, E.F. III (1996), "The flawed four-level evaluation model", Human Resource Development Quarterly, Vol. 7 No. 1, pp. 5-21.

House, E.R. and Howe, K.R. (1999), Values in Evaluation and Social Research, Sage, Thousand Oaks, CA.

Kearsley, G. (1982), Costs, Benefits, and Productivity in Training Systems, Addison-Wesley, Reading, MA.

King, J.A. (2007), "Making sense of participatory evaluation", New Directions for Evaluation, No. 114, pp. 83-105.

McLean, G.N. (2005), "Examining approaches to HR evaluation: the strengths and weaknesses of popular measurement methods", Strategic Human Resources, Vol. 4 No. 2, pp. 24-7.

McLinden, D.J. (1995), "Proof, evidence, and complexity: understanding the impact of training and development in business", Performance Improvement Quarterly, Vol. 8, pp. 3-18.

McLinden, D.J. and Trochim, W.M.K. (1998), "Getting to parallel: assessing the return on expectations of training", Performance Improvement, No. 37, pp. 21-6.

Madaus, G.F., Scriven, M.S. and Stufflebeam, D.L. (1986), Evaluation Models: Viewpoints on Educational and Human Services Evaluation, Kluwer-Nijhoff, Boston, MA.

Mark, M.M. and Shotland, R.L. (1985), "Stakeholder-based evaluation and value judgments", Evaluation Review, No. 9, pp. 605-26.

Michalski, G.V. and Cousins, J.B. (2001), "Multiple perspectives on training evaluation: probing stakeholder perceptions in a global network development firm", American Journal of Evaluation, Vol. 22 No. 1, pp. 37-53.

Ostrom, E. (2000), "Collective action and the evolution of social norms", Journal of Economic Perspectives, Vol. 14 No. 3, pp. 137-58.

Ostrom, E., Gardner, R. and Walker, J. (1994), Rules, Games and Common-pool Resources, The University of Michigan Press, Ann Arbor, MI.

Patton, M.Q. (1998), Utilization-focused Evaluation, 3rd ed., Sage, Beverly Hills, CA.

Phillips, J.J. (1997), Return on Investment in Training and Performance Improvement Programs, Gulf Publishing, Houston, TX.

Reason, P. and Bradbury, H. (2001), Handbook of Action Research: Participative Inquiry and Practice, Sage, London.


Scriven, M. (1996), “Goal-free evaluation”, Evaluation News and Comment Magazine, Vol. 5 No. 2, pp. 5-9.

Shadish, W.R., Cook, T.D. and Leviton, L.C. (1991), Foundations of Program Evaluation, Sage, Beverly Hills, CA.

Sridharan, S., Campbell, B. and Zinzow, H. (2006), “Developing a stakeholder-driven anticipated timeline of impact for evaluation of social programs”, American Journal of Evaluation, No. 27, pp. 148-62.

Tesoro, F. (1998), “Implementing a ROI measurement process at Dell Computer”, Performance Improvement Quarterly, No. 11, pp. 103-14.

Vaessen, J. (2006), “Programme theory evaluation: multicriteria decision aid and stakeholder values”, Evaluation, No. 12, pp. 397-417.

Weiss, C.H. (1983), “The stakeholder approach to evaluation: origins and promise”, New Directions for Program Evaluation, No. 17, pp. 3-14.

Ya Hui Lien, B., Yu Yuan Hung, R. and McLean, G.N. (2007), “Training evaluation based on cases of Taiwanese benchmarked high-tech companies”, International Journal of Training and Development, Vol. 11 No. 1, pp. 35-48.

(The Appendices follow overleaf.)

About the authors

Marco Guerci is a Researcher at the Department of Management, Economics and Industrial Engineering of the Politecnico di Milano. His research interests are focused on human resource management, especially on training and development evaluation. Marco Guerci is the corresponding author and can be contacted at: [email protected]

Marco Vinante is a Senior Researcher and works as a Consultant for public institutions and private organisations. His research interests are focused on education, labour policies and workfare systems; furthermore, his research activities are strictly related to the evaluation of public and social policies.



Appendix 1

Table AI. Attribution of importance to the dimensions
(Columns: Companies | Public authority | Training providers | Enterprise association | Trainers | Participants | Mean)

1. Alignment between training level of program and level of participants: 8.38 | 9.00 | 8.67 | 8.33 | 8.45 | 7.53 | 8.39
2. Avoidance of content overlap in different courses: 7.95 | 9.50 | 7.56 | 7.00 | 7.86 | 7.39 | 7.88
3. Increased investment in training by companies: 8.57 | 10.00 | 9.00 | 9.67 | 8.68 | 8.42 | 9.06
4. Increase in public resources to be invested in continuous training financing: 8.52 | 8.00 | 9.11 | 9.67 | 7.95 | 8.08 | 8.56
5. Comprehensiveness of course catalogues offered by training providers: 7.71 | 10.00 | 7.89 | 9.33 | 6.36 | 7.21 | 8.09
6. Knowledge/capacity that the trainer has acquired during the training program: 7.19 | 7.50 | 7.11 | 6.33 | 7.86 | 6.95 | 7.16
7. Creation of a network among training providers, financer and companies: 8.38 | 8.50 | 9.22 | 7.67 | 7.95 | 7.66 | 8.23
8. Preparation of standard training catalogues by training providers: 7.33 | 8.00 | 7.22 | 6.33 | 7.00 | 7.26 | 7.19
9. Preparation of standard training packages by trainers: 7.05 | 7.50 | 6.89 | 6.33 | 7.09 | 7.13 | 7.00
10. Accessibility to training program (for workers/employees): 8.29 | 9.00 | 7.67 | 6.33 | 7.82 | 7.71 | 7.80
11. Heterogeneous levels of course participants (from different work categories and/or professional profiles): 7.38 | 7.50 | 7.22 | 6.00 | 5.95 | 7.08 | 6.86
12. Satisfaction with the course’s organisation (venues, scheduling, classroom layout [...]): 7.81 | 8.00 | 8.11 | 6.67 | 7.91 | 7.47 | 7.66
13. Satisfaction with the didactics/training methods: 8.95 | 8.00 | 9.11 | 8.67 | 9.09 | 8.53 | 8.72
14. Impact of the training program on company results: 8.33 | 10.00 | 8.56 | 8.33 | 7.95 | 6.79 | 8.33
15. Increase in level of innovation within the company after the training program: 8.10 | 10.00 | 8.11 | 8.33 | 7.86 | 6.92 | 8.22
16. Integration of course catalogues offered/provided by training providers: 7.81 | 9.00 | 7.78 | 7.00 | 7.18 | 6.92 | 7.62
17. Number of bureaucratic procedures imposed on participating companies: 8.29 | 9.00 | 8.89 | 9.67 | 6.09 | 6.61 | 8.09
18. More knowledge acquired by training providers of companies’ training needs: 8.48 | 8.50 | 9.22 | 9.00 | 8.23 | 7.92 | 8.56
19. Improvement in training providers’ image among companies: 7.29 | 7.50 | 9.11 | 7.33 | 8.59 | 6.61 | 7.74
20. Improvement in the company’s image with its workers/employees: 7.81 | 8.00 | 7.56 | 6.67 | 6.86 | 7.05 | 7.32
21. Improvement in the image of human resources with line management/workers/employees: 7.48 | 7.50 | 7.89 | 7.33 | 6.55 | 6.68 | 7.24
22. Improvement in the enterprise association image with the companies/workers/employees: 6.67 | 7.50 | 7.67 | 9.33 | 6.41 | 6.24 | 7.30
23. Improvement in the external image of companies participating in the training program: 7.19 | 7.50 | 7.44 | 7.33 | 6.23 | 6.63 | 7.05
24. New collaborations between companies and training providers: 7.90 | 8.50 | 9.00 | 7.00 | 8.50 | 7.00 | 7.98
25. Participation by small and medium-sized businesses: 8.33 | 9.00 | 7.78 | 9.00 | 7.14 | 7.50 | 8.12
26. Possibility to define training financing procedures with the public authorities: 8.33 | 7.50 | 9.67 | 9.67 | 7.00 | 7.00 | 8.19
27. Degree of knowledge and skills acquired by participants: 8.71 | 8.50 | 8.89 | 8.67 | 9.36 | 7.63 | 8.63
28. Number of companies participating in the project: 5.43 | 8.00 | 7.00 | 7.33 | 5.55 | 6.00 | 6.55
29. Compliance with project tender obligations: 7.19 | 10.00 | 9.11 | 6.67 | 6.09 | 7.29 | 7.72
30. Synergies between the project and other financing sources for training: 7.90 | 9.50 | 8.44 | 9.00 | 7.45 | 6.89 | 8.20
31. Satisfaction level of companies with the training program: 7.57 | 10.00 | 8.67 | 9.67 | 8.50 | 7.26 | 8.61
32. Transparency of mechanisms controlling access to financed training services: 8.48 | 9.00 | 9.11 | 9.33 | 6.82 | 6.68 | 8.24
33. Efficient use of resources by training providers: 8.29 | 10.00 | 9.00 | 7.67 | 7.91 | 7.74 | 8.43
34. Utility of acquired knowledge and skills for the careers of the participants: 8.48 | 10.00 | 7.67 | 7.33 | 8.77 | 7.84 | 8.35
35. Utility of knowledge and skills acquired by participants for their jobs on a short-term basis: 8.48 | 9.00 | 8.22 | 7.67 | 8.23 | 7.26 | 8.14
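The Mean column of Table AI appears to be the simple (unweighted) average of the six stakeholder-group means. A minimal Python check, using values copied from rows 1, 3 and 28 of the table (a few rows, e.g. row 5, differ from this rule by 0.01, presumably from rounding in the original):

```python
# Verify that the reported Mean equals the unweighted average of the six
# stakeholder-group means (rounded to two decimals), for three rows of Table AI.
rows = {
    # dimension: ([Companies, Public authority, Training providers,
    #              Enterprise association, Trainers, Participants], reported mean)
    "1. Alignment between training level and participants": ([8.38, 9.00, 8.67, 8.33, 8.45, 7.53], 8.39),
    "3. Increased investment in training by companies":     ([8.57, 10.00, 9.00, 9.67, 8.68, 8.42], 9.06),
    "28. Number of companies participating in the project": ([5.43, 8.00, 7.00, 7.33, 5.55, 6.00], 6.55),
}
for name, (group_means, reported) in rows.items():
    computed = round(sum(group_means) / len(group_means), 2)
    print(f"{name}: computed {computed}, reported {reported}")
```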


Appendix 2

Table AII. Analysis of variance, post hoc test by Scheffe (Sig. < 0.05), means and differences between groups

- Quality and amount of knowledge and skills acquired by participants: Trainers vs participants: +1.732
- Number of bureaucratic procedures imposed on the participating company: Training providers vs trainers: +2.798
- Impact of training program on company results: Companies vs trainers: +2.195; Companies vs participants: +1.544
- Transparency of mechanisms controlling access to financed training services: Training providers vs participants: +2.427; Companies vs participants: +1.792
- Improvement of training provider’s image with the company: Training providers vs participants: +2.506; Trainers vs participants: +1.986
- Potential to define training financing procedures with public authorities: Training providers vs trainers: +2.667; Training providers vs participants: +2.667
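Table AII reports, for each dimension, the pairwise differences between stakeholder-group means that a Scheffe post hoc test (following a one-way ANOVA) flagged at Sig. < 0.05. A minimal sketch of that computation in pure Python; the group names, ratings and group sizes are invented for illustration, not values from the study:

```python
import math
from itertools import combinations

# Hypothetical importance ratings (1-10 scale) for one dimension.
groups = {
    "trainers":     [9, 8, 9, 10, 8, 9],
    "participants": [6, 7, 6, 7, 8, 6],
    "companies":    [8, 7, 8, 9, 7, 8],
}

k = len(groups)
n_total = sum(len(v) for v in groups.values())
means = {g: sum(v) / len(v) for g, v in groups.items()}

# Within-group (error) mean square, as in one-way ANOVA.
ss_within = sum((x - means[g]) ** 2 for g, v in groups.items() for x in v)
ms_within = ss_within / (n_total - k)

# Tabulated F(k-1, n_total-k) critical value at alpha = 0.05 for these
# made-up sizes, i.e. F(2, 15); hard-coded to keep the sketch stdlib-only.
F_CRIT = 3.68

# Scheffe criterion: |m_i - m_j| is significant when it exceeds
# sqrt((k-1) * F_crit * MSE * (1/n_i + 1/n_j)).
for g1, g2 in combinations(groups, 2):
    diff = means[g1] - means[g2]
    margin = math.sqrt((k - 1) * F_CRIT * ms_within
                       * (1 / len(groups[g1]) + 1 / len(groups[g2])))
    flag = "significant" if abs(diff) > margin else "n.s."
    print(f"{g1} vs {g2}: {diff:+.3f} ({flag})")
```

With these toy data only the trainers vs participants difference (+2.167) exceeds the Scheffe margin (about 1.213), mirroring the way Table AII lists only the significant pairs.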


Appendix 3

Table AIII. Factor matrix (rotated component matrix)

Component 1. Improving competitiveness of companies and support to human resources:
- Impact of training program on company results: 0.805691
- Utility of acquired knowledge and skills for professional careers of participants: 0.758309
- Utility of knowledge and skills acquired by participants for their jobs on a short-term basis: 0.742195
- Increase in level of innovation within the company after the training program: 0.735902
- Degree of knowledge and skills acquired by participants: 0.66691
- New collaborations between companies and training providers: 0.558772
- Satisfaction level of companies with the training program: 0.517354

Component 2. Promotion of fairness and image:
- Improvement in the external image of companies participating in the training program: 0.770078
- Accessibility to training program (for workers/employees): 0.712282
- Improvement in the training providers’ image with the companies: 0.691241
- Synergies between the project and the other financing sources for training: 0.680329
- Participation by the small and medium-sized businesses: 0.663488
- Improvement in the company association’s image with the companies/workers/employees: 0.516438

Component 3. Network stabilization:
- Increase in public resources to be invested in continuous training financing: 0.798593
- Creation of a network among training providers, financer and companies: 0.769497
- Possibility to define training financing procedures with the public authorities: 0.631817
- Increased investment in training by companies: 0.605377
- More knowledge acquired by training providers of companies’ training needs: 0.601301

Component 4. Training services offer:
- Integration of the course catalogues provided by training providers: 0.849739
- Comprehensiveness of course catalogues provided by training providers: 0.830391
- Preparation of standard training catalogues by the training providers: 0.652659
- Number of bureaucratic procedures imposed on participating companies: 0.60184
- Efficient use of resources by training providers: 0.582032

Component 5. Learner care:
- Alignment between the training level of the program and the level of the participants: 0.743441
- Satisfaction with the course’s organisation (venues, scheduling, classroom layout [...]): 0.589429
- Satisfaction with the didactics/training methods: 0.549852

Notes: Extraction method: principal components analysis; rotation method: varimax with Kaiser normalization; explained variance: 65.5 per cent; the rotation converged in six iterations
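The notes to Table AIII describe the method: principal component extraction followed by varimax rotation. As a conceptual sketch of what the rotation step does (polarize the loadings so that each item loads mainly on one component), the pure-Python fragment below grid-searches the rotation angle for a made-up two-factor loading matrix; all loadings here are illustrative, not the study's:

```python
import math

# Toy unrotated loading matrix (6 items x 2 components); invented numbers.
loadings = [
    (0.70,  0.50), (0.65,  0.45), (0.60,  0.55),
    (0.55, -0.50), (0.60, -0.55), (0.50, -0.45),
]

def varimax_criterion(L):
    """Sum over components of the variance of the squared loadings."""
    p = len(L)
    total = 0.0
    for j in range(len(L[0])):
        sq = [row[j] ** 2 for row in L]
        mean_sq = sum(sq) / p
        total += sum((s - mean_sq) ** 2 for s in sq) / p
    return total

def rotate(L, theta):
    """Orthogonal rotation of a two-component loading matrix by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [(a * c + b * s, -a * s + b * c) for a, b in L]

# With two components, varimax reduces to a one-parameter search over the
# rotation angle; a coarse grid over [0, 90) degrees is enough for a sketch.
best_theta = max((i * math.pi / 360 for i in range(180)),
                 key=lambda t: varimax_criterion(rotate(loadings, t)))
rotated = rotate(loadings, best_theta)
print(f"criterion before rotation: {varimax_criterion(loadings):.4f}")
print(f"criterion after rotation:  {varimax_criterion(rotated):.4f}")
```

After rotation the first three items load almost entirely on one component and the last three on the other, which is the "simple structure" that makes a rotated matrix like Table AIII interpretable.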
