

HOW DO EXECUTIVES EVALUATE DISTANCE TRAINING?

A GROUNDED THEORY STUDY

Paul Hardt, Core Faculty, Capella University

Contact Information:

Phone: 888-227-3552 extension 5494

E-mail: [email protected]

"Our bias is, if the topic is supercritical, we don't use distance learning, because we don't have the opportunity to really evaluate whether the distance training is really effective. When we do the live training, we can determine better if the training is effective."

Compliance officer for a multi-billion-dollar petroleum company

"What we're doing [with distance training] is what our CFO calls a 'rounding error.' The cost is so minimal compared to the potential benefit to the organization…it's not worth doing a formal ROI analysis."

Executive leading a distance training initiative in a health services organization

The delivery of "technologically mediated" training is a significant and growing trend in corporate training. According to the ASTD State of the Industry Report (2006), of the $109.25 billion spent annually on corporate training at the time of the report, $39.33 billion was delivered through technologically mediated means. This was up dramatically from the $8 billion spent on computer-delivered training reported in 2004. Sixty percent of the 2006 technologically mediated instruction, or $23.6 billion, was delivered online.
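As a quick check of the arithmetic implied by these figures (the percentages below are computed from the numbers above, not quoted from the report):

\[
\frac{\$39.33\ \text{billion}}{\$109.25\ \text{billion}} \approx 36\%\ \text{of all training spend}
\qquad\text{and}\qquad
0.60 \times \$39.33\ \text{billion} \approx \$23.6\ \text{billion delivered online}.
\]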

However, an analysis of 27 case studies of distance training (Berge, 2001; Schreiber and Berge, 1998; Rosenberg, 2001 and 2006; Van Dam, 2004) shows that this vast amount of training is still being evaluated along very traditional lines. Kirkpatrick's "Four Levels" (Kirkpatrick, 1994) was the exclusive framework used to report the results of all 27 cases.

This lack of progress in rigorously examining the effects of distance training is disturbing, as Twitchell, Holton, and Trott (cited in Holton and Naquin, 2005) assert:

Popular press and business leaders all discuss the need to increase the rate of growth in productivity in the face of ever increasing competition. Furthermore, there is increasing research evidence that human resource practices contribute significantly to organizational outcomes. The training literature presents evaluation as a necessary component in providing training that can help organizations increase these outcomes. There are numerous case studies of effective evaluation. Even estimating financial return, which is often presumed to be the hardest part of evaluation, has been widely demonstrated to be very feasible.

Yet, all the literature on how much evaluation is used by business and industry suggests that only about half of the training programs are evaluated for objective performance outcomes. Additionally, less than one third of training programs are evaluated in any way that measures changes in organizational goals or profitability.

In a specific indictment of the Kirkpatrick model, Swanson and Holton (2001) cite several problems with the "Four Levels":

"Not supported by research—Research has consistently shown that the levels within the taxonomy are not related, or only correlated at a low level.

Emphasis on reaction measures—Research has shown that reaction measures have nearly zero correlation with learning or performance outcome measures.

Failure to update the model—The model has remained the same for the last forty [fifty!] years with little effort to update or revise it.

Not used. [See Twitchell, Holton, and Trott's quotation above.]

Can lead to incorrect decisions—The model leaves out so many important variables that four-level data alone are insufficient to make correct and informed decisions about training program effectiveness." (p. 360)

Holton and Naquin (2005), in their critique of the present state of evaluation theory and practice, argue that the slate of evaluation notions should be wiped clean and that grounded theory methods should be used to construct valid theories of how evaluation should be conducted. That is what this study proposes to do. Given the state of thinking at present about evaluation of training and performance improvement in general, and evaluation of distance training in particular, this study will consider the question: What theories can best account for how executives evaluate distance training?

Grounded Theory

The grounded theory method is the "systematic generation of theory from data that has been empirically collected and analyzed" (Hansen, 2005). The grounded theory method helps generate new insight by discovering how things really happen. It is especially appropriate to this study because, as Swanson and Holton (2001) argue, Kirkpatrick's "Four Levels" has never been critically examined for its validity and reliability. What is needed in the training and performance improvement field is an accurate picture of how performance improvement programs are really evaluated. Holton and Naquin (2005) specifically recommend that grounded theory methods be used to generate new theories and evaluation frameworks.

As suggested by Holton and Naquin (2005), grounded theory (Egan, 2002; Swanson and Holton, 2005; Creswell, 2008) will be the method used to explore the question of how executives evaluate distance training. Grounded theory research is a five-step process that involves gathering qualitative data, coding the data, and eliciting theoretical principles that emerge from the analysis of the data. See Figure 1.


Figure 1. "The Process of Grounded Theory Research," adapted with permission from Egan, T.M. (2002). Grounded theory research and theory building. Advances in Developing Human Resources, 4(3), 277-295.

[The figure diagrams the process as five steps, with data collection ongoing throughout:]

1. Initiation of the research
2. Data selection
3. Initiation of the data collection
4. Data analysis
   4a. Coding the first set of data (naming, comparing, memoing)
   4b. Ongoing application of codes and potential changes in sites or respondents
   4c. Comparing and revising codes
   4d. Checking for emerging categories
   4e. Forming category set(s)
   4f. Applying and modifying data set categories and their properties
   4g. Assessing level of needed elaboration of categories and their properties
   4h. Clarification of developed concepts
   4i. Describing and clarifying the analytical rationale of the research process
5. Concluding the research
   5a. Determining if research is at the point of saturation (if saturation is reached, move to 5b; if not, return to 4b and repeat the process)
   5b. Documenting grounded theory (narrative framework and propositions)
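To make the iterative logic of Figure 1 concrete, the sketch below renders the code-compare-saturate loop in Python. It is an illustration only: the keyword rules and sample responses are invented, this is not the instrument used in this study, and real grounded theory coding is interpretive human work rather than keyword matching.

from collections import defaultdict

# Hypothetical open codes, keyed by signal words an analyst might notice.
CODE_RULES = {
    "reaction": ["like", "happy", "ratings"],
    "learning": ["test", "quiz", "pass"],
    "business_impact": ["compliance", "sales", "audit"],
}

def code_response(text):
    """Steps 4a/4b: attach every matching open code to a response."""
    lowered = text.lower()
    codes = {code for code, words in CODE_RULES.items()
             if any(word in lowered for word in words)}
    return codes or {"uncoded"}

def analyze(waves):
    """Steps 4c-5a: apply codes wave by wave; stop at saturation."""
    categories = defaultdict(list)
    for wave_number, responses in enumerate(waves, start=1):
        new_codes = set()
        for response in responses:
            for code in code_response(response):
                if code not in categories:
                    new_codes.add(code)
                categories[code].append(response)
        if not new_codes:  # Step 5a: no new categories emerged,
            print(f"Saturation reached at wave {wave_number}")
            break          # so move to Step 5b (document the theory).
    return dict(categories)

if __name__ == "__main__":
    waves = [
        ["People are happy with the content.",
         "Reps can lose their jobs if they fail a test 3 times."],
        ["We do internal audits to see how well we comply."],
        ["Participants must pass a quiz."],  # adds nothing new
    ]
    for code, quotes in analyze(waves).items():
        print(f"{code}: {len(quotes)} response(s)")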


Method of this Study

Step 1. Initiation of the research.

Preliminary reading of the literature (the case studies cited above) suggested that executives may be using a traditional framework (e.g., Kirkpatrick). However, as argued above by Swanson and Holton (2001), the Kirkpatrick model has been criticized on several counts. Holton and Naquin (2005) argue for a grounded theory approach to understanding better how training is evaluated. Therefore, the research question posed in this study was:

What theory(ies) can best account for how executives evaluate distance training?

Step 2. Data selection

Participant selection.

Participants were recruited via the "snowball method" (Patton, 1990; Johnson, 2005). Students from courses at an online university were invited to identify executives in the organizations they worked in or in other organizations with which they were familiar. The general goal of the search for study participants was to find executives who made significant decisions about the resources given to distance training, and who were not already invested in Kirkpatrick's Four Levels. Conditions for participants were communicated to these students:

The organization had to do distance training as a significant part of its overall performance improvement strategy.

"Executive" was defined as director level or above, with a preference for vice-president level or above.

The researcher preferred to interview non-training executives (executives in operations, sales, marketing, finance, etc.), but would also interview training executives.

No preference would be shown to any particular industry or size of organization.

To make the interview process more convenient, and thus allow more variety in participants, interviews would be done by phone and last no more than one hour.

The first part of this goal (managers who make significant decisions about resources) was met by the participants who were finally interviewed. The second condition, that a preference would be shown to non-training executives, was met partially. As seen in Table 1, three non-training executives were interviewed. The second wave of interviewees, presented in Table 2, was a more diverse group.


Table 1. Executives who participated in the first wave of interviews.

Participant's Title | Industry | Revenues/Sales in 2007 | Size of target audience
Vice President, Clinical Research | Medical devices | $5 billion | 13,000+ employees
Director, Corporate University | Government contractor, technical services | $1 billion+ | 11,000 employees
Senior Vice President, Education and Training | Child care | $1.6 billion | 34,000 employees
Vice President, Learning and Organizational Development | Government contractor, technical services | $8.3 billion | 44,000 employees
Director, Field Service Support | Service division of "Big Three" auto maker | Annual budget of $40 million | 50 employees
Director, Manufacturing | Division of photographic materials manufacturer | $125 million | 800 employees
Assistant Vice President, Learning Solutions | Health care provider | $27 billion | 190,000 employees
Director, Corporate University | Customer management systems | $150 million | 1,500 employees
Senior Vice President for Sales Training and Development | Financial services | $250 million | 1,000 employees
Vice President for Learning and Performance | Outsourced call centers | $1.2 billion | 55,000 employees


Table 2. Executives who participated in the second wave of interviews.

Participant's Title | Industry | Annual Revenues/Sales | Size of target audience
Vice President, Finance and Corporate Controller | Higher education | $272 million | 1,000+ employees
Vice President, Marketing and Product Management (division leader) | Publishing | $8 billion | 250 employees
Vice President, Finance | Public utility | $6 billion | 10,000 employees
Vice President, Innovation Center | Medical products and services | $1 billion+ | 10,000+ employees
Director, Sales Training | Pharmaceuticals, manufacturing | $20 billion+ | 50,000 employees
Vice President, Compliance and Ethics | Petroleum refining, distribution | $1 billion+ | 3,700 employees
Vice President, Technical Support | Information technology products, services | $3 billion+ | 8,000 employees
Director | Real estate sales, financial services | $2 billion | 35,000 employees and customers worldwide
Senior Director, Services and Support | Software development | Proprietary | 300 customers
Director, Training | State court system | $300 million | 3,500 employees

Interview questions.

In Wave 1 of the data collection process, an interview protocol (Kvale, 1996; Preskill and Torres, 1999) was developed, using these questions:

1. Describe the content, audiences, and goals of some of your distance training programs.

2. How are your distance training programs going right now?

3. So, it sounds like you think your programs are doing well/not doing well on these ___________________ points.

4. How important are these points to other managers? To management above you?

5. In what kinds of settings are the results of distance training programs discussed? Can you give me an example of a time when a distance training program was discussed? What was said?

6. Some organizations use data grouped around these ideas to judge the effectiveness of a distance training program. Do you use any of these ideas in judging your distance training programs?

   Participants' reactions to the training ("This was a good training program.")

   Supervisors' reactions to the training ("I'm glad my people took this course.")

   On-the-job application reports from participants and/or supervisors ("I used the training in these ways….")

   Business impact reports ("I increased my sales by 20% after I completed this training.")

   Return on investment

   How important is the information you get from these sources of feedback about distance training?

Step 3. Initiation of data collection.

The interviews for Wave 1 began in July 2007 and concluded in February 2008. The Wave 2 interviews began in February 2008 and concluded in October 2009.

Step 4. Data Analysis

In keeping with grounded theory method, participants' responses to the questions were recorded in two ways: the researcher kept contemporaneous notes of the interviews, and recordings were made of all interviews. All responses were coded.

The first two questions in the interview ("Describe the content, audiences, and goals of some of your distance training programs. Describe the delivery methods you use." and "How are your distance training programs going right now?") were used in both waves of interviews, to establish a mutual understanding between the researcher and the interviewee of the nature of distance training programs in the subject organization. Responses to the first two questions were very similar between the waves, so no distinction is made at this point between the two waves:


Question #1. "Describe the content, audiences, and goals of some of your distance training programs."

Content (first wave):

- Technical training: operating the technical equipment (computers, machinery, etc.) that is part of the regular business of the organization
- Customer service
- Policy/procedure execution and compliance
- Office skills
- User training for new software systems
- Business processes
- ERP (Enterprise Resource Planning) training
- How to use financial systems
- Customer training (how to use the company's products)
- Personal development: effective meeting management, leadership development, time management
- Product knowledge
- Regulatory compliance (safety, financial rules and regulations, ethics, "code of conduct," EEO)
- Innovation: forum for using Six Sigma processes on customer problems
- Off-the-shelf online courses
- Project management

Audiences:

- General employee population
- Sales people
- Customer service
- Engineers

Goals:

- Learn how to operate technical equipment
- Meet compliance standards
- Orient employees to company policies and procedures
- Give employees an opportunity for self-development

Delivery methods:

- Online courses
- Webcasts
- Teleconferencing
- Social networking (forum)
- CD
- Live videos

Step 4a. Coding of first-wave responses, and Step 4b. Ongoing application of codes.

In Wave 1, when asked, "Generally, how are your programs doing?" early respondents' spontaneous replies indicated that their framework for evaluating distance training followed the conventional Kirkpatrick/Phillips "levels." These "level" responses were heard in many of the remaining Wave 1 interviews.

Table 3. Examples of "level" responses in first wave interviews.

Level 1 (Reaction):
"We get 'excellent' to 'very good' on our training materials."
"People are happy with the content."
"People like a 'high touch' teaching approach."

Level 2 (Learning):
"Participants must get an 80% on a final quiz."
"Management makes the assumption that someone cannot pass a course without passing the test."
"[Distance training] is good for building awareness, informational purposes."

Level 3 (Application):
"We almost always measure application. We do observations, follow-up."
"We look for 'real-time' demonstration of skills."
"We listen in on the phones…"

Level 4 (Business Impact/Return on Investment):
"We got a global research award [that resulted from some of the distance training we did]."
"Our climate survey results are very positive."
"We do internal audits to see how well we comply with internal procedures."

In addition to these "level" criteria, other feedback was heard in the interviews that did not fit into the level-based criteria:

a. Appropriateness of content: "Content needed to be specific to our needs." "Content must help support compliance with regulations."

b. Implementation: "Tracking completion was important." Issues about employees getting time off to do training.

c. Design: Concerns about interactivity of courses, use of case studies and practice scenarios, group vs. individual instructional approach.

Step 4c. Comparing and revising codes.

While the results of the first wave of interviews were helpful in establishing the "lay of the land" in terms of the present state of evaluation frameworks and methods, the responses presented a very static and conventional view of the evaluation of distance training. Few comments from the executives showed them drawing connections between, say, Level 1 results and Level 2 results. No responses suggested how executives drew connections between the evaluation data they looked at and the judgments they made about the consequences that should accompany a training and performance improvement initiative, such as a decision that the program should be dropped.

At this point, Swanson and Holton's (2001) argument that "accountability," not evaluation, should be the focus of training and performance improvement professionals informed the research process. Accountability is the application of consequences, positive and negative, to the results of a performance improvement initiative. If management determines the initiative was a success, then the initiative may receive more resources and recognition, and it can continue. If, in management's eyes, the program was not a success, then negative consequences may ensue. Swanson and Holton argue that Kirkpatrick's Four Levels, and other similar frameworks, do not give insight into how management uses the data they receive. As cited earlier, Holton and Naquin (2005) argue that research should be done on precisely how management makes these decisions.

In light of this argument, the researcher revised the interview questions, taking out the last four questions and replacing them with two questions that elicited very fruitful responses:

At what point, in looking at your evaluation data, do consequences (positive or negative) start to happen?

The idea behind this question is to get directly to the issue of consequences and accountability. One of the critical reasons trainers collect and communicate evaluation data is to gain support for their efforts. This support may mean a larger budget, recognition for the department, or greater influence with management. These consequences flow from an executive's examination, analysis, and judgment about the success or failure of the initiative. It is vital that training and performance improvement professionals know how executives make the connection between data and consequences, so they can create and execute appropriate initiatives.


How do you make the connection between the amount of money you spend on distance training and the results you get?

Most trainers do not have a finance or operations background. Yet the people who make the critical decisions about their programs are primarily from operations and finance. To many trainers, the executive mind is a "black box" in which mysterious operations happen that lead to significant decisions about their initiatives. This question was designed to get into that "black box," to better understand the thinking process executives use to make those all-important decisions.

These two questions were asked of all of the executives in Wave 2 of the study.

Step 4e. Forming category sets, and Step 4f. Applying and modifying data set categories and their properties.

When executives responded to the first new question, their responses seemed to fall best into a framework of management decision-making called "normative," as described by Holton and Naquin (2005). The normative framework contains criteria that are "ideal" models for making decisions. In most business decision-making, these are usually financial models: profit and loss, return on investment, cost/benefit. In training and performance improvement, Kirkpatrick's "Four Levels" (reaction, learning, transfer, and business impact) (Kirkpatrick, 1994) is just one of many examples of normative criteria used to evaluate training programs.


Table 4. Question: "What is the point in the evaluations when consequences (positive and negative) start to happen?"

Responses indicating Kirkpatrick "Level 1" data as being important in management decision-making:

"People would call me and say, 'This really stinks.'"
"If course ratings go below 4 or 5." (10-point scale)
"If someone doesn't like the training, but it still applies to their job, they still have to take the training."
"We had (some training) that was designed to help schedulers. The manager of the employees—he hated it. He called me up. Employees really liked it. We concluded that we need a different type of training for the leaders."
"If a leader calls up and lets me have it, then we pay more attention."
"We listen to participants who comment on the content of the training."
"We get some feedback from Compliance System Owners—we want them to go out and talk to people about the training and give us feedback."
"We wouldn't make a change if only one manager complained. We do listen to the feedback."
"Similar job titles may not need similar training. We have to watch out for that."

Responses indicating Kirkpatrick "Level 2" data as being important in management decision-making:

"Reps can lose their jobs if they fail a test 3 times."
"To attend a classroom event, a sales person has to pass the distance training learning tests."
"It depends. It may not be a requirement to pass the training, but it may be a part of our compliance system."
"Passing the test is considered being in compliance…" [This last comment indicates a "cross-over" in categories. In the cases where compliance was an important goal of the training, satisfactory test scores helped keep the organization in compliance, thus meeting an important "business impact" (Level 4) goal.]

Responses indicating Kirkpatrick "Level 4" data as being important in management decision-making:

"Not complying with regulations."
"If an interesting, valuable idea came out of the training process within the first 3-6 months. If an idea came out of the process that the CEO and steering committee really like."
"When things are going well for the company as a whole, then training is left alone."
"Consequences start when the business indicators show a problem."
"Unexpected events lead to discussion about what needs to be done with training—regulatory changes."
"We may need to purchase a new LMS. If we can train the same number of people at less cost, then it's approved."
"In some units, there are direct measures on compliance."
"Performance is tied to compensation."
"There are regulations that require a training requirement. Some regulations do not have this training requirement."
"For some job classes, passing a test is considered being 'in compliance'—you have to be a qualified operator, and passing a test is part of the qualification process."
"It [feedback] comes at different levels. If it came from a customer, then I would probably look into that."
"We realize we aren't developing talent in a timely way. We are behind in having the right amount and quality of talent we need."
"Feedback from the performance management system."
"We look at how the training influenced their performance—'Suzy Q applied concept X from the class.'"
"Results of training show up in calculations of 'budgeted hours vs. actual hours' worked with a client."
"Management knows there are some clients who don't pay attention, some do pay attention."
"Delays in implementation."
"Financial impact on me—my bonus plan."
"We have evidence of people getting to a point where they will leave."
"If [key stakeholders] can hear that the same learning objectives [can be met] through these methods, that, combined with the excitement and pleasure of not having to leave a worksite, not having to travel to the Twin Cities, which takes a lot of time and energy away from the courts."
"We have a budget shortfall, and that has forced courts to lay people off, so everyone who is there is doing more work. They have less and less time to spend on education, and yet we have more and more new systems to learn. So, it's seeing the combination of these methods, combined with saving time and money."
"If reports show that there is better data quality because of training."
"Hearing that managers feel the training has positive impact reinforces the top leaders doing more training. If managers are giving the feedback that this is on target, really good, constructive."
"Problems with closing a case."

There were no responses reflecting "Level 3" as being important to management decision-making about distance training.


As a final attempt to delve into the "black box" of executive decision-making, the second new question was asked:

"How do you make the connection between looking at the costs of what you are doing and making a judgment about the worth of your efforts?"

This question yielded the richest variety of responses. Once again, normative criteria (Kirkpatrick's "Four Levels") were heard. In addition, Berge's "Level 0" (attendance) was heard as important data in judging the success of distance training.
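For context, the formal analysis these executives describe skipping is the standard training ROI calculation (Phillips, 2003). With invented numbers that echo the "rounding error" situation quoted at the outset (benefits in the millions, costs in the thousands):

\[
\text{ROI}(\%) = \frac{\text{program benefits} - \text{program costs}}{\text{program costs}} \times 100,
\qquad
\frac{\$2{,}000{,}000 - \$40{,}000}{\$40{,}000} \times 100 = 4{,}900\%.
\]

A ratio this lopsided is precisely why an executive might judge a formal ROI study not worth its own cost.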


Table 5. Question: "How do you make the connection between looking at the costs of what you are doing and making a judgment about the worth of your efforts?"

Responses indicating Berge's "Level 0" data as being important in management decision-making:

"Are people attending the training?"
"We get no feedback on whether they participated or passed. Just whether they completed the training."
"If there is low utilization of the training."
"People are busy."
"Could be the topics offered."
"Some were force-fed. You had to take some training in order to get your log-on to work with a system."

Responses indicating Kirkpatrick "Level 1" data as being important in management decision-making:

"I review feedback from participants with the person who reports to me."

Responses indicating Kirkpatrick "Level 3" data as being important in management decision-making:

"If I can see them apply something on the job, then I think I can conclude there is some business impact."
"Reports of application."
"Observations…."
"What they are looking for is the true desired performance. They don't always take the time to determine what influenced this performance. They are not saying, 'Everyone went through time management, and now everyone is managing their projects better.'" [So, there is no direct cause-effect thinking.]
"[Key stakeholders] see consistent messages, practices, people learning best practices."

Responses indicating Kirkpatrick "Level 4" data as being important in management decision-making:

"With our bargaining unit employees, there is a very formalized evaluation. The training is designed to allow employees to move up the ladder."
"For non-union, non-bargaining unit employees—uses climate survey, 'Saratoga' statistics."
"We use the Gallup survey to assess climate. We use perceptions of employees toward the climate of the organization."
"Field activity reports."
"Observations and field sales reports and customer surveys used to determine if training is working."
"From an incentive point of view, if there aren't compliance issues, then we see the person as a contributor to profit to the organization."
"We use customer satisfaction surveys."
"Employee retention is tracked—if training is successful, because I'm satisfied and my employees are satisfied and they want to stay."
"There are a lot of reports that people did not have access to before."
"I can't say measures lead to more resources. We have been doing more and more training, with less and less resources."

No Level 2 ("Learning") data seems to have been used by executives in helping them make decisions about the value of their distance training programs.


As noted above, the second new question yielded more variety in responses. Besides the responses that seemed to represent normative criteria as being important to making executive-level decisions about distance training, responses representing two other criteria frameworks, "behavioral" and "naturalistic," also emerged from the participants in the second wave.

• Behavioral criteria
– Focus is on the process of evaluation, not criteria.
– Questions may be raised about the "worth" of the training (not just financial worth).
– Questions may be raised about the probability of the training doing some good.
– Examples:
Combs, W.L., and Falletta, S. (2000). The targeted evaluation process. Alexandria, VA: ASTD.
Preskill, H., and Torres, R.T. (1999). Evaluative inquiry for learning in organizations. Thousand Oaks, CA: Sage.

• Naturalistic criteria
– Recognition models: "Whenever X happens, we do Y…"
– Speculative models: "My guess is, the training did some good." "If we had not done the training, we might really have been in trouble."
– Stories: "I heard from so-and-so that your training was really great. I'd like you to do the same program for us."
– Incremental models: "The training needs to be part of a long-term initiative to change the culture of the organization."
– Moral and ethical models: "Doing the training was the right thing to do."
– Example:
Brinkerhoff, R. (2006). Telling training's story. San Francisco, CA: Berrett-Koehler.


Table 6. Responses indicating importance of behavioral criteria.

Process criteria:

"We're in the process of improving our data quality for the information that is coming in through the system."

Probability criteria:

"Past experience of executives causes them to make their judgments."
"Think about the business impact issues, within the context of past experience and environment. If they had experience with an initiative that had impact on business goals, they will use that experience to judge whether an initiative will work. Passing interest in Levels 1, 2, and 3, but their experience in the past with initiatives and whether the initiative had some impact—then they make their judgment."

Worth criteria:

"Our CFO would say what we are doing is a 'rounding error.'" [In other words, "The cost of what we're doing is so low…"]
"The potential return on the activities we are doing is in the millions. The cost of what we are doing is in the thousands."
"Opportunity for growth for our organization is much greater than our investment."
"If we were to start investing millions into this tool, then we would start talking about financial trade-offs."
"The challenge for training is to show value to the business' key interests."
"Huge concern about the time away from work. If people did not have to leave their geographic location."
"If people perceive the training does not apply to them, we do pay attention to this."
"We look for making sure that the investment is being valued by the customer."


Table 7. Responses indicating importance of naturalistic criteria.

Recognition model of decision-making:

"Training requests are based on their observations and what is observed from competitors."
"If there is a change in environment—regulatory, competition."
"We do try to understand how we rank on profitability compared to our competitors."
"Budgets set based on customer purchasing trends. As volumes change, then we know skills training will have to follow. If we saw a lot of attrition, we would start looking at the training."

Speculative model of decision-making:

"Doing training is a natural part of implementation. In an ideal world, the product would be intuitive, but the product does not teach itself, so we just have to do the training."
"A subjective decision, based on what I hear from customers and what I'm seeing in the operation."
"If we hadn't done this training, we might have been in trouble."

Moral, ethical, and cultural models of decision-making:

"We evaluate every employee on their commitment to compliance. People don't become a leader if they are perceived as lacking commitment to compliance."
"We're all pretty well aligned. We know we need to invest in employees."


Once again, as with the first wave of interviews, some interview responses indicated that other, non-level-based criteria were important to some executives. These criteria were represented in these interview responses:

"[The design of the training was…] too generic."
"Not related to their challenges and job."
"It's a package, so that's the problem. Unless you have a strong felt need, you don't take it."
"Training was done too early; people forgot."
"When everything in your world changes, it's hard to cover everything."
"[I evaluated the training based on…] coverage of the topic area."

Step 5. Concluding the Research

Step 5a. Saturation reached? This study is a start in establishing useful, powerful theories that can account for how executives evaluate distance training. So, while "saturation" may not have been achieved, due to limitations of time and resources, the study nevertheless offers several helpful findings and the beginnings of a set of principles upon which training and performance improvement professionals can act and from which other researchers can continue the research.

Step 5b. Documenting grounded theory. The documentation of a proto-theory of how executives evaluate distance training starts with some general findings that hold for both waves of interviews:

1. No distinctions were heard in the interviews between how face-to-face training was evaluated versus how distance training was evaluated. Organizations use the same methods, standards, etc.

2. Distance training was thoroughly integrated into human resource development/performance improvement strategy.

3. HPT/HRD professionals evaluated distance training somewhat differently than operations/finance professionals:
a. For HPT professionals, the four/five levels "trip off the tongue."
b. Operations/finance people do not use the four levels; they focus on business results: employee turnover, timeliness of system implementation.

4. A disconnect was apparent between the data valued by HPT professionals and what operations/finance professionals valued. Operations/finance professionals many times do not get end-of-course evaluation data.

5. Some data are not always shared with management; it's "inside baseball." Satisfaction ratings are shared within the training organization for the purposes of improving delivery of training, but may not be shared with upper management.


6. Trust in the judgment of the training leader is important. Upper management trusts the training leader to make good judgments about training.

7. Different stakeholders have different goals, expectations, and decision-making orientations.

8. As far as "satisfaction" or "perception" is concerned, the question is, "What are people satisfied with?"
a. Materials?
b. Design of the learning experience?
So, general satisfaction or perception ratings are probably not very helpful; people will have widely varying standards and foci for this question. Ratings need to be specific to be really helpful, and "canned" evaluation forms probably are not.

9. Maturity of the initiative. The more mature the initiative, the more "mature" the measures. One organization in the study was just getting into online employee development. They were interested in basic measures of success: how many people post to their discussion groups, does the technology work, are people participating?

"The CEO has been pretty open with the experimentation of this. He's been careful about us putting too much control on this…putting too hard and dry, set evaluation criteria on this. The evaluation criteria is primarily coming from the people on the team [who are developing the development experiences].

The team members themselves are going to have to understand this better, before we can really feel comfortable with how we can measure success. So, part of this is going to be how quickly we can learn from this and how we can measure this learning.

This is something new; we haven't done this before. So, the question is how quickly we can get on this.

Blogs and facilitated discussion are pretty well accepted as being valuable in corporations today.

Part of this is the comfort level with this [technology]."

Maturity also surfaced around ROI:

Researcher: "In the absence of formal ROI information, what is your thinking process that leads you to conclude that the training was worth the money spent on it, or not worth the money spent on it?"

Financial executive: "We're not to that point yet…we're just focused on a system—was it worthwhile, rather than on the components of the system."

Researcher: "So, your focus is more on the impact of the whole project, as opposed to thinking about whether the training was worth it, or whether it was worth bringing in a consultant."

Financial executive: "No, we're not at the maturity level yet."

Researcher: "What do you think about the fact that you do not get any feedback information about how your employees did in the training?"

Marketing executive: "I would think that for those companies that are better structured for employee development, they would do a better job of getting information to managers."

10. Importance of internal and external environmental influences on decision-making. The environment in which the organization works matters:

External environment: Is the labor market competitive? How much turnover is there? What is the regulatory environment? Is employee development part of the branding strategy ("We put a lot of time and money into employee development")?

Internal culture of the organization: Does it value systematic approaches to human performance? Does it use personal development planning (PDP) systems? Some organizations live and die on the PDPs, and these organizations seem to do a much better job tracking the results of distance training.

A Proto-theory Explaining How Executives Evaluate Distance Training

Besides these general findings, the proto-theory contains this narrative framework and these propositions:

Theoretical propositions:

1. Stakeholders and their values and goals influence the organization's evaluation process, the data that are collected, and how the evaluation results are interpreted.

2. Stakeholders can be internal (organization management, employees), straddling the boundary of the organization (boards of directors, who have feet in both worlds, the organization and the "outer" world), and external (regulatory agencies, the stock market, etc.).


3. Stakeholders use a very wide variety of criteria to hold distance training programs accountable. These criteria fall into three main categories of executive decision-making "styles": normative, behavioral, and naturalistic.

4. Important inputs into stakeholders' decisions holding distance training accountable include the stakeholder's past experience, as well as their business values.

5. The "maturity" of a distance training initiative can influence the criteria used to hold distance training accountable.

6. The processes used by stakeholders in holding distance training accountable are internal (within the stakeholder) and external (among stakeholders).

7. The processes used among stakeholders follow the general principles of feedback systems. The helpfulness of the feedback can be influenced by all the characteristics of a feedback system: quantity of feedback, appropriate direction of feedback back to stakeholders, free flow of information, etc.

Figure 2 suggests a visual representation of a systemic explanation of how executives evaluate distance training.


Figure 2. Graphic representation of a theory of how executives evaluate distance training. [The figure shows multiple stakeholders, inside and outside the organization boundary, feeding into the evaluation process; each stakeholder brings past experience, business values, and a personal decision-making orientation.]

11. Implications for practice:

Use Preskill and Torres' (1999) evaluative inquiry questions, or Combs and Falletta's (2000) targeted evaluation process, to set the stage.

Don't assume management will be interested in the Four Levels. Ask first.

Use appropriate evaluation data and methods, based on what the inquiry reveals.

Possible tool:

[The proposed worksheet survives only in fragments; in outline, it maps each stakeholder (Stakeholder #1, #2, #3, and so on) against the three decision-making criteria sets, on the premise that different stakeholders have different priorities: normative criteria (perception, learning, application, results, ROI); behavioral criteria (process, including describing the process of sharing data, worth, and probability); and naturalistic criteria ("If X happens, we always…", stories, incremental, moral/ethical, other).]
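One way to make the worksheet's structure concrete is as a simple data structure. This is a sketch only; the field names and the example stakeholder are invented, not part of the proposed tool.

from dataclasses import dataclass, field

@dataclass
class StakeholderProfile:
    """One row of the hypothetical accountability worksheet."""
    name: str
    # Normative criteria this stakeholder actually watches
    # (perception, learning, application, results, ROI).
    normative: list = field(default_factory=list)
    # Behavioral criteria (process, worth, probability).
    behavioral: list = field(default_factory=list)
    # Naturalistic criteria (recognition "If X happens, we always...",
    # stories, incremental, moral/ethical, other).
    naturalistic: list = field(default_factory=list)

# Example: a hypothetical CFO profiled from interview responses.
cfo = StakeholderProfile(
    name="CFO (hypothetical)",
    normative=["ROI"],
    behavioral=["worth"],
    naturalistic=["recognition"],
)
print(cfo)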

References

ASTD (2006). State of the industry report: 2006. Alexandria, VA: ASTD.

Berge, Z.L. (Ed.) (2001). Sustaining distance training. San Francisco, CA: Jossey-Bass.

Brinkerhoff, R. (2006). Telling training's story. San Francisco, CA: Berrett-Koehler.

Combs, W.L., and Falletta, S. (2000). The targeted evaluation process. Alexandria, VA: ASTD.

Creswell, J. (2008). Educational research. Upper Saddle River, NJ: Pearson.

Egan, T.M. (2002). Grounded theory research and theory building. Advances in Developing Human Resources, 4(3), 277-295.

Hardt, P.O. (2008, March). Does distance training really improve performance? PowerPoint presentation to the Minnesota chapter of ISPI, St. Paul, MN.

Holton, E.F., and Naquin, S. (2005). A critical analysis of HRD evaluation models from a decision-making perspective. Human Resource Development Quarterly, 16(2), 257-280.

Johnson, T. (2005). Snowball sampling. In Encyclopedia of biostatistics. Hoboken, NJ: John Wiley.

Kirkpatrick, D.L. (1994). Evaluating training programs: The four levels. San Francisco, CA: Berrett-Koehler.

Kvale, S. (1996). InterViews: An introduction to qualitative research interviewing. Thousand Oaks, CA: Sage.

Patton, M. (1990). Qualitative evaluation and research methods. Newbury Park, CA: Sage.

Phillips, J.J. (2003). Return on investment in training and performance improvement programs (2nd ed.). Amsterdam: Butterworth-Heinemann.

Preskill, H., and Torres, R.T. (1999). Evaluative inquiry for learning in organizations. Thousand Oaks, CA: Sage.

Rosenberg, M.J. (2001). E-learning. New York: McGraw-Hill.

Rosenberg, M.J. (2006). Beyond e-learning. San Francisco, CA: Pfeiffer.

Schreiber, D.A., and Berge, Z.L. (1998). Distance training. San Francisco, CA: Jossey-Bass.

Swanson, R.A., and Holton, E. (2001). Foundations of human resource development. San Francisco, CA: Berrett-Koehler.

Swanson, R.A., and Holton, E. (2005). Research in organizations: Foundations and methods of inquiry. San Francisco, CA: Berrett-Koehler.

Van Dam, N. (2004). The e-learning fieldbook. New York: McGraw-Hill.