
Measuring Training Effectiveness



11639 E. Wethersfield Road, Scottsdale, AZ 85259 USA

www.michaelsandassoc.com Toll-free: 877-614-8440

© 2011 by Michaels & Associates Docntrain, Ltd. dba Michaels & Associates

Copyright holder is licensing this under the Creative Commons License, Attribution-Share Alike 3.0. For more information, check out http://creativecommons.org/licenses/by-sa/3.0/us/

~ Share This! ~

Post this to your blog, Twitter™, LinkedIn® or Delicious™ accounts or email this to someone who might enjoy it.



MEASURING THE EFFECTIVENESS OF TRAINING—THE BUSINESS OF CORPORATE LEARNING

Learning & Development business units are under siege, struggling with what appears to be a confusing, elephantine challenge: measuring the effectiveness of their training interventions. We need clarity. We need a common-sense approach. We need to step up our practice of corporate learning consulting. Let's discuss existing principles for proving the value of Learning & Development (L&D) deliverables in a corporate environment.

In the training industry, many practitioners insist that, regardless of the size or maturity of your organization, the Kirkpatrick models and/or Phillips methods of evaluation are the only sound ways to validate the corporate learning product (Paul Kearns, consultant, author and teacher, http://www.paulkearns.co.uk/articles.htm).


Going Back to Basics

One of the simplest rules for measuring effectiveness is to ensure you've developed materials that will actually train corporate employees. I often meet training professionals who left adult learning theory in the college classroom long ago, have forgotten much of it and feel compelled to whip out training on demand that is instructionally flawed. All too often, through the quirkiness of corporate fate, organizations have moved competent people from operational positions into training positions without the benefit of formal instruction in adult learning theory or instructional design. Many of these new learning professionals seek the comfort of becoming competent in their positions, but they have gaps in their knowledge that lead them to create instructionally flawed materials. For these and other reasons, during a career that has spanned thirty-plus years, I've seen a lot of training that is simply not going to be effective because of faulty instructional integrity.

Organizations spend a lot of money on these materials, and developers still put much work and effort into them. This kind of implementation strategy creates frustration all around the world of corporate training: for employees, employers and learning professionals. No wonder we are getting pressure to prove training (or learning) effectiveness to the C-Suite!

All this pressure creates another compulsion—a need to measure the effectiveness of the training. Measuring instructionally flawed training is an exercise in futility. Application of adult learning principles combined with execution of good instructional design is a key underlayment to how well employees can learn, and without it, no measurement will yield satisfying results.


Additionally, if you are creating e-learning (online or webinar learning), a good understanding of usability heuristics is imperative (Jakob Nielsen, 10 Heuristics for User Interface Design). Poor execution in programming or ignorance of usability rules can sabotage all the work of constructing a beautifully instructionally sound module of training. The most important of these rules, and in my experience the most flagrantly violated, is to give learners control of their learning experiences.

Formative versus Summative Assessments

Formative assessments in training are those assessments (in lay terms) that engage learners in assessing themselves and that provide assessment as part of the learning experience (Cowie, B., & Bell, B. (1999), "A model of formative assessment in science education," Assessment in Education, 6: 101-116). Simply stated, formative assessment provides for two-way communication of the learning, so that the assessments themselves are a part of the learning process.

In an ideal corporate training program, each learning module (live, online, webinar, whatever) would have a small segment of learning. In these segments, we'd develop assessments in which reflective answers connect disparate information. For instance, in a live training, we'd follow a chunk of training on a software application with an exercise in using the software processes from the training. The trainer oversees this exercise and provides mentoring throughout. We'd score the exercise, but we wouldn't actually grade it, because it is, after all, part of the learning experience. The exercise helps the trainer gather feedback on which learners "get it" and are successful in completing the exercise, and helps the learners gain insight on how they are doing. Optimally, the trainer provides (and the learner seeks) additional sources of knowledge if the assessment results indicate the need for alternate learning sources.


Formative assessment is the use of adult cognition that truly engages the learner. Engagement translates to a better and more satisfying learning experience. These assessments take place during the learning, and should directly relate to learning objectives that are measurable and achievable and, ideally, reflect corporate and business unit goals and objectives, a nearly perfect map where possible. Use of formative assessments in exercises and/or in e-learning modules (drag and drop of glossary terms, for instance) can improve learner engagement, provided the assessments are appropriate, relevant and continue to provide for the natural curiosity of a learner.

Summative assessments test the effectiveness of the training, and they judge the competency of the learner after the training intervention takes place. We use summative assessments to provide quantifiable data about what the learner learned. We grade summative assessments. We determine what constitutes a passing grade, and devise paths for those who do not pass. Typically, we use summative assessments as the "final" assessment, and the learning management system (LMS) houses the scores.

In my opinion, this is where the pressure to measure the effectiveness of training falls apart. Very often, we use summative assessment techniques with training that is not instructionally sound and doesn't have formative assessments to support the instructional integrity of the training. How do you measure the effectiveness of training for corporate metric purposes when the underlayment is unsound?


L&D, the Business Unit, the Enterprise

According to Paul Kearns (Evaluating the ROI from Learning, Cromwell Press, Trowbridge, Wiltshire, UK), a solid starting point for corporate learning professionals (he calls them either trainers or learning consultants) is to begin by evaluating where L&D stands as a group on his six-stage Learning Maturity Model (LMM). He also advises evaluating the business unit and the enterprise on this model. Figure 1 shows an example of an LMM.

Figure 1. Example of LMM


In a single enterprise, many L&D units can exist, and they exist in varying stages of maturity. What do we mean by "maturity"? Think about these scenarios:

- When you receive an email stating a department needs a five-hour course on X Operation in 30 days, how do you respond? If you jump and throw together a PowerPoint presentation or something similar, you might be in Stage 1, or a reactive mode.

- If you have worked toward making the line managers aware that L&D follows normal business processes equivalent to theirs, and that a good quality product might result from adequate budgeting and time, you might be at Stage 3.

- As you attend meetings with business unit executives, you become aware of initiatives that will require segments of employees to learn new skills. If you present a strategy with a budget, an achievable deadline, and a commitment to prove the strategy executed as planned, and you get the strategy approved, you might be at Stage 4, or a true learning partner.

- If the CEO knows about your learning strategies, and in fact has helped to steer information to you so that you can include corporate strategies in your work, you might be at Stage 6. You could also be a learning consultant who is teaching the entire organization how to be a learning corporation. Studies show that learning corporations are more profitable than corporations that are not making an investment in being true learning organizations.


So where on the LMM is your L&D unit?
Where is your business unit?
Where is your corporation?

These questions are important to consider. If you are at Stage 1, it can be almost impossible to measure the effectiveness of your training, because your environment won't allow you to construct instructionally sound, engaging training. "Death by PowerPoint" became a cliché for a reason. Level 1 assessments for training (often referred to as "smile sheets") aren't truly effective if your learners were not engaged in the learning, unless you are looking for a negative response as a way to educate the line managers of the business unit. Level 1 assessments in this situation can be a waste of your time—time better invested in moving your L&D unit further into the LMM.

As you evaluate your own L&D unit, consider how it works internally, with the other business units and within the enterprise. If L&D is at Stage 1, what things can you do incrementally to move forward in maturity?

- Begin with a thorough Training Needs Analysis (TNA) in all business units the L&D unit supports. We define the TNA as the process of defining on-the-job performance requirements and the gaps between those requirements and what employees are presently doing (see the sketch after this list).

- Approach every training request with questions about what the requester is trying to achieve. Turn your questions into an analysis of whatever depth you can manage to help guide the development of the deliverable.


- Make allies of the line managers and ask them about their objectives, goals and strategies. They can give you current performance metrics for their employees. They are your customers, so pay attention to what they want to achieve, and plan for how you might help them.

- Human Resources (HR) departments typically have job descriptions and the competencies required to perform those jobs. Compare them with what the line managers are telling you.

- Analyze your information sources. Define the opportunities. Eliminate the non-training issues.

- Develop an L&D strategy for managing training development. Begin discussing with your ally-managers how to plan for training, and what you need to be successful so you can make them successful.

- Develop a replicable process for creating instructionally sound training. Paul Kearns recommends the Deming Cycle: Plan, Do, Check, Act (PDCA), shown in Figure 2.

Figure 2. Plan, Do, Check, Act (PDCA) cycle


- I also suggest using an EADDIE form (Evaluate, Analyze, Design, Develop, Implement, Evaluate). I further maintain that either process model is fine, as long as you find it replicable, consistent and defensible within your organization. Keep in mind that line managers should respect that you have a process that works.

- Practice saying "no" in a nice way. Example: "I probably can get you a presentation in a few hours, but for training that boosts X metrics, I'm going to need to go through our process to define your objectives and make sure the training meets them. For instance, I'll ask you what you would like your employees to be able to do at the end of this training. Would you like to meet on Monday to start that process?"
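To make the TNA gap idea in the first bullet above concrete, here is a minimal sketch in Python. All competency names, scales and ratings are invented for illustration; in practice the required levels would come from HR job descriptions and the current levels from line-manager metrics.

# Hypothetical TNA gap tabulation for a single role (illustrative only).
# Required levels: from the HR job description; current levels: from
# line-manager performance metrics. Both use an invented 1-5 scale.
required = {"order entry": 4, "refund processing": 3, "CRM navigation": 4}
current = {"order entry": 4, "refund processing": 1, "CRM navigation": 2}

# A gap exists wherever observed performance falls below the requirement.
gaps = {skill: required[skill] - current.get(skill, 0)
        for skill in required
        if current.get(skill, 0) < required[skill]}

# Largest gaps first: these are the candidate training (or non-training) issues.
for skill, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{skill}: gap of {gap} level(s)")

Run as-is, this flags refund processing and CRM navigation for analysis (a gap of 2 levels each), while order entry needs no intervention.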

The Elephant in the Room

Exuberance regarding the Kirkpatrick and/or Phillips methods of assessment bombards the corporate training world almost daily, it seems. There is an onslaught of well over sixty books, countless training sessions, and local and national American Society for Training and Development (ASTD) presentations. It is marketing at its most impressive.

In my opinion, L&D professionals have adopted these assessment methods in countless L&D units across the world with very little evaluation. In many L&D units, unit leaders have not considered the maturity of the units, how they service the line managers (their customers), or whether the learning developers can actually create training to support summative assessments. For me, the vast depth of marketing these instruments as a means of demonstrating ROI raises simple questions: How can this be so difficult? What is it about these methods that takes so much print space, countless presentations and evangelizing? Since they take so much explanation, can these assessment methods be flawed?


Again, in my opinion, the answer is another question: What method isn't flawed? The flaw lies in execution, and the execution relies on learning professionals to step up their game. Things to consider in using these methods are:

- What is your definition of ROI? Is it the same as the business unit manager's, or the Chief Financial Officer's? (The arithmetic itself is simple; see the sketch after this list.)

- There's tremendous variability in the results that unsuspecting learning professionals will encounter. For an excellent discussion of these variables, see Ron Drew Stone's CLO Media article, "ROI is Like a Box of Chocolates." He very cleverly proves that variability in measuring results can negate those results if you aren't careful and knowledgeable. He puts accurate measurement truly in the hands of outside consultants (which is no big wonder, since he is a consultant). He raises several excellent points.

- Humans are variable. Things that affect humans are varied. A simple example of that is sales training: how will you isolate the results of your training from variables such as the natural effects of the economy, a new marketing campaign, a product release slowdown or failure, compensation adjustments and so forth? Any one or all of these factors can distort and invalidate your ROI calculations. A random quantity adjustment such as the one suggested in Ron Drew Stone's article is exactly that: random, and not so easy to defend.

- Why have a metric assessment separate from those of the business unit managers? They are, in fact, your customers. Servicing them in pursuit of corporate goals is your job. Doesn't it make sense to figure out how the business unit managers are measured and to align training goals and assessments accordingly?
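On the first point: the ROI arithmetic itself is not where the difficulty lies. Here is a minimal sketch of the commonly cited Phillips-style calculation, in Python with entirely made-up numbers; the hard part is agreeing with the business unit manager and the CFO on what counts as a benefit and a cost.

# Phillips-style training ROI with entirely made-up numbers.
# The formula itself is simple; the hard part is agreeing on what
# counts as "benefit" and "cost" with the business unit and the CFO.
program_benefits = 120_000.0  # monetized benefits, e.g., productivity gain (hypothetical)
program_costs = 80_000.0      # design, delivery, learner time (hypothetical)

bcr = program_benefits / program_costs  # benefit-cost ratio
roi_pct = (program_benefits - program_costs) / program_costs * 100  # ROI as a percentage

print(f"BCR: {bcr:.2f}, ROI: {roi_pct:.0f}%")  # -> BCR: 1.50, ROI: 50%

Every input to this two-line formula inherits the variability described above, which is exactly why the definition must be negotiated before the training is built.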

Knowing these things, it has seemed to me for some years now that it might be better simply to prove that the training we construct has value (Proving Learning Value, or PLV).


Proving Learning Value (PLV)

Even at Stage 1 of the LMM, any professional with a grasp of the concepts presented here (and the additional reading I'll recommend) can begin the process of proving learning value within their L&D unit, as it applies to the business units and within the enterprise. Start simply and incrementally, and work towards making your unit a true business partner with the other enterprise units. Begin by:

1. Understanding business unit goals and how L&D can support them, by allying with business unit managers and including yourself in business unit meetings.

2. Understanding the overall goals of the company (as stated in annual reports and CEO messages—and, if you have advanced to Stage 5, an actual seat at the C-Suite table).

3. Prioritizing your training interventions according to those goals and objectives.

4. Using your training needs analysis (and the gap information derived from it) to plan the training, learning objectives, formative assessments and summative assessments.

5. Deciding upon your method of proving value. Do you want to use Kearns' approach? Do you still think the Phillips methods work for you? Do you need an alternative?


KISS Method – An alternative way to think about Proving Learning Value

If you really want a simple way to provide reasonable metrics without going down the endless path of measure, measure, measure (sometimes forgetting why we are measuring), here's an idea: use the classic KISS (in this case, Keep It Simple Suggestions) method. Consider the following, borrowed from Kirkpatrick, with the caution to evaluate first:

1. Did the learners like the training?

(This is Level 1 in the Kirkpatrick/Phillips methods, but we can maximize a Level 1 through learner engagement and formative assessments.) Did they feel engaged in the learning experience? When asked if they felt they learned, what was their response? In my opinion, smile sheets are vastly underrated. Given the time and resources to create engaging learning, what better use of an instrument than as a tool for the learner to express their satisfaction? Remember that this level of assessment is perfectly fine for some training. Compliance, regulatory and other mandatory training that requires a certain percentage of correct choices can be fine with this kind of assessment, because the company has to do this training regardless. The questions then center on ensuring a good learning experience, so that employees like learning and don't feel the experience was a waste of their time.

2. Did the learners learn the content?

What method(s) will you use to prove that? Will you pre-test to find out the level of their knowledge before they took the training, then use a post-learning assessment to demonstrate a change? It really could be that simple. Will you use Certainty Based Marking (CBM), a relatively new assessment method that measures the confidence of the learner in what they've learned (see Create Active Assessments With Certainty-Based Marking)?


If the learner is confident and right, you probably solidify the learning experience for them (by as much as 95%, according to the few studies available). If the learner is confident in his or her answer but wrong, you may have a problem with the training, or you may need to reroute the learner to other materials to clear up the learner's confusion.

Either way, it is a simple process to implement this method of assessment, and it provides excellent information for proving the value of the training through pre-planned reporting mechanisms. It is great for leadership training, and medical institutions have used CBM assessments in medical training (where uncertainty in the answers can have potentially disastrous effects) for years. The Level 1 (or smile sheet) assessments come back with tremendously positive responses as well, because the assessments serve the purpose of removing ambiguity about the content from a learner's mind. (A small sketch of a pre/post comparison and a CBM scoring scheme follows this item.)
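As a rough illustration of both ideas in this item, here is a minimal Python sketch with invented scores: a simple pre/post comparison, plus one commonly cited CBM scoring scheme in which correct answers earn more marks at higher confidence and confident wrong answers are penalized most heavily. Treat the mark values as an example scheme, not necessarily the one your assessment tools use.

# Pre/post comparison with invented assessment scores (percent correct).
pre_scores = [45, 50, 62, 38, 55]
post_scores = [78, 85, 90, 70, 88]
gain = sum(post_scores) / len(post_scores) - sum(pre_scores) / len(pre_scores)
print(f"Average gain: {gain:.1f} points")  # -> Average gain: 32.2 points

# One commonly cited CBM scheme: marks depend on (confidence, correctness).
# Confident wrong answers are penalized hardest -- that is the signal that
# either the training or the learner needs attention.
CBM_MARKS = {
    (1, True): 1, (2, True): 2, (3, True): 3,       # correct at confidence 1-3
    (1, False): 0, (2, False): -2, (3, False): -6,  # wrong at confidence 1-3
}

def cbm_mark(confidence: int, correct: bool) -> int:
    return CBM_MARKS[(confidence, correct)]

print(cbm_mark(3, False))  # -> -6: a confident wrong answer worth investigating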

3. Can learners perform the trained tasks on the job?

Have you set up preplanned support mechanisms online, with managers, in job aids and in work process alignment to ensure they can perform once the intervention is over? If so, can the line managers give you metrics obtained prior to the training? Did you design your learning objectives, modules and formative assessments in alignment with those metrics?

If so, post-intervention performance ratings and appraisals should easily demonstrate that learners can perform on the job, and you should see metrics improvements as well. Again, not so hard, and easily defensible with proper planning.


4. Did the learning intervention affect the bottom line?

Did you evaluate the goal of the intervention within the confines of corporate revenues and/or risk reduction and/or cost reduction? If you did, there should be published metrics in each of these areas that clearly demonstrate impact on the bottom line. Again, did you construct the objectives, modules and formative assessments to support those goals? Are there support mechanisms in place to ensure success after the interventions? Can you demonstrate positive change?

We can apply this level of measurement when the whole corporation is rowing together to achieve the corporate goals. As an example, if you can demonstrate that the learning intervention reduced help desk calls by a significant percentage, you can show that percentage as a reduction in cost (see the sketch below). Help desk managers always have metrics, and allying yourself with those managers gives you easy access to them.
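To show how simple this math can be, here is a minimal Python sketch of turning a reduction in help desk calls into a cost figure. The call volumes and cost-per-call below are invented; the real numbers would come from the help desk manager's existing metrics.

# Hypothetical help-desk cost reduction after a training intervention.
# Call volumes and cost-per-call would come from the help desk manager's metrics.
calls_before = 1_200   # monthly calls before training (invented)
calls_after = 900      # monthly calls after training (invented)
cost_per_call = 15.0   # fully loaded cost per call, USD (invented)

reduction_pct = (calls_before - calls_after) / calls_before * 100
monthly_savings = (calls_before - calls_after) * cost_per_call

print(f"Calls down {reduction_pct:.0f}%, saving ${monthly_savings:,.0f} per month")
# -> Calls down 25%, saving $4,500 per month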


Suggested Reading List:

Paul Kearns, Training Journal 12-part series:

o Part 01 – Organisational learning maturity
o Part 02 – Strategic learning
o Part 03 – The attitude of learning consultants
o Part 04 – Learning cycles
o Part 05 – Evaluation
o Part 06 – Performance management
o Part 07 – Business analysis
o Part 08 – Creative designers
o Part 09 – Delivering solutions
o Part 10 – Consulting skills
o Part 11 – Learning consultants as business partners
o Part 12 – Organisation design and development

Evaluating the ROI From Learning, Paul Kearns

What CEOs Expect From Corporate Training, William J. Rothwell, John E. Lindholm, William G. Wallick (American Management Association)

Measuring For Success – What CEOs Really Think About Learning Investments, Jack J. Phillips and Patricia Pulliam Phillips

"ROI is Like a Box of Chocolates," Ron Drew Stone, Chief Learning Officer (CLO) Magazine, January 2011

Ten Usability Heuristics, Jakob Nielsen


There's no need for overkill. The observable fact is that some people embrace measurement of training effectiveness as a means of job protection, and in many organizations, measuring training effectiveness substitutes for getting on with the real work. Sadly, this measurement at worst becomes an end in itself and, at the least, is a distraction from our real jobs: getting sound instructional materials to employees to support them in their jobs, and contributing to company goals. Provided we include all the other components of good instructional design and usability rules, and use common-sense methods of measurement without elaborate contortions, learning professionals can ensure training effectiveness soars.

Michaels & Associates brings the experience and know-how of solid instructional and media design to every project. Feel free to contact us to assist with your next training endeavor! Michaels & Associates—where your business is your specialty and improving your business is ours.

[email protected] | www.michaelsandassoc.com | Toll-free: 877-614-8440


About the Author

Sherry Michaels is a veteran of more than thirty years in the learning industry and President of Michaels & Associates, a company specializing in instructional, media and writing design and content development for learning. Sherry founded the company in 1998 and has developed a staff and network of consultants with development and project management expertise across all disciplines of corporate and academic learning. Sherry has presented several workshops for the American Society for Training and Development (ASTD), the Society for Pharmaceutical and Biological Training (SPBT) and the Society for Technical Communication (STC).

Michaels & Associates provides custom training and documentation solutions for a client list that includes companies such as Aetna-Schaller Anderson, Activator Methods, Inc., Avnet, automätik education (BMW MINI Cooper, Honda), Banner Health, Cox Communications, Defense Acquisition University (DAU), Dow Jones, EMCOR, Excellus Blue Cross/Blue Shield, McKesson Pharmaceutical, MetLife, Pegasus Solutions, Pfizer, Scottsdale Insurance, Standard Pacific Homes, TriZetto Software and Universal Technical Institute.