Are Your e-Learners Learning? How to develop online level 2 evaluations quickly and effectively
Gus Prestera, PhD, CPT
President, effectPerformance, Inc.
Instructional Design Consultant
April 26, 2004
Why assess learning? Are your PALs aligned?

[Diagram: the Performance task (performance context, e.g., the workplace), the Assessment task, and the Learning task (learning context, e.g., the classroom) must remain aligned.]
Agenda
- Rapid Prototyping
- Before Test Development…
- 4-Step Test Development Process
- Practice
- Discussion
Rapid Prototyping
Rapid Prototyping: Develop a functional prototype quickly, test/refine it until it is accepted, and then proceed with full development
Rapid Prototyping
- Reverse-engineering: develop, user test, refine; minimal upfront analysis
- Iterative and incremental approach: continuous improvement, progressive refinement
- User-centric: reliant on user input and user feedback; testing under realistic conditions
[Diagram: the rapid prototyping life cycle - requirements and design feed a develop-prototype / user-test / refine loop that repeats until the prototype is approved, followed by construction, installation, and operation.]
Rapid Prototyping
Thiagi’s Rapid ID Model (Thiagarajan, 1999)
1. Speed up the process
2. Use a partial process
3. Incorporate existing instructional materials
4. Incorporate existing non-instructional materials
5. Use templates
6. Use computers and recording devices
7. Involve more people
8. Make efficient use of subject matter experts
9. Involve trainees in speeding up instruction
10. Use performance support systems
My Approach
- Prioritize: spend time on what matters most
- Produce: move from abstract to concrete fast
- Pilot: don’t guess; see if it works
- Learn: creative processes are iterative
- Listen: involve learners early and often
- Leverage: use technology, templates, and EPSS tools
- Streamline: reduce process complexity, inefficiencies, and redundancies to cut cycle time and costs
- Align: maintain PAL alignment
Before Test Development…
The ADDIE model: Analysis, Design, Development, Implementation, Evaluation.

[Diagram: within the Analysis and Design phases, performance analysis, training needs assessment, task analysis, learner analysis, and context analysis feed into writing objectives, developing assessments, and developing strategies; test development sits at this junction.]

Source: Prestera, 2004a
Front-End Analysis
- What are the performance gaps or opportunities?
- What are the root causes?
- What interventions will close those gaps?
- Which are skill gaps, i.e., caused by gaps in knowledge, skills, or attitudes?
- Which skill gaps can and should be addressed through training?
Training Needs Assessment (TNA)
- Step 1: Identify critical skills
- Step 2: Prioritize the skill set by difficulty of implementation, potential of impact, type of cognitive process, and type of knowledge (Krathwohl, 2002)
- Step 3: Survey skill needs

(Prestera, 2004b)
Tool: IRC Worksheet (sample task: hitting a baseball)

Skill                                              | Difficulty of Implementation | Potential of Impact | Type of Skill | Type of Cognitive Process | Type of Knowledge | IRC Rating
Remains positive even when down in the count       | 5 Very High | 5 Very High | 4 Attitudinal | 7 Alter Attitude | 5 Attitude     | 5.50
Anticipating the pitch correctly                   | 5 Very High | 5 Very High | 1 Cognitive   | 3 Apply          | 3 Procedures   | 4.00
Making solid contact with the ball                 | 5 Very High | 5 Very High | 3 Psychomotor | 2 Understand     | 3 Procedures   | 3.75
Using appropriate hitting stance consistently      | 3 Moderate  | 4 High      | 3 Psychomotor | 3 Apply          | 3 Procedures   | 3.25
Beating the throw at first base                    | 4 High      | 4 High      | 3 Psychomotor | 2 Understand     | 3 Procedures   | 3.25
Running fast                                       | 5 Very High | 4 High      | 2 Motor       | 0 Motor Skill    | 0 No Knowledge | 2.25
Translating batting coach's hand signals           | 2 Low       | 3 Moderate  | 1 Cognitive   | 2 Understand     | 1 Facts        | 2.00
Remember the pitch count (e.g., 2 strikes, 1 ball) | 1 Very Low  | 2 Low       | 1 Cognitive   | 1 Remember       | 1 Facts        | 1.25
High IRC skills are more difficult to implement, have a high potential impact on the organization, and require the most instructional resources to develop/influence.
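The worksheet's formula is not shown in these slides, but every sample row above is consistent with the IRC rating being the simple average of four of the numeric ratings (all but Type of Skill). A minimal Python sketch under that assumption:

```python
# Assumption (inferred from the sample rows, not taken from the tool):
# IRC = mean of Difficulty of Implementation, Potential of Impact,
# Type of Cognitive Process, and Type of Knowledge ratings.

def irc_rating(difficulty, impact, cognitive_process, knowledge):
    """Return the IRC rating for one skill."""
    return (difficulty + impact + cognitive_process + knowledge) / 4

# "Remains positive even when down in the count": ratings 5, 5, 7, 5
print(irc_rating(5, 5, 7, 5))  # 5.5, matching the worksheet row
```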
Tool: TNA Survey

Response scale: 5 = Need this training right away to transition to my new role; 4 = Need training, but the need is not urgent; 3 = No training needed, but written instructions and coaching would be helpful; 2 = No training needed, but written instructions would be helpful; 1 = No training or learning support is needed; 0 = Unsure.

Sample results (n = 22):

#  | Task or Topic            | Unsure (0) | 5   | 4   | 3   | 2   | 1   | Mean Rating
16 | Enter Task or Topic Here | 14%        | 32% | 50% | 5%  | 0%  | 0%  | 3.73
3  | Enter Task or Topic Here | 5%         | 18% | 36% | 27% | 0%  | 14% | 3.32
6  | Enter Task or Topic Here | 5%         | 5%  | 55% | 18% | 9%  | 9%  | 3.23
8  | Enter Task or Topic Here | 14%        | 14% | 50% | 9%  | 9%  | 5%  | 3.18
21 | Enter Task or Topic Here | 5%         | 5%  | 59% | 9%  | 9%  | 14% | 3.18
14 | Enter Task or Topic Here | 5%         | 0%  | 50% | 23% | 18% | 5%  | 3.09
9  | Enter Task or Topic Here | 9%         | 18% | 27% | 23% | 9%  | 14% | 3.00
This tool automates the survey development and analysis, quickly showing which skills have a high perceived training need and which have a low perceived need and could be addressed through non-training interventions, if at all.
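The workbook's formulas are hidden in these slides; one scoring scheme that reproduces the sample means treats the Unsure column as 0 and averages over all respondents. A sketch under that assumption (the counts below are hypothetical, back-calculated from the first row's percentages):

```python
# Assumption: each response column maps to the ratings below, with Unsure = 0,
# and the mean is taken over all n respondents. Inferred, not documented.

def mean_rating(counts):
    """counts: responses per column, in survey order: Unsure, 5, 4, 3, 2, 1."""
    ratings = [0, 5, 4, 3, 2, 1]
    return sum(r * c for r, c in zip(ratings, counts)) / sum(counts)

# Hypothetical counts matching the first sample row (n = 22):
print(round(mean_rating([3, 7, 11, 1, 0, 0]), 2))  # -> 3.73
```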
Test Development: A 4-Step Process

Step 1: Identify Criteria -> Step 2: Develop Test -> Step 3: Pilot Test -> Step 4: Revise Test (then iterate)

Prestera, 2004a
Step 1: Identify Criteria
- Form a panel (3-7 people) of exemplar workers and subject matter experts (SMEs)
- Review the skill set
- Brainstorm assessment criteria
Step 2: Develop Test
- Is the skill well-defined or ill-defined? Is there a set of right and wrong ways of doing things, or is right/wrong more dependent upon perspective, degrees of rightness, and context?
- Does the skill need to be physically performed (motor, psychomotor, and some procedural tasks) or mentally performed (decision-making, problem-solving, remembering, analyzing, synthesizing, evaluating)?
- Write test items and test instructions
- Review with an SME for content (Nitko, 1996)
- Review for grammar, spelling, etc.
Test Format Matrix

                  Objective (correct/incorrect)   Subjective (rating scales)
Performance*      Objective/Performance           Subjective/Performance
Knowledge**       Objective/Knowledge             Subjective/Knowledge

* Performance tests, simulations, projects, apprenticeships
** MC, TF, matching, fill-in, short answer, essay, report

Link to test-writing guidelines: http://taesig.8m.com/createcon.html
Note: There can be an element of objectivity in almost any “subjective” judgment, and there is certainly subjectivity in any “objective” judgment; the same overlap exists between “performance” tasks and “knowledge” tasks, so do not get hung up on the labels.
Practice: What format would you use?
1. Cashier’s ability to distinguish between valid and invalid coupons
2. Cashier’s ability to process transactions involving coupons at the cash register
3. Salesperson’s product knowledge
4. Designer’s ability to select the right test format
5. Manager’s ability to apply laws and regulations governing hiring practices
6. Manager’s ability to conduct a job interview
7. Salesperson’s ability to use product knowledge to help customers make good product decisions
Remember Your PALs

[Diagram: the Performance task (performance context, e.g., the workplace), the Assessment task, and the Learning task (learning context, e.g., the classroom) must remain aligned.]
Step 3: Pilot Test
- It is difficult to write good test items, but soooo easy to write bad ones
- Use a random sample of actual learners
- Alternative: a two-group approach, using a group of average learners with no training and a group of exemplars
- After data collection, copy the data into our Item Analysis Tool
- Set the parameters and you’re ready!
Practice Set 1

[Data: a 41-respondent x 23-item binary response matrix (1 = correct, 0 = incorrect) for pasting into the Item Analysis Tool.]
Step 1:

Copy/paste your data set here, starting in cell B2. Be sure that your data is in 0-1 binary format, where 0 = incorrect and 1 = correct.

*** IMPORTANT ***

When you paste your data, be sure to use the "Paste Special" command and then select "Values." If you do not, you may disrupt many of the formulas and conditional formatting settings that have been established.

The template currently allows for up to 50 items (k) and 159 respondents (n). Rows and columns can be added; however, formulas here and in other sheets may then need to be adjusted. If you use fewer than 50 items or 159 respondent rows, no adjustments are needed.

As a general rule, run separate item analyses for different types of test items (e.g., multiple-choice, true-false, fill-in, matching) and for tests covering completely unrelated subject matter. This will make your results easier to interpret.

NOTE: The cells contain conditional formatting, so when you copy in your data, correct items (1) turn green and incorrect items (0) turn red.

Go to Step 2.
Step 2: Set the Parameters

Please answer these questions; the parameters you set here will be used in calculating the reliability, difficulty, and validity values.

What shall we call this test?       Test 1
How many items are in the test?     50
How many people took the test?      159
What is the lowest passing score?   14%

Check to make sure these values are correct before moving on.
Test Results by Test Taker

[Worksheet: one row per test taker showing ID, raw score, % score, pass or fail, SD, variance, X-M, and (X-M)^2. For example, ID 7 scored 31/50 (62%, Pass), while ID 8 scored 3/50 (6%, Fail).]
Step 3:

This sheet contains the scores for each respondent. You can print it and review it.

Note that the conditional formatting in column D makes scores below 60% red, scores from 60% to 80% yellow, and scores of 80% and above green.

Column E determines whether the score is a passing one, based on the cutoff score you set in Step 2. You can ignore columns F through I; they are there simply to help calculate the summary statistics in Step 4a.

After these preliminary steps, you are ready to interpret the results of the test and item analysis.
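For readers who prefer a script to a spreadsheet, here is a minimal Python sketch of the per-respondent scoring; the file name responses.csv is a hypothetical stand-in for your pasted data, and the cutoff is the value set in Step 2.

```python
import numpy as np

# Sketch: rows = respondents, columns = items, 0 = incorrect, 1 = correct.
# "responses.csv" is a hypothetical file name, not part of the tool.
data = np.loadtxt("responses.csv", delimiter=",")

k = data.shape[1]        # number of items (k)
cutoff = 0.14            # lowest passing score from Step 2 (14%)

raw_scores = data.sum(axis=1)     # raw score per respondent
pct_scores = raw_scores / k       # % score per respondent

for i, (raw, pct) in enumerate(zip(raw_scores, pct_scores), start=1):
    status = "Pass" if pct >= cutoff else "Fail"
    print(f"ID {i}: {raw:.0f}/{k} = {pct:.0%} {status}")
```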
Step 4: Revise Test
- Interpret the indicators: reliability estimates, item difficulty (p), item discrimination (d)
- Revisit the criteria
- Revise test items
- Pilot again
Test Reliability
- Is the test measuring consistently?
- How often is your watch accurate? Would you use it if it were accurate only 50% of the time?
- Reliability estimates: KR-20 and KR-21 (Kuder & Richardson, 1937); alpha (Cronbach, 1951)
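If you want to sanity-check the tool's output, all three estimates can be computed directly from a 0/1 response matrix. A minimal numpy sketch; conventions differ on sample versus population variance, so values may differ slightly from the workbook's.

```python
import numpy as np

def kr20(X):
    """Kuder-Richardson 20 for a 0/1 matrix X (rows = people, cols = items)."""
    k = X.shape[1]
    p = X.mean(axis=0)                       # proportion correct per item
    var_total = X.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / var_total)

def kr21(X):
    """KR-21: a quicker approximation assuming items are equally difficult."""
    k = X.shape[1]
    totals = X.sum(axis=1)
    m, var_total = totals.mean(), totals.var(ddof=1)
    return (k / (k - 1)) * (1 - m * (k - m) / (k * var_total))

def cronbach_alpha(X):
    """Cronbach's alpha; for 0/1 items this is essentially KR-20."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    var_total = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / var_total)
```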
Item Discrimination (d)
- Is the test measuring accurately? (discriminant validity)
- Does the question differentiate between those who know their stuff and those who don’t?
- If your watch were reliable but consistently told you the wrong time, would you keep it?
- d is the key indicator (Sullivan, Wircenski & Major, 1999): d > .1 is a good question; 0 < d < .1 is a weak question; d < 0 is a bad question
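The tool's exact d1 formula is not shown in these slides; a common hand method is the upper/lower-group index, sketched below. The 27% group size is a widespread convention, not the tool's documented setting.

```python
import numpy as np

def discrimination(X, group_frac=0.27):
    """Upper/lower-group discrimination index per item for a 0/1 matrix X:
    proportion correct among the highest scorers minus proportion correct
    among the lowest scorers. Near +1 is good; negative is bad."""
    totals = X.sum(axis=1)
    order = np.argsort(totals)                 # respondents by total score
    g = max(1, int(round(len(totals) * group_frac)))
    low, high = X[order[:g]], X[order[-g:]]
    return high.mean(axis=0) - low.mean(axis=0)
```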
Item Difficulty (p)
- How difficult was that question? What are the odds that a learner will get it right in the future?
- Good questions are challenging but feasible
- Too easy: is training even necessary for that skill?
- Too hard: is the current training for that skill effective?
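Item difficulty is simply the proportion of test takers who got the item right. A toy sketch that flags items at the extremes; the 0.9 and 0.3 thresholds are illustrative, not taken from the tool.

```python
import numpy as np

X = np.array([[1, 1, 0],      # toy 0/1 data: 4 respondents x 3 items
              [1, 0, 0],
              [1, 1, 1],
              [1, 0, 0]])

p = X.mean(axis=0)            # item difficulty: proportion correct per item
for i, p_i in enumerate(p, start=1):
    if p_i > 0.9:
        print(f"Item {i}: p = {p_i:.2f} - very easy; is training even necessary?")
    elif p_i < 0.3:
        print(f"Item {i}: p = {p_i:.2f} - very hard; is current training effective?")
```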
Summary Information for… Fill-in Test (Data1)

Number of items (k) in the test: 50
Sample size (n): 159
Average score (M): 14.40 (29%)
Cutoff score: 5.00 (10%)
Median (midpoint): 14.00 (28%)
Lowest score: 0 (0%)
Highest score: 43.00 (86%)

Is my test instrument reliable?
KR20: 0.86 - Yes, reliable
KR21: 0.83 - Yes, reliable
Alpha-1: 0.86 - Yes, reliable
Which items do I need to revise?

Item:  #1   #2   #3   #4   #5   #6   #7   #8   #9   #10
p:     0.6  0.6  0.7  0.3  0.2  0.1  0.1  0.2  0.3  0.1
d1:    0.3  0.4  0.4  0.3  0.5  0.3  0.4  0.5  0.3  0.4

Item:  #11  #12  #13  #14  #15  #16  #17  #18  #19  #20
p:     0.5  0.6  0.2  0.7  0.6  0.7  0.2  0.3  0.3  0.2
d1:    0.5  0.5  0.3  0.4  0.4  0.4  0.4  0.3  0.3  0.3

Item:  #21  #22  #23  #24  #25  #26  #27  #28  #29  #30
p:     0.2  0.1  0.3  0.3  0.1  0.3  0.1  0.2  0.1  0.4
d1:    0.4  0.3  0.3  0.4  0.4  0.5  0.4  0.4  0.3  0.4

Item:  #31  #32  #33  #34  #35  #36  #37  #38  #39  #40
p:     0.6  0.5  0.2  0.0  0.2  0.5  0.1  0.2  0.5  0.2
d1:    0.4  0.3  0.3  0.3  0.4  0.4  0.4  0.4  0.4  0.2

Item:  #41  #42  #43  #44  #45  #46  #47  #48  #49  #50
p:     0.2  0.2  0.1  0.2  0.1  0.2  0.2  0.1  0.2  0.1
d1:    0.3  0.4  0.3  0.4  0.4  0.3  0.4  0.3  0.4  0.4
[Chart: Scatter Plot of p and d1 Values - x-axis: Item Difficulty (0 < p < 1); y-axis: Item Discrimination (-1 < d < 1). The regions are interpreted below.]
- Positive discrimination, difficult: test items are okay, but revise the training to provide more support for these skills.
- Positive discrimination, easy: consider whether items are too easy, possibly due to prior knowledge; perhaps these can be dropped from training.
- Low, zero, or negative discrimination: when a test item exhibits low discriminant validity, consider dropping or rewriting that test item.
- IDEAL RANGE: positive item discrimination and moderate item difficulty.
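A chart like the one described above is easy to reproduce; this matplotlib sketch uses the p and d1 values reported earlier for items #1-#10.

```python
import matplotlib.pyplot as plt

p = [0.6, 0.6, 0.7, 0.3, 0.2, 0.1, 0.1, 0.2, 0.3, 0.1]   # items #1-#10
d = [0.3, 0.4, 0.4, 0.3, 0.5, 0.3, 0.4, 0.5, 0.3, 0.4]

plt.scatter(p, d)
plt.axhline(0.1, linestyle="--")   # d > .1 marks a good question
plt.xlim(0, 1)
plt.ylim(-1, 1)
plt.xlabel("Item Difficulty (0 < p < 1)")
plt.ylabel("Item Discrimination (-1 < d < 1)")
plt.title("Scatter Plot of p and d1 Values")
plt.show()
```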
[Chart: Relative Frequency Distribution of pilot scores - x-axis: Score (0-50); y-axis: Frequency (0%-10%).]
So what’s in the curve? This distribution piles up at the low end because so many items are extremely difficult; it is not “normal.”
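A relative frequency plot like this can be drawn directly from the Step 3 raw scores; in this sketch the scores array is stand-in data, not the pilot results.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in scores out of 50: replace with the raw scores from Step 3.
scores = np.random.default_rng(0).binomial(n=50, p=0.29, size=159)

bins = np.arange(0, 52, 2)                        # score buckets of width 2
weights = np.full(len(scores), 1 / len(scores))   # counts -> relative frequency
plt.hist(scores, bins=bins, weights=weights)
plt.xlabel("Score")
plt.ylabel("Frequency")
plt.title("Relative Frequency Distribution")
plt.show()
```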
[Chart: a “normal” (bell-shaped) Relative Frequency Distribution for comparison - x-axis: Score (0-50); y-axis: Frequency (0%-7%).]
Practice: Interpreting Analysis Results
Practice Set 2
Practice Set 3
Discussion
Benefits
- A simple-to-use tool makes item analysis fast and easy
- Tests with high face, discriminant, and ecological validity, as well as reliability
- Validation promotes a sense of fairness in the test process
- Assessments create a sense of learner accountability
- High-quality tests drive high-quality training
- Concrete understanding of client needs
- The iterative cycle enables test development to inform design decisions
- The continuous improvement approach is compatible with Six Sigma, Lean, and Gemba Kaizen quality models
“Analysis takes time and time is in short supply” (Rossett, 1999, pp. x-xi)
14% usage rates and 60% dropout rates (ASTD, 2001)
Is work performance relevant anymore? Is e-learning relevant to work performance?
- Individuals contribute about 30% less when working in teams
- 84% admit they could work much harder
- 50% admit they work only as hard as they must to keep their jobs
(Clark, 2004)
How can valid assessments help you address these concerns regarding e-learning and the workplace?
Key Success Factors
- Can you form a panel of exemplar workers?
- Can you secure pilot participants?
- Can you get over the fear of not being perfect the first time?
- Are you willing to discard and revise items?

Obstacles
- Anti-test cultures
- Lack of management support
- Fear of making mistakes and learning from feedback
- Tendency to do things once and forget about them
Did we get there? After attending this session, are you able to use the rapid prototyping process and tools provided to:
- Identify and prioritize needed skills?
- Collaborate with learners to brainstorm assessment criteria for each skill?
- Determine which test formats to use in order to keep PALs aligned?
- Run a pilot and quickly conduct test and item analyses?
- Use pilot data to decide what to remove, revise, or refine?
- Position assessments as a means to drive training?
effectPerformance: Instructional design solutions for your learning and performance needs
Gus Prestera, Ph.D., CPT
President, effectPerformance, Inc.
www.effectPerformance.com
Contact
E-mail: [email protected]
Phone: 610.449.2060 | Fax: 610.449.2061
1513 Fairview Avenue, Havertown, PA 19083
Slides and tools available at: http://www.effectperformance.com/html/library.htm
References

ASTD. (2001). Benchmarking report on e-learning.
Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook 1: Cognitive domain. White Plains, NY: Longman.
Clark, R. E. (2004, March). The “10 most wanted” motivation killers. PerformanceXpress.
Clark, D. (2003, August). How effective is training? A new summary of the past 40 years of training field research and evaluation. PerformanceXpress.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-334.
Dick, W., & Carey, L. (1990). The systematic design of instruction. Glenview, IL: Scott, Foresman.
Kirkpatrick, D. (1998). Evaluating training programs: The four levels (2nd ed.). San Francisco, CA: Berrett-Koehler.
Krathwohl, D. R. (2002). A revision of Bloom's taxonomy: An overview. Theory Into Practice, 41(4), 212-218.
Kuder, G. F., & Richardson, M. W. (1937). The theory of the estimation of test reliability. Psychometrika, 2, 151-160.
Nitko, A. J. (1996). Educational assessment of students (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall.
Prestera, G. E. (2004a). Are your e-learners learning? A rapid prototyping process and tool for test development. effectPerformance White Papers. Retrieved from http://www.effectPerformance.com/html/library.htm.
Prestera, G. E. (2004b). Training needs assessment: Process and tools to help you identify and prioritize training needs quickly. effectPerformance White Papers. Retrieved from http://www.effectPerformance.com/html/library.htm.
Prestera, G. E. (2004c). Understanding ADDIE: A foundation for designing instruction. effectPerformance White Papers. Retrieved from http://www.effectPerformance.com/html/library.htm.
Rossett, A. (1999). First things fast: A handbook for performance analysis. San Francisco, CA: Jossey-Bass.
Sullivan, R. L., Wircenski, J. L., & Major, M. J. (1999). Analyzing knowledge-based tests. In D. L. Kirkpatrick (Ed.), Another look at evaluating training programs (pp. 113-118). Alexandria, VA: ASTD.
Thiagarajan, S. (1999). Rapid instructional design. Workshops by Thiagi, Inc. Retrieved November 18, 2003, from http://www.thiagi.com/article-rid.html.