Carnegie Mellon Software Engineering Institute
Pittsburgh, PA 15213-3890
Sponsored by the U.S. Department of Defense
© 2004 by Carnegie Mellon University
Conducting Effective Pilot Studies

Mark Kasunic
Software Engineering Measurement and Analysis (SEMA)
Software Engineering Institute
[email protected]

2004 SEPG Conference
The Problem

Quite often, software/system improvements are made without measurement either before or after the change is introduced.

Therefore, how do we know whether the outcome was better or worse than the original situation?

In these cases, interpretation is based on opinion and impressions, but there is a lack of data to back it up!

Therefore, there is less confidence in the results of the innovation: interpretation is problematic, and there is a risk that consensus about the results is not achieved.

How do we know if the change worked?
Need to Validate That Changes Are Effective

[Diagram: the hypothesized magnitude of improvement (as-is state → to-be state) compared with the actual magnitude of improvement (as-is state → the new as-is state). Are they the same? Or not?]

How do we know it worked?
Typical Approaches to Evaluating Improvement

Typical approaches that fail (X represents the introduction of a change; O represents a measurable observation):

              Before the change | Change introduced | After the change
Approach #1                     |        X          |
Approach #2                     |        X          |       O
Approach #3                     |        X O        |
How Do You Know It Really Worked?

In all three approaches, there is no way to tell if the outcome from the change was better or worse than the original situation.

Typical approaches that fail:

              Before the change | Change introduced | After the change
Approach #1                     |        X          |
Approach #2                     |        X          |       O
Approach #3                     |        X O        |

How do you know the change worked?
Scientific Methods Do Exist

Research designs do exist for proper interpretation of results after innovations are introduced … but they are rarely applied!

              Before the change | Change introduced | After the change
                    O1          |        X          |       O2
               (As-Is State)    |   (Transition)    |  (To-Be State)
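The single-group O1 → X → O2 design above can be analyzed with a paired t test on the per-project differences. A minimal sketch, using only the Python standard library; the rework-hours figures are hypothetical, not from the slides:

```python
# Paired comparison of O1 (before the change X) and O2 (after),
# measured on the same set of projects. All numbers are hypothetical.
import math
import statistics

o1 = [12.0, 10.5, 14.2, 11.8, 13.1]  # rework hours per project, before
o2 = [9.1, 8.7, 10.4, 9.5, 10.0]     # the same projects, after the change

diffs = [after - before for before, after in zip(o1, o2)]
n = len(diffs)

# One-sample t statistic on the differences (mean / standard error)
t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
print(f"paired t = {t:.2f} on {n - 1} degrees of freedom")
```

Compare |t| against a t distribution with n − 1 degrees of freedom; a statistics package or printed table supplies the p-value.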
A Structured Approach to a Pilot Study

1. Plan and design the pilot study
2. Train personnel to accomplish change
3. Support and monitor pilot study
4. Evaluate pilot results
5. Make recommendations & improve
A Structured Approach to a Pilot Study

Subtopics:
• Define the problem
• How will you measure success?
• Where will you conduct the pilot study?
• Designing your approach using scientific principles
• Writing down your plan

1. Plan and design the pilot study
2. Train personnel to accomplish change
3. Support and monitor pilot study
4. Evaluate pilot results
5. Make recommendations & improve
Define the Problem

A problem that is clearly defined is half-solved.

Defining the problem means identifying a gap between some desired situation and the current situation.

An important challenge is for the improvement team to collect and use valid information to define the current situation instead of assuming that it already has the necessary valid information.

A problem statement implies no particular solutions and no potential causes.

A good problem statement states only the current and desired situation.
A Structured Approach to a Pilot Study

Subtopics:
• Define the problem
• How will you measure success?
• Where will you conduct the pilot study?
• Designing your approach using scientific principles
• Writing down your plan

1. Plan and design the pilot study
2. Train personnel to accomplish change
3. Support and monitor pilot study
4. Evaluate pilot results
5. Make recommendations & improve
Develop Pilot Study Success Criteria

You will want to know:

Did the solution component generate the outcome that it was intended to achieve?
• What are you hoping for in terms of performance change when using the solution component?
• Try to define performance standards that will help you determine this explicitly. Are there any historical data that can be used to baseline the status quo?

Did the users experience difficulty in its use?
• What are your expectations in terms of the solution component's impact on changing people's attitudes and behaviors?
• Try to define qualitative measures that will provide objective assessment of job improvement for the users of the solution component.
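One way to make the success criteria explicit is to record them as data before the pilot starts and check the results against them afterward. A minimal sketch; the indicator names, baselines, and targets are all hypothetical:

```python
# Hypothetical success criteria: each indicator has a historical baseline
# and an explicit target the pilot must meet (lower is better here).
criteria = {
    "defect_density": {"baseline": 7.0, "target": 5.5},    # defects/KLOC
    "rework_hours":   {"baseline": 12.0, "target": 10.0},  # hours/project
}

# Hypothetical measurements collected during the pilot
pilot_results = {"defect_density": 5.2, "rework_hours": 10.8}

# Which criteria did the pilot meet?
evaluation = {
    name: pilot_results[name] <= spec["target"]
    for name, spec in criteria.items()
}
print(evaluation)
```

Writing the standards down before the pilot runs prevents the targets from drifting to fit whatever results happen to appear.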
A Structured Approach to a Pilot Study

Subtopics:
• Define the problem
• How will you measure success?
• Where will you conduct the pilot study?
• Designing your approach using scientific principles
• Writing down your plan

1. Plan and design the pilot study
2. Train personnel to accomplish change
3. Support and monitor pilot study
4. Evaluate pilot results
5. Make recommendations & improve
Will We Be Able to Generalize Our Findings?

Is there anything we can do to increase the probability that we can generalize the pilot study results to the larger population?
• Is this a typical program/project within the organization?
• Is the experience and skill level of the pilot study personnel typical of what one would find in other programs/projects in the organization?
• Are there factors beyond our control that can confound or influence the cause-and-effect relationship of the change we are trying to evaluate?

Making smart decisions about where (in your organization) to conduct a pilot study improves confidence with generalizing the solution to other parts of the organization.
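The selection questions above can be turned into a simple weighted checklist for comparing candidate pilot sites. A minimal sketch; the factor names, weights, and candidate answers are all hypothetical, not an SEI-prescribed scheme:

```python
# Each selection question becomes a yes/no factor (1 = yes, 0 = no) with a
# weight reflecting how much it matters for generalizing the results.
weights = {"typical_project": 3, "typical_staff_skill": 2, "few_confounds": 2}

candidates = {
    "Project A": {"typical_project": 1, "typical_staff_skill": 1, "few_confounds": 1},
    "Project B": {"typical_project": 1, "typical_staff_skill": 0, "few_confounds": 1},
}

def score(answers):
    # Weighted sum of yes answers
    return sum(weights[f] * answers[f] for f in weights)

scores = {name: score(answers) for name, answers in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)
```

The point is not the arithmetic but the discipline: answering the questions explicitly for each candidate site before choosing one.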
Understanding the Pilot Environment

During pilot study planning, you can mitigate the risk of misinterpreting or over-interpreting your eventual results if you can identify and characterize the impact of potential influences.
• How similar is the project/program environment of the candidate pilot project to other projects in the organization (size, domain, etc.)?
• Will participants in the pilot study embrace the proposed change (that is being considered) or resist adopting it?
• What are the adoption characteristics of the pilot project manager and the project staff in general?
• Given the adoption characteristics of the pilot participants, how much refinement (of the solution component) and support will be necessary to test the potential effectiveness of the change?
• How supportive is the project/program manager of the change that will be piloted? Are they enthusiastic about the idea of serving as a pilot?
• What kinds of pressure is the project/program already under?
  - Difficult schedule constraints?
  - New product or domain area?
  - Inexperienced staff?
Project Categories Example

Project Type        | Description
--------------------|---------------------------------------------------------
Critical            | • on-time release is imperative
                    | • 20-30 individuals on project staff
                    | • using new object-oriented technology
Product enhancement | • 5-10 individuals on project staff
                    | • improvements to baseline products
Maintenance         | • < 5 individuals on project staff
                    | • correction of bugs reported by users
Emergency           | • 5-20 individuals on project staff (dependent on need)
                    | • project formed to address unanticipated government-mandated order
Can We Generalize the Pilot Results?

[Diagram: pilot study conducted here → if successful, will likely work here → but, will it work in general?]
A Structured Approach to a Pilot Study

Subtopics:
• Define the problem
• How will you measure success?
• Where will you conduct the pilot study?
• Designing your approach using scientific principles
• Writing down your plan

1. Plan and design the pilot study
2. Train personnel to accomplish change
3. Support and monitor pilot study
4. Evaluate pilot results
5. Make recommendations & improve
Approaches to Validation

In the scientific and manufacturing world, improvements or innovations are validated using a rigorous statistical approach known as design of experiments (DOE).
• Extraneous variables that might impact the result you're looking at can be held steady or controlled.
• The experimental design can employ techniques such as randomization and replication to add clarity and confidence to the assertions that are made about the change.

Run Number | Variable A | Variable B | Variable C | Result
     1     |     -      |     -      |     -      |    7
     2     |     +      |     -      |     +      |   15
     3     |     -      |     +      |     -      |   21
     4     |     +      |     +      |     +      |   11
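The main effect of each variable in the table above is the mean result at its "+" level minus the mean result at its "-" level. A minimal sketch using the slide's illustrative runs (note that with only four runs, variables A and C carry identical sign patterns, so their effects are aliased and cannot be separated):

```python
# Four-run design from the slide: sign pattern for A, B, C and the result.
runs = [
    # (A,  B,  C,  result)
    (-1, -1, -1, 7),
    (+1, -1, +1, 15),
    (-1, +1, -1, 21),
    (+1, +1, +1, 11),
]

def main_effect(col):
    """Mean result at the '+' level minus mean result at the '-' level."""
    hi = [r[3] for r in runs if r[col] == +1]
    lo = [r[3] for r in runs if r[col] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {name: main_effect(i) for i, name in enumerate("ABC")}
print(effects)  # Variable B has the largest effect; A and C are aliased
```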
The Human Side of Change

In a human-related, real-time program/project environment, pilot studies are not conducted in a laboratory under ideal controlled conditions.
• It is difficult, if not impossible, to control variables involving people.
• The characteristics of the experimental medium (i.e., the people using the solution component) may influence the results and the type of feedback that you obtain after you introduce the change.

[Diagram: variables affecting the morale of the technical staff when the Personal Software Process (PSP) is introduced: pay, resistance to change, perception of management, the new innovation itself, and SE staff performance]
Scientific Methods Do Exist

Research designs called quasi-experimentation do exist for proper interpretation of results from pilot studies … but they are rarely applied!

Good approach #1:
              Before the change | Change introduced | After the change
                    O1          |        X          |       O2

Better approach #2:
              Before the change | Change introduced | After the change
  Group #1          O1          |        X          |       O2
  Group #2          O3          |                   |       O4

A t test and the analysis of covariance method are statistical methods that provide a scientific basis for making assertions about the results of a change effort.
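For the better approach #2, one simple analysis is an independent-samples t test on the gain scores (O2 − O1 for group #1 versus O4 − O3 for group #2). A minimal standard-library sketch with hypothetical defect-density data; a real analysis would typically use a statistics package, and analysis of covariance is often preferred over gain scores:

```python
# Two-group pretest/posttest comparison. All measurements are hypothetical
# defect densities (defects per KLOC) for four projects per group.
import math
import statistics

group1_gain = [o2 - o1 for o1, o2 in zip([7.2, 6.8, 7.5, 7.0],    # O1
                                         [5.1, 5.4, 4.9, 5.3])]   # O2, after X
group2_gain = [o4 - o3 for o3, o4 in zip([7.1, 7.0, 6.7, 7.3],    # O3
                                         [6.9, 7.2, 6.6, 7.1])]   # O4, no change

def t_statistic(a, b):
    """Pooled-variance independent-samples t statistic."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a) +
           (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

t = t_statistic(group1_gain, group2_gain)
print(f"t = {t:.2f} on {len(group1_gain) + len(group2_gain) - 2} degrees of freedom")
```

The comparison group is what lets you attribute the gain to X rather than to something that happened to every project during the same period.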
A Structured Approach to a Pilot Study

Subtopics:
• Define the problem
• How will you measure success?
• Where will you conduct the pilot study?
• Designing your approach using scientific principles
• Writing down your plan

1. Plan and design the pilot study
2. Train personnel to accomplish change
3. Support and monitor pilot study
4. Evaluate pilot results
5. Make recommendations & improve
Pilot Implementation Plan

The pilot implementation plan is developed collaboratively with the people who will participate in the pilot study.

Pilot studies involve altering the environment (e.g., how they work) of the participants. Therefore, changes must be introduced carefully so that the participants don't become overwhelmed.

Introducing too many variables at one time will make the results difficult to interpret.

A plan is developed that describes how solution component(s) will be introduced over time into a program/project for pilot testing.

The plan provides a mechanism for setting the appropriate expectations with the project partner.
What's in the Plan?

The plan includes
• objectives of the pilot study
• success indicators and how they are measured
• approach
• responsibilities of participants
• training activities
• description of support mechanisms (e.g., mentoring, hot-line support, trouble-shooting)
• a description of the pilot study retrospective activity
• a schedule
• risks and mitigation strategies
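The list above can double as a completeness checklist for a draft plan. A minimal sketch; the section names are paraphrases of the bullets, not an official template, and the draft contents are hypothetical:

```python
# Required sections of the pilot implementation plan, per the list above.
REQUIRED_SECTIONS = [
    "objectives", "success_indicators", "approach", "responsibilities",
    "training", "support_mechanisms", "retrospective", "schedule", "risks",
]

# A hypothetical draft plan that is still missing several sections.
draft_plan = {
    "objectives": "Evaluate the new inspection checklist on one project.",
    "success_indicators": "Defect density and inspection effort.",
    "approach": "O1 X O2 with a comparison project.",
    "schedule": "Q2 pilot, Q3 retrospective.",
}

missing = [s for s in REQUIRED_SECTIONS if s not in draft_plan]
print("missing sections:", missing)
```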
A Structured Approach to a Pilot Study

1. Plan and design the pilot study
2. Train personnel to accomplish change
3. Support and monitor pilot study
4. Evaluate pilot results
5. Make recommendations & improve
Don't Overlook the Need for Training

The type, style, and extent of training will depend on the complexity of the proposed change.

The pilot implementation plan describes the training activities.

The training should cover
• how the new process, procedure, and/or associated technology is performed or used
• how to use the documentation that describes the new feature
• how to obtain additional help if there are problems

Ensure that feedback mechanisms are included as part of the training approach.
A Structured Approach to a Pilot Study

1. Plan and design the pilot study
2. Train personnel to accomplish change
3. Support and monitor pilot study
4. Evaluate pilot results
5. Make recommendations & improve
Supporting and Monitoring the Pilot Effort

Any change, however well planned, can cause unanticipated results. That's why we conduct a pilot test: we're not sure what to expect.

Pilot personnel will need help when problems are exposed. Support must be provided during pilot testing so that program/project personnel can obtain quick solutions to glitches or bugs in the new process component(s).

A member of the pilot project support team
• is assigned as the primary point of contact to provide guidance or help when problems arise
• is responsible for ensuring that performance indicators are tracked throughout the pilot effort.

Document problems or issues as they occur.
A Structured Approach to a Pilot Study

1. Plan and design the pilot study
2. Train personnel to accomplish change
3. Support and monitor pilot study
4. Evaluate pilot results
5. Make recommendations & improve
Evaluate Pilot Results

Compile performance measurements and indicators. Evaluate how the new process component performed with respect to the objectives and success criteria.

Conduct a lessons-learned meeting with pilot study personnel to obtain
• feedback on the new solution component: what worked well and what didn't work well
• ideas for improving the solution component
• suggestions for improving how new solution components are piloted

In addition, consider using an instrument for obtaining anonymous feedback.
Statistical Analyses to Validate That the Change Worked

[Chart: distributions of the baseline (current as-is state of practice) and the new state of practice after the improvement intervention, plotted on a scale from Poor, Barely Acceptable, Acceptable, Good, to Excellent; the shift between the two is the magnitude of improvement.]
Are You New to Measurement Analysis?

For an easy-to-understand example of the t test and analysis of covariance, see:

Kelley, D. Lynn and Morath, P. "How Do You Know the Change Worked?" Quality Progress, American Society for Quality, pp. 68-74, July 2001.

You can also refer to a statistical textbook or reference book. Statistical software packages (e.g., SPSS, JMP/SAS) are also capable of performing these analyses.
Analyzing the Results

Since many variables may contribute to the pilot results, it's important not to draw immediate conclusions without exploring root causes.

What factors contributed to success or partial success? What factors led to a less-than-successful implementation?

[Cause-and-effect diagram: the problem, with contributing-factor categories of people, technology, process, and materials]
A Structured Approach to a Pilot Study

1. Plan and design the pilot study
2. Train personnel to accomplish change
3. Support and monitor pilot study
4. Evaluate pilot results
5. Make recommendations & improve
After the Pilot Study

Outcome: Major revision required
Follow-on steps:
• Plan the revision
• Review the plan with pilot project personnel and get their feedback (will the changes address concerns?)
• Revise the solution component
• Review with project personnel (do the changes address concerns?)
• Conduct another pilot study

Outcome: Minor revision suggested
Follow-on steps:
• Revise the solution component
• Review with project personnel (do the changes address concerns?)

Outcome: Additional support required to use the solution component
Follow-on steps:
• Plan the development of whole product support components that address the need
• Review the plan with pilot project personnel (will additional support address concerns?)
• Develop additional support components
• Review with project personnel (do the support components address concerns?)
The Value of Pilot Feedback

Feedback from the pilot can help you
• remove bugs from the solution component (i.e., the process, procedure, or technology)
• identify ways of enhancing the solution component
• identify additional whole product components that will make it easier for users (in the actual implementation) to embrace the process, procedure, or technology

[The Whole Product Wheel: the potential solution at the center, surrounded by systems integration, job aids, tooling, installation support, policies, training, reference materials, and procedures]
Bringing Closure to the Pilot Effort

Communicate!

Meet with management and other stakeholders to
• review pilot results
• make recommendations
• identify next steps

Post the performance results of the pilot effort in a public area for review by the organization.
The Value of Conducting Multiple Pilot Studies

Conducting multiple pilot studies leads to more reliable information for decision-making.

When "testing" a new technology, investigators understand that replications are required to understand the extent of experimental error.

Experimental error is the variation in test results caused by environmental influences that are beyond the control of the experimenter.

To mitigate the risk of experimental error, the investigator repeats the experiment multiple times to better characterize the influence of the technology that is being tested.
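Replication lets you estimate experimental error directly: the spread of outcomes across pilots estimates the error, and the standard error of the mean shrinks as more pilots are run. A minimal sketch with hypothetical per-pilot improvement figures:

```python
# Improvement observed in each of six hypothetical pilot studies
# (e.g., percentage-point reduction in defect density).
import math
import statistics

pilot_outcomes = [4.8, 5.6, 5.1, 5.9, 4.7, 5.3]

mean = statistics.mean(pilot_outcomes)
sd = statistics.stdev(pilot_outcomes)      # estimates experimental error
sem = sd / math.sqrt(len(pilot_outcomes))  # shrinks as replications grow
print(f"mean improvement = {mean:.2f}, standard error of the mean = {sem:.2f}")
```

A single pilot gives you one draw from this distribution; several pilots tell you both the typical effect and how much it varies from environment to environment.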
Why Did We Conduct a Pilot Study?

Piloting reduces the risk of rolling out a flawed process, procedure, or other solution component to a broad multi-project environment.

The idea behind a pilot is to test the solution component within a bounded and controlled environment before the component is sanctioned for broader use.

During a pilot study, the usability of the solution component is evaluated in a near real-world project setting.

Experience demonstrates that such a test always exposes improvement opportunities that can be exploited to hone and refine the solution component before broader dissemination.
Contact Information

Telephone: 412 / 268-5800
FAX: 412 / 268-5758
Email: [email protected]
World Wide Web: http://www.sei.cmu.edu
U.S. Mail:
  Customer Relations
  Software Engineering Institute
  Carnegie Mellon University
  Pittsburgh, PA 15213-3890