Description
How do you properly compare the quality of two or more software deliverables without an accurate normalizing metric? The answer: You can’t. Example: If project A has one hundred defects and project B has fifty defects, do you automatically assume project B is a higher quality deliverable? Although defect counts often drive the end user’s perception of quality, they may not be the right measure. An effective normalizing metric allows you to accurately measure and compare quality levels across software deliverables. David Herron explains how to quickly and easily incorporate this important normalizing metric into your development process to start measuring and improving the quality of your software deliverables. You’ll have a new tool for managing end user expectations regarding software quality in relation to the value the software delivers. Even more, you can use this normalizing metric to predict software quality outcomes or delivery dates and to establish service levels for software quality.
BT8 Concurrent Session 11/14/2013 2:15 PM
"How to (Effectively) Measure Quality across Software
Deliverables"
Presented by:
David Herron DCG
Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073 888‐268‐8770 ∙ 904‐278‐0524 ∙ [email protected] ∙ www.sqe.com
David Herron DCG
One of the original founders of the David Consulting Group, David Herron is now a business development manager and VP of knowledge solution services with DCG. With more than thirty-five years of experience in functional measurement and software process improvement, David has provided consulting and coaching services to a variety of IT organizations throughout the US and Canada. He is an acknowledged authority in performance measurement, process improvement, and organizational change management; an advisor on functional measures, software process improvement, project estimating, and agile; and a lecturer and coauthor of several books on IT performance measurement. Contact him at [email protected].
How to (Effectively) Measure Quality Across Software Deliverables
Presenter:
David Herron [email protected]
Better Software Conference 2013
©2012 David Consulting Group
Defining Software Quality
• How do you define software quality in your organization?
Software Quality Defined
• Absence of defects
• Conformance to requirements
• Meets certification standards
• Maintainable
• Scalable
• Reliable
• Usable
• Secure
Tracking Software Quality
• Mr. I. M. A. Pib is upset. He is the VP of the Store Systems Division.
• He has just seen the first-quarter dashboard of results, and his #1 priority project, Store Inventory, has the greatest number of defects.
• Here is what was reported to Mr. Pib:

Project | Delivery | Cost (000's) | Defects
PO Special | On Time | $500 | 12
Vendor Mods | Late | $760 | 18
Pricing Adj. | Early | $80 | 5
Store Inventory | On Time | $990 | 22

• You are the development manager. How might you respond to Mr. Pib? Do we have all the information we need to properly evaluate these outcomes?
Tracking Software Quality
• Size (value) can serve as a normalizing metric.
• A cost per unit of work (Rate) can now be calculated.
• Defect Density* for Mr. I. M. A. Pib's project is in fact the lowest of all his projects.

Project | Delivery | Cost (000's) | Defects | Size (Value) | Rate | Defect Density*
PO Special | On Time | $500 | 12 | 250 | $2,000.00 | 0.048
Vendor Mods | Late | $760 | 18 | 765 | $993.46 | 0.024
Pricing Adj. | Early | $80 | 5 | 100 | $800.00 | 0.050
Store Inventory | On Time | $990 | 22 | 1,498 | $660.88 | 0.015

* Defect Density is calculated as defects / size.
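To make the normalization concrete, here is a minimal Python sketch (not part of the original presentation) that reproduces the Rate and Defect Density columns from the figures above; the project data comes from the example table, and all variable names are purely illustrative.

```python
# Hypothetical sketch: normalizing defect counts and cost by functional size.
# Figures are taken from the example table above; field names are illustrative.

projects = [
    # (name, cost in $000's, defects, size in function points)
    ("PO Special",      500, 12,  250),
    ("Vendor Mods",     760, 18,  765),
    ("Pricing Adj.",     80,  5,  100),
    ("Store Inventory", 990, 22, 1498),
]

for name, cost_k, defects, size_fp in projects:
    rate = (cost_k * 1000) / size_fp   # cost per function point ("Rate")
    density = defects / size_fp        # defects per function point ("Defect Density")
    print(f"{name:<16} rate=${rate:,.2f}/FP  density={density:.3f}")

# Store Inventory has the most raw defects (22) but the lowest defect
# density (22 / 1498 ≈ 0.015), so it is actually the highest-quality delivery.
```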
Size Does Matter

Finding: Nine out of ten projects that fail have not been properly sized.

Consider: When you build a house, you specify all the functions and features you want – these are your requirements. The builder then generates an estimate based on the size (square footage) of your requirements.

• Size is the key to effectively managing software projects.
Characteristics of an Effective Sizing Metric
• Meaningful to both developer and business user
• Defined (industry recognized)
• Consistent (methodology)
• Easy to learn and apply
• Accurate, statistically based
• Available when needed (early)
Function Points - An Effective Sizing Metric

Function Point Analysis is a standardized method for measuring the functionality delivered to an end user.

Benefits:
• Quantitative (objective) measure
• Industry data as a basis for comparison
• Expectations (perceived customer value) managed
• Software process improvement requirements satisfied
The Function Point Methodology

Five key components are identified based on the logical user view:
• External Inputs
• External Outputs
• External Inquiries
• Internal Logical Files
• External Interface Files

[Diagram: the five component types positioned around the application boundary, with external inputs, outputs, and inquiries crossing it, internal logical files inside it, and external interface files referenced from outside.]
What Do We Count?

[Diagram: the items counted around the application boundary.]
• Screens (adds, changes, deletes, queries)
• Input files and input transactions
• Control information
• Internal logical files (tables, data files, control files)
• External tables and files referenced from other applications (not maintained)
• Output files and output transactions (batch interfaces)
• Other outputs: reports, files, XML, views, fiche, tape, diskettes, letters, notices, alarms
How Do We Count?
• Identify and classify the base functional components
  – Measure the data functions
    • Internal groupings of data, called Internal Logical Files (ILF)
    • External groupings of data, or External Interface Files (EIF)
  – Measure the transactional functions
    • External Inputs (EI)
    • External Outputs (EO)
    • External Inquiries (EQ)
  – Each function is assigned a functional complexity (Low-Average-High) and a weight (FPs)
• Calculate the functional size
• Document the function point count
• Report the result of the function point count
Component Complexity & Weights

Components | Low | Avg. | High | Total
Internal Logical File (ILF) | __ x 7 | __ x 10 | __ x 15 | ___
External Interface File (EIF) | __ x 5 | __ x 7 | __ x 10 | ___
External Input (EI) | __ x 3 | __ x 4 | __ x 6 | ___
External Output (EO) | __ x 4 | __ x 5 | __ x 7 | ___
External Inquiry (EQ) | __ x 3 | __ x 4 | __ x 6 | ___
Total Function Points | | | | ___

Complexity calculations are a function of the number of data elements, the files referenced, and data complexity.

Complexity matrix (Record Element Types or File Types Referenced vs. number of unique data elements):

Data Relationships | 1-4 Data Elements | 5-15 Data Elements | 16+ Data Elements
0-1 | Low | Low | Average
2 | Low | Average | High
3+ | Average | High | High
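As a rough illustration of how the matrix above can be applied, the sketch below classifies a component from its data element and referenced-type counts. The thresholds follow the matrix on this slide; the function name and the example call are assumptions made for illustration only.

```python
# Illustrative sketch of the complexity matrix above: data elements (columns)
# versus record element types / file types referenced (rows). Names are hypothetical.

def classify_complexity(data_elements: int, types_referenced: int) -> str:
    """Return 'Low', 'Average', or 'High' per the matrix on this slide."""
    det_band = 0 if data_elements <= 4 else 1 if data_elements <= 15 else 2
    ref_band = 0 if types_referenced <= 1 else 1 if types_referenced == 2 else 2
    matrix = [
        ["Low",     "Low",     "Average"],  # 0-1 types referenced
        ["Low",     "Average", "High"],     # 2 types referenced
        ["Average", "High",    "High"],     # 3+ types referenced
    ]
    return matrix[ref_band][det_band]

print(classify_complexity(data_elements=12, types_referenced=2))  # -> "Average"
```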
The Counting Process
1) Identify Components
2) Assess Complexity
3) Apply Weightings
4) Compute Function Points

Components | Low | Avg. | High | Total
Internal Logical File (ILF) | __ x 7 | __ x 10 | __ x 15 | ___
External Interface File (EIF) | __ x 5 | __ x 7 | __ x 10 | ___
External Input (EI) | __ x 3 | __ x 4 | __ x 6 | ___
External Output (EO) | __ x 4 | __ x 5 | __ x 7 | ___
External Inquiry (EQ) | __ x 3 | __ x 4 | __ x 6 | ___
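A minimal sketch of steps 3 and 4 (apply weightings, compute function points), using the weights from the worksheet above. The example component counts are invented purely to show the arithmetic; this is not a complete function point counting tool.

```python
# Sketch: apply the Low/Average/High weights from the worksheet above and
# sum to an unadjusted function point count. Example counts are made up.

WEIGHTS = {            # component: (Low, Average, High)
    "ILF": (7, 10, 15),
    "EIF": (5, 7, 10),
    "EI":  (3, 4, 6),
    "EO":  (4, 5, 7),
    "EQ":  (3, 4, 6),
}

def unadjusted_fp(counts: dict) -> int:
    """counts maps component type to (low, average, high) occurrence counts."""
    total = 0
    for component, (low, avg, high) in counts.items():
        w_low, w_avg, w_high = WEIGHTS[component]
        total += low * w_low + avg * w_avg + high * w_high
    return total

example = {"ILF": (2, 1, 0), "EIF": (1, 0, 0), "EI": (4, 2, 1),
           "EO": (2, 1, 0), "EQ": (3, 0, 0)}
print(unadjusted_fp(example))  # 24 + 5 + 26 + 13 + 9 = 77 function points
```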
Identifying the Functionality

[Diagram: an Accounts Payable application. Users add and change invoices and payments, request payment status, and receive paid invoices; vendor invoices and payments are held within the application; purchase order info is referenced from the Purchase Order system.]
Sizing Example

The Process: 1) Identify Components  2) Assess Complexity  3) Apply Weightings  4) Compute Function Points

[Worksheet: the Accounts Payable components from the previous diagram are entered on the counting template (ILF, EIF, EI, EO, EQ with their Low/Average/High weights), weighted, and summed to produce the function point size.]
Function Point Quality Measures
• Defect Density
  – Measures the number of defects identified across one or more phases of the development project lifecycle and compares that value to the total size of the application.
  – Defect Density = Number of defects (by phase or in total) / Total number of function points
• Test Case Coverage
  – Measures the number of test cases that are necessary to adequately support thorough testing of a development project.
  – Test Case Coverage = Number of test cases / Number of function points
Function Point Quality Measures
• Reliability
  – A measure of the number of failures an application experiences relative to its functional size.
  – Reliability = Number of production failures / Total application function points
• Rate of Growth
  – Growth of an application's functionality over a specified period of time.
  – Rate of Growth = Current number of function points / Original number of function points
• Stability
  – Used to monitor how effectively an application or enhancement has met the expectations of the user.
  – Stability = Number of changes / Number of application function points
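Each of these measures, along with defect density and test case coverage from the previous list, reduces to a simple ratio over functional size. A minimal Python sketch; the function and argument names are illustrative, not an established API.

```python
# Illustrative helpers for the function point quality measures defined above.

def defect_density(defects: int, function_points: int) -> float:
    return defects / function_points

def test_case_coverage(test_cases: int, function_points: int) -> float:
    return test_cases / function_points

def reliability(production_failures: int, application_fp: int) -> float:
    return production_failures / application_fp

def rate_of_growth(current_fp: int, original_fp: int) -> float:
    return current_fp / original_fp

def stability(changes: int, application_fp: int) -> float:
    return changes / application_fp

# Example: 22 defects against 1,498 delivered function points.
print(f"{defect_density(22, 1498):.3f}")  # 0.015
```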
Measures of Quality
• Defect Removal Efficiency: used to evaluate the effectiveness of development quality activities
• Defect Density: used to evaluate the overall quality of the developed software
• Delivered Defects: used to evaluate the quality of the delivered software
• Test Cases Passed First Time: used to determine the quality of software being tested
• Inspection Rate by Document: used to determine if inspections positively impact quality
• Volatility: used to monitor trends in the number of changes per month
Non-FP Quality Measures

Defect Removal Efficiency: tracks the number of defects removed by lifecycle phase.

Phase | Reqs. | Design | Code | Unit Test | Sys. Test | UAT | Prod | Total
Insertion Rate | 21 | 30 | 35 | 17 | 11 | 3 | - | 117
Defects Found | 5 | 16 | 27 | 31 | 24 | 12 | 2 | 117
Removal Efficiency | 4.3% | 13.7% | 23.1% | 26.5% | 20.5% | 10.3% | 1.7% |

Peer reviews (Reqs. through Code): Review Effectiveness 41.0%
Testing (Unit Test through UAT): Test Effectiveness 57.3%

Customer Satisfaction: gather information relating to delivery performance, communication, management, solutions, etc., and the level of importance of each.
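The removal efficiency, review effectiveness, and test effectiveness figures in the table above can be reproduced directly from the per-phase defect counts. A small Python sketch, assuming the phase groupings shown (peer reviews covering requirements through code, testing covering unit test through UAT); the code itself is illustrative.

```python
# Sketch of the defect removal efficiency calculation behind the table above.
# Phase names and counts are from the example; the code itself is illustrative.

defects_found = {
    "Reqs.": 5, "Design": 16, "Code": 27,
    "Unit Test": 31, "Sys. Test": 24, "UAT": 12, "Prod": 2,
}
total = sum(defects_found.values())  # 117

removal_efficiency = {phase: found / total for phase, found in defects_found.items()}
for phase, eff in removal_efficiency.items():
    print(f"{phase:<10} {eff:.1%}")

review_phases = ("Reqs.", "Design", "Code")      # peer reviews
test_phases = ("Unit Test", "Sys. Test", "UAT")  # testing

review_effectiveness = sum(defects_found[p] for p in review_phases) / total  # ~41.0%
test_effectiveness = sum(defects_found[p] for p in test_phases) / total      # ~57.3%

print(f"Review effectiveness: {review_effectiveness:.1%}")
print(f"Test effectiveness:   {test_effectiveness:.1%}")
```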
A Measurement Baseline Model
• Measured Performance (QUANTITATIVE): measures how you are doing – size, effort, duration, cost, quality
• Capability Maturity (QUALITATIVE): identifies what you are doing – management, requirements, build, test, environment
• Baseline of Performance: a standard of performance
Baseline Results: Example

[Two scatter plots of the baseline projects A through L: "Delivery Rate" relates project size to productivity (Hrs/FP); "Time to Market" relates project size to duration (months).]

Observations:
• Duration on small projects reflects industry norms.
• The relatively high degree of consistency seen in the duration data suggests a basis for an estimation model.
• The size-to-duration relationship suggests that current methods are scalable.
• Small projects are the norm.
• Performance levels vary across all projects.
• The extent of variation is greater than desired.
• Variation is potentially driven by mixing support and development tasks.
Quantitative Performance Evaluation Example

Quantitative Assessment (measured performance: size, effort, duration, cost, quality)
• Perform functional sizing on all selected projects.
• Collect data on project level of effort, cost, duration, and quality.
• Calculate productivity rates for each project, including functional size delivered per staff month, cost per functional size, time to market, and defects delivered.

Baseline Results: Baseline Productivity
Average Project Size: 133
Average FP/SM: 10.7
Average Time-to-Market (Months): 6.9
Average Cost/FP: $939
Delivered Defects/FP: 0.0301
Qualitative Performance Evaluation

Qualitative Assessment (capability maturity)
• Conduct interviews with members of each project team.
• Collect project profile information.
• Develop performance profiles to display strengths and weaknesses among the selected projects.

Capability maturity is profiled across six qualitative categories:
• Management: Team Dynamics, Morale, Project Tracking, Iteration Planning, Release Planning, Automation, Leadership Skills
• Definition (Requirements): Evolutionary Requirements, Process, Product Owner Involvement, Experience Levels, Business Impact
• Design: Process, Reviews, Design Reuse, Customer Involvement, Experience, Automation
• Build: Code Reviews, Configuration Management, Code Reuse, Data Administration, Experienced Staff, Automation
• Test: Formal Testing Methods, Test Plans, Testing Experience, Effective Test Tools, Customer Involvement
• Environment: New Technology, Automated Process, Adequate Training, Organizational Dynamics, Certification
Modeled Improvements

Baseline capability profile:

Project Name | Profile Score | Management | Definition | Design | Build | Test | Environment
Accounts Payable | 55.3 | 47.73 | 82.05 | 50.00 | 46.15 | 43.75 | 50.00
Priority One | 27.6 | 50.00 | 48.72 | 11.36 | 38.46 | 0.00 | 42.31
HR Enhancements | 32.3 | 29.55 | 48.72 | 0.00 | 42.31 | 37.50 | 42.31
Client Accounts | 29.5 | 31.82 | 43.59 | 0.00 | 30.77 | 37.50 | 42.31
ABC Release | 44.1 | 31.82 | 53.85 | 34.09 | 38.46 | 53.13 | 42.31
Screen Redesign | 17.0 | 22.73 | 43.59 | 0.00 | 15.38 | 0.00 | 30.77
Customer Web | 40.2 | 45.45 | 23.08 | 38.64 | 53.85 | 50.00 | 34.62
Whole Life | 29.2 | 56.82 | 28.21 | 22.73 | 26.92 | 18.75 | 53.85
Regional - East | 22.7 | 36.36 | 43.59 | 0.00 | 30.77 | 9.38 | 30.77
Regional - West | 17.6 | 43.18 | 23.08 | 0.00 | 26.92 | 9.38 | 26.92
Cashflow | 40.6 | 56.82 | 71.79 | 0.00 | 38.46 | 43.75 | 38.46
Credit Automation | 23.5 | 29.55 | 48.72 | 0.00 | 38.46 | 6.25 | 26.92
NISE | 49.0 | 38.64 | 56.41 | 52.27 | 30.77 | 53.13 | 53.85
Help Desk Automation | 49.3 | 54.55 | 74.36 | 20.45 | 53.85 | 50.00 | 38.46
Formula One Upgrade | 22.8 | 31.82 | 38.46 | 0.00 | 11.54 | 25.00 | 46.15

Baseline Productivity: Average Project Size 133 | Average FP/SM 10.7 | Average Time-to-Market (Months) 6.9 | Average Cost/FP $939 | Delivered Defects/FP 0.0301

Process Improvements:
• Peer Reviews
• Requirements Management
• Configuration Management

Modeled capability profile after improvements:

Project Name | Profile Score | Management | Definition | Design | Build | Test | Environment
Accounts Payable | 75.3 | 61.73 | 82.05 | 60.00 | 60.15 | 53.75 | 50.00
Priority One | 57.6 | 57.00 | 55.72 | 18.36 | 45.46 | 22.00 | 49.31
HR Enhancements | 52.3 | 32.55 | 51.72 | 23.00 | 42.31 | 57.50 | 49.31
Client Accounts | 69.5 | 53.82 | 65.59 | 12.00 | 50.77 | 67.50 | 49.31
ABC Release | 74.1 | 55.82 | 69.85 | 49.09 | 52.46 | 63.13 | 49.31
Screen Redesign | 67.0 | 43.73 | 63.59 | 21.00 | 36.38 | 20.00 | 51.77
Customer Web | 59.2 | 49.45 | 27.08 | 58.64 | 53.85 | 54.00 | 49.62
Whole Life | 50.2 | 49.82 | 32.21 | 27.73 | 31.92 | 24.75 | 53.85
Regional - East | 57.7 | 59.36 | 49.59 | 0.00 | 30.77 | 9.38 | 50.77
Regional - West | 52.6 | 55.18 | 30.08 | 0.00 | 33.92 | 19.38 | 26.92
Cashflow | 67.6 | 66.82 | 71.79 | 0.00 | 49.46 | 53.75 | 49.46
Credit Automation | 60.5 | 41.55 | 78.72 | 0.00 | 50.46 | 26.25 | 46.92
NISE | 79.0 | 68.64 | 76.41 | 62.27 | 65.77 | 53.13 | 53.85
Help Desk Automation | 79.3 | 64.55 | 74.36 | 47.45 | 63.85 | 54.00 | 58.46
Formula One Upgrade | 52.8 | 49.82 | 52.46 | 0.00 | 31.54 | 25.00 | 56.15

Productivity Improvement: Average Project Size 133 | Average FP/SM 24.8 | Average Time-to-Market (Months) 3.5 | Average Cost/FP $467 | Delivered Defects/FP 0.0075

Performance Improvements:
• Productivity: ~ +131%
• Time to Market: ~ -49%
• Defect Ratio: ~ -75%
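The performance improvement percentages follow directly from the baseline and improved averages. A quick arithmetic check in Python, using only the figures shown above (variable names are illustrative):

```python
# Quick check of the improvement percentages, using the averages shown above.
baseline = {"fp_per_sm": 10.7, "ttm_months": 6.9, "defects_per_fp": 0.0301}
improved = {"fp_per_sm": 24.8, "ttm_months": 3.5, "defects_per_fp": 0.0075}

productivity_gain = (improved["fp_per_sm"] / baseline["fp_per_sm"] - 1) * 100      # slide reports ~ +131%
ttm_change = (improved["ttm_months"] / baseline["ttm_months"] - 1) * 100           # ~ -49%
defect_ratio_change = (improved["defects_per_fp"] / baseline["defects_per_fp"] - 1) * 100  # ~ -75%

print(f"Productivity:   {productivity_gain:+.0f}%")
print(f"Time to Market: {ttm_change:+.0f}%")
print(f"Defect Ratio:   {defect_ratio_change:+.0f}%")
```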
Overall Information Framework

[Diagram: projects (Project X, Project Y, Project Z) feed process and performance measures into an enterprise measurement repository and historical database. The repository supports process management (Define, Measure, Execute, Control, Improve), the project process asset library (PAL), project estimates, baselines, business decisions, end users, and an executive management dashboard for the enterprise.]

Example executive dashboard panels:
• Project Defect Status: total defects discovered vs. total closed defects, by month (Jan '08 to Oct '08)
• Requirements Growth and Stability: requirements added, changed, deleted, and total, by month (Jan '08 to Dec '08)
• Project Resource and Effort Status: cumulative planned effort allocated, cumulative actual effort spent, "earned value," and baseline total hours, by month (Jan '08 to Dec '08)
• Project profile scores: per-project capability scores for Management, Requirements, Design, Build, Test, and Environment
• Milestone status (table below)

Milestone | Baseline | Plan | Actual | % Var
Checkpoint A: Charter & Kickoff | 1/10/2008 | 1/10/2008 | 1/10/2008 | 0%
Requirements Complete | 1/28/2008 | 1/28/2008 | 1/28/2008 | 0%
Vendor Selection Complete | 2/4/2008 | 2/4/2008 | 2/15/2008 | 7%
PMP/Schedule Complete | 2/12/2008 | 2/12/2008 | 2/28/2008 | 11%
Checkpoint B: Planning & Reqs | 2/28/2008 | 3/15/2008 | | 11%
Design Complete | 3/15/2008 | 4/15/2008 | | 20%
Development Complete | 4/15/2008 | 4/30/2008 | | 10%
Checkpoint C: Midpoint | 4/30/2008 | 5/15/2008 | | 10%
Testing Complete | 4/30/2008 | 5/15/2008 | | 10%
Training Complete | 5/10/2008 | 5/30/2008 | | 13%
Go Live | 5/30/2008 | 6/15/2008 | | 11%
Lessons Learned/Cust Sat Survey Complete | 6/1/2008 | 6/30/2008 | | 19%
Checkpoint D: Deploy & Close | 6/1/2008 | 6/30/2008 | | 19%
Dashboard / Service Levels

Measure Name | Calculation | Notes | Example | Median | Industry Median* | Goal by 2012
Estimating Accuracy: Effort | (actual labor hours - estimated) / estimated | Positive values represent overruns; negative, underruns | (1000 - 500)/500 = +100% overrun | +22% | 0% | 18%
Estimating Accuracy: Schedule | (actual calendar months - estimated) / estimated | Positive values represent overruns; negative, underruns | (4 - 3)/3 = +33% overrun | +21% | 0% | 18%
Unit Cost | dollars / function points | Dollars are estimated from labor hours @ $110 per hour, 145 hrs per staff month | $200,000/100 = $2,000 | $938 | $613 | $800
System Delivery Rate | function points / calendar months | QSM value is a mean; median not available | 100 FPs / 2 calendar months = 50 | 32 | 49 | 40
Requirements Volatility | added, changed, deleted / total baselined rqts | For all but one project, data not available; project manager gave an estimate | 10 changed / 100 baselined = 10% | 20% | 10% | 15%
Client Satisfaction | ratings by project manager | For all but three projects, ratings by clients unavailable | 5 = very satisfied, 1 = very unsatisfied | 4 | Not available | 4
System Test Effectiveness | defects found in system test / total defects | Total defects = defects found in system test + defects found in production (first 30 days) | 40 / 50 = 80% | 83% | 90% | 90%
Delivered Defect Density (defects per 100 function points) | (defects found in production / function points) * 100 | Production = first 30 days | (5 defects / 200 FPs) * 100 = 2.5 | 2.3 | 1.3 | 1.8
Productivity | function points / labor months | Varies with project size | 100 FPs / 4 staff months = 25 | 17 | 26 | 20

* Industry median drawn primarily from Level 3 organizations.
Alternative Sizing Options

Sizing Technique | Standard | Lifecycle | Comparative Data
Lines of Code | No | Build | No
Modules/Components | No | Design | No
Use Cases | No | Requirements | No
Story Points | No | Requirements | No
Function Points | Yes | Requirements | Yes
COSMIC | Yes | Requirements | Partial
NESMA | Yes | Requirements | Partial
Mark II | Yes | Requirements | Limited
Alternative Sizing Options

[Figure: sizing techniques arranged along three spectra.]

• Internal vs. external definitions (organization-specific definitions to industry-defined; less accurate to more accurate):
  Modules / Use Cases / Test Cases → Story Points → Lines of Code → Use Case Points → COSMIC / NESMA FP → IFPUG Function Points / Mark II

• Consistency / accuracy (fewer rules to more rules):
  Hours / Days → Story Points → Lines of Code → Use Case Points → COSMIC / NESMA FP → IFPUG Function Points / Mark II

• Power vs. ease of use (power / ease index: easier to learn to harder to learn; power increases as ease of use decreases):
  Hours / Days → Story Points → Use Case Points → COSMIC / NESMA FP → IFPUG Function Points / Mark II
Summary
• Quality is defined as a measure of value for the customer.
• Size is a critical normalizing metric.
• FPA serves as an effective sizing method.
• Historical baseline data can provide potential predictive capabilities.