2011-03-22
ETSF01: Software Engineering Process –
Economy and Quality
Dietmar Pfahl Lund University
ETSF01 / Lecture 3, 2011
Software Project Management
Chapter Five Software effort estimation
What makes a successful project?
Delivering: • agreed functionality • on time • at the agreed cost • with the required quality
Stages: 1. set targets 2. attempt to achieve targets
BUT what if the targets are not achievable?
Cost/Schedule Estimation – Basic Idea
• Software cost/schedule estimation is the process of predicting the amount of effort (cost) required to build a software system and the time (schedule) to develop it.
• Project Cost:
– Hardware costs. – Travel and training costs. – Work costs (based on the effort spent by engineers).
• Project Schedule: – Elapsed time (in hours, days, weeks, months)
Basis for successful estimating
• Information about past projects
  – Need to collect data about past projects:
    • Size / difficulty (complexity) / functionality?
    • Effort / time consumption?
    • …
• Need to be able to measure the amount of work involved
  – Lines of Code (or Statements, Halstead's measures)
    • KLOC vs. Delta-KLOC
  – Cyclomatic Complexity (McCabe)
  – Function Points (or Use Case Points, User Stories, …)
Typical problems with estimating
• Subjective nature of estimating
  – It may be difficult to produce evidence to support your assumptions about cost (schedule) drivers
• Political pressures
  – Managers may wish to reduce estimated costs in order to win a project proposal
  – Developers may wish to increase estimates to safeguard themselves from unforeseen (unforeseeable) risks
• Changing technologies
  – They involve risks (uncertainties), especially in the beginning when there is a 'learning curve'
• Projects differ
  – Experience on one project may not be transferable to another
Effort-Time Trade-Off
[Figure: three staffing profiles, each with a total effort of 4 person-time-units distributed differently over time, with question marks asking whether total effort really stays constant when the schedule is compressed or stretched. Cost models such as COCOMO and PRICE S address this effort-time trade-off.]
Source: B. Boehm: Software Engineering Economics, Prentice-Hall, 1981.
Effort-Time Trade-Off
• COCOMO [figure: COCOMO effort-schedule trade-off curves, not reproduced]
Estimation Techniques – Main Types
Cost/Schedule Estimation Techniques:
• Expert Judgment
• Algorithmic/Parametric Models
  – Empirical factor models: COCOMO / COCOMO II, ESTIMACS, PRICE S, Softcost, DOTY, CheckPoint, etc.
  – Constraint models: SLIM, Jensen Model, COPMO, etc.
• Analogy
  – Machine learning: Case-Based Reasoning, Collaborative Filtering, Classification Systems, etc.
• Other
  – Parkinson's Law, Pricing-to-Win, Top-Down Estimation, Bottom-Up Estimation
Source: Boehm (1981)
Top-Down Estimation
• A cost/schedule estimate is established by considering the overall functionality of the product and how that functionality is provided by interacting sub-functions.
• Cost estimates are made on the basis of the logical functionality rather than implementation details.
• Advantage: Takes into account costs such as integration, configuration management and documentation
• Disadvantage: Can underestimate the cost of solving difficult low-level technical problems
Example: Top-down schedule estimation
• Produce overall schedule estimate
• Distribute proportions of overall estimate to components

Overall project estimate: 100 days
  design: 30%, i.e. 30 days
  code:   30%, i.e. 30 days
  test:   40%, i.e. 40 days
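The proportional breakdown above can be sketched in a few lines of Python; the 30/30/40 split is the slide's illustrative example, not a fixed rule:

```python
# Top-down estimation: split an overall estimate across activities
# by agreed proportions.
def distribute(overall_days, proportions):
    return {activity: overall_days * share
            for activity, share in proportions.items()}

phases = {"design": 0.30, "code": 0.30, "test": 0.40}
estimates = distribute(100, phases)  # design/code: 30 days each, test: 40 days
```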
Bottom-Up Estimation
• Start at the lowest system level. The cost/schedule of each component is estimated. All these costs are added to produce a final cost estimate.
• Advantage:
  – Can be accurate, if the system has been designed in detail
• Disadvantage:
  – May underestimate costs of system-level activities such as integration and documentation
Pricing-to-Win
• The software cost/schedule is estimated to be whatever the customer has available to spend on the project. The estimated effort depends on the customer's budget and not on the software functionality.
• Advantage: Good chances to get the contract
• Disadvantages: The probability that costs accurately reflect the work required, and the probability that the customer gets the system he or she wants, are small.
Parkinson's Law
• Parkinson's Law states that work expands to fill the time available. In software costing/scheduling, this means that the cost is determined by available resources rather than by objective assessment.
– Example: If the software has to be delivered in 12 months and 5 people are available, the effort required is estimated to be 60 person-months.
• Advantage: No overspending
• Disadvantage: System is often unfinished, or effort is wasted
Expert Judgement
• One or more experts in both software development and the application domain use their experience to predict software costs. Process iterates until some consensus is reached (e.g., using Delphi Method).
• Advantage: Relatively cheap estimation method. Can be accurate if experts have substantial experience with similar systems.
• Disadvantage: Very inaccurate if there are no experts! Difficult to trace back, in case the estimate was not accurate.
Estimation by Analogy
• This technique is applicable when other projects in the same application domain have been done in the past. The cost of a new project is computed by comparing the project to a similar completed project in the same application domain
• Advantages: Accurate if similar project data are available
• Disadvantages: Impossible if no comparable project is available. Needs a systematically maintained project database (→ expensive). It's not always clear how to define a good similarity function.
How to find similar projects in a data base?
Case-Based Reasoning (CBR):
– Involves (a) matching the current problem against ones that have already been encountered in the past, and (b) adapting the solutions of the past problems to the current context.
– It can be represented as a cyclical process divided into the four following sub-processes, as depicted in the figure (Aamodt & Plaza 1994):
  • retrieve the most similar cases from the case base
  • reuse the case to solve the problem
  • revise the proposed solution, if necessary
  • retain the solution for future problem solving
Effort Estimation Model – Example (1) Case-Based Reasoning (CBR) Example:
Attribute          New Case    Retrieved Case 1   Retrieved Case 2
Project Category   Real Time   Real Time          Simulator
Language           C++         C++                C++
Team Size          10          10                 9
System Size        150         200                175
Effort             ?           1000               950
Similarity                     90%                ~50%

Possible adaptation rule, assuming Effort = f(System_Size):
  Predicted_Effort = (150 / 200) × 1000 = 750

Possibilities to predict effort:
• adapted effort based on 1 project
• average effort of 2 projects
• weighted average effort of 2 projects
Effort Estimation Model – Example (2) Case-Based Reasoning (CBR) Example:
Attribute          New Case    Retrieved Case 1   Retrieved Case 2
Project Category   Real Time   Real Time          Simulator
Language           C++         C++                C++
Team Size          10          10                 9
System Size        150         200                175
Effort             ?           1000               950
Similarity                     90%                ~50%

Possible adaptation rule, assuming Effort = f(System_Size):
  Predicted_Effort = 1/2 × ((150/200) × 1000 + (150/175) × 950) ≈ 782

Possibilities to predict effort:
• adapted effort based on 1 project
• average effort of 2 projects
• weighted average effort of 2 projects
Effort Estimation Model – Example (3) Case-Based Reasoning (CBR) Example:
Attribute          New Case    Retrieved Case 1   Retrieved Case 2
Project Category   Real Time   Real Time          Simulator
Language           C++         C++                C++
Team Size          10          10                 9
System Size        150         200                175
Effort             ?           1000               950
Similarity                     90%                ~50%

Possible adaptation rule, assuming Effort = f(System_Size) and weighting by similarity (0.9/1.4 = 9/14, 0.5/1.4 = 5/14):
  Predicted_Effort = (9/14) × (150/200) × 1000 + (5/14) × (150/175) × 950 ≈ 773

Possibilities to predict effort:
• adapted effort based on 1 project
• average effort of 2 projects
• weighted average effort of 2 projects
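The three prediction options can be sketched as follows, assuming the linear adaptation rule Effort = f(System_Size) used on the slides; the case data (sizes 200 and 175, efforts 1000 and 950, similarities 90% and ~50%) come from the table:

```python
# CBR effort adaptation: scale a retrieved case's effort by the size ratio.
def adapt(new_size, case_size, case_effort):
    return (new_size / case_size) * case_effort

new_size = 150
cases = [  # (system size, effort, similarity)
    (200, 1000, 0.90),  # Retrieved Case 1
    (175, 950, 0.50),   # Retrieved Case 2
]

# Option 1: adapted effort based on the single most similar project.
e1 = adapt(new_size, 200, 1000)                                    # 750

# Option 2: plain average of the adapted efforts of both projects.
e2 = sum(adapt(new_size, s, e) for s, e, _ in cases) / len(cases)  # ~782

# Option 3: similarity-weighted average of the adapted efforts.
total_sim = sum(sim for _, _, sim in cases)
e3 = sum((sim / total_sim) * adapt(new_size, s, e)
         for s, e, sim in cases)                                   # ~773
```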
Effort Estimation Model – Similarity (1)
Case-Based Reasoning (CBR) Example: Distance Measure (Euclidean Distance) → Similarity = 1 − Distance

  distance(P_i, P_j) = sqrt( (1/n) × Σ_{k=1..n} δ(P_ik, P_jk) )

  δ(P_ik, P_jk) =
    ((P_ik − P_jk) / (max_k − min_k))²   if attribute k is continuous
    0                                    if k is categorical and P_ik = P_jk
    1                                    if k is categorical and P_ik ≠ P_jk

  Attribute k        P_new,k     P_1,k       δ(P_new,k, P_1,k)
  Project Category   Real Time   Real Time   0
  Language           C++         C++         0
  Team Size          10          10          0
  System Size        150         200         0.04 = (50/250)²

  ⇒ distance(P_new, P_1) = sqrt((0 + 0 + 0 + 0.04) / 4) = 0.1

Assume that the smallest system in the DB has 100 KLOC and the largest system has 350 KLOC → max − min = 250 KLOC
Effort Estimation Model – Similarity (2)
Case-Based Reasoning (CBR) Example: Distance Measure (Euclidean Distance) → Similarity = 1 − Distance

  distance(P_i, P_j) = sqrt( (1/n) × Σ_{k=1..n} δ(P_ik, P_jk) )

  δ(P_ik, P_jk) =
    ((P_ik − P_jk) / (max_k − min_k))²   if attribute k is continuous
    0                                    if k is categorical and P_ik = P_jk
    1                                    if k is categorical and P_ik ≠ P_jk

  Attribute k        P_new,k     P_2,k       δ(P_new,k, P_2,k)
  Project Category   Real Time   Simulator   1
  Language           C++         C++         0
  Team Size          10          9           0.01 = (1/10)²
  System Size        150         175         0.01 = (25/250)²

  ⇒ distance(P_new, P_2) = sqrt((1 + 0 + 0.01 + 0.01) / 4) ≈ 0.5

Assume that the smallest system in the DB has 100 KLOC and the largest system has 350 KLOC → max − min = 250 KLOC
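A sketch of the distance measure matching the two worked comparisons; the attribute ranges (250 KLOC for system size, 10 for team size) are the slides' assumptions about the case base:

```python
import math

def delta(a, b, attr_range=None):
    """Per-attribute distance: squared normalized difference for
    continuous attributes, 0/1 mismatch for categorical ones."""
    if attr_range is None:                # categorical attribute
        return 0.0 if a == b else 1.0
    return ((a - b) / attr_range) ** 2    # continuous attribute

def distance(deltas):
    return math.sqrt(sum(deltas) / len(deltas))

# New case vs. Retrieved Case 1:
d1 = distance([delta("Real Time", "Real Time"), delta("C++", "C++"),
               delta(10, 10, 10), delta(150, 200, 250)])   # 0.1

# New case vs. Retrieved Case 2:
d2 = distance([delta("Real Time", "Simulator"), delta("C++", "C++"),
               delta(10, 9, 10), delta(150, 175, 250)])    # ~0.5
```

Similarity is then 1 − distance, i.e. 0.9 for Case 1 and roughly 0.5 for Case 2, matching the table.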
Algorithmic/Parametric Models
Idea:
• System characteristics:
  – Expected size / difficulty / complexity / functionality
• Model needs to be calibrated to data from past projects (or is universal)

System characteristics → Algorithm (Model) → Estimate
Parametric Models – Simple Example
• Simplistic models:
  Estimated effort = (Estimated system size) / Productivity1
  Estimated time  = (Estimated system size) / Productivity2
  – E.g.:
    • System size estimated in KLOC
    • Productivity1 = KLOC / Person-day
    • Productivity2 = KLOC / Day
  – Productivity1 = (System size) / Effort
  – Productivity2 = (System size) / Elapsed time (days)
    • E.g., calculated as average values from past projects
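A minimal sketch of this productivity-based model; the productivity values below are invented averages, purely for illustration:

```python
# Simplistic parametric estimation: effort and time from size and
# average productivities observed on past projects.
def estimate(size_kloc, productivity1, productivity2):
    effort_person_days = size_kloc / productivity1  # Productivity1 = KLOC/person-day
    elapsed_days = size_kloc / productivity2        # Productivity2 = KLOC/day
    return effort_person_days, elapsed_days

# Hypothetical averages: 0.2 KLOC/person-day, 0.5 KLOC/day.
effort, elapsed = estimate(50, 0.2, 0.5)  # 250 person-days, 100 elapsed days
```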
Parametric model: COCOMO
• COCOMO = Constructive Cost Model • Size measure (KLOC, FP) & productivity factors as input
System size + Productivity factors → COCOMO Model → Estimated effort & schedule
COCOMO – Introduction /1
• COCOMO (Constructive Cost Model) [Boe81]
  – COCOMO is a model based on inputs relating to the size of the system and a number of cost drivers that affect productivity.
  – An enhanced version, called COCOMO II, accounts for recent changes in software engineering technology, including:
    • object-oriented software,
    • software created following spiral or evolutionary development models,
    • software reuse,
    • building new systems using off-the-shelf software components (COTS).
(Prof. Dr. Barry Boehm)
COCOMO – Introduction /2
• The original COCOMO is a collection of three models:
  – Basic model – applied early in the project
  – Intermediate model – applied after requirements acquisition
  – Advanced model – applied after design is complete
• All three models take the form:

  E = a × S^b × EAF        T_dev = c × E^d

  where
  – E is effort in person-months
  – T_dev is the development time (schedule)
  – S is size measured in thousands of lines of code (or DSI*)
  – EAF is an effort adjustment factor (= 1 in the Basic model)
  – Factors a, b, c and d depend on the development mode.

  *DSI = Delivered Source Instructions
COCOMO – Development Modes
COCOMO has three development modes:
• Organic mode: small teams, familiar environment, well-understood applications, no difficult non-functional requirements (EASY)
• Semi-detached mode: project team may have experience mixture, system may have more significant non-functional constraints, organization may have less familiarity with application (HARDER)
• Embedded mode: hardware/software systems, tight constraints, unusual for team to have deep application experience (HARD)
Basic COCOMO – Effort Equations
• Organic mode: PM = 2.4 (KDSI)^1.05
• Semi-detached mode: PM = 3.0 (KDSI)^1.12
• Embedded mode: PM = 3.6 (KDSI)^1.20
• KDSI = Kilo Delivered Source Instructions (~KLOC)
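The Basic COCOMO equations can be wrapped up as a small calculator; the schedule equation TDEV = 2.5 · PM^d uses the mode-specific exponents (0.38 / 0.35 / 0.32) that appear later in the lecture:

```python
# Basic COCOMO (Boehm 1981): PM = a * KDSI^b, TDEV = 2.5 * PM^d.
MODES = {  # mode: (a, b, d)
    "organic":       (2.4, 1.05, 0.38),
    "semi-detached": (3.0, 1.12, 0.35),
    "embedded":      (3.6, 1.20, 0.32),
}

def basic_cocomo(kdsi, mode):
    a, b, d = MODES[mode]
    pm = a * kdsi ** b    # effort in person-months
    tdev = 2.5 * pm ** d  # schedule in months
    return pm, tdev

pm, tdev = basic_cocomo(32, "organic")  # ~91 person-months, ~14 months
```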
Basic COCOMO – Examples /1
Note: large changes in effort and productivity, small changes in schedule
Basic COCOMO – Examples /2
• Organic mode project: 32 KLOC
  – PM = 2.4 (32)^1.05 = 91 person-months
  – TDEV = 2.5 (91)^0.38 = 14 months
  – N = 91/14 = 6.5 people (on average)
• Embedded mode project: 128 KLOC
  – PM = 3.6 (128)^1.2 = 1216 person-months
  – TDEV = 2.5 (1216)^0.32 = 24 months
  – N = 1216/24 = 51 people (on average)
Staffing?
Basic COCOMO: Effort Distribution [Boe81]
Table shows percentages!
Basic COCOMO: Schedule Distribution [Boe81]
Table shows percentages!
Basic COCOMO: Accuracy and Quality [Boe81]
Pred(0.3) = 0.29, Pred(1.0) = 0.60.
NB: good predictive quality would be Pred(0.25) = 0.75, i.e., 75% of all estimates within ±25% of actual effort.
Intermediate COCOMO
• The Intermediate COCOMO model computes effort as a function of program size and a set of cost drivers.
• Intermediate COCOMO equations:

  E_nom = a × (KLOC)^b
  E = E_nom × EAF
  T_dev = c × E^d

• The factors a, b, c and d for the intermediate model are shown in the tables.
• The effort adjustment factor (EAF) is calculated using 15 cost drivers.

  Mode            a     b          Mode            c     d
  Organic         3.2   1.05       Organic         2.5   0.38
  Semi-detached   3.0   1.12       Semi-detached   2.5   0.35
  Embedded        2.8   1.20       Embedded        2.5   0.32
Intermediate COCOMO: EAF
• The effort adjustment factor (EAF) is calculated using 15 cost drivers.
• The cost drivers are grouped into 4 categories: product, computer, personnel, and project.
• Each cost driver is rated on a 6-point ordinal scale ranging from very low to extra high importance. Based on the rating, the effort multiplier (EM) is determined (Boehm, 1981). The product of all effort multipliers is the EAF:

  E = E_nom × EAF,   EAF = Π_{i=1..15} EM_i

  (ratings: very low, low, nominal, high, very high, extra high)
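Putting the pieces together, a sketch of an Intermediate COCOMO calculation; the two example multipliers (RELY rated High = 1.15, ACAP rated Low = 1.19) are taken from the rating tables, and drivers left at nominal contribute 1.00:

```python
import math

# Intermediate COCOMO: nominal effort from size and mode, scaled by
# the product of the rated effort multipliers (EAF).
PARAMS = {  # mode: (a, b, c, d)
    "organic":       (3.2, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (2.8, 1.20, 2.5, 0.32),
}

def intermediate_cocomo(kloc, mode, multipliers=()):
    a, b, c, d = PARAMS[mode]
    eaf = math.prod(multipliers)   # empty sequence -> EAF = 1.0
    effort = a * kloc ** b * eaf   # person-months
    tdev = c * effort ** d         # months
    return effort, tdev

# 10 KLOC organic project, RELY High (1.15), ACAP Low (1.19):
effort, tdev = intermediate_cocomo(10, "organic", [1.15, 1.19])  # ~49 PM, ~11 months
```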
Intermediate COCOMO: 15 Cost Drivers
• Product factors: Reliability (RELY), Data (DATA), Complexity (CPLX)
• Platform factors: Time constraint (TIME), Storage constraint (STOR), Platform volatility (PVOL)
• Personnel factors: Analyst capability (ACAP), Programmer capability (PCAP), Applications experience (APEX), Platform experience (PLEX), Language and tool experience (LTEX), Personnel continuity (PCON)
• Project factors: Software tools (TOOL), Multisite development (SITE), Required schedule (SCED)
Cost Driver Rating – Product Factors
  Driver  Description                    Very Low  Low   Nominal  High  Very High  Extra High
  RELY    Required software reliability  0.75      0.88  1.00     1.15  1.39       -
  DATA    Database size                  -         0.94  1.00     1.08  1.16       -
  CPLX    Product complexity             0.70      0.85  1.00     1.15  1.30       1.65
Cost Driver Rating – Computer Factors
  Driver  Description                 Very Low  Low   Nominal  High  Very High  Extra High
  TIME    Execution time constraint   -         -     1.00     1.11  1.30       1.66
  STOR    Main storage constraint     -         -     1.00     1.06  1.21       1.56
  VIRT    Virtual machine volatility  -         0.87  1.00     1.15  1.30       -
  TURN    Computer turnaround time    -         0.87  1.00     1.07  1.15       -
Cost Driver Rating – Personnel Factors
  Driver  Description                 Very Low  Low   Nominal  High  Very High  Extra High
  ACAP    Analyst capability          1.46      1.19  1.00     0.86  0.71       -
  AEXP    Applications experience     1.29      1.13  1.00     0.91  0.82       -
  PCAP    Programmer capability       1.42      1.17  1.00     0.86  0.70       -
  VEXP    Virtual machine experience  1.21      1.10  1.00     0.90  -          -
  LEXP    Language experience         1.14      1.07  1.00     0.95  -          -
Cost Driver Rating – Project Factors
  Driver  Description                   Very Low  Low   Nominal  High  Very High  Extra High
  MODP    Modern programming practices  1.24      1.10  1.00     0.91  0.82       -
  TOOL    Software tools                1.24      1.10  1.00     0.91  0.83       -
  SCED    Development schedule          1.23      1.08  1.00     1.04  1.10       -
Advanced COCOMO
• The Advanced COCOMO model computes effort as a function of program size and a set of cost drivers with weights according to each phase of the software lifecycle.
• The Advanced COCOMO model applies the Intermediate model at the module level, and then a phase-based approach is used to consolidate the estimate.
[Figure: effort and average staffing distributed over the four phases RPD, DD, CUT, and IT.]
Advanced COCOMO: Example Cost Driver
• The 4 phases used in the detailed (or: advanced) COCOMO model are:
  – requirements planning and product design (RPD), detailed design (DD), code and unit test (CUT), and integration and test (IT).
• Each cost driver is broken down by phase as in the table below.
• Estimates made for each module are combined into subsystems and eventually an overall project estimate.

  ACAP (Analyst capability) phase weights:
  Rating     RPD   DD    CUT   IT
  Very Low   1.80  1.35  1.35  1.50
  Low        0.85  0.85  0.85  1.20
  Nominal    1.00  1.00  1.00  1.00
  High       0.75  0.90  0.90  0.85
  Very High  0.55  0.75  0.75  0.70

  For comparison, Intermediate COCOMO ACAP: 1.46 (Very Low), 1.19 (Low), 1.00 (Nominal), 0.86 (High), 0.71 (Very High)
COCOMO II
COCOMO II Accuracy Funnel
[Figure: COCOMO II accuracy funnel ("cone of uncertainty"). The relative size range of an estimate narrows as the project moves through the phases feasibility, plans/requirements, design, and develop-and-test: from roughly 4x / 0.25x at the start, to 2x / 0.5x, converging toward x at the later milestones. Milestones shown: Operational Concept, Life Cycle Objectives, Life Cycle Architecture, Initial Operating Capability.]
COCOMO II – Three Sub-Models
• COCOMO II includes a three-stage series of models:
  – The earliest phases (or cycles of the spiral model) will generally involve prototyping, using the Application Composition Model. (replaces Basic COCOMO)
  – The next phases or spiral cycles will generally involve exploration of architectural alternatives or incremental development strategies. To support these activities, COCOMO II provides an early estimation model called the Early Design Model. (replaces Intermediate COCOMO)
  – Once the project is ready to develop, it should have a life-cycle architecture, which provides more accurate information on cost driver inputs and enables more accurate cost estimates. To support this stage, COCOMO II provides the Post-Architecture Model. (replaces Advanced COCOMO)
COCOMO II – Early Design Model (EDM)
• The Early Design model is used to evaluate alternative software/system architectures and concepts of operation.
• An unadjusted function point count (UFC) is used for sizing. This value is converted to KLOC.
• The Early Design model equation is:

  E = a × (KLOC)^b × EAF,   with a = 2.94 and b = 0.91 + 0.01 × Σ_{j=1..5} SF_j

• The effort adjustment factor (EAF) is the product of seven cost drivers.
• SF_j: Scale Factor j
COCOMO II – EDM Cost Drivers (Effort Multipliers)
• The effort adjustment factor (EAF) for the Early Design model uses a simplified set of effort multipliers, obtained by combining COCOMO II's 17 Post-Architecture cost drivers:

  #  Cost Driver  Description                          Combined Post-Architecture Cost Drivers
  1  RCPX         Product reliability and complexity   RELY, DATA, CPLX, DOCU
  2  RUSE         Required reuse                       RUSE
  3  PDIF         Platform difficulty                  TIME, STOR, PVOL
  4  PERS         Personnel capability                 ACAP, PCAP, PCON
  5  PREX         Personnel experience                 AEXP, PEXP, LTEX
  6  FCIL         Facilities                           TOOL, SITE
  7  SCED         Schedule                             SCED

  + 5 Scale Factors (Exponent Drivers)
COCOMO II – EDM Cost Drivers (Effort Multipliers)
         Extra  Very                        Very   Extra
         low    low    Low    Nominal High  high   high
  RCPX   0.49   0.60   0.83   1.00    1.33  1.91   2.72
  RUSE   -      -      0.95   1.00    1.07  1.15   1.24
  PDIF   -      -      0.87   1.00    1.29  1.81   2.61
  PERS   2.12   1.62   1.26   1.00    0.83  0.63   0.50
  PREX   1.59   1.33   1.12   1.00    0.87  0.74   0.62
  FCIL   1.43   1.30   1.10   1.00    0.87  0.73   0.62
  SCED   -      1.43   1.14   1.00    1.00  1.00   -
COCOMO II – EDM Scale Factors
Scale Factors (SF):
• Precedentedness (PREC) – [0 .. 6.20]
  – Degree to which system is new and past experience applies
• Development Flexibility (FLEX) – [0 .. 5.07]
  – Need to conform with specified requirements
• Architecture/Risk Resolution (RESL) – [0 .. 7.07]
  – Degree of design thoroughness and risk elimination
• Team Cohesion (TEAM) – [0 .. 5.48]
  – Need to synchronize stakeholders and minimize conflict
• Process Maturity (PMAT) – [0 .. 7.80]
  – SEI CMM process maturity rating
COCOMO II – EDM SF Ratings
• Scale Factor (SF) ratings:

  SF                                   Very Low  Low   Nominal  High  Very High  Extra High
  Precedentedness (PREC)               6.20      4.96  3.72     2.48  1.24       0.00
  Development Flexibility (FLEX)       5.07      4.05  3.04     2.03  1.01       0.00
  Architecture/Risk Resolution (RESL)  7.07      5.65  4.24     2.83  1.41       0.00
  Team Cohesion (TEAM)                 5.48      4.38  3.29     2.19  1.10       0.00
  Process Maturity (PMAT)              7.80      6.24  4.68     3.12  1.56       0.00
COCOMO II – Early Design Model Effort Estimates
[Figure: estimated effort (person-months, 0–14000) vs. system size (0–1000 KLOC) for the Early Design model E = a × (KLOC)^b × EAF with b = 0.91 + 0.01 × Σ SF_j. The curves B-VL … B-EH correspond to all five scale factors rated Very Low through Extra High: b ranges from 1.23 (all Very Low) down to 0.91 (all Extra High), with b ≈ 1.10 at nominal ratings; b = 1 marks linear growth. Scale-factor ranges: PREC [0 .. 6.20], FLEX [0 .. 5.07], RESL [0 .. 7.07], TEAM [0 .. 5.48], PMAT [0 .. 7.80].]
COCOMO II – EDM Scale Factors
• Sum scale factors SFi to determine a scale exponent, b, with b = 0.91 + 0.01 Σ SFi
COCOMO II – EDM SFs PREC & FLEX
• Elaboration of the PREC and FLEX rating scales:
COCOMO II – EDM Scale Factor RESL
• Elaboration of the RESL rating scale: – Use a subjective weighted average of the characteristics
PDR = Product Design Review; COTS = Commercial Off-The-Shelf
COCOMO II – EDM Scale Factor TEAM
• Elaboration of the TEAM rating scale:
  – Use a subjective weighted average of the characteristics to account for project turbulence and entropy due to difficulties in synchronizing the project's stakeholders.
  – Stakeholders include users, customers, developers, maintainers, interfacers, and others.
COCOMO II – EDM Scale Factor PMAT
• Elaboration of the PMAT rating scale:
  – Two methods based on the Software Engineering Institute's Capability Maturity Model (CMM):
    • Method 1: Overall Maturity Level (CMM Level 1 through 5)
    • Method 2: Key Process Areas (see next slide)
COCOMO II – EDM Scale Factor PMAT • Elaboration of the PMAT rating scale (Method 2):
– Decide the percentage of compliance for each of the KPAs as determined by a judgement-based averaging across the goals for all 18 Key Process Areas.
COCOMO II – Example
Example
• A software development team is developing an application which is very similar to previous ones it has developed.

Information relevant for Scale Factors:
• The application is very similar to previous ones → PREC is very high (score 1.24).
• A very precise software engineering document lays down very strict requirements → FLEX is very low (score 5.07).
• The good news is that these tight requirements are unlikely to change → RESL is high (score 2.83).
• The team is tightly knit → TEAM has a high score of 2.19, but processes are informal → PMAT is low and scores 6.24.
Example – Scale factor calculation
• The scale exponent is
  b = 0.91 + 0.01 × Σ scale factor values
    = 0.91 + 0.01 × (1.24 + 5.07 + 2.83 + 2.19 + 6.24) = 1.0857
• If the system contained 10 KLOC, then the effort estimate would be 2.94 × 10^1.0857 = 35.8 person-months
• Note that for larger systems, the effect of the scale factor becomes more than proportionally stronger
Example – Effort multipliers
• Say that a new project is similar in most characteristics to those that an organization has been dealing with in the past,
• except:
  – the software to be produced is exceptionally complex and will be used in a safety-critical system;
  – the software will interface with a new operating system that is currently in beta status;
  – to deal with this, the team allocated to the job is regarded as exceptionally good, but does not have a lot of experience on this type of software.
Example – Effort multiplier application
• RCPX very high: 1.91
• PDIF very high: 1.81
• PERS extra high: 0.50
• PREX nominal: 1.00
• All other factors are nominal
• Say the estimated size is 10 KLOC, resulting in an effort estimate of 35.8 person-months
• With effort multipliers applied, this becomes: 35.8 × 1.91 × 1.81 × 0.5 = 61.9 person-months
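The whole Early Design calculation from this example can be reproduced in a few lines; PREX at nominal contributes a multiplier of 1.00 and is simply omitted:

```python
# COCOMO II Early Design example: scale factors set the exponent b,
# effort multipliers scale the nominal effort.
a = 2.94
scale_factors = [1.24, 5.07, 2.83, 2.19, 6.24]  # PREC, FLEX, RESL, TEAM, PMAT
b = 0.91 + 0.01 * sum(scale_factors)            # 1.0857

eaf = 1.0
for m in [1.91, 1.81, 0.50]:  # RCPX very high, PDIF very high, PERS extra high
    eaf *= m

kloc = 10
nominal = a * kloc ** b  # ~35.8 person-months
effort = nominal * eaf   # ~61.9 person-months
```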
Function Points
Function-Oriented Measures
• Functionality is one aspect of software size. The assumption is that a product with more functionality is larger in size and thus needs more effort to be developed.
• The first function-oriented measure was proposed by Albrecht (1979~1983).
– He suggested a size measurement approach called the Function Point (FP) method.
• Function points (FPs) measure the amount of functionality in a system based upon the system specification.
→ Estimation before implementation!

[Figure: V-model — Specification, Architectural Design, Micro Design, Implementation on the left; Module Test, Integration Test, Acceptance Test on the right. FP counting starts from the specification.]
Parametric Model – Functional Size
• Models for estimating Function Points:
  – Albrecht / IFPUG Function Points
  – Mark II Function Points
  – COSMIC Function Points

Number of file types + numbers of input and output transaction types → Model → 'System size' (Function Points)
Function Point (FP) Standards
  Standard               Name                             Content
  ISO/IEC 14143-1:2000   IFPUG 4.1                        Unadjusted functional size measurement method – Counting Practices Manual, IFPUG Function Points Vs. 4.1 (unadjusted)
  ISO/IEC 20926:2004     IFPUG 4.2                        Unadjusted functional size measurement method – Counting Practices Manual, IFPUG Function Points Vs. 4.2 (unadjusted)
  ISO/IEC 19761:2003     COSMIC-FFP                       A functional size measurement method, COSMIC Full Function Points Vs. 2.2
  ISO/IEC 20968:2002     Mark II Function Point Analysis  Counting Practices Manual, Mark II Function Points
Function Points (IFPUG)
Albrecht/IFPUG Function Points
• Albrecht worked at IBM and needed a way of measuring the relative productivity of different programming languages.
• Needed some way of measuring the size of an application without counting lines of code.
• Identified five types of component or functionality in an information system
• Counted occurrences of each type of functionality in order to get an indication of the size of an information system
Albrecht/IFPUG FPs – Elements
Five function types:
1. Logical internal file (LIF) types – equates roughly to a datastore in systems analysis terms. Created and accessed by the target system.
2. External interface file (EIF) types – where data is retrieved from a datastore which is actually maintained by a different application.
3. External input (EI) types – input transactions which update internal software files.
4. External output (EO) types – transactions which extract and display data from internal computer files. Generally involves creating reports.
5. External inquiry (EQ) types – user-initiated transactions which provide information but do not update computer files. Normally the user inputs some data that guides the system to the information the user needs.
Albrecht/IFPUG FPs – Complexity Multipliers
  External user types  Low complexity  Medium complexity  High complexity
  EI                   3               4                  6
  EO                   4               5                  7
  EQ                   3               4                  6
  LIF                  7               10                 15
  EIF                  5               7                  10
Albrecht/IFPUG FPs – Example
Payroll application has:
1. A transaction to add, update and delete employee details → an EI rated to be of medium complexity
2. A transaction that calculates pay details from timesheet data that is input → an EI rated to be of high complexity
3. A transaction that prints out pay-to-date details for each employee → an EO rated to be of medium complexity
4. A file of payroll details for each employee → an LIF rated to be of medium complexity
5. A personnel file maintained by another system is accessed for name and address details → a simple EIF
What would be the FP count for such a system?
IFPUG FP counts
1. Medium-complexity EI: 4 FPs
2. High-complexity EI: 6 FPs
3. Medium-complexity EO: 5 FPs
4. Medium-complexity LIF: 10 FPs
5. Simple EIF: 5 FPs
Total: 30 FPs

• If previous projects delivered 5 FPs per day, implementing the above should take 30/5 = 6 days
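The count can be written out as a small script, using the complexity weights from the earlier table:

```python
# IFPUG-style FP count for the payroll example.
WEIGHTS = {
    "EI":  {"low": 3, "medium": 4, "high": 6},
    "EO":  {"low": 4, "medium": 5, "high": 7},
    "EQ":  {"low": 3, "medium": 4, "high": 6},
    "LIF": {"low": 7, "medium": 10, "high": 15},
    "EIF": {"low": 5, "medium": 7, "high": 10},
}

payroll = [
    ("EI",  "medium"),  # maintain employee details
    ("EI",  "high"),    # calculate pay from timesheets
    ("EO",  "medium"),  # print pay-to-date report
    ("LIF", "medium"),  # payroll details file
    ("EIF", "low"),     # personnel file kept by another system
]

fp = sum(WEIGHTS[ftype][cplx] for ftype, cplx in payroll)  # 30 FPs
days = fp / 5  # at 5 FPs per day: 6 days
```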
IFPUG Function Point Counting – Big Picture

[Diagram:
Step 1a – Unadjusted and unweighted function count: count the # EI, # EO, # EQ, # ILF, # EIF.
Step 1b – Weighting of functional (technical) complexities: rate each item Low/Average/High and weight it; weighted EI + EO + EQ + ILF + EIF = Unadjusted Function Points (UFP).
Step 2 – Complexity adjustment: 14 adjustment factors yield the Value Adjustment Factor (VAF); UFP × VAF = Adjusted FP Count.]
IFPUG Function Points (FP) – Key Elements
Function Point (FP) – key elements:
• Processes: External Inputs (EI), External Outputs (EO), External Inquiries (EQ)
• Data objects: Internal Logical Files (ILF), External Interface Files (EIF)
IFPUG FP – External Inputs (EI)
• External Inputs – IFPUG definition:
  – An external input (EI) is an elementary process that processes data or control information that comes from outside the application boundary.
  – The primary intent of an EI is to maintain one or more ILFs and/or to alter the behavior of the system.
  – Examples:
    • Data entry by users
    • Data or file feeds by external applications
IFPUG FP – External Outputs (EO)
• External Outputs – IFPUG definition:
  – An external output (EO) is an elementary process that sends data or control information outside the application boundary.
  – The primary intent of an external output is to present information to a user through processing logic other than, or in addition to, the retrieval of data or control information.
  – The processing logic must contain at least one mathematical formula or calculation, create derived data, maintain one or more ILFs, or alter the behavior of the system.
  – Example:
    • Reports created by the application being counted, where the reports include derived information
IFPUG FP – External Inquiries (EQ)
• External Inquiries – IFPUG definition:
  – An external inquiry (EQ) is an elementary process that sends data or control information outside the application boundary.
  – The primary intent of an external inquiry is to present information to a user through the retrieval of data or control information from an ILF or EIF.
  – The processing logic contains no mathematical formulas or calculations, and creates no derived data.
  – No ILF is maintained during the processing, nor is the behavior of the system altered.
  – Example:
    • Reports created by the application being counted, where the report does not include any derived data
IFPUG FP – Internal Logical Files (ILF)
• Internal Logical Files – IFPUG definition:
  – An ILF is a user-identifiable group of logically related data or control information maintained within the boundary of the application.
  – The primary intent of an ILF is to hold data maintained through one or more elementary processes of the application being counted.
  – Examples:
    • Tables in a relational database
    • Flat files
    • Application control information, perhaps things like user preferences that are stored by the application
IFPUG FP – Ext. Interface Files (EIF)
• External Interface files – IFPUG Definition: – An external interface file (EIF) is a user
identifiable group of logically related data or control information referenced by the application, but maintained within the boundary of another application.
– The primary intent of an EIF is to hold data referenced through one or more elementary processes within the boundary of the application counted.
– This means an EIF counted for an application must be in an ILF in another application.
– Examples: as for ILF, but maintained in a different system
IFPUG Function Point Counting Big Picture
• Step 1a – Unadjusted and unweighted function count: count # EI, # EO, # EQ, # ILF, # EIF
• Step 1b – Weighting of functional (technical) complexities: rate each counted function Low / Average / High (L/A/H); weighted EI + weighted EO + weighted EQ + weighted ILF + weighted EIF = Unadjusted Function Points (UFP)
• Step 2 – Complexity adjustment: UFP × Value Adjustment Factor (VAF, derived from 14 adjustment factors) = Adjusted FP Count
IFPUG Unadjusted FP Count (UFC)
• A complexity rating is associated with each count using function point complexity weights:

Item                               Simple (Low)   Average   Complex (High)
External inputs (N_EI)                  3             4            6
External outputs (N_EO)                 4             5            7
External inquiries (N_EQ)               3             4            6
External interface files (N_EIF)        5             7           10
Internal logical files (N_ILF)          7            10           15

• With average weights: UFC = 4·N_EI + 5·N_EO + 4·N_EQ + 7·N_EIF + 10·N_ILF
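As a sketch, the UFC formula above can be computed directly from the counts; the counts used here are made-up illustration values, not from the slides:

```python
# Average complexity weights from the table above (IFPUG)
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "EIF": 7, "ILF": 10}

# Hypothetical counts for a small system (illustration only)
counts = {"EI": 10, "EO": 6, "EQ": 4, "EIF": 2, "ILF": 5}

ufc = sum(WEIGHTS[k] * counts[k] for k in WEIGHTS)
print(ufc)  # 4*10 + 5*6 + 4*4 + 7*2 + 10*5 = 150
```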
IFPUG FP Counting: Weighting of Technical Complexity

Element                           low           average        high           Sum
External Inputs (EI)           __ × 3 = __   __ ×  4 = __   __ ×  6 = __    _____
External Outputs (EO)          __ × 4 = __   __ ×  5 = __   __ ×  7 = __    _____
External Inquiries (EQ)        __ × 3 = __   __ ×  4 = __   __ ×  6 = __    _____
External Interface Files (EIF) __ × 5 = __   __ ×  7 = __   __ × 10 = __    _____
Internal Logical Files (ILF)   __ × 7 = __   __ × 10 = __   __ × 15 = __    _____
Unadjusted Function Points (UFP) = sum of all row sums = _____
IFPUG Value Adjustment Factor (VAF) /1
• Value Adjustment Factor (VAF) is a weighted sum of 14 Factors (General System Characteristics (GSC)):
F1 Data Communications, F2 Distributed Data Processing, F3 Performance, F4 Heavily Used Configuration, F5 Transaction Rate, F6 On-line Data Entry, F7 End-User Efficiency, F8 On-line Update, F9 Complex Processing, F10 Reusability, F11 Installation Ease, F12 Operational Ease, F13 Multiple Sites, F14 Facilitate Change
IFPUG Value Adjustment Factor (VAF) /2
1. Data Communications: The data and control information used in the application are sent or received over communication facilities.
2. Distributed Data Processing: Distributed data or processing functions are a characteristic of the application within the application boundary.
3. Performance: Application performance objectives, stated or approved by the user, in either response or throughput, influence (or will influence) the design, development, installation and support of the application.
4. Heavily Used Configuration: A heavily used operational configuration, requiring special design considerations, is a characteristic of the application.
5. Transaction Rate: The transaction rate is high and influences the design, development, installation and support.
6. On-line Data Entry: On-line data entry and control information functions are provided in the application.
7. End-User Efficiency: The on-line functions provided emphasize a design for end-user efficiency.
8. On-line Update: The application provides on-line update for the internal logical files.
9. Complex Processing: Complex processing is a characteristic of the application.
10. Reusability: The application and the code in the application have been specifically designed, developed and supported to be usable in other applications.
11. Installation Ease: Conversion and installation ease are characteristics of the application. A conversion and installation plan and/or conversion tools were provided and tested during the system test phase.
12. Operational Ease: Operational ease is a characteristic of the application. Effective start-up, backup and recovery procedures were provided and tested during the system test phase.
13. Multiple Sites: The application has been specifically designed, developed and supported to be installed at multiple sites for multiple organizations.
14. Facilitate Change: The application has been specifically designed, developed and supported to facilitate change.
IFPUG Value Adjustment Factor (VAF) /3
• Each component is rated from 0 to 5, where 0 means the component is not relevant to the system and 5 means the component is essential.
• The VAF can then be calculated as:

  VAF = 0.65 + 0.01 × Σ (j = 1..14) Fj

• The VAF varies from 0.65 (if all Fj are set to 0) to 1.35 (if all Fj are set to 5)
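A minimal sketch of the VAF calculation; the ratings for the 14 factors and the UFP value of 150 are assumed for illustration:

```python
# Assumed ratings (0–5) for the 14 general system characteristics
F = [3, 2, 4, 0, 3, 5, 4, 3, 2, 1, 0, 2, 0, 1]
assert len(F) == 14 and all(0 <= f <= 5 for f in F)

vaf = 0.65 + 0.01 * sum(F)   # here: 0.65 + 0.01*30 = 0.95
adjusted_fp = 150 * vaf      # with an assumed UFP of 150
print(vaf, adjusted_fp)      # 0.95 142.5
```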
IFPUG Function Point Counting: Complexity Assessment Details
(Recap of the big picture: Step 1a – count # EI, # EO, # EQ, # ILF, # EIF; Step 1b – weight each function by its L/A/H complexity → Unadjusted Function Points (UFP); Step 2 – 14 adjustment factors → Value Adjustment Factor (VAF) → Adjusted FP Count)
IFPUG Function Point Counting Elements
IFPUG Function Types for Complexity Assessment
• Data Function Types: Internal Logical Files (ILF), External Interface Files (EIF)
• Transaction Function Types: External Input (EI), External Output (EO), External Inquiry (EQ)

Elements evaluated for technical complexity assessment:
• Record Types (RET): user-recognizable sub-groups of data elements within an ILF or an EIF. It is best to look at logical groupings of data to help identify them.
• File Types Referenced (FTR): file types referenced by a transaction. An FTR must be an Internal Logical File (ILF) or an External Interface File (EIF).
• Data Element Types (DET): a unique, user-recognizable, non-recursive (non-repetitive) field containing dynamic information. If a DET is recursive, then only the first occurrence of the DET is considered, not every occurrence.
IFPUG Internal Logical File – Complexity Assessment
• Identify number of Record Types (RET)
• Identify number of Data Element Type (DET)
• Determine complexity (→ used to calculate FP count):

ILF        1–19 DET      20–50 DET     > 50 DET
1 RET      low (7)       low (7)       average (10)
2–5 RET    low (7)       average (10)  high (15)
> 5 RET    average (10)  high (15)     high (15)
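The matrix above can be encoded as a small lookup. A sketch, assuming the same RET/DET bands for ILFs and EIFs (which the next slides show, with different weights); the function and variable names are my own:

```python
def data_function_rating(ret: int, det: int) -> str:
    """Rate an ILF or EIF as low/average/high from its #RET and #DET."""
    det_band = 0 if det <= 19 else (1 if det <= 50 else 2)
    ret_band = 0 if ret == 1 else (1 if ret <= 5 else 2)
    matrix = [["low",     "low",     "average"],
              ["low",     "average", "high"],
              ["average", "high",    "high"]]
    return matrix[ret_band][det_band]

# FP weights per rating (ILF row and EIF row of the weight table)
ILF_WEIGHT = {"low": 7, "average": 10, "high": 15}
EIF_WEIGHT = {"low": 5, "average": 7, "high": 10}

# The "Employee" ILF on the next slide: 1 RET, 4 DET -> low -> 7 FP
print(ILF_WEIGHT[data_function_rating(1, 4)])  # 7
```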
IFPUG Internal Logical File – Example
Employee: Name, ID, Birth Date, Payment Reference → 1 RET, 4 DET
Lookup in the ILF matrix: 1 RET, 1–19 DET → low
ILF “Employee” = 7 FP
IFPUG External Interface File – Complexity Assessment
• Identify the number of Record Types (RET)
• Identify the number of Data Element Types (DET)
• Determine complexity (→ used to calculate FP count):

EIF        1–19 DET     20–50 DET    > 50 DET
1 RET      low (5)      low (5)      average (7)
2–5 RET    low (5)      average (7)  high (10)
> 5 RET    average (7)  high (10)    high (10)
IFPUG External Interface File – Example
Payroll Software references EIF: Employee Administration (DB)
• Employee: Name, ID, Birth Date, Payment Reference
• Weekly Payment: Hourly Rate, Payment Office
• Monthly Payment: Salary Level
→ 2 RET, 7 DET
Lookup in the EIF matrix: 2–5 RET, 1–19 DET → low → EIF = 5 FP
IFPUG External Input – Complexity Assessment
• Identify the number of File Types Referenced (FTR)
• Identify the number of Data Element Types (DET)
• Determine complexity (→ used to calculate FP count):

EI         1–4 DET      5–15 DET     > 15 DET
1 FTR      low (3)      low (3)      average (4)
2 FTR      low (3)      average (4)  high (6)
> 2 FTR    average (4)  high (6)     high (6)
IFPUG External Input – Example
• Enter a new employee with Monthly Payment: Name, ID, Birth Date, Payment Reference, Salary Level
(ILF: Employee Administration (DB) – Employee: Name, ID, Birth Date, Payment Reference; Weekly Payment: Hourly Rate, Payment Office; Monthly Payment: Salary Level)
Lookup in the EI matrix: 1 FTR, 5 DET → low → EI = 3 FP
IFPUG External Input – Example
• Enter a new employee with Weekly Payment: Name, ID, Birth Date, Payment Reference, Hourly Rate, Payment Office
(ILF: Employee Administration (DB) – Employee: Name, ID, Birth Date, Payment Reference; Weekly Payment: Hourly Rate, Payment Office; Monthly Payment: Salary Level)
Lookup in the EI matrix: 1 FTR, 6 DET → low → EI = 3 FP
IFPUG External Output – Complexity Assessment
• Identify the number of File Types Referenced (FTR)
• Identify the number of Data Element Types (DET)
• Determine complexity (→ used to calculate FP count):

EO         1–5 DET      6–19 DET     > 19 DET
1 FTR      low (4)      low (4)      average (5)
2–3 FTR    low (4)      average (5)  high (7)
> 3 FTR    average (5)  high (7)     high (7)
IFPUG External Output – Example
• Report of all Employees containing Names and Birth Dates, sorted by age:
  ≤ 40 Years: James Miller, 10-02-1966; Angela Rhodes, 05-03-1966
  ≤ 45 Years: Mike Smith, 23-03-1961
(ILF: Employee – Name, ID, Birth Date, Payment Reference)
Lookup in the EO matrix: 1 FTR, 2 DET → low → EO = 4 FP
IFPUG External Inquiry – Complexity Assessment
• Identify the number of File Types Referenced (FTR)
• Identify the number of Data Element Types (DET)
• Determine complexity (→ used to calculate FP count):

EQ         1–5 DET      6–19 DET     > 19 DET
1 FTR      low (3)      low (3)      average (4)
2–3 FTR    low (3)      average (4)  high (6)
> 3 FTR    average (4)  high (6)     high (6)
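The three transaction matrices (EI, EO, EQ) share one shape and differ only in their DET/FTR band boundaries and weights, so a table-driven lookup is a natural sketch; the names and structure below are my own:

```python
RATING = [["low",     "low",     "average"],
          ["low",     "average", "high"],
          ["average", "high",    "high"]]

# Per transaction type: (DET band limits, FTR band limits, weights)
TXN = {
    "EI": ((4, 15), (1, 2), {"low": 3, "average": 4, "high": 6}),
    "EO": ((5, 19), (1, 3), {"low": 4, "average": 5, "high": 7}),
    "EQ": ((5, 19), (1, 3), {"low": 3, "average": 4, "high": 6}),
}

def transaction_fp(kind: str, ftr: int, det: int) -> int:
    """FP contribution of one EI/EO/EQ, from its #FTR and #DET."""
    (d1, d2), (f1, f2), weights = TXN[kind]
    d = 0 if det <= d1 else (1 if det <= d2 else 2)
    f = 0 if ftr <= f1 else (1 if ftr <= f2 else 2)
    return weights[RATING[f][d]]

print(transaction_fp("EI", 1, 5))  # 3  ("new employee" input: low)
print(transaction_fp("EO", 1, 2))  # 4  (employee report: low)
print(transaction_fp("EQ", 2, 3))  # 3  (department report: low)
```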
IFPUG External Inquiry – Example
• Report of all employees belonging to Department X containing Names, Birth Dates, and showing the Department Name.
• Files (ILF): Employee, Department
• 2 FTR: Employee, Department
• 3 DET: Name (Employee), Birth Date (Employee), Department Name (Department)
Lookup in the EQ matrix: 2–3 FTR, 1–5 DET → low → EQ = 3 FP
IFPUG FP Counting Example – GUI
Dialog “Employee” with fields Name, First Name, Street, Postal Code, City, Birth Date and buttons new, change, delete, close
→ 3 External Inputs and 1 External Inquiry (input side)
External Input (new, 1 FTR, 7 DET, low) = 3 FP External Input (change, 1 FTR, 7 DET, low) = 3 FP External Input (delete, 1 FTR, 7 DET, low) = 3 FP External Inquiry (navigate, 1 FTR, 7 DET, low) = 3 FP
12 FP
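The slide's total follows from simple arithmetic over the four low-rated transactions:

```python
# Each transaction on the "Employee" dialog: 1 FTR, 7 DET -> low complexity
transactions = {"EI new": 3, "EI change": 3, "EI delete": 3, "EQ navigate": 3}

total_fp = sum(transactions.values())
print(total_fp)  # 12 FP
```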
FP: Advantages – Summary
• Can be counted before design or code documents exist
• Can be used for estimating project cost, effort, schedule early in the project life-cycle
• Helps with contract negotiations
• Is standardized (though several competing standards exist)
FP vs. LOC /1
• A number of studies have attempted to relate LOC and FP metrics (Jones, 1996).
• The average number of source code statements per function point has been derived from case studies for numerous programming languages.
• Languages have been classified into different levels according to the relationship between LOC and FP.
FP vs. LOC /2

Language           Level   Min   Average   Max
Machine language    0.10          640
Assembly            1.00   237    320      416
C                   2.50    60    128      170
Pascal              4.00           90
C++                 6.00    40     55      140
Visual C++          9.50           34
PowerBuilder       20.00           16
Excel              57.00            5.5
Code generators      --            15

(LOC per Function Point; newer data: http://www.qsm.com/FPGearing.html)
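The gearing factors let an FP estimate be translated into a rough LOC estimate per language; the functional size of 120 FP below is an assumed example:

```python
# Average LOC-per-FP "gearing" factors from the table above
LOC_PER_FP = {"Assembly": 320, "C": 128, "C++": 55, "Visual C++": 34}

fp = 120  # assumed functional size of the system
for language, gearing in LOC_PER_FP.items():
    print(f"{language}: ~{fp * gearing} LOC")
# e.g. C: ~15360 LOC, C++: ~6600 LOC
```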
Effort Estimation with FP
FP → MM (IBM data)
[Scatter plot relating Function Points and Person-Months]
Effort estimation based on organization- specific data from past projects.
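One way to use such organization-specific data is a simple least-squares line, effort = a + b·FP; the past-project data below is entirely hypothetical:

```python
# Hypothetical (function points, person-months) pairs from past projects
past = [(100, 8), (180, 16), (250, 24), (400, 41)]

n = len(past)
mean_fp = sum(fp for fp, _ in past) / n
mean_mm = sum(mm for _, mm in past) / n
# Least-squares slope and intercept for effort = a + b * FP
b = (sum((fp - mean_fp) * (mm - mean_mm) for fp, mm in past)
     / sum((fp - mean_fp) ** 2 for fp, _ in past))
a = mean_mm - b * mean_fp

estimate = a + b * 300   # predicted person-months for a 300-FP project
print(round(estimate, 1))
```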
IFPUG FP: Critique – Summary
• FP is a subjective measure (→ counting of items, complexity weights)
• Requires a full software system specification
  – but “light-weight” alternatives exist → Object Points, Use Case Points
• The physical meaning of the basic unit of FP is unclear
• Difficult to apply to maintenance (enhancement) projects
• Not suitable for “complex” software, e.g. real-time and embedded applications
  – proposed solution: COSMIC FP / Feature Points
Mark II Function Points
Function Points – Mark II
• Developed by Charles R. Symons
  – Reference: ‘Software Sizing and Estimating – Mk II FPA’, Wiley & Sons, 1991
• Builds on work by Albrecht
• Originally developed for the CCTA:
  – should be compatible with SSADM; mainly used in the UK
  – emerged in parallel to IFPUG FPs
  – … but is a simpler method
Function Points Mk II continued
• For each transaction, count:
  – data items input (Ni)
  – entity types accessed (Ne)
  – data items output (No)

FP count = Ni × 0.58 + Ne × 1.66 + No × 0.26
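A sketch of the Mk II count for a single transaction, using the weights from the formula above; the transaction's counts are assumed illustration values:

```python
def mk2_fp(n_input: int, n_entities: int, n_output: int) -> float:
    """Mark II FP for one transaction: Ni*0.58 + Ne*1.66 + No*0.26."""
    return n_input * 0.58 + n_entities * 1.66 + n_output * 0.26

# e.g. a transaction with 5 input items, 2 entities accessed, 10 output items
print(round(mk2_fp(5, 2, 10), 2))  # 8.82
```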
COSMIC Function Points
Function points for embedded systems
• Albrecht/IFPUG function points (and Mark II) were designed for information systems
• COSMIC FPs attempt to extend the FP concept to embedded systems
• Embedded software seen as:
  – being in a particular ‘layer’ in the system
  – communicating with other layers and with other components at the same level
COSMIC FP – Layered software
[Diagram: a software component sits in a layer between higher and lower layers; it makes requests to and receives services from peer components (peer-to-peer communication) and performs data reads/writes against persistent storage]
COSMIC FPs
• The following data movements are counted:
  – Entries: movement of data into the software component from a user, another layer, or a peer component
  – Exits: movement of data out of the software component
  – Reads: data movement from persistent storage
  – Writes: data movement to persistent storage
• Each data movement counts as 1 ‘COSMIC functional size unit’ (Cfsu)
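Counting is then just summing data movements; the movement counts below are assumed for illustration:

```python
# Assumed data-movement counts for one software component
movements = {"entries": 4, "exits": 3, "reads": 2, "writes": 2}

cfsu = sum(movements.values())  # each data movement = 1 Cfsu
print(cfsu)  # 11 Cfsu
```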
Software Project Management
Chapter Six Activity planning
Scheduling
• Having … – selected and tailored a process for doing the project – identified the activities to be carried out – assessed the time needed for each activity
• … need to assign dates/times for start and end of each activity
Activity networks
Help to:
• Assess the feasibility of the planned project completion date
• Identify when resources will need to be deployed to activities
• Calculate when costs will be incurred
• Co-ordinate activities
Defining activity networks
Activity networks are based on the following assumptions:
• A project:
  – is composed of a number of activities
  – may start when at least one of its activities is ready to start
  – is completed when all its activities are completed
Defining activity networks
• An activity: – Must have clearly defined start and end-points – Must have an estimated duration – Must have estimated resource requirements
• these are assumed to be unchanged throughout the project
– May be dependent on other activities being completed first (precedence networks)
Identifying activities
• Work-based approach
  – Draw up a Work Breakdown Structure (WBS) listing the tasks (work items) to be performed
• Product-based approach
  – List the deliverable and intermediate products of the project → Product Breakdown Structure (PBS)
  – Identify the order in which products have to be created
  – Work out the activities needed to create the products
Hybrid approach – Example
Final outcome of project planning: a project plan as a bar chart
Activity Network Types
• PERT = Program Evaluation and Review Technique • CPM = Critical Path Method
PERT vs CPM
[Diagrams: an example PERT network with activities A–D and an example CPM network with activities A–F]
Developing a PERT diagram
• No loops are allowed – deal with iterations by hiding them within single activities
• Milestones – ‘activities’ such as the start and end of the project; they indicate transition points and have duration 0
Lagged activities
• Where there is a fixed delay between activities
  – e.g. seven days’ notice has to be given to users that a new release has been signed off and is to be installed
[Diagram: ‘Acceptance testing’ (20 days) → 7-day lag → ‘Install new release’ (1 day)]
Types of links between activities
• Finish to start
• Start to start / finish to finish (here with lag time)
[Diagrams: finish to start – ‘Software development’ → ‘Acceptance testing’; start to start / finish to finish – ‘Test prototype’ and ‘Document amendments’ with lags of 1 day and 2 days]
Types of links between activities
• Start to finish
[Diagram: ‘Operate temporary system’, ‘Acceptance test of new system’, ‘Cutover to new system’]
Start and finish times
• Activity: ‘write report software’
• Earliest start (ES)
• Earliest finish (EF) = ES + duration
• Latest finish (LF) = latest time the activity can be completed without affecting the project end
• Latest start (LS) = LF − duration
[Node notation: earliest start and earliest finish above the activity, latest start and latest finish below]
Example
• Earliest start = day 5 • Latest finish = day 30 • Duration = 10 days
• Earliest finish = ? • Latest start = ?
Float = LF – ES – Duration = ?
What is it?
Example
• Earliest start = day 5 • Latest finish = day 30 • Duration = 10 days
• Earliest finish = 15 • Latest start = 20
Float = LF – ES – Duration = 15
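The slide's numbers follow directly from the definitions:

```python
es, lf, duration = 5, 30, 10      # values given on the slide

ef = es + duration                # earliest finish = 15
ls = lf - duration                # latest start   = 20
total_float = lf - es - duration  # float = 15
print(ef, ls, total_float)
```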
‘Day 0’
• Note that in the last example, day numbers are used rather than actual dates
• Makes initial calculations easier – not concerned with week-ends and public holidays
• For finish dates/times, Day 1 means at the END of Day 1.
• For start dates/times, Day 1 also means at the END of Day 1.
• The first activity may therefore begin at Day 0, i.e. the end of Day 0, i.e. the start of Day 1.
Notation
[Node notation: activity label and activity description; duration; earliest start and earliest finish; latest start, latest finish, and float]
Complete for the previous example
Write report software
Duration = 10, Earliest Start = 5, Earliest Finish = 15, Latest Start = 20, Latest Finish = 30, Float = 15
Example of an activity network
[Activity network with eight activity nodes; all ES, EF, LS, LF, and float values still to be determined (?)]
Developing the PERT diagram: Forward pass
• Start at the beginning (Day 0) and work forward along the chains.
• Earliest start (ES) of the current activity = earliest finish (EF) of the previous activity.
• When there is more than one previous activity, take the latest of their earliest finish dates.
Example: predecessors finishing at EF = day 7 and EF = day 10 → ES = day 10 for the next activity.
Example of an activity network
Forward-pass result per activity, shown as (ES, EF) – latest dates and floats still unknown (?):
(0, 6), (6, 9), (0, 4), (0, 10), (4, 8), (4, 7), (9, 11), (10, 13)
Developing the PERT diagram: Backward pass
• Start from the last activity
• Latest finish (LF) for the last activity = its earliest finish (EF)
• Work backwards
• Latest finish of the current activity = latest start of the following activity
• More than one following activity → take the earliest LS
• Latest start (LS) = LF − duration
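The two passes can be sketched in a few lines. The activity names A–H and the dependency structure below are my own reconstruction, chosen to be consistent with the numbers on the example-network slides:

```python
durations = {"A": 6, "B": 3, "C": 4, "D": 10, "E": 4, "F": 3, "G": 2, "H": 3}
preds = {"A": [], "B": ["A"], "C": [], "D": [], "E": ["C"], "F": ["C"],
         "G": ["B", "E"], "H": ["D", "F"]}

order = ["A", "B", "C", "D", "E", "F", "G", "H"]   # a topological order

# Forward pass: ES = max EF of predecessors (0 if none), EF = ES + duration
es, ef = {}, {}
for a in order:
    es[a] = max((ef[p] for p in preds[a]), default=0)
    ef[a] = es[a] + durations[a]

# Backward pass: LF = min LS of successors (project end if none), LS = LF - duration
project_end = max(ef.values())
succs = {a: [b for b in preds if a in preds[b]] for a in preds}
ls, lf = {}, {}
for a in reversed(order):
    lf[a] = min((ls[s] for s in succs[a]), default=project_end)
    ls[a] = lf[a] - durations[a]

floats = {a: lf[a] - es[a] - durations[a] for a in order}
critical = [a for a in order if floats[a] == 0]
print(floats)    # {'A': 2, 'B': 2, 'C': 3, 'D': 0, 'E': 3, 'F': 3, 'G': 2, 'H': 0}
print(critical)  # ['D', 'H']
```

The computed floats match the values shown on the following slides, and the zero-float activities form the critical path.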
Example of an activity network
Backward-pass result per activity, shown as (ES, EF | LS, LF) – floats still unknown (?):
(0, 6 | 2, 8), (6, 9 | 8, 11), (0, 4 | 3, 7), (0, 10 | 0, 10), (4, 8 | 7, 11), (4, 7 | 7, 10), (9, 11 | 11, 13), (10, 13 | 10, 13)
Float
Float = Latest finish − Earliest start − Duration
Example of an activity network
Per activity, shown as (ES, EF | LS, LF | Float):
(0, 6 | 2, 8 | 2), (6, 9 | 8, 11 | 2), (0, 4 | 3, 7 | 3), (0, 10 | 0, 10 | 0), (4, 8 | 7, 11 | 3), (4, 7 | 7, 10 | 3), (9, 11 | 11, 13 | 2), (10, 13 | 10, 13 | 0)
Critical path
• Critical path = the path through the activity network consisting of activities with zero float
• Any delay in an activity on the critical path will delay the whole project
• Can there be more than one critical path?
• Can there be no critical path?
• Sub-critical paths
Example of an activity network
Same network as before, (ES, EF | LS, LF | Float) per activity:
(0, 6 | 2, 8 | 2), (6, 9 | 8, 11 | 2), (0, 4 | 3, 7 | 3), (0, 10 | 0, 10 | 0), (4, 8 | 7, 11 | 3), (4, 7 | 7, 10 | 3), (9, 11 | 11, 13 | 2), (10, 13 | 10, 13 | 0)
The zero-float activities (0, 10 | 0, 10 | 0) and (10, 13 | 10, 13 | 0) form the critical path.
Free vs. interfering float
• B can be up to 3 weeks late and not affect any other activity → free float
• B can be another 2 weeks late → this affects D but not the project’s end date → interfering float
[Diagram: network with activities A, B, D, E in which B has both free float and interfering float]
Rest of this week
• Read textbook chapters 5 and 6
• Continue interaction with ETSA01
• Meet project supervisor (if not yet happened)
Appointment by email: [email protected]
• Exercise 3
Next week
• Read textbook chapters 7, 8, and 9
• Continue interaction with ETSA01
• Finalise and submit Draft Report to [email protected]
• No exercise