
Page 1: Lecture 22


ECE 453 – CS 447 – SE 465 Software Testing & Quality Assurance

Lecture 22

Instructor: Paulo Alencar

Page 2: Lecture 22

Overview

• Software Quality Metrics
• Black Box Metrics
• White Box Metrics
• Development Estimates
• Maintenance Estimates

Page 3: Lecture 22

Software Metrics

• Black Box Metrics
  – Function Points
  – COCOMO

• White Box Metrics
  – LOC
  – Halstead’s Software Science
  – McCabe’s Cyclomatic Complexity
  – Information Flow Metric
  – Syntactic Interconnection

Page 4: Lecture 22

Software Metrics

• Software project planning consists of two primary tasks:
  – analysis
  – estimation (effort in programmer-months, development interval, staffing levels, testing, maintenance costs, etc.)

• Important: the most common cause of software development failure is poor (optimistic) planning.

Page 5: Lecture 22

Software Metrics

• Risks in estimation can be illustrated along three dimensions: relative size (with respect to past work), relative complexity, and degree of variation in product structure; the region where all three are low is the low-risk domain.

• A historical database must be kept to assist in future estimation.

Page 6: Lecture 22

Black Box Metrics

• We examine the following metrics and their potential use in software testing and maintenance:

• Function-Oriented Metrics
  – Feature Points
  – Function Points

• COCOMO

Page 7: Lecture 22

Function-Oriented Metrics

• Mainly used in business applications
• The focus is on program functionality
• A measure of the information domain plus a subjective assessment of complexity
• Most common are:
  – function points (FP) and
  – feature points

Reference: R.S. Pressman, “Software Engineering: A Practitioner’s Approach”, 3rd Edition, McGraw Hill, Chapter 2.

Page 8: Lecture 22

Function Points

• The function point metric is evaluated using the following table:

                                       Weighting Factor
  Parameter                   Count   Simple  Average  Complex    Weight
  # of user inputs             ___  ×    3       4        6     = ___
  # of user outputs            ___  ×    4       5        7     = ___
  # of user inquiries          ___  ×    3       4        6     = ___
  # of files                   ___  ×    7      10       15     = ___
  # of external interfaces     ___  ×    5       7       10     = ___
                                                  Total_weight  = ___

Page 9: Lecture 22

Function Points

• The following relationship is used to compute function points:

  FP = total_weight × [0.65 + 0.01 × SUM(Fi)]

Page 10: Lecture 22

Function Points

• where Fi (i = 1 to 14) are complexity adjustment values based on the questions below:

1. Reliable backup/recovery needed?
2. Any data communications needed?
3. Any distributed processing functions?
4. Performance critical?
5. Will the system run in an existing, heavily utilized operational environment?
6. Any on-line data entry?
7. Does on-line data entry need multiple screens/operations?

Page 11: Lecture 22

Function Points

8. Are master files updated on-line?
9. Are inputs, outputs, or queries complex?
10. Is internal processing complex?
11. Must code be reusable?
12. Are conversion and installation included in the design?
13. Are multiple installations in different organizations needed in the design?
14. Is the application designed to facilitate change and ease of use by the user?

Page 12: Lecture 22

Function Points

• Each of the Fi criteria is given a rating of 0 to 5:
  – No Influence = 0;  Incidental = 1;
  – Moderate = 2;  Average = 3;
  – Significant = 4;  Essential = 5

Page 13: Lecture 22

Function-Oriented Metrics

• Once function points are calculated, they are used in a manner analogous to LOC as a measure of software productivity, quality, and other attributes, e.g.:
  – productivity: FP/person-month
  – quality: faults/FP
  – cost: $$/FP
  – documentation: doc_pages/FP

Page 14: Lecture 22


Example: Function Points

Page 15: Lecture 22

Example: Your PBX project

• Total of FPs (total_weight) = 25
• F4 = 4, F10 = 4; all other Fi’s are set to 0. Sum of all Fi’s = 8.
• FP = 25 × (0.65 + 0.01 × 8) = 18.25
• Lines of code in C = 18.25 × 128 LOC/FP = 2336 LOC
• For this example, developers implemented their projects using about 2500 LOC, which is very close to the predicted value of 2336 LOC.
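The FP relationship and the PBX example above can be sketched in a few lines; the counts and adjustment values are the example's, and 128 LOC/FP is the C conversion factor used above:

```python
# Function point computation: FP = total_weight * (0.65 + 0.01 * sum(Fi))

def function_points(total_weight, adjustment_values):
    """adjustment_values: the 14 complexity adjustment ratings Fi, each 0..5."""
    return total_weight * (0.65 + 0.01 * sum(adjustment_values))

# PBX example: total_weight = 25, F4 = F10 = 4, all other Fi = 0.
fi = [0] * 14
fi[3] = 4   # F4: performance critical
fi[9] = 4   # F10: internal processing is complex
fp = function_points(25, fi)
loc_estimate = fp * 128   # ~128 LOC per FP for C
print(fp, loc_estimate)   # 18.25 2336.0
```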

Page 16: Lecture 22

Feature Point Metrics

• Feature points represent the same thing: the “functionality” delivered by the software.
• Measurement parameters and weights are:

  Number of user inputs:          weight = 4
  Number of user outputs:         weight = 5
  Number of user inquiries:       weight = 4
  Number of files:                weight = 7
  Number of external interfaces:  weight = 7
  Number of algorithms:           weight = 3

  Total_weight or total_count = ?

Page 17: Lecture 22

COCOMO

• The overall resources for a software project must be estimated:
  – development costs (i.e., programmer-months)
  – development interval
  – staffing levels
  – maintenance costs

• General approaches include:
  – expert judgment (e.g., past experience times a judgmental factor accounting for differences)
  – algorithmic (empirical models)

Page 18: Lecture 22

Empirical Estimation Models

• Several models exist, with varying success and ease/difficulty of use.
• We consider COCOMO (the Constructive Cost Model).
• Decompose the software into units small enough that their LOC can be estimated.
• Definitions:
  – KDSI: kilo delivered source instructions (statements), not including comments, test drivers, etc.
  – PM: person-months
• 3 levels of the COCOMO model: Basic, Intermediate, and Detailed (we will not cover the last one here)

Page 19: Lecture 22

COCOMO

Model 1: Basic

• Apply the following formulae to get rough estimates:
  – PM = 2.4 (KDSI)^1.05
  – TDEV = 2.5 (PM)^0.38  (chronological months)

Page 20: Lecture 22

[Figure: Effort estimates. Person-months (0 to 1000) vs. KDSI (0 to 120) for Simple, Intermediate, and Embedded project modes. ©Ian Sommerville 1995]

Page 21: Lecture 22

COCOMO examples

• Organic mode project, 32 KLOC:
  – PM = 2.4 (32)^1.05 = 91 person-months
  – TDEV = 2.5 (91)^0.38 = 14 months
  – N = 91/14 = 6.5 people

• Embedded mode project, 128 KLOC:
  – PM = 3.6 (128)^1.2 = 1216 person-months
  – TDEV = 2.5 (1216)^0.32 = 24 months
  – N = 1216/24 = 51 people

©Ian Sommerville 1995
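A minimal sketch of the Basic model that reproduces the two examples above. Only the organic coefficients (2.4, 1.05, 0.38) and the embedded ones used in the example appear on the slides; the semi-detached pair shown here is Boehm's published Basic-COCOMO value and is included for completeness:

```python
# Basic COCOMO: PM = a * KDSI^b, TDEV = 2.5 * PM^d (2.5 for all modes).
BASIC = {  # mode: (a, b, d)
    "organic":       (2.4, 1.05, 0.38),
    "semi-detached": (3.0, 1.12, 0.35),
    "embedded":      (3.6, 1.20, 0.32),
}

def basic_cocomo(kdsi, mode="organic"):
    a, b, d = BASIC[mode]
    pm = a * kdsi ** b           # effort, person-months
    tdev = 2.5 * pm ** d         # schedule, chronological months
    return pm, tdev, pm / tdev   # N = PM/TDEV = average staffing level

pm, tdev, n = basic_cocomo(32, "organic")    # ~91 PM, ~14 months, ~6.5 people
pm, tdev, n = basic_cocomo(128, "embedded")  # ~1216 PM, ~24 months, ~51 people
```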

Page 22: Lecture 22

COCOMO

Model 2: Intermediate

• Step I: obtain the nominal effort estimate as:
  – PM_NOM = a_i (KDSI)^b_i, where:

  Mode            a_i    b_i
  Organic         3.2    1.05
  Semi-detached   3.0    1.12
  Embedded        2.8    1.20

Page 23: Lecture 22

COCOMO

• Project modes:
  – organic: small s/w team; familiar, in-house environment; extensive experience; negotiable specifications.
  – embedded: firm, tight constraints (e.g., a hardware SRS); generally less-known territory.
  – semi-detached: in between.

• Step II: determine the effort multipliers:
  – from a table of 15 attributes in total, each rated on a 6-point scale

Page 24: Lecture 22

COCOMO

• Four attribute groups:

1. product attributes: required reliability, product complexity
2. computer attributes: constraints on execution time and primary memory, virtual machine environment volatility (h/w & s/w), turnaround time
3. personnel attributes: analyst and programmer capability, application experience, VM experience, PL experience
4. project attributes: modern programming practices, s/w tools used, realistic schedule

Page 25: Lecture 22

COCOMO

• A total of 15 attributes, each rated on a 6-point scale:
  very low - low - nominal - high - very high - extra high

• Use the COCOMO tables to compute the effort adjustment factor (EAF) as the product of the 15 attribute multipliers:

  EAF = PRODUCT(attribute_i), i = 1 to 15

• Step III: estimate the development effort as:

  PM_DEV = PM_NOM × EAF

Page 26: Lecture 22

COCOMO

• Step IV: estimate the development schedule as:

  TDEV = c_i (PM_DEV)^d_i, where:

  Mode            c_i    d_i
  Organic         2.5    0.38
  Semi-detached   2.5    0.35
  Embedded        2.5    0.32
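Steps I through IV can be strung together as a short sketch. The multiplier values passed in are hypothetical placeholders, since the actual 15-attribute rating tables are not reproduced on these slides:

```python
import math

# Intermediate COCOMO, steps I-IV.
PARAMS = {  # mode: (a_i, b_i, c_i, d_i)
    "organic":       (3.2, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (2.8, 1.20, 2.5, 0.32),
}

def intermediate_cocomo(kdsi, mode, multipliers):
    """multipliers: the 15 effort-multiplier values read from the rating tables."""
    a, b, c, d = PARAMS[mode]
    pm_nom = a * kdsi ** b         # step I:   nominal effort
    eaf = math.prod(multipliers)   # step II:  effort adjustment factor
    pm_dev = pm_nom * eaf          # step III: development effort
    tdev = c * pm_dev ** d         # step IV:  schedule
    return pm_dev, tdev

# Hypothetical ratings: two non-nominal attributes (values invented for
# illustration), all other multipliers nominal (1.0).
mults = [1.15, 1.07] + [1.0] * 13
pm_dev, tdev = intermediate_cocomo(32, "organic", mults)
```

With all multipliers at 1.0 the EAF is 1 and PM_DEV reduces to the nominal estimate, which is a handy sanity check.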

Page 27: Lecture 22

Estimating Software Maintenance Costs

• A basic question: how many programmers are needed to maintain a software system?

• A simple guesstimate may be:

  number of programmers = (LOC to be maintained) / (estimate of LOC per programmer)

Page 28: Lecture 22

Estimating Software Maintenance Costs

• Alternatives (by Boehm): define the ACT (annual change traffic) as the fraction of statements changed per year:

  ACT = (KLOC_added + KLOC_changed) / KLOC_total

• Level 1 model: PM_AM = ACT × PM_DEV

  where PM_AM is the annual maintenance effort in person-months

Page 29: Lecture 22

Estimating Software Maintenance Costs

• Level 2 model:

  PM_AM = ACT × PM_DEV × EAF_M

  where EAF_M may differ from the EAF used for development, since maintenance typically involves:
  • different personnel
  • different experience levels, motivation, etc.
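The two maintenance models can be sketched together; the KLOC figures, PM_DEV, and EAF_M values below are made-up illustrative numbers, not from the slides:

```python
# Boehm's maintenance-effort models based on annual change traffic (ACT).

def annual_change_traffic(kloc_added, kloc_changed, kloc_total):
    # ACT = fraction of the statements changed (added or modified) per year
    return (kloc_added + kloc_changed) / kloc_total

def maintenance_effort(act, pm_dev, eaf_m=1.0):
    # Level 1 model when eaf_m == 1.0; Level 2 model otherwise.
    return act * pm_dev * eaf_m

# Illustrative numbers: 4 KLOC added + 6 KLOC changed per year in a
# 100-KLOC system whose development took 91 person-months.
act = annual_change_traffic(kloc_added=4, kloc_changed=6, kloc_total=100)  # 0.1
pm_l1 = maintenance_effort(act, pm_dev=91)             # Level 1: ~9.1 PM/year
pm_l2 = maintenance_effort(act, pm_dev=91, eaf_m=1.3)  # Level 2: costlier staff mix
```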

Page 30: Lecture 22

Estimating Software Maintenance Costs

• Factors which influence software productivity (and also maintenance):

1. People Factors: the size and expertise of the development (maintenance) organization.
2. Problem Factors: the complexity of the problem and the number of changes to design constraints or requirements.

Page 31: Lecture 22

Estimating Software Maintenance Costs

3. Process Factors: the analysis, design, and test techniques, languages, CASE tools, etc.
4. Product Factors: the required reliability and performance of the system.
5. Resource Factors: the availability of CASE tools, hardware, and software resources.