
Definition and validation of requirements management measures

Annabella Loconsole

Ph.D. Thesis, 2007
UMINF-07.22

Department of Computing Science
Umeå University

SE-90187 Umeå, Sweden

Submitted to the Department of Computing Science at Umeå University in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computing Science. Defense of the dissertation: Friday, January 25, 2008, at 13.15 in room MA121, MIT building. Faculty opponent: Professor Björn Regnell, Department of Communication Systems at the Technical Faculty of Lund University, Lund, Sweden.

Akademisk avhandling som med tillstånd av rektorsämbetet vid Umeå universitet framläggs till offentlig granskning fredagen den 25 januari 2008 klockan 13.15 i sal MA121, MIT-huset, för avläggande av filosofie doktorsexamen i datavetenskap. Fakultetsopponent är professor Björn Regnell, Lund University, Sweden.


Abstract

The quality of software systems depends on early activities in the software development process, of which the management of requirements is one. When requirements are not managed well, a project can fail or become more costly than intended, and the quality of the software developed can decrease. Among the requirements management practices, it is particularly important to quantify and predict requirements volatility, i.e., how much the requirements are likely to change over time.

Software measures can help in quantifying and predicting requirements attributes like volatility. However, few measures have yet been defined, because the early phases are hard to formalise. Furthermore, very few requirements measures have been validated, which would be needed in order to demonstrate that they are useful. The approach to requirements management in this thesis is quantitative, i.e., to monitor the requirements management activities and requirements volatility through software measurement.

In this thesis, a set of 45 requirements management measures is presented. The measures were defined using the goal question metrics framework for the two predefined goals of the requirements management key process area of the capability maturity model for software. A subset of these measures was validated theoretically and empirically in four case studies. Furthermore, an analysis of validated measures in the literature was performed, showing that there is a lack of validated process, project, and requirements measures in software engineering.

The studies presented in this thesis show that size measures are good estimators of requirements volatility. The important result is that size is relevant: increasing the size of a requirements document implies that the number of changes to requirements increases as well. Furthermore, subjective estimations of volatility were found to be inaccurate assessors of requirements volatility.

These results suggest that practitioners should complement subjective estimations of volatility with objective ones. Requirements engineers and project managers will benefit from the research presented in this thesis because the measures defined, which proved to be predictors of volatility, can help in understanding how much requirements will change. By deploying the measures, practitioners would be prepared for possible changes in the schedule and cost of a project, giving them the possibility of creating alternative plans, new cost estimates, and new software development schedules.

Keywords: Requirement, Measures, Volatility, Theoretical Validation, Empirical Validation, Prediction Model, Correlational Study, Subjective Estimation.

Document name: Doctoral dissertation

Language: English

Date of issue: January 25th, 2008

ISBN: 978-91-7264-468-7

ISSN: 0348-0542

UMINF: 07.22

Department: Computing Science

School: Umeå University

Address: SE-90187 Umeå, Sweden


Definition and validation of requirements management measures

Annabella Loconsole

PhD Thesis

Department of Computing Science
Umeå University, Sweden

UMINF 07.22


Copyright © 2007 by Annabella Loconsole.
Except Paper 1 © 2001 Shaker. Reprinted, with permission.

Except Paper 4 © 2005 IEEE. Reprinted, with permission.

Except Paper 7 © 2007 IEEE. Reprinted, with permission.

ISBN 978-91-7264-468-7
ISSN 0348-0542
UMINF 07.22

Printed by Arkitektkopia.


Abstract

The quality of software systems depends on early activities in the software development process, of which the management of requirements is one. When requirements are not managed well, a project can fail or become more costly than intended, and the quality of the software developed can decrease. Among the requirements management practices, it is particularly important to quantify and predict requirements volatility, i.e., how much the requirements are likely to change over time.

Software measures can help in quantifying and predicting requirements attributes like volatility. However, few measures have yet been defined, because the early phases are hard to formalise. Furthermore, very few requirements measures have been validated, which would be needed in order to demonstrate that they are useful. The approach to requirements management in this thesis is quantitative, i.e., to monitor the requirements management activities and requirements volatility through software measurement.

In this thesis, a set of 45 requirements management measures is presented. The measures were defined using the goal question metrics framework for the two predefined goals of the requirements management key process area of the capability maturity model for software. A subset of these measures was validated theoretically and empirically in four case studies. Furthermore, an analysis of validated measures in the literature was performed, showing that there is a lack of validated process, project, and requirements measures in software engineering.

The studies presented in this thesis show that size measures are good estimators of requirements volatility. The important result is that size is relevant: increasing the size of a requirements document implies that the number of changes to requirements increases as well. Furthermore, subjective estimations of volatility were found to be inaccurate assessors of requirements volatility.

These results suggest that practitioners should complement subjective estimations of volatility with objective ones. Requirements engineers and project managers will benefit from the research presented in this thesis because the measures defined, which proved to be predictors of volatility, can help in understanding how much requirements will change. By deploying the measures, practitioners would be prepared for possible changes in the schedule and cost of a project, giving them the possibility of creating alternative plans, new cost estimates, and new software development schedules.


Preface

This thesis is based on work carried out at the Department of Computing Science, Umeå University. It contains an introduction and seven papers, listed below. The introduction describes the background, the research methods, the contributions, and a summary of the papers.

Paper 1
Loconsole, A. (2001) Measuring the Requirements Management Key Process Area — Application of the Goal Question Metric to the Requirements Management Key Process Area of the Capability Maturity Model, in Proceedings of ESCOM — European Software Control and Metrics Conference, April 2001, London, UK. Shaker Publishing, pp. 67–76.

Paper 2
Loconsole, A. and Börstler, J. (2003) Theoretical Validation and Case Study of Requirements Management Measures, Department of Computing Science, Umeå University, Technical Report UMINF 03.02, July 2003, 20 pages.

Paper 3
Loconsole, A. and Börstler, J. (2004) A Comparison of Two Academic Case Studies on Cost Estimation of Changes to Requirements — Preliminary Results, in Proceedings of SMEF — Software Measurement European Forum, 28–30 January 2004, Rome, Italy, pp. 226–236.

Paper 4
Loconsole, A. and Börstler, J. (2005) An Industrial Case Study on Requirements Volatility Measures, in Proceedings of APSEC — 12th IEEE Asia Pacific Software Engineering Conference, 15–17 December 2005, Taipei, Taiwan, IEEE Computer Press, pp. 249–256.

Paper 5
Loconsole, A. and Börstler, J. (2007) Construction and Validation of Prediction Models for Changes to Requirements, Department of Computing Science, Umeå University, Technical Report UMINF 07.03, 26 pages.

Paper 6
Loconsole, A. and Börstler, J. (2007) A Correlational Study on Four Size Measures as Predictors of Requirements Volatility, submitted to Journal of Software Measurement.


Paper 7
Loconsole, A. and Börstler, J. (2007) Are Size Measures Better than Expert Opinion? An Industrial Case Study on Requirements Volatility, in Proceedings of the 14th IEEE Asia Pacific Conference on Software Engineering, Dec. 5–7, 2007, Nagoya, Japan, IEEE Computer Press, pp. 238–245.

In addition to the publications included in the thesis, other publications have been produced in relation to the PhD studies. During the PhD studies, the author has presented her work at seven international conferences.

• Gorschek, T., Svahnberg, M., Borg, A., Loconsole, A., Börstler, J., Sandahl, K., and Eriksson, M. (2007). A Controlled Empirical Evaluation of a Requirements Abstraction Model. Information and Software Technology, 49, 7 (Jul. 2007).

• Loconsole, A. (2006) Definition and Validation of Requirements Volatility Measures, in Proceedings of SERPS'06 — Sixth Conference on Software Engineering Research and Practice in Sweden, Umeå University (Oct. 18–19, 2006).

• Loconsole, A. (2004) Empirical Studies on Requirement Management Activities, in Proceedings of ICSE '04 — 26th IEEE/ACM International Conference on Software Engineering, Edinburgh, Scotland, UK (May 2004).

• Loconsole, A. (2002) Non-Empirical Validation of Requirements Management Measures, in Proceedings of WoSQ — Workshop on Software Quality at ICSE, International Conference on Software Engineering, Orlando, FL (May 2002).

• Loconsole, A., Rodriguez, D., Börstler, J., and Harrison, R. (2001) Report on Metrics 2001: The Science & Practice of Software Metrics Conference. Software Engineering Notes — ACM SIGSOFT Newsletter, 26, 6 (Nov. 2001).

• Loconsole, A. (2001) Measuring the Requirements Management Process Area. Abstract in Proceedings of the Thirty-Second SIGCSE Technical Symposium on Computer Science Education, Charlotte, NC (Feb. 21–25, 2001), ACM Press.


Acknowledgements

Throughout my PhD studies, many people have helped me in different ways in the writing of this thesis. Without their support and contributions, this thesis would not exist.

First of all, I would like to thank my supervisor Jürgen Börstler: thanks for accepting me as a PhD student, for giving me the freedom to choose my own topic, and for your guidance and positive attitude in guiding me. Thank you also for pushing me the many times when I wanted to give up! Thanks also to the “docent group”, in particular Frank Drewes and Bo Kågström for their constructive comments on the thesis, and Lars-Erik Janlert for reviewing two of the papers in this thesis; all of you have contributed to improving my thesis considerably. Thanks also to Thomas P. for carefully reading and commenting on the whole thesis, and for reviewing my papers during all this time!

Outside Umeå University, Kristian Sandahl: thank you for reading my papers and giving me suggestions for how to proceed in my PhD studies. Marcela Genero, thank you for sending me your PhD thesis, which has been a guide for writing mine. James Bieman, for email discussions on theoretical and empirical validation. Nur Nurmuliani, for email discussions on requirements volatility. Thanks also to Kjell Borg, Magnus Eriksson, and all the people at Hägglunds for letting me perform empirical studies in a real industrial environment (which is often so difficult to find for academics!).

Passing to my private life, there are many friends who have helped me to “survive” the Swedish winters, so long, dark, and cold! Maria Angeles, thank you for the breaks spent talking, over and over, about the cultural differences between the south and the north of Europe and the difficulties for us in adapting to the Swedish culture!

Mike: thank you for all the discussions about PhD studies which annoyed Anna and Thomas! Thank you both Mike and Anna for all the weekends spent together drinking beer and trying to “solve” world problems! You made our weekends something to look forward to!

One of the things I have learned in Sweden is to do a lot of sports, which for me has mainly meant playing beach volley. Therefore I would like to thank all the beach players (more than 30!) who have shared the fun of running on the sand and letting out all the stress and anger by hitting a ball! In particular my beach enemies (;-) Thomas H. and Harriet: it is really, really fun to play with you!

Daniel K., Ana, Francisco, Alison, Helena, Erik E., Claude, and Fabien, thank you for your company, the dinners, the beers together, and the dancing! Thanks Erik B., Ola R., Thomas N., Johanna, and Daniel S. for being my friends and finally making me feel integrated in Swedish society! Dipak, a nice person who stops by my room just to say “hello, how are you!” Thank you for being close to me in a dark period of my PhD studies, when I seriously considered giving up!


It is nice to go to work and meet people who smile and say hello in the corridor! Therefore I would like to thank all the people who make the working environment nicer by showing happiness: Yvonne, Jeanette, Anne-Lie, Inger, Marie, Lena (P., H., and H.), Mikael H., Jonny, Stefan H. and J., .. (I am going to forget someone here!) Thank you all for your smiles :-) It makes me happy! In general, thanks to all who have been social with me!

On the family side, I want to thank my cousin Gina (cuGina :-)) for all the Skype sessions! You have filled many of the lonely days when Thomas was away. Thank you! Thanks to Gerti and Stig for making me feel part of the family and for making me talk Swedish! My Swedish has improved a lot just because of this! My parents, for accepting my living abroad, and my mother for never tiring of talking to me every day on the phone! I could not have survived here without the daily phone calls! Giancarlo, Betta, and my three nephews Danilo, Nicolo’, and Alessio: thank you for not being afraid of the cold Swedish winter and for being ready to come here in January for my defence! I am looking forward to having you here!

Finally, you! You have been my supervisor, my beach partner and teacher, my life partner! Thank you for believing in me; this has given me the strength to continue the many times I doubted my capabilities (and this happens once a week :-/). You were there pushing and saying: you can do it! Without you I wouldn’t have managed! Thanks!


Table of contents

1 Introduction .......... 1
1.1 Technical problem .......... 1
1.2 Goals .......... 2
1.3 Contributions .......... 3
1.4 Thesis outline .......... 4

2 Empirical software engineering and measurement .......... 5
2.1 Introduction .......... 5
2.2 Empirical software engineering .......... 6
2.3 Software measurement .......... 9
2.4 Theoretical validation .......... 10
2.5 Empirical validation .......... 14
2.6 Discussion .......... 15

3 Requirements engineering .......... 17
3.1 Introduction .......... 17
3.2 Requirements management .......... 18
3.3 Requirements volatility .......... 21
3.4 Measuring requirements size .......... 23

4 Review of validated measures .......... 25
4.1 Introduction .......... 25
4.2 Publication selection .......... 26
4.3 Description of the review .......... 26
4.4 Discussion .......... 32

5 Summary of papers .......... 35
5.1 Paper 1 .......... 35
5.2 Paper 2 .......... 36
5.3 Paper 3 .......... 36
5.4 Paper 4 .......... 38
5.5 Paper 5 .......... 39
5.6 Paper 6 .......... 40
5.7 Paper 7 .......... 40

6 Conclusions .......... 41
6.1 Summary .......... 41
6.2 Have the goals been reached? .......... 44
6.3 Limitations, challenges, and open issues .......... 46
6.4 Using the requirements management measures .......... 46


6.5 Future directions.................................................................................... 47

References ........................................................................................ 49

Appendix A ........................................................................................ 71

Some abbreviations used.................................................................... 91

Paper 1: Measuring the requirements management key process area .......... 93
1. Introduction .......... 94
2. The Requirements management KPA of the capability maturity model .......... 95
3. The Goal/Question/Metric paradigm .......... 97
4. Application of the Goal/Question/Metrics to the CMM .......... 98
5. Measures for the goals of the requirements management KPA .......... 102
6. Testing the measures in a company .......... 103
7. Concluding remarks and future directions .......... 103
8. Acknowledgements .......... 104
9. References .......... 104

Paper 2: Theoretical validation and case study of requirements management measures .......... 107
1. Introduction .......... 108
2. Software measurement .......... 110
3. Measures validation .......... 111
4. Theoretical validation of requirements management measures .......... 113
5. Case study in a SE course .......... 122
6. Discussion and conclusion .......... 128
7. Future work .......... 129
8. Acknowledgement .......... 130
9. References .......... 130
10. Appendix .......... 132

Paper 3: Preliminary results of two academic case studies on cost estimation of changes to requirements .......... 139
1. Introduction .......... 140
2. Case studies overview .......... 141
3. Definition of the case studies .......... 141
4. Case studies context and environment .......... 142
5. Limitations of the case studies .......... 143
6. Hypotheses and plans .......... 144
7. Results of case study one .......... 146
8. Partial results of case study two .......... 148
9. Comparison of the two case studies .......... 149
10. Conclusion and future work .......... 150


11. Acknowledgement .......... 151
12. References .......... 151

Paper 4: An industrial case study on requirements volatility measures .......... 153
1. Introduction .......... 154
2. Case study description .......... 156
3. Conclusions .......... 163
4. References .......... 164
5. Appendices .......... 168

Paper 5: Construction and validation of prediction models for number of changes to requirements .......... 177
1. Introduction .......... 178
2. Related work .......... 179
3. Background .......... 180
4. Description of the correlational study .......... 181
5. Construction of prediction models .......... 185
6. Discussion and conclusions .......... 195
7. References .......... 197
8. Measurement rules .......... 200

Paper 6: A correlational study on four size measures as predictors of requirements volatility .......... 201
1. Introduction .......... 202
2. Related work .......... 203
3. Background .......... 203
4. Description of the present correlational study .......... 204
5. Construction of prediction models .......... 209
6. Prediction model validation .......... 216
7. Threats to validity .......... 216
8. Discussions and conclusions .......... 217
9. References .......... 219
10. Measurement rules .......... 222

Paper 7: Are size measures better than expert opinion? An industrial case study on requirements volatility .......... 223
1. Introduction .......... 224
2. Empirical study description .......... 224
3. Results .......... 229
4. Validity evaluation .......... 231
5. Comparison of the two studies .......... 232
6. Conclusions .......... 234
7. References .......... 235
8. Instruments .......... 237


1 Introduction

This chapter presents a summary of the thesis, the underlying problem for the studies, the goals, and the contributions.

1.1 Technical problem

Software systems are becoming increasingly complex and large. In the software engineering literature, there are several examples of software project failures, such as late delivery, budget overruns, and unacceptably low quality [82]. Customers are often unhappy with the results of software products. Thus, building high quality software systems within schedule and budget is a challenge. Software development organisations are therefore recommended to improve the quality of the software development activities (software process improvement or SPI). As pointed out by Morasca [183], the phases of the software process which start early, like project management and requirements engineering, are crucial in order to construct a high quality software system.

Requirements engineering is the phase of the development process where requirements are elicited, analysed, documented, and validated [152]. Unfortunately, this field is still not mature. Many studies have shown that software projects often fail due to problems with requirements [35, 152, 241]. Frequently, analysts do not know how to write good requirements specifications. The result is incomplete, wrong, ambiguous, or badly phrased requirements. According to Boehm, “errors are most frequent during the requirements and design activities and are the more expensive the later they are removed” [34]. The high cost of errors removed in late development phases has been shown in several studies performed in the 1970s and 1980s [31]. Furthermore, requirements are prone to change, and the changes can come unexpectedly, even during the late phases of the software development process. This may prevent projects from being successful and meeting the customer expectations [212]. Some studies [10, 82] show that only one third of the software projects today satisfy budget constraints and are delivered on time.

An important part of the requirements engineering phase is the management of changing requirements (or requirements management, RM), a continuous process performed in parallel with other requirements engineering activities that proceeds through all the phases of software development and long after delivery of the product [159]. It is the process of eliciting, documenting, organizing, and tracking changes to the system requirements and communicating this information to the project team [80] (see Chapter 3). Tracking and monitoring changing requirements makes practitioners aware that requirements will change. Quantifying and predicting the amount of changes to requirements (requirements volatility) may give indications that a project is going off track. Predicting volatility helps in creating better project plans, by estimating costs for changes and allocating resources for future changes to requirements. RM affects the quality of the project and of the product positively, but it is often ignored or not done properly [234, 241]. When requirements are not managed well, a project can become more costly than intended, produce software of low quality, or fail altogether.

The approach to requirements management suggested in this thesis is quantitative, i.e., to monitor the RM activities and requirements volatility through software measurement. This approach is suitable for both large and small companies. Measures¹ for the early development phases are the most useful, since the earliest phases have the largest impact on the software development. Measuring requirements volatility is very important because, by knowing how much the requirements will change, project managers and other stakeholders can take decisions in order to minimise the impact of volatility on the quality of the software project. It is one out of nine requirements management good practices suggested by Wiegers [246].
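To make the quantitative approach more concrete, the short sketch below computes a simple changes-per-requirement ratio from a change log. It is only an illustration under assumed inputs: the RequirementChange record, its fields, and the ratio itself are hypothetical examples of a volatility measure, not the measures defined in this thesis.

from dataclasses import dataclass
from typing import List

@dataclass
class RequirementChange:
    req_id: str   # identifier of the requirement that changed (hypothetical field)
    kind: str     # hypothetical categories: "added", "deleted", or "modified"

def volatility(changes: List[RequirementChange], total_requirements: int) -> float:
    # Changes per requirement over the observed period (an assumed, simplified definition).
    if total_requirements == 0:
        return 0.0
    return len(changes) / total_requirements

# Usage with invented data: 3 recorded changes against 30 requirements.
log = [
    RequirementChange("R1", "modified"),
    RequirementChange("R7", "added"),
    RequirementChange("R3", "deleted"),
]
print(f"volatility = {volatility(log, total_requirements=30):.2f}")   # prints: volatility = 0.10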

Few measures for the requirements engineering phase have been defined because the early phases are seldom formal. As suggested in [154], there is a lack of measurement for the management of evolving requirements. Furthermore, as shown in Chapter 4, very few RM measures have been validated. Measures need to be validated in order to demonstrate that they are useful, i.e., related to some important quality attribute. Several validation methods have been presented in the literature, but there is not yet a standardised, accepted way of validating measures. Furthermore, practitioners are not sensitive to the importance of measures validation [200].

1. In this thesis the term measure is preferred over metric, because a metric is associated with the distance between two points in measurement theory.

1.2 Goals

The overall goal of the research presented in this thesis is to improve the management of requirements. As discussed in Section 1.1, software measurement could help to achieve that goal. More specifically, the aim is to provide practitioners with a set of useful measures for monitoring and tracking requirements. A measure is useful if it is connected to some important attribute. One such important attribute is requirements volatility [246]. From the overall goal, the following sub-goals can be identified:

• investigate existing measures for managing requirements and their validity;

• provide practitioners with a broad set of requirements management measures that satisfy basic measurement principles;

• show, through empirical validation, that the defined measures are useful to predict volatility and that they are better assessors² of volatility than subjective estimations.

It is important to stress that the goal is not to decrease the number of changes to requirements; this is not always possible because change is inherent in and intrinsic to the nature of the development processes used and of the software product [246]. The work described in this thesis belongs to the intersection of four research areas (see Figure 1). Other areas, like software maintenance, configuration management, change management, and project planning and tracking, would also benefit from this research.

1.3 Contributions

Contribution as a whole

The contribution of this thesis to the areas in Figure 1 consists of a broad set of requirements management measures. Four measures are shown to be predictors of requirements volatility. To the best of our knowledge, this is a novel result because, as will be shown in Chapter 4, no requirements management measure has ever before been shown to be a predictor of requirements volatility. The research methods used to obtain these results are described in Chapter 2.

2. A measure is considered to be an assessor if it is highly correlated to an attribute of an object.

FIGURE 1. Application areas relevant for the research described in this thesis: Software Process Improvement, Project Management, Requirements Engineering, and Software Measurement.


Contributions in particular

A broad set of requirements management measures

The first contribution of this research is a set of 45 general software measures for the management of requirements (Paper 1). This set constitutes a “pick list” from which a subset of software measures can be selected depending on the actual enterprise and domain.

Twenty theoretically validated measures

Twenty of the 45 measures are validated in Paper 2 and Paper 7 according to the fundamental principles of measurement theory [260].

Four empirically validated measures

Four requirements size measures of the twenty theoretically validated measures are empirically validated in four industrial studies (Papers 4–7). The results from these studies support the central hypothesis of this thesis, namely that the requirements size measures are good predictors of volatility.
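As a rough illustration of the kind of analysis behind such an empirical validation, the sketch below computes a correlation coefficient between an invented size measure and an invented count of changes per requirement. The data values and the use of a plain Pearson coefficient are assumptions made only for illustration; the sketch does not reproduce the statistical procedures or the data of Papers 4–7.

from math import sqrt

def pearson(xs, ys):
    # Pearson correlation coefficient, computed directly from its definition.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented per-requirement data: a size measure (lines of requirement text)
# and the number of changes recorded for the same requirement.
size_in_lines   = [4, 7, 12, 3, 9, 15, 6, 11]
changes_counted = [1, 2,  4, 0, 3,  5, 1,  3]

print(f"Pearson r = {pearson(size_in_lines, changes_counted):.2f}")   # strongly positive for this toy data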

A literature overview of validated measures in software engineering

An extensive literature study and overview of validated software measures has been performed and is presented in Chapter 4. For each study, key data are listed, such as the measures, the attributes connected to them, the methods used to validate the measures, and the environment where the validation is performed.

1.4 Thesis outline

The remaining chapters of the thesis preamble are organised as follows:

• Chapter 2 presents an introduction to software engineering, empirical research methods in software engineering, and measurement theory.
• Chapter 3 provides basic information in the field of requirements engineering, requirements management, requirements volatility, and empirical research in requirements engineering.
• Chapter 4 describes a literature review of validated measures.
• Chapter 5 presents a summary of the papers included in this thesis.
• Chapter 6 provides conclusions and future directions of research.


2 Empirical software engineering and measurement

Empirical software engineering and software measurement are the foundations of the research presented in this thesis. After a brief introduction to software engineering, this chapter describes the fields of empirical software engineering and software measurement. In particular, methods and previous research in software measure validation are described.

2.1 Introduction

Software engineering is defined in the IEEE standard glossary as the “application of a systematic, disciplined, quantifiable approach to development, operation, and maintenance of software” [140]. In other words, software engineering is the application of scientific methods and theories to practical software problems. Since it is still an immature field, it is debated whether it should be considered engineering, a craft, an art, or science. Many researchers argue that a lot of software is crafted and not all developers are software engineers [97, 211, 242]. Software engineering lacks important elements such as evaluation, and it is not always scientific in its approaches [242, 254]. As in other young fields, there is a poor level of standardisation of basic concepts in software engineering [251]. Glass et al. [118] analysed a set of 369 papers published in six major computing journals over the years 1996–2001. They found that software engineering research is characterised by considerable diversity in topic (for instance problem-solving concepts, computer concepts, and others), but less breadth in terms of research approach (descriptive, evaluative, or formulative) and methodology (action research, literature review, case studies, etc.).

Many authors suggest that software engineering researchers should use industry-based laboratories in order to observe, build, and analyse models [18, 211, 230]. The need is reciprocal; industries need to build quality systems, therefore researchers and practitioners should collaborate closely to support each other. However, as argued by Zelkowitz et al. [255], the two communities are too seldom in contact. Furthermore, since industries are different from each other, they can provide only local models. The ideal situation for researchers would be to conduct many experiments in different environments in order to produce general results which can be applied in many contexts.

2.2 Empirical software engineering

Software engineers often have to take important decisions, for instance the choice of the best technique to find faults, the best tool to support configuration management, or the best requirements specification language. It is common to rely on the opinion of experienced “experts” in order to take these decisions. Ideally, instead, decisions should be made on the basis of scientifically generated objective knowledge, such as results from empirical studies.

An empirical study is an activity which involves collecting data about some aspects of software development, analysing the data, and drawing conclusions from the analysis. This is done for the purpose of discovering something unknown, testing a hypothesis, and/or constructing and validating quality models [22].

Empirical studies can be quantitative or qualitative. Quantitative studies mainly gather and interpret data like numbers or information on some finite scale, while qualitative studies usually collect and interpret information like text and pictures. Empirical studies can also be classified as correlational (investigating the relationship between variables), like [164, 167], based on the kind of subjects used (novices, experts), on where they are done (in a real environment or in a laboratory) [254], and on the amount of control (observational or controlled experiment) [18]. A common classification of empirical studies distinguishes between surveys, case studies, and experiments [249].

Survey

A survey is an investigation often performed in retrospect, for example to study a tool or technique that has been in use for a while. Interviews and questionnaires are typically used to collect qualitative and quantitative data. The results from the survey are then analysed to derive descriptive or explanatory conclusions. Surveys are very common within the social sciences. It is possible to compare surveys to similar ones, but surveys provide no control of the execution or the measurement of the study. Surveys could, for example, be used for opinion polls and market research.


Case study

A case study is an observational study. It is performed by observing an ongoing project or activity. Based on the data collected, statistical analysis can be carried out. Performing a case study requires the same steps as an experiment. The level of control is higher compared to surveys but lower than in experiments (see Table 1).

Case studies are valuable because they incorporate qualities that an experiment cannot easily investigate, for example scale, complexity, unpredictability, and dynamics. However, a small or simplified case study is seldom a good instrument for discovering software engineering principles and techniques, because changing the context of the study may affect the result.

Experiment

Experiments are formal, rigorous, controlled investigations that are performed when the researchers want to evaluate a cause-effect relationship. They are normally performed in controlled environments: there is high control over the variables involved in the study, and it is possible to manipulate these variables directly. The effect of the manipulation is measured, and statistical analysis can be performed. An experiment can be carried out under controlled conditions in a laboratory which simulates an environment (off-line) or in a real, non-simulated environment (on-line). Control over the variables is lower in an on-line experiment, because it may only be possible to control a few factors while others may be impossible to control. The objective is to manipulate one or more variables and control the other variables. In an experiment, a state variable (a variable that can be controlled and changed) can assume different values, while in a case study it assumes only one value, governed by the actual project under study. An experiment consists of several steps [249]:

1. Definition: the problem and goals of the experiment are defined.

2. Planning: the context, design, subjects, objects, and instruments are determined.

3. Operation: data is collected in this phase.

4. Analysis and interpretation: the data collected is analysed and interpreted.

5. Presentation and package: the results of the experiment are presented.

TABLE 1: Empirical strategies

Type of empirical study | Quantitative/qualitative | Level of control                            | How data is collected
Survey                  | Both                     | No manipulation of variables                | Usually in retrospect
Case study              | Both                     | Little manipulation of variables            | By observations
Experiment              | Quantitative             | High (objective is to manipulate variables) | In a controlled environment


Problems and solutions in empirical software engineering

In general, empirical software engineering is needed to objectively evaluate different methods, theories, and tools. Since it is based on verifiable facts of evidence, empirical research is closer to the real world than theoretical or analytical research [124]. As stressed by many researchers [33, 201, 243], experimentation in software engineering is an important tool to validate theoretical principles and best practices.

However, the amount of empirical research is small compared to other fields, and the way it is conducted is considered poor [36, 143, 231, 242]. Tichy et al. analysed 400 research papers in computer science published over the years 1991–1994, finding that the low amount of validated results indicates a serious weakness in computer science and in particular in software engineering [242]. Wallace and Zelkowitz analysed 612 papers published in IEEE TSE, IEEE Software, and the conference ICSE in the years 1985, 1990, and 1995. They found that 20% did not contain empirical validation (compared to 5–10% in other fields like physics, psychology, anthropology, and behaviour therapy) and one third contained only a weak form of validation [253, 254]. The comparison was based on their own taxonomy of empirical studies. However, they found that the trends from 1985 to 1995 showed improvement in the software engineering field.

Often, computer scientists tend to adhere to an ad-hoc evaluation of their research [97] instead of evaluating it through an empirical study. This is due to the difficulties in performing empirical studies, which are usually complex and time consuming. The software built is always different from the previous one. Therefore, researchers are often unable to collect an amount of data large enough to make the statistical tests used to analyse the data sufficiently reliable. Another reason that makes empirical studies difficult is the large number of context variables. Goals, contexts, and processes are usually peculiar to the study; they are hard to control and therefore limit the generalisability of the results. Therefore, replicating these studies is important in order to draw general conclusions.

Another difficult aspect of empirical software engineering is finding subjects willing to participate in the studies. People with knowledge in software engineering form a very small percentage of the human population, and only a small number of them are available for experimentation [22]. The consequence is studies with few subjects, and consequently the reliability of the statistical tests is low. For these reasons, experimenters often choose students as subjects. A study performed by Höst et al. [135] shows that there are no significant differences between students and professional subjects in many contexts. However, there are only few studies showing the differences between students and professionals.

Researchers often use studies performed in academic environments to pilot experiments before they are carried out in industrial environments. Reports on these studies usually focus on the results obtained and issues such as their external validity. In contrast, as argued by Port and Klappholz [210], the pedagogical challenges and value of these studies are hardly ever stressed.


In order to handle the difficulties described above and to improve empirical research in software engineering, many guidelines are available in the literature. Basic notions of empirical software engineering can be found in [142, 147, 249], and guidelines are described in [19, 20, 145, 197, 199]. Furthermore, Basili et al. [18] propose to work with the Quality Improvement Paradigm (QIP), which addresses the role of experimentation and continuous process improvement in the context of industrial development. The QIP does not require any specific method or approach to be used. However, it very strongly recommends the use of Goal Question Metrics (GQM) in order to define software measures.

2.3 Software measurement

The role of experimentalists is to observe and measure. Therefore, software measurement is essential in empirical software engineering. Software measurement allows for defining the degree of success or failure quantitatively, for a product, a process, or a person. It helps to decide on management and technical issues, to identify trends, and to estimate quantitatively and meaningfully. Even when a project runs without problems, measurement is necessary because it allows quantifying the “health” of the project [98].

When a manager decides to measure a project or to perform an empirical study, she has to define software measures and a model associated with them. The model has to describe the entity and attributes being measured, the domain and range of the measures, and the relationships among the measures. She also needs to know whether the model built is made for assessing or predicting product and process characteristics [200]. In order to construct this model, frameworks are needed to help in defining the measures. Furthermore, procedures are needed for validating the defined measures.

Measures definition

In order to define measures, it is recommended to use a measurement framework to systematically define measures suiting a particular purpose. Lott [170] describes seven goal-oriented frameworks to define measures. Among them, only the Goal Question Metrics (GQM) framework has become widely used and well respected [21, 103, 124, 232]. Extensions of the GQM method are available in the literature [194]. However, the extensions are not as popular as the original GQM. In this thesis, GQM has been used in order to define measures (see Paper 1).

Goal Question Metrics

GQM is a framework which assists software engineers in defining goals and measures and in interpreting data. It is based on the idea that measurement should be goal-oriented, i.e., all data collection in a measurement program should be based on an explicitly documented rationale [22]. There are no specified goals, but rather a structure for defining goals and refining them into a set of quantifiable questions that imply a specific set of measures and data to be collected in order to achieve these goals. GQM consists of three steps:


• Specify a set of goals based on the needs of the organisation and its projects. Determine what the organisation wants to improve or learn. The process of goal definition is supported by templates like the ones defined by Basili and Rombach [21]. By using these templates, it is possible to define the goals in terms of purpose, perspective, and environment. The identification of subgoals, entities, and attributes related to the subgoals is made in this step.

• Generate a set of quantifiable questions. Business goals are translated into operational statements with a measurement focus. Basili and Rombach [21] provide different sets of guidelines to classify questions as product related or process related. The same questions can be defined to support data interpretation of multiple goals.

• Define sets of measures that provide the quantitative information needed to answer the quantifiable questions. In this step, the measures suitable to give information to answer the questions are identified and related to each question. Generally, each measure can supply information to answer several questions, and sometimes a combination of measures is needed to make up the answer to a question.

Once these steps have been carried out, data are collected and interpreted to produce answers to the quantifiable questions defined in order to fulfil the business goals of the organisation.
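As a concrete, though hypothetical, illustration of the goal-question-measure structure just described, the sketch below encodes one requirements-management goal with two questions and a few candidate measures. The goal, questions, and measure names are invented examples; they are not the predefined goals or the 45 measures of Paper 1.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Question:
    text: str
    measures: List[str] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str
    perspective: str
    environment: str
    questions: List[Question] = field(default_factory=list)

goal = Goal(
    purpose="monitor changes to requirements",
    perspective="from the project manager's point of view",
    environment="in a small development project",
    questions=[
        Question("How stable are the requirements?",
                 measures=["number of changes per requirement", "number of requirements"]),
        Question("How large is the requirements document?",
                 measures=["number of lines", "number of words"]),
    ],
)

# Each question is answered by one or more measures.
for question in goal.questions:
    print(question.text, "->", ", ".join(question.measures))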

Measurement theory

Measurement theory addresses the issue of whether measures are valid with respect to the attributes they are supposed to measure. It also gives clear definitions of terminology, criteria for experimentation, conditions for the validation of measures, foundations of prediction models, empirical properties of measures, and criteria for measurement scales [106].

To assure that measures actually measure an intended property, their practical utility should be shown empirically [98, 146, 227]. The lack of validation of measures has led to a lack of confidence in software measurement, and this is considered one of the reasons for the poor industrial acceptance of software measurement [28]. There are two different kinds of validation that should be performed for a measure to be valid: theoretical and empirical validation (or construct and predictive validity).

2.4 Theoretical validation

Theoretical validation is based on the analysis of the properties of the attribute to be measured. It also provides information about the mathematical and statistical operations that can be performed with the measure, which is essential when working with the measure. There are two main approaches to theoretical validation [183]:

• the representational theory of measurement;

• property-based approaches.


The representational theory of measurement is based on a mapping between the empirical world and the numerical world. Figure 2 shows the representation condition, which attests that the properties of the attributes in the real world should correspond to the measures in the numerical world [200]. This implies the definition of an empirical and a numerical world and the construction of a mapping between the two. This kind of validation is also called internal validation [14, 98, 260], validation of measures for assessment [28, 96], and theoretical validation [44, 45].

The property-based approaches are usually founded on a number of axioms the measures must satisfy and on properties of measurement scales [226]. This kind of theoretical validation is also called axiomatic [208], analytical [176], and algebraic validation. Among the most popular sets of axioms, Poels and Dedene [208] consider a measure as a distance function and describe attributes with proximity structures. Every software attribute is defined as the distance from the software product that must be measured to some reference software product. Weyuker [245] identified nine properties which are useful to evaluate syntactic software complexity measures. Her properties have raised an intense discussion on their applicability to object-oriented complexity measures (see for instance [121, 132, 146, 184, 259]).

The fulfilment of properties is necessary but not sufficient to prove the validity of a measure [49, 208]. Therefore, the most popular validation approaches combine the property-based approach with the representational theory of measurement. Briand et al. [49] describe properties of measures of size, complexity, length, coupling, and cohesion. They state that the representation condition is an obvious prerequisite to validation of measures. A broader approach to theoretical validation is taken by Kitchenham et al. [146]. They suggest validating attributes, units, instruments, and protocols (the data collection procedures). These two approaches are briefly described below.

FIGURE 2. Illustration of the representation condition [98]: the empirical relation “Ann taller than Fred” must be preserved by the measure mapping M in the numerical world, i.e., M(Ann) > M(Fred) (for instance 175 > 163).
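As an illustration of the representation condition (not an example from the thesis), the sketch below checks that a candidate measure preserves an empirically observed “taller than” relation; the people and heights are invented.

```python
# Empirically observed relations in the real world.
taller_than = [("Ann", "Fred"), ("Fred", "Bob")]

# Candidate measure M: height in centimetres.
M = {"Ann": 175, "Fred": 163, "Bob": 158}

# Representation condition: every empirical relation must be mirrored numerically.
violations = [(a, b) for (a, b) in taller_than if not M[a] > M[b]]
print("representation condition holds:", not violations)
```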


Representational theory approach of Kitchenham et al.

Kitchenham et al. propose a framework for validating software measures. The framework describes basic notions of measurement (like entity, attributes, and units) and a structural model of measurement (Figure 3), where the relationships among these elements are described. Figure 3 defines an entity to possess many attributes, while an attribute can qualify many different entities. For instance, a program (entity) has properties such as size, complexity, and correctness (attributes). The figure also defines that an attribute can be measured in one or more units; for instance, temperature can be measured in the Celsius or Fahrenheit scales. From the structural model of measurement, the following properties for attributes are derived:

1. For an attribute to be measurable, it must allow different entities to be distinguished from one another, i.e., there must exist two entities for which the measure results in different values.

2. A valid measure must obey the representation condition. The behaviour of the attribute in the real world should be reflected in the behaviour of the measure in the numerical world.

3. Each unit of an attribute contributing to a valid measure is equivalent. Since an attribute may be measured in many different ways, attributes are independent of the unit used to measure them. Thus any definition that implies a particular measurement scale is invalid.

FIGURE 3. Structural model of measurement [146]: the model relates entities, attributes (dimensions), values (magnitudes), units, scale types, and measurement instruments across the empirical (real) world and the formal (mathematical) world, using the relationships possesses, applies_to, quantifies, expressed_in, belongs_to, determines, uses, and measures (one-to-many and optional one-to-one relationships).


4. Different entities can have the same attribute value (within the limits of measurement error). This means that entity and attribute need not imply a one-to-one relationship.

These are properties that need to be satisfied by a valid measure. Further properties are listed for indirect measures2, for validating instrument models, and for measurement protocols. The framework deals mainly with theoretical issues of software measurement. Some suggestions on empirical validation are given, such as the kind of statistical technique that can be used.

Property-based approach by Briand et al.

Briand et al. [49] have suggested a set of mathematical properties that characterise and formalise several important internal software attributes3 such as size, length, complexity, cohesion, and coupling. This framework is based on a graph-theoretic model of a software artifact, which is seen as a set of elements linked by relationships. The properties they provide can be applied to any software artifact produced along the software process. The framework includes definitions of systems, modules, and modular systems; operations between the elements (inclusion, union, and intersection); and the definitions of empty and disjoint modules. Finally, properties of internal attributes like size, length, complexity, cohesion, and coupling are described.

As an example, the size properties of a software system according to Briand et al. are described here, together with some basic definitions (necessary in order to understand the properties). The main idea is that size depends on the elements of the system.

A software system is represented as a pair S = <E, R>, where E is the finite set of elements of S, and R is a binary relation among the elements of E, i.e., R ⊆ E × E.

Given a system S = <E, R>, a system m = <Em, Rm> is a module of S if and only if Em ⊆ E and Rm ⊆ R.

A modular system is one where all the elements of the system have been partitioned into different modules. Therefore, the modules of a modular system do not share elements, but there may be relationships across modules. Definitions of inclusion, union, and intersection for modules, and the definitions of empty and disjoint modules, are omitted because they are intuitive.

The size of a system S = <E, R> is a function Size(S) characterised by the following properties:

Property 1. Nonnegativity. Size(S) >= 0.

Property 2. Null value. The size of a system S is null if E is empty: E = ∅ ⇒ Size(S) = 0.

Property 3. Module additivity. The size of a system S is equal to the sum of the sizes of two modules m1 = <Em1, Rm1> and m2 = <Em2, Rm2> such that any element of S is an element of either m1 or m2: (m1 ⊆ S and m2 ⊆ S and E = Em1 ∪ Em2 and Em1 ∩ Em2 = ∅) ⇒ Size(S) = Size(m1) + Size(m2).

Property 3 can be generalised by saying that the size of a system S is the sum of the sizes of its disjoint modules. Similar properties are described for the internal attributes length, complexity, cohesion, and coupling.

2. An indirect measure is a measure obtained from an equation involving other measures.
3. An internal attribute is a property of a software artifact that can be measured by observing the artifact on its own, separate from its behaviour. An external attribute is a property of a software artifact that can be measured by observing the artifact in its environment [98].

Many software measures introduced in the literature can be classified as measures of size by following these properties. Among the best known are lines of code (LOC), number of statements, number of modules, and number of procedures. The entity being measured in these cases is source code (the system is a program). Briand et al. discuss the properties in the context of classic measurement theory and observe that their properties do not fully characterise a formal relational system, since many properties will be specific to the working environment and experience of the modeller.
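As an illustration only (not part of the framework of Briand et al.), the sketch below checks the three size properties for the simplest candidate size function, the number of elements of the system; the example system and its partition into modules are invented.

```python
# A system S = <E, R>: a set of elements E and a binary relation R over E.
def size(elements, relation):
    # Candidate size measure: count the elements (the relation is ignored).
    return len(elements)

# Hypothetical system partitioned into two disjoint modules m1 and m2.
E1, R1 = {"a", "b"}, {("a", "b")}
E2, R2 = {"c"}, set()
E, R = E1 | E2, R1 | R2 | {("b", "c")}   # relationships may cross modules

assert size(E, R) >= 0                               # Property 1: nonnegativity
assert size(set(), set()) == 0                       # Property 2: null value
assert size(E, R) == size(E1, R1) + size(E2, R2)     # Property 3: module additivity
print("the element count satisfies the size properties on this example")
```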

2.5 Empirical validation

Software measures are empirically valid if there is a consistent relationship between the measure and an external attribute [260]. This is usually demonstrated by performing an empirical study, showing that an external attribute X (for instance maintainability) verifies the equation X = f(Y), where Y is an internal attribute (for instance size). The empirical validation, also called external validation [98, 260], can be a confirmation of a correlation between the internal and external attributes. In this case, statistical methods like Pearson’s and Spearman’s correlation coefficients, Anova, Kendall’s tau, or others are applied. Alternatively, a prediction model can be constructed by applying different kinds of regression methods (linear, logistic, and others). In this case, the accuracy of the prediction model has to be evaluated [14, 28, 96].

According to Briand et al. [44], empirical validation implies data collection, identification of measurement levels (scales), and choice of an analysis method to formalise the relationship between the internal and external attributes. Guidelines on how to validate measures empirically are described by El Emam [87] and Briand et al. [53, 54].

When performing empirical validation, the connection between internal and external attribute values is seldom sufficiently well verified. As stated by Fenton and Pfleeger [98], there are two reasons for this: 1) it is difficult to perform controlled experiments and confirm relationships between attributes, and 2) there is still little understanding of measurement validation and the proper ways to demonstrate these relationships.

The procedure to follow for experimental validation varies significantly depending on the purpose of the measures (evaluation or prediction) and on the type and amount of data collected [44].


In this thesis, the Spearman statistic has been applied in Papers 4 and 7 with the purpose of investigating the relationship between the attribute volatility and measures of size of use case documents. Linear regression has been applied in Papers 5 and 6 in order to construct and evaluate prediction models of requirements volatility.
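A minimal sketch of the two kinds of analysis mentioned above, using SciPy on invented data: a Spearman rank correlation between a size measure and volatility, followed by a univariate linear regression used as a simple prediction model. The numbers are illustrative only and are not data from the case studies.

```python
import numpy as np
from scipy import stats

# Hypothetical data: size of a use case document (words) and observed volatility (%).
size = np.array([120, 340, 210, 560, 430, 300])
volatility = np.array([5.0, 14.0, 9.5, 22.0, 18.0, 12.0])

# Empirical validation as correlation: Spearman's rank correlation coefficient.
rho, p_value = stats.spearmanr(size, volatility)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

# Empirical validation as prediction: fit a linear model and use it to predict.
fit = stats.linregress(size, volatility)
print(f"predicted volatility for a 400-word document: {fit.intercept + fit.slope * 400:.1f}%")
```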

2.6 Discussion

As illustrated in this chapter, there are many problems in empirical software engineering and in particular in software measurement theory. Terminology is still fuzzy, an example being the lack of agreement on the terms “measure” and “metric” [104, 194]. Another source of confusion is connected to the term “theoretical validation”. The representation condition is a property that can be proved only empirically. Therefore it is somewhat confusing to call the theoretical validation in [44, 146] “theoretical”, because a property that can only be proved empirically is part of the definition. This issue is further discussed in [162].

Many theoretical validation methods have been defined, but there is not yet a standard way to validate measures. There have been discussions about the relationships among the different methods, but there is no agreement on the validation model that leads to a widely accepted view of validity. The most common source of discussion in the research community is about the inclusion of scale properties in the validation properties. Kitchenham et al. [146, 148] argue that any definition of an attribute that implies a measurement scale is invalid (a measure should be valid independently of the measurement scale), while Morasca et al. disagree on this view [184]. This disagreement is a problem because validation could lead to contradictory results if it is performed with different methods.

Examples of measures that are commonly used but not theoretically valid are shown in Table 2. Function points [5, 239], for instance, do not satisfy the theoretical validation proposed by Kitchenham et al. [155]. Another example is weighted methods per class (WMC) [69], which is defined as a complexity measure but does not satisfy the complexity properties of Briand et al., while instead satisfying their size properties. The control flow complexity measure proposed by McCabe does not satisfy all the complexity properties of Briand et al. [49]. In another paper [54], Briand et al. have performed a theoretical and empirical validation of common OO design measures. They obtained all four combinations: measures that are both theoretically and empirically valid or invalid, and measures that are only theoretically or only empirically valid. Their conclusion is that theoretical validity of software measures does not imply empirical validity and vice versa. Therefore, theoretically validated measures are not necessarily good quality indicators. The problem with measures that are not theoretically but empirically valid is that it is not guaranteed that the measure quantifies the proper attribute. According to Briand et al. [54], not all the measures that satisfy some validation properties are important, but the measures that do not satisfy these properties can safely be excluded.

However, there is no empirical evidence that (theoretically or empirically) invalidated measures have led to problems in practice. The only indication of the importance of validation is that hundreds of measures have been defined in the past, but very few of them are used in practice. The reason for this is that there has not been an attempt to show that the measures defined are useful in practice. Therefore, empirical validation is extremely important. The ideal, long-term research goal would be the identification of universally valid measures that can be used in different environments [48]. However, this goal cannot be achieved in the near future, since validation is context dependent. Measures depend on measurement instruments and protocols, which are usually different in different environments (for instance, the measure LOC can be counted in different ways, by including or skipping comments or blank lines). Therefore, every time a measure is used in a new environment it must be validated again. Furthermore, an empirical validation of measures would require a large amount of diverse data and can therefore rarely produce widely applicable results.

Researchers often focus on complex measures and theories, with too little attention being paid to practical issues. The following factors can contribute to improvements in this field.

1. Standards. A common agreement in the research community about standard terminology and methods would increase the credibility of the area among the practitioners. The use of standard terminology, measurement protocols, instruments etc. would increase the validity of empirical studies and also the external validity of measures. This would decrease the necessity of validating measures in every new environment, and increase the applicability of software measures.

2. Simplicity. Simple measures, easy to understand, that satisfy basic measurement principles, and that are proven to be useful in practice, would be helpful for practitioners in their business, and could tell them something about the quality of their software.

TABLE 2: Theoretically invalid measures used in practice

Measure | Theoretical validation approach | Problem
Albrecht’s function points (size measure) [5] | Kitchenham (representational theory) | Violates basic scale type constraints (you cannot sum ordinal scale measures) and there is no unit value
Mark II function points (size measure) [239] | Kitchenham (representational theory) | Cannot be considered a size measure according to the framework, but a measure of effort
Control flow complexity (McCabe) [174] | Briand (property based) | Does not satisfy property 4 of the 5 complexity axioms
CK suite [74]: weighted methods per class, depth of inheritance tree, coupling between objects, response for a class, lack of cohesion in methods | Briand (property based) | Defined as complexity measures, none of them satisfy the 5 complexity axioms, but they satisfy size, length, and coupling axioms


3 Requirements engineering

This chapter briefly describes requirements engineering, requirements management, volatility, and measures for requirements.

3.1 Introduction

Complex software systems are frequently delivered too late, with budget overruns, and do not meet the needs of the stakeholders4. As shown in many studies [35, 152, 241], this is often due to problems with requirements. A recent survey of over 3800 organisations in 17 countries concluded that most of the perceived software problems are in the area of requirements specification (>50%) and requirements management (50%) [157].

A requirement is a statement which describes a functionality (functional requirements) or a property (non-functional requirements) of an intended system5. Ideally, requirements should describe what the system should do rather than how. However, this is not realistic in practice. It often happens that the system to be developed must be compatible with the environment and the organisational standards. In these cases an organisation needs to specify policies and other constraints on the system. Requirements need to be elicited, analysed, documented, and validated (see Figure 4); these are the most important activities in a requirements engineering process.

4. Stakeholders are the individuals or organisations that have an interest in the requirements and contribute to writing them.

5. Some researchers (for instance [220]) classify requirements as functional, non-functional, or constraints. Constraints are restrictions and limitations that apply to the project and the product under development (for instance the platform where the software will be used, or the amount of time or money that may be spent on the project).


Requirements engineering is the area of software engineering that includes all the activities up to, but not including, the decomposition of the software into its architectural components [78]. The purpose of requirements engineering is to convert a poorly defined problem into a well defined problem.

However, requirements engineering is still a fuzzy area. This is due to many reasons: for instance, requirements are difficult to discover, they change, some methods are inappropriate, project schedules are often unrealistic, and, finally, communication barriers separate developers and users [133]. The most serious problems are caused by expectations that are not met. Many books have been written on the subject, and there is evidence of the actual benefits of requirements engineering activities on projects, processes, and products [94, 233]. Nevertheless, there is a lack of (good) real life examples of requirements specifications. There is also a lack of education in requirements engineering. Therefore, very often developers do not know how to write good requirements specifications, which leads to incomplete, wrong, ambiguous, or badly phrased requirements.

3.2 Requirements management

The main goal of requirements management is to handle changing requirements. Software requirements often change to reflect the evolving needs of system stakeholders. Changes to requirements are also due to modifications in the environment where the system will be installed, and modifications in laws and regulations, etc. [152].

FIGURE 4. Requirements Engineering Process (adapted from [152]): users’ needs, domain information, existing system information, regulations, standards, etc. feed the activities of requirements elicitation, analysis, documentation, and validation, which produce agreed requirements, a requirements document, and system specifications; requirements management runs in parallel with these activities.


The management of customer requirements is one of the main problem areas in software development [220, 241, 234]. The implementation of software requirements management practices is believed to be one of the first process improvement steps that a software development organisation should take [92, 77]. Requirements management practices ensure that changes can be monitored and tracked throughout the project life cycle. Without these practices, high quality software is difficult to achieve. In spite of the clear relationship between requirements management and project success [77, 86, 169], requirements management is often ignored.

Requirements management activities

Requirements management is performed in parallel with the other requirements engineering activities (see Figure 4) [159, 152], and includes four main activities:

• controlling changes to the requirements baseline6;

• controlling versions of requirements and requirements documents;

• tracking the status of the requirements in the baseline;

• managing the logical links between the individual requirements and other work products.

When controlling changes, a requirements engineer needs to perform several activities, such as collecting, analysing, verifying, and assessing changes, and determining the impact of a change on the system. Based on the impact, a decision has to be taken (reject or accept the change), which is then notified to the developers and carried out.

6. Once the requirements are approved, the baseline requirement document is established, and it becomes the official requirement document. Any change to the baseline document must be made via a formal request.

FIGURE 5. Major requirements management activities [246]: change control (proposing changes, analysing impact, making decisions, updating requirements documents, updating plans, measuring requirements volatility); version control (defining a version identification scheme, identifying requirements document versions, identifying individual requirement versions); requirements status tracking (defining possible requirement statuses, recording the status of each requirement, reporting the status distribution of all requirements); requirements tracing (defining links to other requirements, defining links to other system elements).


Besides these activities, it is also important to set priorities on requirements changes and to estimate their expected frequency, which can help to identify the functions that should be easy to modify. This has to be agreed upon by developers and customers in order to avoid conflicts between them.

Requirements changes also need to be tracked and recorded [159]. Users and developers can decide to perform many changes to requirements, but if the changes are not recorded anywhere, the developers might forget them or build something that the user did not expect.

Requirements management approaches

Companies that want to improve their requirements management activities can do so through qualitative or quantitative approaches.

Qualitative approaches

The qualitative approaches are descriptive. They describe processes, techniques, guidelines, and best practices. Agile and traditional software processes are examples of qualitative approaches. Agile approaches are more flexible, allowing software development teams to react quickly to changing requirements. Agile processes like Extreme Programming (XP) [244], SCRUM [219], and other lightweight approaches were created in response to problem domains with frequent requirements changes. However, as suggested by Anton [10] and Wells [244], they cannot be applied to projects with a large number of developers.

Traditional approaches are more formal, and more documentation is produced. These approaches are usually suitable for large organisations. Examples are software process improvement (SPI) frameworks like the capability maturity model (CMM) family and SPICE. These SPI frameworks are not specific to managing requirements but were created to assess and improve the process maturity of an organisation. They consist of a set of guidelines and procedures in order to meet some predefined goals. For example, in the CMM [198] there are two predefined goals to be reached in order to improve the management of requirements.

Besides the agile and traditional approaches, several attempts have been made to define requirements management frameworks. For instance, Zowghi and Offen [257] present a formal framework for modelling and reasoning about evolving requirements, but their framework can be complex for small companies. Lam et al. [155, 156] propose a measurement-action framework for managing requirements change and evolution. The framework obtained good results in a small company, but it has not been tested in large companies.

Quantitative approaches

The approach to requirements management in this thesis is quantitative, using measures to assess the volatility of requirements. Software measurement can help to provide guidance to the requirements management activities by quantifying and predicting changes to requirements. In contrast to agile and traditional approaches, measurement is suitable for large and small companies. However, as argued by Lam and Shankararam [154], there is still a lack of measurement for the management of evolving requirements.


3.3 Requirements volatility

Requirements are the product of the contributions of many individuals (usually called stakeholders), and these individuals often have conflicting needs and goals. Therefore requirements have a tendency to be highly volatile, i.e., subject to change.

The volatility of requirements is a frequent phenomenon in industrial software development [183]. Stark et al. [236] analysed 44 software releases spanning seven products within one company. Of these 44 releases, 64% were characterised by requirements volatility. The average requirements volatility7 in these 44 releases was 48%, while the largest observed volatility was 600%. In another study, Ebert analysed 15 projects developed in 2002 in a history database. On average, these projects changed 73% of their requirements from the project start to the time of the analysis. One third of these changes were technical (such as an infeasible specification), while two thirds were commercial (covering the majority of requirements uncertainties). The traditional rule of thumb for allocated requirements is to expect them to change by 1 to 3 percent per month. For a two-year project, this typically translates into changing (adding, deleting, or modifying) more than 30% of all requirements [85].

Volatility of requirements depends on many parameters, for example organisational complexity (methodologies, type of product, and processes used), the process maturity of the company, the phase of the life cycle, the volatility of the market, the technology, etc. [72, 131, 150]. Complex systems will tend to have higher volatility due to the number of interacting components. Another cause of volatility is weak domain knowledge in the development team, which leads to an incomplete analysis of the requirements.

According to Boehm, much can be done, in particular by using incremental development, to control requirements volatility [32]. With a single increment, it is very hard to postpone the stakeholders’ requests. As a result, the ripple effects of the changes are propagated through the product and the project’s schedules. With incremental development, instead, it is relatively easy to schedule the new requests in the next increment. This allows each increment to follow a plan less prone to changes. Requirements will change in any case, but with incremental development it is easier to manage requirements volatility.

Estimating requirements volatility is important in order to predict the risk that the project is going off track. High requirements volatility has an impact on the quality of the software product (e.g., increasing defect density) [77, 139, 172, 191] and on project performance (e.g., schedule delays, cost overruns, and amount of rework) [119, 181, 190, 235, 258]. Requirements volatility is considered to be a major source of risk to the management of large and complex software projects [32, 192].

7. Requirements volatility in [258] is calculated as (100 * (added requirements + deleted requirements + changed requirements)) / (# of requirements in requirements specification). Requirements volatility represents the total change traffic applied to an agreed-to release plan. It can be greater than 100% if more changes occurred than the number of requirements originally planned for the release.
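The definition in the footnote translates directly into a small helper function; the example numbers below are invented.

```python
def requirements_volatility(added, deleted, changed, requirements_in_spec):
    """Volatility as defined in [258]: total change traffic relative to the
    number of requirements in the specification, expressed as a percentage."""
    return 100 * (added + deleted + changed) / requirements_in_spec

# Hypothetical release: 120 requirements in the specification; 10 added, 4 deleted, 22 changed.
print(f"{requirements_volatility(10, 4, 22, 120):.0f}% volatility")  # prints: 30% volatility
```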


Definitions of volatility

There is no generally accepted definition of requirements volatility. Many theoretical and operational definitions of requirements volatility can be found in the literature; operational definitions can be found in [24, 71, 134, 193, 214, 221]. Most of the definitions express the changing nature of requirements during software development, and focus on the amount of changes (additions, deletions, and modifications) to requirements. They do not consider the cause of a change or its semantics, i.e., in what way a change impacts development. In this thesis, volatility is treated as a quantitative measure and is defined theoretically as the amount of changes to a requirements document over time. The operational definitions of requirements volatility in this thesis are based on counting the number of changes to use case models. They differ from those available in the literature because volatility is usually regarded as a property of the whole set of requirements, while in this thesis the volatility of a single requirements document (usually a use case document) is investigated. Furthermore, the granularity of the change data collected on requirements differs from that available in the literature. The most common way to collect change data on requirements is to look at the change requests8. However, this information is not always available, as in the case of this thesis.

Measuring requirements volatility

Measuring requirements volatility is one of the requirements management good practices [236, 246]. It makes it possible to set up thresholds for how much requirements volatility is reasonable throughout the life of a project.

For example, at the beginning of a project, in the phase of requirements elicitation (see Figure 4), volatility is expected to be high in a healthy process. Low volatility might mean that the requirements engineering process is stalled. Projects with low volatility are typically student projects, where the requirements are well defined up-front and stable. In industrial projects, volatility is expected to be low when designing and implementing the system.

If the data collected on a certain project exceed the threshold, this suggests analysing the project in depth, understanding the reasons, and, if necessary, taking actions in order to adjust the process. If companies are weak at requirements engineering activities, they will have to face high percentages of volatility and their projects will be unpredictable. The threshold in this case will be lower than that of a company with stable requirements engineering activities. Furthermore, if a project is small, volatility is more easily handled. Therefore, the threshold levels should be set higher in these cases.

8. A “change request” is a formal request for a change to the baseline document.


Measures of requirements volatility can be used to plot volatility trends (see Figure 6). The number of changes should be captured at fixed project intervals and the rate of change plotted over time. Monthly data collection is recommended for large projects [75].

FIGURE 6. Requirements volatility chart: the number of additions, deletions, and modifications to requirements plotted per monthly reporting period.
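A sketch of how such a chart could be produced from monthly change counts using matplotlib; the counts are invented.

```python
import matplotlib.pyplot as plt

# Hypothetical monthly change counts collected for one project.
months = ["jan", "feb", "mar", "apr", "may", "jun"]
series = {
    "Additions":     [3, 6, 9, 4, 2, 1],
    "Deletions":     [0, 1, 2, 1, 0, 0],
    "Modifications": [2, 5, 12, 7, 3, 2],
}

# One line per change type, plotted against the reporting period.
for label, counts in series.items():
    plt.plot(months, counts, marker="o", label=label)
plt.xlabel("Reporting period")
plt.ylabel("Number of requirements")
plt.title("Requirements volatility chart")
plt.legend()
plt.savefig("volatility_chart.png")
```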

3.4 Measuring requirements size

In spite of the strong need for assessment and measurement of effects of requirements engineering practices, very little requirements measurement is done in practice [77, 79, 200]. Practitioners and customers need measures early in the development cycle. Therefore, efforts should be spent on measuring aspects of requirements analysis and design.

During all phases of software development, the product (the requirements and design documents, the source code, the test cases, etc.) and the process can be measured. Requirements product measures are usually used to evaluate the quality of the requirements specification documents. Process measures are defined to evaluate the quality of the process. For instance, the kind of documentation or the time spent in a certain phase can be measured to see how much time a process is consuming.

One of the most fundamental measurements of software is size, since it is a predictor of software cost [67, 158]. Counting the number of requirements is widely used to measure the size of a software application. The variation in the requirements count over time has been suggested as a means to quantify the volatility of requirements. However, there is no consensus on how to measure the size of a requirements specification.


Measuring size in software requirements specifications depends on the notation used. Typically, a requirements specification document consists of both text and diagrams. For instance, in a data flow diagram the processes (bubbles), the external entities (boxes), the data flows (arcs), and the data stores (line nodes) can be counted. In an algebraic specification, sorts, functions, operations, and axioms can be counted.

Requirements size has been the subject of several studies. One common measure of size in industry is the number of pages of documentation [98]. Another one is use case points (UCP) [225], useful for estimating software development effort. However, UCPs are not generally applicable, not even when requirements specifications are based on use cases. The definition of UCPs is based on a classification of use cases and a number of environmental factors (similar to the adjustment factors in function points). This information is not always available for the projects analysed. The classification of use cases is a subjective activity; therefore, it is complex and hard to collect UCPs automatically. In this thesis, requirements size is measured by the number of lines and words, and the number of actors and use cases, in use case models. These measures are easy to understand and collect, generally applicable, and can easily be automated.
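A sketch of how such size measures could be collected automatically from a plain-text use case document; the keyword conventions used to spot actors and use cases are assumptions about the document format, not part of the thesis.

```python
import re

def use_case_size_measures(text):
    """Count lines, words, and (by an assumed keyword convention) actors and use cases."""
    return {
        "lines": len(text.splitlines()),
        "words": len(text.split()),
        # Assumed convention: each actor and use case is introduced by a heading line.
        "actors": len(re.findall(r"^Actor:", text, flags=re.MULTILINE)),
        "use_cases": len(re.findall(r"^Use case:", text, flags=re.MULTILINE)),
    }

document = """Use case: Withdraw cash
Actor: Customer
The customer inserts a card and requests an amount."""
print(use_case_size_measures(document))
```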


4 Review of validated measures

This chapter presents a review of studies on measures validation. The measures found in the literature are classified based on the validation techniques used (theoretical, empirical) and the environment where the validation was performed (academic or industrial).

4.1 Introduction

The purpose of measuring properties of a software system is to predict various external quality aspects of a software artifact. In order to use the measurements of properties effectively, quality models (also called prediction models) need to be constructed. Through a quality model, a certain measure is shown to be connected to some relevant external quality attribute. For instance, the entity “program” has an external attribute “reliability” which could, for example, be measured with “fault density” (number of defects/KLOC). Therefore, fault density can be considered a predictor of the reliability of a program. Such prediction models can help decision makers during the development of a software system.

A large number of measures have been defined in the literature, but not all are validated, i.e., part of a quality model. This chapter presents a literature review of validated measures in software engineering. Similar reviews have been performed by the Alarcos group [109, 110, 205], describing measures for conceptual data models and UML class diagrams. In addition, Briand and Wüst [53] focus on the empirical validation of object oriented measures as predictors of some quality attributes and also on the evaluation of the prediction models. Furthermore, Briand and Wüst describe a procedure to construct prediction models. The review in this chapter has a wider scope compared to the ones of Briand and Wüst and the Alarcos group, by focusing on theoretical and empirical validation of measures in all areas of software engineering. A major effort has been made to cover practically all published measures in software engineering which have been theoretically or empirically validated to date.

4.2 Publication selection

The literature search was conducted mainly through digital libraries (IEEE, ACM digital libraries, Inspec database, and Google Scholar restricted to “Engineering, Computer Science, and Mathematics”). The keywords used in the search were “validated measures”, “empirical validation”, and “theoretical validation”. The Inspec database was searched with the keywords “empirical validation”, which yielded 1172 results. The search was restricted to software quality (44 results), software metrics (94 results), statistical analysis (66 results), and regression analysis (27 results), with overlaps among the results. A similar search was performed with the keywords “theoretical validation”. From the results of the digital libraries and databases, 135 publications were selected based on the content of the abstract, i.e., by investigating whether or not the publication actually was dealing with theoretical or empirical validation. The studies were published between 1991 and 2007.

As can be observed in Table 3, more than 90% of the publications inspected are journal or peer reviewed conference publications. The main journals and conferences contributing to this review are IEEE TSE, Journal of Systems and Software, Information and Software Technology, Empirical Software Engineering, and the IEEE conferences Metrics and ICSE (International Conference on Software Engineering).

TABLE 3: Publications overview

Type of publication | Number of publications | Years of publication | IEEE | ACM | Springer | Elsevier | Other
Conferences | 61 | 1991–2007 | 50 | 3 | 5 | — | 3
Journal articles | 63 | 1993–2007 | 26 | — | 11 | 20 | 6
Workshops | 5 | 1997–2005 | — | 1 | 1 | — | 3
Technical reports | 6 | 1999–2007 | — | — | — | — | 6
Total | 135 | 1991–2007 | 76 | 4 | 17 | 20 | 18

4.3 Description of the review

Table 1 in Appendix A shows the most important characteristics of the validated measures found in the literature search presented here. Each row in the table represents one publication (except in a few cases where a study has been replicated) describing one or more studies. For each publication the following information is displayed:

• the source of the publication (column Reference);

• the measures under investigation (column Measures);



• the external attributes related to the measures, i.e., the dependent variables (column Attribute);

• the entity or software object under analysis (column Entity);

• whether the measures are theoretically validated and, if so, the validation procedures used, as well as the source of the previous study in case the measures were theoretically validated in earlier work (column Theoretical);

• whether the measures are empirically validated and, if so, the kind of analysis applied, i.e., univariate, multivariate, or other (column Empirical);

• whether the study is academic or industrial and, in case the study is correlational, whether the data collected comes from an academic or industrial software project (column Environment).

Tables 4–9 in the present chapter show summaries of the review data presented in Appendix A. The percentages in the tables have been obtained by counting the occurrences of the items in Table 1 of Appendix A (entities, attributes, etc.) and dividing them by the total number of items. Most of the publications investigate several measures and attributes simultaneously. Furthermore, the data analysis is often conducted using several statistical methods. In the following, we present several of our conclusions drawn from Table 1 in Appendix A.

Research group

The most active research group in validating measures is the Alarcos group in Spain. They have produced more than 16% of the 135 publications in this review. Lionel Briand together with other researchers has also contributed to constructing a body of knowledge in measures validation (16 publications, more than 10% of the publications in this review), especially in object-oriented design and code measures. Other relevant groups are El Emam et al., Harrison et al., and Loconsole and Börstler, all of them with 5 publications each. It is interesting to note that, with the exception of Briand et al. [49], the research groups that have defined a theoretical validation method (for instance Kitchenham et al. [146] and Weyuker [245]) do not apply their methods in their own work or otherwise provide indications of the utility of the methods. This lack of evaluation increases the existing fuzziness (see Chapter 2) and reduces the credibility of research in measures validation.

Entity

Each reviewed study investigated exactly one entity. The percentages in Table 4 are calculated by counting the studies investigating a certain entity and dividing these numbers by the total number of entities (135). As can be observed in Table 4, design, code documents, the combination of design and code, and database models are the most investigated entities. The percentages are affected by the two main groups who perform measures validation. Briand et al. validate mostly object oriented measures for the design and coding phases, while the Alarcos group validates mostly database and UML measures. It is also interesting to note that only 3% of the publications have projects as the object of the study, while 6.7% have studied processes. A very large portion of the publications investigates an entity which is a software product (88.1% summing specification, design, code, database, user interface, and the whole software system). Publications on requirements measures are very few (6.6%), even though it is often stressed that measures for the early development phases are the most useful (see Chapters 1 and 3). Only Loconsole and Börstler [163–167], Ambriola and Gervasi [9], Briand and Morasca [47], Hasting and Sajeev [130], and Morasca [182] have done studies of this kind.

Attribute

The percentages in Table 5 are calculated by counting the occurrences of the attributes and dividing by the total number of attributes (197). Attributes (or properties of an object) can be organised in hierarchies, as in the McCall, Boehm, and ISO 9126 quality models [203]. Therefore, an attribute can implicitly be part of a study because it is a parent or child of another attribute. In order to draw conclusions in the review presented here, only the attributes explicitly mentioned in the study were counted; dependencies between attributes were not considered. For instance, although complexity is a child of maintainability in several quality models, only complexity was counted if the study did not mention maintainability. As understandability, analysability, changeability, and stability/volatility are children of maintainability, a widely investigated dependent variable is maintainability. In fact, by summing the percentages for these 5 attributes, a total of 40% is obtained (see Table 5). After that, reliability (often measured as fault proneness and defect density, 14.7%) and effort (12.7%) are the most studied attributes. Size is studied in 6.6% of the attributes. Very few studies investigated project performance, similarity, and usability.

TABLE 4: Entities measured in the reviewed studies

Entity measured | % (135 entities)
Requirements/specification | 6.6
Design | 27.4
Design/code | 15.0
Code | 16.2
A whole system | 6.6
Data base | 8.9
Interface | 7.4
Process | 6.7
Project | 3.0
Other | 2.2

TABLE 5: Dependent variables measured

Attributes measured | % (197 attributes)
Fault proneness and density | 14.7
Effort | 12.7
Maintainability | 11.2
Understandability | 10.1
Complexity | 9.1
Analysability | 5.0
Changeability | 9.1
Stability/volatility | 4.6
Size | 6.6
Others (usability, efficiency, effectiveness, reusability, performance, testability, completeness, correctness) | 16.9 (<3% each)


Measures

Most studies investigated structural properties, i.e., the measures are based on the structure of the entity measured (e.g., size or length). Furthermore, many studies investigated several measures. As can be observed in Table 6, about half of the 135 publications are validations of object oriented (OO) design and code measures. Among these, 24 publications investigate the Chidamber and Kemerer (CK) set of measures [69]. Size and complexity measures are also widely investigated. Many studies seem to be isolated, i.e., the measures are associated with entities rarely investigated and the studies are not replicated. Due to the large amount of work in OO quality models, OO measures are the most theoretically validated. About half of the theoretical validations of OO measures are performed with the property-based approach of Briand et al. [49]. Regression (both linear and logistic) is the most common method used to empirically validate object oriented measures (due to the fact that Briand et al. mostly use regression in their work). Spearman and Pearson’s correlation coefficients are frequently used to show the association between the measure and the attribute, but they are not suitable for constructing a quality model.

Theoretical validation

Theoretical validations are described in about half of the 135 publications investigated. 71 publications contain a theoretical validation, and 7 of them have been performed with two different approaches. The most common method for theoretical validation of measures is the property based approach defined by Briand et al. [49], which has been used in 29.5% of the 78 theoretical validations. After that, the most used are the representational theory [146] and the axiomatic approach proposed by Weyuker [245]. The distance framework of Poels and Dedene [208] is used in 11.6% of the theoretical validations. However, the two biggest research groups (the Alarcos group and Briand et al.) contribute considerably to the high percentages. In fact, 6 out of 9 of the theoretical validations with the distance framework of Poels and Dedene are performed by the Alarcos group, while Briand et al. mostly choose their property based approach. Other approaches are also used in 19.2% of the 78 theoretical validations.

TABLE 6: Independent variables

Kind of measures | Number of studies
OO measures | 71
  whereof CK suite | 24
Size measures | 39
Complexity measures | 27

TABLE 7: Theoretical validation approaches

Theoretical validation approach | % (78 theoretical validations)
Briand’s properties approach | 29.5
Weyuker’s axioms | 12.8
Poels and Dedene | 11.6
Representational theory | 19.2
Zuse framework | 7.7
Others | 19.2


In many cases, there is no formal validation but rather a discussion of the construct validity of the measures. In many studies, the authors do not mention any theoretical validation. However, some measures have been theoretically validated in other studies. There are only two publications [54, 120] discussing the advantages and disadvantages of different approaches to theoretical validation (see also the discussion in Section 4.4).

Empirical validation

A very large portion of the publications examined describes an empirical validation (115 of the 135 publications). Since several publications validate measures with several methods, the total number of empirical validations is 133 (as shown in Table 8). As can be observed in Table 8, the most common approach to empirically validate measures is by performing a correlation between the measures and the attributes. Pearson’s correlation coefficient is most commonly used for normally distributed data, while Spearman’s rank correlation is most common for rank data. Linear and logistic regression are among the most used statistical methods to construct prediction models. About 24.8% of the empirical studies are performed with linear regression (partial, univariate, and multivariate), while 18% have used logistic regression (univariate, multivariate, and binary). Anova is used in few studies (2.2%). Another big portion of the empirical studies uses other kinds of statistical methods, for example the f-statistic and the t-test.

Regarding Table 9, it is interesting to note that empirical validation of measures (115 publications) is more common than theoretical validation (71 publications).

TABLE 8: Empirical validation

Empirical method | % (133 empirical validations)
Pearson/Spearman | 24.8
PCA | 7.5
Anova | 2.2
Linear regression | 24.8
Logistic regression | 18.0
Other (f-test, t-test, ...) | 22.5

TABLE 9: Summary of validations

Validation type | Number of publications
Both theoretical and empirical validation | 51
Only theoretical validation | 20
Only empirical validation | 64


Environment

As can be observed in Table 10, 16 publications report an axiomatic validation yielding no data, while 5 studies have a data set coming from both industrial and academic software projects. Correlational studies are the majority (46.7% of the 135 publications). Other kinds of studies present in the review are experiments (29 publications) and case studies (16 publications).

Observations and difficulties in performing the literature review

In performing the literature review, several problems were encountered, and in some cases a decision had to be taken.

• It was difficult to limit the selection of the publications for the literature review. The goal was to restrict the review to peer reviewed publications. However, studies like [228] refer to theoretical validations performed in another study that did not appear in the digital libraries (often workshop papers or technical reports). It was decided that these studies, though not appearing in the digital libraries, should be included in the literature review presented here.

• The keywords chosen for the search in the digital libraries are not always mentioned in the studies, even if the studies report investigations of relationships between measures and attributes. Many studies describe, for instance, the construction of a prediction model without mentioning empirical validation of measures or similar keywords. These studies are not included in the review. Therefore the table cannot be considered complete if regarded as a collection of prediction models.

• Sometimes publications can describe single studies or families of studies; sometimes one study investigates measures collected from different previous studies. Furthermore, often the measures are theoretically validated once and then referred to in several empirical validations.

• It was not possible to classify measures as validated or unvalidated, because one measure could be validated with respect to one attribute but not with respect to another attribute, and one study usually investigates several attributes.

However, it is reasonable to believe that the review presented has good coverage and that all important and most of the relevant works in the area have been included.

TABLE 10: Summary of context of studies

Environment | Number of publications
Purely academic data | 51
Purely industrial data | 63
No data (axiomatic approach) | 16
Both industrial and academic data | 5


4.4 Discussion

Except for object oriented measures, many areas of software engineering lack a body of knowledge in software measures. There might be several reasons for that, as described below.

• Very few research groups are regularly contributing publications on software measures validation. As mentioned earlier, the only two active groups in this area are Briand et al. and the Alarcos group in Spain.

• Few entities have been thoroughly investigated. In particular, there is a lack of research in project, process, and management measures. The most common entities measured are object oriented design, source code, and conceptual data models.

• Few attributes have been studied; the most investigated attributes are maintainability (summing the percentages for analysability, changeability, maintainability, and understandability) and reliability (measured as fault proneness and defect density).

• There are many isolated studies. Several measures have been empirically validated only in one single study, which is not sufficient to generalise the results. Few replications have been performed to confirm the results of the validation. Moreover, as observed by Genero et al. [110], measures are mostly validated by their own authors. Replications should ideally be performed by different research groups.

• Studies are often context dependent and the results of these studies have only local validity. In order to generalise the results, replications of the studies are needed in different environments where large data sets and large systems are available for investigation.

• Many of the proposed measures lack theoretical validation. Furthermore, as discussed in Chapter 2, a measure can be theoretically validated with one method but not validated with another one. Few studies discuss which method leads to the most reliable results, i.e., which method leads to results valid in a multitude of contexts. Researchers are not publishing on this topic any more. In the 1990s there were several discussions on the subject, but no agreement was found. Nowadays, there is still no agreement on a standard method to theoretically validate measures, and several different methods are still used in parallel. As an example of the lack of discussion about this subject, the Alarcos group is validating measures without questioning or justifying the methods they use. In fact, Chapters 4 and 6 of their book [110] describe a theoretical validation performed with the distance framework, while in Chapter 5 they use the property-based approach (in both cases without justifying the choice of the approach). In Chapters 7 and 8, the authors validate measures using both approaches without a discussion, although clearly stating the fact that they use an axiomatic and a representational theory approach. Which method is the best? Why do they use two methods? If one method is not enough, Chapters 4, 5, and 6 of their book describe measures that are not sufficiently validated.

• Few studies discuss how the theoretical validation affects the empirical one and vice versa. Is it useful to theoretically validate the measures? How does the theoretical validation affect the empirical one (and vice versa)? These questions are rarely discussed (see Chapter 2). Only Gursaran [120] and Briand et al. [54] have discussed these topics.


• Comparison of prediction models with expert opinion is rare. According to Briand and Wüst [53], an interesting question that has not been investigated in depth to date is whether structural measures can perform as well as or better than experts in predicting quality attributes such as fault proneness. Only a few investigations of this kind have been performed.

It is important to stress that few validations have been performed on requirements measures. As already pointed out by Briand and Wüst [53], and Morasca [183], the use of predictive models based on early artifacts and their capability to predict the quality of the final system still remains to be investigated.

Further work is necessary to validate measures theoretically and empirically in all areas of software engineering and to study the relationships between the two kinds of validation. Empirically validated measures are needed for managers and practitioners in order to make better decisions, which is the goal that any measurement program must pursue in order to be useful and successful.


5 Summary of papers

This chapter presents brief summaries of the papers included in the thesis.

5.1 Paper 1

Loconsole, A., (2001) Measuring the Requirements Management Key Process Area — Application of the Goal Question Metric to the Requirements Management Key Process Area of the Capability Maturity Model, in Proceedings of ESCOM — European Software Control and Metrics Conference, London, UK, April 2001, Shaker Publishing, pp. 67–76.

A practical way to assist companies in managing their requirements is through software measurement. The purpose of this paper is to define a rich set of software measures that can help to satisfy the goals of the Requirements Management (RM) Key Process Area (KPA) of the Capability Maturity Model (CMM), and to ease the adoption of the KPA.

In this contribution, the Goal Question Metric (GQM) approach is applied to the RM KPA. The two predefined goals of the RM KPA are re-defined using the GQM template for goal definition. The result of the GQM application is a set of 15 questions and 33 measures for the first goal, and a set of 7 questions and 12 measures for the second goal. A total of 20 questions and 38 measures are presented. The set of measures defined is easy to understand and to use, and can be used as a pick list. The measures are general (not context dependent) and comprehensive. Companies are suggested to tailor the wide set of measures to their own needs, depending on their maturity, to set priorities among the measures, and to set thresholds for each measure to help decision makers in their activities.

The paper also provides guidance for people trying to implement the RM KPA and informs about lessons learned and practical results. The metrics obtained could help immature companies to satisfy the goals of the Requirements Management KPA.
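The following sketch illustrates the goal-question-measure decomposition described above. The goal and question wording below is invented for illustration and is not quoted from the paper; only the measure names are taken from Table 13.

```python
# Illustrative GQM decomposition (Python). The goal and question texts are
# hypothetical examples, not the paper's actual wording; the measures are among
# those listed in Table 13 of this thesis.
gqm = {
    "goal": "Analyse changes to requirements for the purpose of control, "
            "from the viewpoint of the project manager",            # hypothetical wording
    "questions": [
        {"text": "How much do the requirements change?",            # hypothetical wording
         "measures": ["Number of changes per requirement",
                      "Number of changes per unit of time",
                      "Total number of requirements"]},
        {"text": "What happens to proposed changes?",               # hypothetical wording
         "measures": ["Number of changes proposed",
                      "Number of changes approved",
                      "Number of changes rejected"]},
    ],
}

# A company tailoring the set, as suggested in the paper, would pick a subset:
selected = [m for q in gqm["questions"] for m in q["measures"] if "approved" not in m]
print(selected)
```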


A subset of the measures defined is theoretically validated in Paper 2. Among the proposed measures there is the “size of requirements”. In the subsequent papers, this measure is decomposed into number of lines, number of words, number of actors, and number of use cases. These measures are empirically validated in Papers 4, 5, 6, and 7.

5.2 Paper 2

Loconsole, A., and Börstler, J., (2003) Theoretical Validation and Case Study of Requirements Management Measures, Department of Computing Science, Umeå University, Technical Report UMINF 03.02, July 2003, 20 pages.

The goal of this paper was twofold. First, a theoretical validation was performed of a subset of ten requirements measures from the set of 38 measures defined in Paper 1. In addition, a case study is described where the validated measures are used to estimate the cost of changes to requirements. The goal of the study was to demonstrate that cost estimations based on historical data are better than intuitive cost estimations. However, the data collected was not enough to draw statistically significant conclusions. The problems with the study were due to low control of the students’ projects and a missing mandatory requirement for collecting data.

This work presents novel results in the sense that no requirements management measures had been theoretically validated before. The focus of the scientific community is more on product measures, especially for source code (as seen in Chapter 4). The contribution of the case study is on the methodological side. It is a good experience which can teach what issues need to be considered when performing an academic case study, for instance, how to engage students in the study.

5.3 Paper 3

Loconsole, A. and Börstler, J., (2004) A Comparison of two Academic Case Studies on Cost Estimation of Changes to Requirements — Preliminary Results, in Proceedings of SMEF — Software Measurement European Forum, 28–30 January 2004, Rome, Italy, pp. 226–236.

This paper describes partial results of another academic case study (study two) and compares it with the one described in Paper 2 (study one). The purpose of the two studies was to investigate intuitive cost estimation of changes to requirements by comparing them to estimations based on historical data (study one), and estimations based on detailed impact analysis (study two). Another goal of study two was to compare the case studies and show the methodological improvements accomplished by using the lessons learned from study one. The final results of case study two and its comparison with study one are listed in Tables 11 and 12. The error in Table 11 is obtained by calculating the difference between the estimated and actual time of change, and dividing the number obtained by the actual time per change. The average error in Table 12 is obtained by averaging the errors in Table 11, considering the absolute error values.
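As a quick check, the following Python sketch recomputes the error column of Table 11 and the average errors of Table 12 from the figures published there (Team 4 reported no data). The sign convention, estimated minus actual divided by actual, is inferred from the signs in Table 11, and small rounding differences with respect to the printed values are to be expected.

```python
# Recomputing the per-team errors (Table 11) and the average absolute errors (Table 12).
# Times are in minutes; Team 4 is omitted because it reported no data.
estimated = {"Team a": 23.3, "Team b": 17, "Team 1": 95, "Team 2": 135, "Team 3": 4800, "Team 5": 35}
actual    = {"Team a": 27.3, "Team b": 40, "Team 1": 68, "Team 2": 195, "Team 3": 4500, "Team 5": 25}

# Relative error per team: (estimated - actual) / actual (sign convention from Table 11).
errors = {team: (estimated[team] - actual[team]) / actual[team] for team in estimated}
for team, err in errors.items():
    print(f"{team}: {err:+.1%}")

# Average error per study, computed over the absolute error values (Table 12).
studies = {"Study one": ["Team a", "Team b"], "Study two": ["Team 1", "Team 2", "Team 3", "Team 5"]}
for study, teams in studies.items():
    average = sum(abs(errors[t]) for t in teams) / len(teams)
    print(f"{study} average error: {average:.2%}")
```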

Teams 3, 4, and 5 performed estimations following an impact analysis checklist, while teams 1 and 2 performed intuitive estimations. Observing the table, there is no difference in the estimations when comparing teams 1 and 2 (non-controlled teams) with team 5 (controlled team; see footnote 10). Therefore, the estimations based on a detailed impact analysis checklist seem to be equivalent to intuitive estimations.

Lessons learned in study two

The requirements were not as volatile as expected in this study. However, lessons learned are listed below:

• it is important to keep track of the status of change requests;

• the amount of data to be collected should be kept small;

• requirements management is not an area to be investigated in an academic course context;

• in case of empirical studies performed in a course context, collection of data must be a requirement to pass the exam.

TABLE 11: Comparison of the data collected in the two studies (times in minutes)

Measures | Team a (study one) | Team b (study one) | Team 1 (study two) | Team 2 (study two) | Team 3 (study two) | Team 4 (study two) | Team 5 (study two)
Number of requirements | 31 | 12 | 30 | 9 | 17 | 28 | 25
Number of changes | 28 | 9 | 7 | 4 | 1 | 8 | 3
Estimated time per change | 23.3 | 17 | 95 | 135 | 4800 | — | 35
Actual time per change | 27.3 | 40 | 68 | 195 | 4500 | — | 25
Error | -14.6% | -57.5% | +39.7% | -30.7% | +6.6% | — | +40%

TABLE 12: Summary of the data collected in the two studies (times in minutes)

Measures | Study one | Study two
Total number of requirements | 43 | 109
Total number of changes | 37 | 23
Average estimated time per change | 20.15 | 88.3 (without teams 3 and 4)
Average actual time per change | 33.65 | 96 (without teams 3 and 4)
Average error | 36.05% | 29.25%

10. Team 3 is disregarded from the analysis because the estimated time per change and the measured time of change are not reliable.


As a general conclusion of the two studies, university courses seem not suitable for studies with changing requirements: the courses are too short and the projects are small and stable. Furthermore, no conclusions can be drawn about the measures and their usefulness. Some of these measures are therefore studied further in an industrial environment in Papers 4, 5, 6, and 7.

5.4 Paper 4

Loconsole, A. and Börstler, J., (2005) An Industrial Case Study on Requirements Volatility Measures, in Proceedings of APSEC — 12th IEEE Asia Pacific Software Engineering Conference, 15–17 December 2005, Taipei, Taiwan, IEEE Computer Press, pp. 249–256.

This paper tackles an important issue worthy of empirical investigation. It reports on a case study conducted to investigate different measures of requirements volatility on a project. The goals of the study were: 1) to show that a set of requirements size measures are associated with the volatility of use case models (UCM); 2) to investigate the correlation between subjective and objective volatility. The study analyses whether the size of a requirement provides information about the requirement’s tendency to change in the future. A further focus is whether this correlates with developers’ own perceptions about volatility. The measures were obtained from previous work (Paper 1 and Paper 3), choosing in particular the measures related to volatility. Measurement data was collected in retrospect for all UCMs of the software project. In addition, subjective volatility was determined by interviewing 10 stakeholders from a company in Sweden. The data analysis showed a high correlation between the size measures of UCMs and the total number of changes, indicating that the measures of size of UCMs are good indicators of requirements volatility. The study found no correlation between volatility perceptions and the actual data. These results suggest that project managers at this company should measure their projects because of the risk of making wrong decisions based on their own and the developers’ perceptions. The larger the UCM, the more potential there would be for errors. Therefore, the results serve to re-emphasise some fundamental software engineering principles regarding the need for modularity and cohesion in order to manage complexity and localise change.
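A minimal sketch of the kind of analysis behind goal 1 is given below, assuming a rank correlation between one size measure and the number of changes per use case model; the data is invented for illustration and the thesis data is not reproduced here.

```python
# Hedged sketch: Spearman rank correlation between a use-case-model size measure
# (here number of lines, NLINE) and the total number of changes. The values are
# invented for illustration; they are not the data analysed in Paper 4.
from scipy.stats import spearmanr

nline   = [120, 45, 210, 78, 160, 95]   # hypothetical size of each use case model
nchange = [ 14,  3,  22,  6,  17, 10]   # hypothetical total number of changes per model

rho, p_value = spearmanr(nline, nchange)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A high positive rho would support the claim that larger use case models change more.
```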

A completed real project was analysed in retrospect; therefore this study has the advantage of not affecting the outcome of the project. The project analysed and the number of people interviewed were relatively small, and in order to generalise the results a replication of this case study was performed in Paper 7.

The construction of prediction models in Papers 5 and 6 is based on the results of this study. However, the measurement rules were changed and the “number of words changed” was counted. In this way the data collection became objective and could be automated.


5.5 Paper 5

Loconsole, A. and Börstler, J., (2007) Construction and Validation of Prediction Models for Changes to Requirements, Department of Computing Science, Umeå University, Technical Report UMINF 2007–03, 26 pages.

Based on the results of Paper 4 (where the structural size measures were found to be good assessors of the number of changes to requirements), prediction models for the number of changes to requirements were constructed. This paper describes a correlational study with the goal of assessing the ability of five size measures to predict the number of changes to requirements for a medium-size software project. Although the number of changes does not measure volatility directly, it is an important basic measure that can be used to easily compute other measures, like change density or change frequency. The study is explorative, i.e., the data collected for five measures of size of requirements (number of actors, use cases, words, lines, and revisions) is analysed to find the best predictor of the number of changes. No empirical validation of requirements change measures as predictors had previously been performed in an industrial setting. Seven prediction models were built using data collected on the project described in Paper 4. The accuracy of five models was evaluated by applying them to a set of data collected for a second project at the same company. The best model showed better accuracy than that of common effort prediction models. The results show that the best predictors of the number of changes are the length measures: number of lines and words. Other predictors of complexity and functionality were found less accurate. In an earlier study (described in Paper 4) it was shown that decisions solely based on developer perception are unreliable. Prediction models like the one presented here constitute a better alternative.
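As a rough illustration of the kind of model construction and cross-project evaluation described above, the sketch below fits a simple linear model on one project and evaluates it on another; the data, model form, and accuracy measure are assumptions for the sketch, not the models reported in Paper 5.

```python
# Hedged sketch: fit a simple linear model for the number of changes from a size
# measure on one project, then evaluate it on a second project. All numbers are
# invented; the actual models and accuracy figures of Paper 5 are not reproduced.
import numpy as np

# Hypothetical project A: requirements size in words vs. observed number of changes.
nword_a   = np.array([300, 520, 150, 800, 430, 610])
nchange_a = np.array([  9,  15,   4,  25,  12,  18])

# Fit nchange = a * nword + b by least squares on project A.
a, b = np.polyfit(nword_a, nchange_a, deg=1)

# Cross-project evaluation on hypothetical project B.
nword_b   = np.array([200, 450, 700])
nchange_b = np.array([  6,  14,  21])
predicted = a * nword_b + b

# Mean magnitude of relative error (MMRE), one common accuracy measure for such models.
mmre = np.mean(np.abs(predicted - nchange_b) / nchange_b)
print(f"nchange = {a:.3f} * nword + {b:.1f}; MMRE on project B = {mmre:.2f}")
```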

The subject addressed is of great interest to the software engineering community, because the number of changes to requirements is of great concern to software managers and software developers in general. The basic question is: if the requirements size is known, the number of changes can be estimated, but can the size be known early? The answer is positive, because the independent variables selected are variations of requirements size measures, which precede the dependent variable during development, thus making them very useful for predicting the number of changes.

Requirements volatility is a complex concept and depends on many factors. This work demonstrates that size is one of these factors. Paper 6 describes another correlational study where prediction models for requirements volatility were constructed by analysing the same data as in the present correlational study.


5.6 Paper 6

Loconsole, A. and Börstler, J., (2007) A Correlational Study on Four Size Measures as Predictors of Requirements Volatility, Submitted to Journal of Software Measurement.

This paper presents a correlational study investigating the relationship between requirements size measures (NACTOR, NUC, NWORD, and NLINE) and requirements volatility. Based on data collected from two industrial software projects, prediction models for requirements volatility have been built and evaluated. Performing a cross-system validation, it was found that only NLINE is a good predictor. This implies that larger requirements documents are more likely to be affected by changes than shorter documents. This is perhaps not surprising; however, it had not been scientifically shown before this work. Although the models in this paper are likely to have only local validity, the general method for constructing the prediction models could be applied in any software development company.

This work is a continuation of previous work (Papers 4 and 5). Earlier results suggested that variations in size imply variations in the number of changes. The difference compared to previous work is the way requirements volatility was measured. In this paper, it was measured not simply as the number of changes but as the sum of change densities over time.
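One possible reading of such an operationalisation is sketched below; this is only an assumption made for illustration (treating the change density in a period as the number of changes divided by the document size in that period), and the exact definition used in Paper 6 is not reproduced here.

```python
# Hedged sketch of "volatility as the sum of change densities over time".
# Assumption for illustration only: density in a period = changes in the period
# divided by the document size (e.g., NLINE) in that period.
def volatility(changes_per_period, size_per_period):
    """Sum of per-period change densities for one requirements document."""
    return sum(c / s for c, s in zip(changes_per_period, size_per_period) if s > 0)

# Hypothetical document observed over four periods:
print(volatility(changes_per_period=[2, 0, 5, 1], size_per_period=[110, 115, 130, 132]))
```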

5.7 Paper 7

Loconsole, A. and Börstler, J., Are Size Measures Better than Expert Opinion? An Industrial Case Study on Requirements Volatility. In Proceedings of the 14th IEEE Asia Pacific Conference on Software Engineering, Dec. 5–7, 2007, Nagoya, Japan, IEEE Computer Press, pp. 238–245.

Expert judgement is a common estimation approach in industry. However, there is very little research on the accuracy of expert judgement outside the area of effort estimation. This paper empirically compares subjective measures (expert judgement) and objective measures (NWORD, NLINE, NACTOR, NSCENARIO), which potentially have a correlation to requirements volatility. The study is a follow-up of the study described in Paper 4. Data was collected in retrospect for all use cases of a medium-size software project. In addition, subjective volatility was determined by interviewing developers and managers of the project. The results revealed that (1) subjective measures had no correlation with requirements volatility while (2) objective measures were positively correlated. In other words, the data analysis shows that structural measures perform better than expert judgement in estimating the total number of changes to use-case-based requirements. These results confirm the results described in Paper 4.

The expert judgement might not be wrong; however, it is different from the objective data. One reason for the difference can be that the mechanical definition of volatility based on data is not appropriate, or that the experts did not remember correctly. In both cases, project managers should not rely on expert judgement alone for decision making.

6 Conclusions

This chapter analyses the results and the contributions of the thesis in the light of the goals listed in the introduction chapter. Open issues and future directions of this research are also discussed.

6.1 Summary

In this thesis a set of requirements management measures has been presented, measures intended to help practitioners assess the state of requirements and eventually take improvement actions. Ideally, the activity of software measurement should be driven by goals in order to focus and to limit project costs. For the sake of defining measures connected to goals, the goal question metric framework (GQM) was applied to the requirements management process area of the capability maturity model.

Twenty of the defined measures have been theoretically validated (see Papers 2, 6, and Table 1) in order to show that the measures are measuring what they are supposed to measure.

In order to investigate their practical utility, a subset of these measures were empirically validated by performing several studies. Two case studies were executed in an academic environment (during a project course at Umeå University), with the goal of showing that the measures were useful for estimating the cost of the specific software under development. Two case studies were performed in an industrial environment (at BAE Systems Hägglunds AB, Sweden) with the goal of showing that five requirements structural measures are good estimators of requirements volatility. In the two industrial case studies, size measures were shown to be good estimators of the number of changes to requirements. Subjective estimations of volatility made by developers were, however, not good assessments of requirements volatility and number of changes. Based on these results, two correlational studies were performed using the data sets from the industrial case studies. In the correlational studies, prediction models for the number of changes to requirements (Paper 5) and for requirements volatility (Paper 6) were constructed. All structural measures were found to be good predictors of the number of changes. The measure number of lines was found to be a good predictor of requirements volatility.

Besides the results described in Papers 1–7, an analysis of validated measures in the literature was performed, showing that there is a lack of validated process, project, and requirements measures in software engineering (see Chapter 4).

The results described in the thesis can be summarised as follows (see Table 13 for more details about the validated measures):

• a total of 45 measures have been defined;

• 20 of the measures satisfy basic measurement principles (i.e., they are theoretically validated);

• 4 size measures are shown to be good assessors of the number of changes (empirically validated);

• 1 measure is shown to be a good predictor of volatility (empirically validated);

• developers’ perceptions were shown not to be good estimators of the volatility of past projects.

TABLE 13: Measures defined and validated in this thesis

Measures | Definition | Theoretical validation | Empirical validation

Number of affected groups and individuals informed about “notification of change” Paper 1 — —

Number of baselined requirements Paper 1 — —

Number of changes per requirement Paper 1 Paper 2 Papers 4–7

Status of change:

Number of changes approved Paper 1 Paper 2 —

Number of changes incorporated into baseline Paper 1 — —

Number of changes open Paper 1 Paper 2 —

Number of changes per unit of time Paper 1 — —

Number of changes proposed Paper 1 Paper 2 —

Number of changes rejected Paper 1 Paper 2 —

Number of documents affected by a change Paper 1 — —

Number of final requirements Paper 1 Paper 2 —

Number of incomplete requirements Paper 1 — —

Number of inconsistencies Paper 1 — —

Number of inconsistent requirements Paper 1 — —

Number of initial requirements Paper 1 Paper 2 —


Number of missing requirements Paper 1 — —

Number of requirements affected by a change Paper 1 — —

Number of requirements scheduled for each software build or release Paper 1 — —

Number of TBDs (to be determined) in requirements specifications Paper 1 — —

Number of TBDs per unit of time Paper 1 — —

Number of test cases per requirement Paper 1 — —

Cost of change to requirements (time, number of people) Paper 1 Paper 2 Paper 3

Effort expended on requirements management activity Paper 1 Paper 2 —

Kind of documentation Paper 1 — —

Major source of request for a change to requirements Paper 1 — —

Notification of changes shall be documented and distributed as a key communication document

Paper 1 — —

Phase when requirements are baselined Paper 1 — —

Phase where change was requested Paper 1 — —

Reason of change to requirements Paper 1 Paper 2 —

Requirement type for each change to requirements Paper 1 — —

Size of a change to requirements (number of minor, moderate, and major changes) Paper 4 — Paper 4

Total number of changes Paper 4 Paper 6 Papers 4–7

Number of revision Paper 4 Paper 6 Papers 4–5

Size of requirements:

Number of actors Paper 4 Paper 6 Papers 4–7

Number of lines Paper 4 Paper 6 Papers 4–7

Number of words Paper 4 Paper 6 Papers 4–7

Number of use cases Paper 4 Paper 6 Papers 4–6

Number of scenarios Paper 7 Paper 6 Paper 7

Status of each requirement Paper 1 Paper 2 —

Status of software plans, work products, and activities Paper 1 — —

The computer software configuration item(s) affected by a change to requirements Paper 1 — —

Time spent in upgrading Paper 1 — —

Total number of requirements Paper 1 Paper 2 Papers 4–7

Type of change to requirements Paper 1 Paper 2 —

Volatility Papers 6–7 — —


The work presented in this thesis is novel in many ways. A broad set of requirements management measures has not been defined up to now, and process measures are rare in the literature (see Chapter 4). Among the few measures available in the literature, only a small number have been validated, and no prediction models of requirements volatility have been constructed before this work. Furthermore, the proposed measures are very simple. According to people working in industry, measures should be very simple, easy to understand, have a low cost, and their implementation should not be very sophisticated [168]. Apart from the contribution of a large set of measures to the field of requirements management, the thesis studies how to apply some of them objectively, revealing new insights about the requirements management process:

• Size is relevant. As the size increases, the number of changes to requirements increases. This is intuitive, but it has never been shown formally in the case of requirements.

• Subjective estimations of requirements volatility are likely to be inaccurate. Stakeholders are therefore advised to complement the subjective estimations with objective ones.

The results of the two correlational and the two case studies are valid for two different projects. Therefore, it is reasonable to believe that they have at least some external validity. In the next section, the results and their implications are discussed in some detail.

6.2 Have the goals been reached?

As described in Chapter 1, the overall goal has been to improve the management of requirements. More specifically, the goal was to provide practitioners with a set of useful measures that can help in monitoring and tracking requirements. As shown in the previous section and in Table 13, this goal has been reached.

The goal was decomposed into the following subgoals:

• investigate existing measures for managing requirements and their validity;

• provide practitioners with a broad set of requirements management measures that satisfy basic measurement principles;

• show, through empirical validation, that the defined measures are useful to predict volatility and that they are better assessors of volatility than subjective estimations.

As stated in the introduction of this thesis, the goal of this research is not to decrease the number of changes to requirements. This is in fact not always possible, because changing requirements is natural and necessary in software development [246]. The goal is instead to predict the amount of changes in order to be prepared for future changes, especially because the changes tend to come unexpectedly.

The fulfilment of the partial objectives will now be analysed further.

Investigate existing measures for managing requirements and their validity

Investigations of existing measures for managing requirements have been performed in Papers 1, 4, 5, and 6. One conclusive observation is that the vast majority of existing measures are neither theoretically nor empirically validated, except for [9, 47, 130, 182]. Furthermore, as shown in Chapter 4, most of the existing validated measures were defined for the design and coding phases, especially in the field of object-oriented development. This result confirms previous research [106].

Provide practitioners with a broad set of requirements management measures that satisfy basic measurement principles

As can be observed in Table 13, a broad set of requirements management measures is provided to practitioners. The defined measures can be useful to assess and predict volatility, stability, and traceability. Twenty of the defined measures satisfy basic measurement principles and can therefore be readily used in an academic environment.

Show, through empirical validation, that the defined measures are useful to predict volatility and that they are better assessors of volatility than subjective estimations.

The results of four industrial empirical studies show that, in the particular environment:

• size measures are good assessors and predictors (see footnote 11) of the total number of changes (see Table 14);

• the number of lines is a good predictor of volatility;

• size measures are not good assessors or predictors of volatility, except for the number of lines;

• subjective estimations of volatility are neither good assessors of volatility nor good assessors of the number of changes. Instead, size measures were found to be better assessors.

These results indicate that the subgoals have been achieved.

11. In this context, a measure is a good assessor if the measure is highly correlated to the dependent or state variable. A measure is a good predictor of a dependent variable if the prediction model constructed with that measure has a high accuracy.

TABLE 14: Summary of results from the industrial empirical studies (Papers 4–7)

Measure | NCHANGE, Paper 4 (case study) | NCHANGE, Paper 7 (case study) | NCHANGE, Paper 5 (correlational study) | VOLATILITY, Paper 6 (correlational study) | VOLATILITY, Paper 7 (case study)
NLINE | High correlation | High correlation | Good predictor | Good predictor | Low correlation
NWORD | High correlation | High correlation | Good predictor | Predictor | Low correlation
NUC | High correlation | — | Good predictor | Predictor | —
NACTOR | High correlation | High correlation | Good predictor | Predictor | Low correlation
NREVISION | — | — | Predictor | — | —
NSCENARIO | — | Low correlation | — | — | Low correlation
Subjective volatility | Low correlation | Low correlation | — | — | Low correlation


6.3 Limitations, challenges, and open issues

There are some limitations to the results shown in the previous sections:

• Local validity: the results are valid in the context of two different projects in the same company. However, the projects are real and quite different in size and many other aspects; therefore, it is reasonable to believe that the results can be generalised.

• Operational definition of volatility: in this thesis requirements volatility has been measured in several ways (see Section 3.3). The operational measure of volatility in Papers 6 and 7 best expresses the theoretical definition, because it expresses the time factor and allows requirements documents to be compared to each other. However, the operational definition of volatility is still open, because few empirical studies have been performed on how to measure volatility. Furthermore, in this thesis volatility has been measured for single requirements documents, while it is more common to measure the volatility of the whole set of requirements. In fact, up until now, measures of volatility for single requirements documents have not been investigated at all. Studies are needed to investigate the best measures of volatility of single requirements documents.

• Theoretical validation: there is still no standardised way of theoretically validating measures. Several different methods are present in the literature (see Chapter 4). Furthermore, theoretical validation is context dependent and needs to be redone in every new environment. This can be a problem when using the measures in industry, because practitioners are not willing to spend time and money in order to perform a theoretical validation for each measure they plan to use [168].

• Abstraction level of requirements documents: different organisations write requirements at different levels of abstraction, ranging from vague product specifications to very detailed and precise descriptions of all aspects of a system [152]. In this thesis work, requirements specifications were measured at the most detailed level available for the specific project, because the requirements were better defined there. However, it is not certain that this is the ideal level of abstraction to be measured. The most detailed abstraction level is the last to be written before designing the system, and this has the drawback of late measurements. Higher abstraction levels are written earlier; therefore the measurement can start early, but the specifications can contain vague requirements.

6.4 Using the requirements management measures

The measures defined and validated in this thesis can be useful for practitioners for assessing the state of requirements and eventually taking improvement actions. Among the defined measures, stakeholders are advised to choose a subset and validate them in order to be sure that the measures are associated with a certain chosen attribute.

Practitioners are also advised to measure the number of lines of requirements documents, because it was found to be the best predictor of volatility. By predicting the amount of changes, practitioners could be prepared for future changes, especially because changes tend to come unexpectedly.


Requirements stakeholders would be able to better manage requirements, identify risky requirements, and plan for changes to them (since stakeholders are generally not advised to avoid risky requirements altogether). All in all, this would have a positive effect on the whole project planning phase. Project managers would be able to identify the most risky parts of the software development and assign extra resources (time and people) to these parts. They could review plans and prepare alternative plans, thereby minimising project risks caused by volatility. Although the measures in this thesis are early measures, even late processes like software maintenance could benefit from this research. Maintainers would identify the requirements that are prone to change, analyse them carefully, and eventually make the software affected by them more changeable (preventive maintenance).

6.5 Future directions

In general, this research opens up unexplored areas like validation of measures for requirements and prediction of volatility. Besides performing validations of the whole set of requirements measures in different environments, an interesting question to be investigated would be the implications of theoretical validation on empirical validation and vice versa. What happens when measures that have not been proved theoretically valid are used? Furthermore, an investigation of theoretical validation approaches is needed, i.e., given a certain measure, it would be interesting to validate it using several different theoretical approaches and study the differences among the results.

Interesting questions that need to be investigated are the relationships between change requests and the size of changes, for instance by exploring the correlation between the type and the size of a change. It would also be interesting to confront the stakeholders of the case studies in Papers 4 and 7 with the real change data and their estimates, and let them comment on the reasons for the differences between subjective and objective data.

The changes to requirements and their classification by size in Paper 4 need to be investigated further. The way change was defined may be different from the subjects’ views. In fact, it has been observed in practice that people have different views on what constitutes a requirements change. It is often confused with bug reporting or defect fixing. It would also be interesting to investigate how to handle dependencies between changes in different classifications, e.g., what if a minor change affects a major change elsewhere?


References

[1] Abrahão, S., Condori-Fernández, N., Olsina, L., and Pastor, O. 2003. Defining and Validating Metrics for Navigational Models. In Proceedings of the 9th International Symposium on Software Metrics (Sep. 03–05, 2003). IEEE Computer Society, Washington, DC, 200.

[2] Abrahão, S., Poels, G., and Pastor, O. 2004. Evaluating a Functional Size Measurement Method for Web Applications: An Empirical Analysis. In Proceedings of the 10th International Symposium on Software Metrics (Sep. 11–17, 2004). IEEE Computer Society, Washington, DC, 358–369.

[3] Abran, A. and Robillard, P.N. 1996. Function Points Analysis: An Empirical Study of Its Measurement Processes. IEEE Transaction on Software Engineering, 22, 12 (Dec. 1996), 895–910.

[4] Alagar, V.S., Li, Q., and Ormandjieva, O.S. 2001. Assessment of Maintainability in Object-Oriented Software. In Proceedings of the 39th International Conference and Exhibition on Technology of Object-Oriented Languages and Systems (Jul. 29–Aug. 03, 2001). IEEE Computer Society, Washington, DC, 194.

[5] Albrecht, A.J. and Gaffney, J.E. 1983. Software Function, Source Lines of Code, and Development Effort Prediction: A Software Science Validation. IEEE Transaction on Software Engineering, 9, 6 (Nov. 1983), 639–648.

[6] Allen, E.B., Gottipati, S., and Govindarajan, R. 2007. Measuring Size, Complexity, and Coupling of Hypergraph Abstractions of Software: An Information-Theory Approach. Software Quality Journal, 15, 2 (Jun. 2007), 179–212.

[7] de Almeida, M.A., Lounis, H., and Melo, W.L. 1998. An Investigation on the Use of Machine Learned Models for Estimating Correction Costs. In Proceedings of the 20th International Conference on Software Engineering (Kyoto, Japan, Apr. 19–25, 1998). IEEE Computer Society, Washington, DC, 473–476.


[8] Alshayeb, M. and Li, W. 2003. An Empirical Validation of Object-Oriented Metrics in Two Different Iterative Software Processes. IEEE Transaction on Software Engineering, 29, 11 (Nov. 2003), 1043–1049.

[9] Ambriola, V. and Gervasi, V. 2000. Process Metrics for Requirements Analysis. In Proceedings of the 7th European Workshop on Software Process Technology (Feb. 21–25, 2000). R. Conradi, Ed. Lecture Notes in Computer Science, 1780. Springer-Verlag, London, 90–95.

[10] Anton, A.I. 2003. Successful Software Projects Need Requirements Planning. IEEE Software, 20, 3 (May–Jun. 2003), 44–47.

[11] Antoniol, G., Fiutem, R., and Lokan, C. 2003. Object-Oriented Function Points: An Empirical Validation. Empirical Software Engineering, 8, 3 (Sep. 2003), 225–254.

[12] Arisholm, E., Briand, L.C., and Føyen, A. 2004. Dynamic Coupling Measurement for Object-Oriented Software. IEEE Transaction on Software Engineering, 30, 8 (Aug. 2004), 491–506.

[13] Bahli, B. and Rivard, S. 2003. A Validation of Measures Associated with the Risk Factors in Information Technology Outsourcing. In Proceedings of the 36th Annual Hawaii International Conference on System Sciences, 8 (Jan. 06–09, 2003). IEEE Computer Society, Washington, DC, 269.1.

[14] Baker, A.L., Bieman, J.M., Fenton, N., Gustafson, D.A., Melton, A., and Whitty, R. 1990. A Philosophy for Software Measurement. Journal of Systems and Software, 12, 3 (Jul. 1990), 277–281.

[15] Bandi, R.K., Vaishnavi, V.K., and Turk, D.E. 2003. Predicting Maintenance Performance Using Object-Oriented Design Complexity Metrics. IEEE Transaction on Software Engineering, 29, 1 (Jan. 2003), 77–87.

[16] Barnard, J. 1998. A New Reusability Metrics for Object-Oriented Measures. Software Quality Journal, 7, 35–50.

[17] Basili, V.R., Briand, L.C., and Melo, W.L. 1996. A Validation of Object-Oriented Design Metrics as Quality Indicators. IEEE Transaction on Software Engineering, 22, 10 (Oct. 1996), 751–761.

[18] Basili, V.R. 1996. The Role of Experimentation in Software Engineering: Past, Current, and Future. In Proceedings of the 18th International Conference on Software Engineering (Berlin, Germany, Mar. 25–29, 1996). IEEE Computer Society, Washington, DC, 442–449.

[19] Basili, V.R., Selby, R.W., and Hutchens, D.H. 1986. Experimentation in Software Engineering. IEEE Transaction on Software Engineering, 12, 7 (Jul. 1986), 733–743.

[20] Basili, V.R. and Weiss, D.M. 1984. A Methodology for Collecting Valid Software Engineering Data. IEEE Transaction on Software Engineering, 10, 6 (Nov. 1984), 728–738.


[21] Basili, V.R. and Rombach, H.D. 1988. The TAME Project: Towards Improvement-Oriented Software Environments. IEEE Transaction on Software Engineering, 14, 6 (Jun. 1988), 758–773.

[22] Basili, V.R., Shull, F., and Lanubile, F. 1999. Building Knowledge through Families of Experiments. IEEE Transaction on Software Engineering, 25, 4 (Jul. 1999), 456–473.

[23] Baudry, B., Traon, Y.L., and Jézéquel, J. 2001. Robustness and Diagnosability of OO Systems Designed by Contracts. In Proceedings of the 7th International Symposium on Software Metrics (Apr. 04–06, 2001). IEEE Computer Society, Washington, DC, 272.

[24] Baumert, J.H. and McWhinney, M.S. 1992. Software Measures and the Capability Maturity Model. Software Engineering Institute Technical Report, CMU/SEI-92-TR-25, ESCTR-92-0.

[25] Benlarbi, S., El Emam, K., Goel, N., and Rai, S. 2000. Thresholds for Object-Oriented Measures. In Proceedings of the 11th International Symposium on Software Reliability Engineering (Oct. 08–11, 2000). IEEE Computer Society, Washington, DC, 24–37.

[26] Benlarbi, S. and Melo, W.L. 1999. Polymorphism Measures for Early Risk Prediction. In Proceedings of the 21st International Conference on Software Engineering (Los Angeles, California, May 16–22, 1999). IEEE Computer Society Press, Los Alamitos, CA, 334–344.

[27] Bieman, J.M., Jain, D., and Yang, H.J. 2001. OO Design Patterns, Design Structure, and Program Changes: An Industrial Case Study. In Proceedings of the IEEE International Conference on Software Maintenance (Nov. 07–09, 2001). IEEE Computer Society, Washington, DC, 580.

[28] Bieman, J.M., Fenton, N., Gustafson, D., Melton, A., and Ott, L. 1996. Fundamental Issues in Software Measurement. In Software Measurement: Understanding Software Engineering. A. Melton, Editor, International Thomson Publishing (ITP), 39–74.

[29] Binkley, A.B. and Schach, S.R. 1998. Validation of the Coupling Dependency Metric as a Predictor of Run-time Failures and Maintenance Measures. In Proceedings of the 20th International Conference on Software Engineering (Kyoto, Japan, Apr. 19–25, 1998). IEEE Computer Society, Washington, DC, 452–455.

[30] Binkley, A.B. and Schach, S.R. 1996. A Comparison of Sixteen Quality Metrics for Object-Oriented Design. Information Processing Letter, 58, 6 (Jun. 1996), 271–275.

[31] Boehm, B.W. 1981. Software Engineering Economics. 1st ed. Prentice Hall PTR.

[32] Boehm, B.W. and Papaccio, P.N. 1988. Understanding and Controlling Software Costs. IEEE Transaction on Software Engineering, 14, 10 (Oct. 1988), 1462–1477.

[33] Boehm, B.W. and Basili, V.R. 2000. Gaining Intellectual Control of Software Development. Computer, 33, 5 (May 2000), 27–33.


[34] Boehm, B.W., McClean, R.K., and Urfrig, D.B. 1975. Some Experience with Automated Aids to the Design of Large-scale Reliable Software. In Proceedings of the International Conference on Reliable Software (Los Angeles, California, Apr. 21–23, 1975). ACM Press, New York, NY, 105–113.

[35] Bray, I. 2003. Introduction to Requirements Engineering. Addison-Wesley.

[36] Briand, L.C., Arisholm, E., Counsell, S., Houdek, F., and Thévenod-Fosse, P. 1999. Empirical Studies of Object-Oriented Artifacts, Methods, and Processes: State of the Art and Future Directions. Empirical Software Engineering, 4, 4 (Dec. 1999), 387–404.

[37] Briand, L.C., Bunse, C., Daly, J.W., and Differding, C. 1997. An Experimental Comparison of the Maintainability of Object-Oriented and Structured Design Documents. Empirical Software Engineering, 2, 3 (Mar. 1997), 291–312.

[38] Briand, L.C., Bunse, C., and Daly, J.W. 2001. A Controlled Experiment for Evaluating Quality Guidelines on the Maintainability of Object-Oriented Designs. IEEE Transaction on Software Engineering, 27, 6 (Jun. 2001), 513–530.

[39] Briand, L.C., Daly, J.W., Porter, V., and Wüst, J. 1998. Predicting Fault-Prone Classes with Design Measures in Object-Oriented Systems. In Proceedings of the Ninth International Symposium on Software Reliability Engineering (Nov. 04–07, 1998). IEEE Computer Society, Washington, DC, 334.

[40] Briand, L.C., Daly, J.W., and Wüst, J. 1998. A Unified Framework for Cohesion Measurement in Object-Oriented Systems. Empirical Software Engineering, 3, 1 (Jul. 1998), 65–117.

[41] Briand, L.C., Daly, J.W., and Wüst, J. 1999. A Unified Framework for Coupling Measurement in Object-Oriented Systems. IEEE Transaction on Software Engineering, 25, 1 (Jan. 1999), 91–121.

[42] Briand, L.C., Wüst, J., Daly, J.W., and Porter, V. 1998. A Comprehensive Empirical Validation of Design Measures for Object-Oriented Systems. In Proceedings of the 5th International Symposium on Software Metrics (Mar. 20–21, 1998). IEEE Computer Society, Washington, DC, 246–257.

[43] Briand, L.C., Devanbu, P., and Melo, W. 1997. An Investigation into Coupling Measures for C++. In Proceedings of the 19th International Conference on Software Engineering (Boston, Massachusetts, May 17–23, 1997). ACM Press, New York, NY, 412–421.

[44] Briand, L.C., El Emam, K., and Morasca, S. 1995. Theoretical and Empirical Validation of Software Product Measures. Technical Report, Centre de Recherche Informatique de Montréal.

[45] Briand, L.C., El Emam, K., and Morasca, S. 1996. On the Application of Measurement Theory in Software Engineering. Empirical Software Engineering, 1, 1 (Jan. 1996), 61–88.


[46] Briand, L.C., Melo, W.L., and Wüst, J. 2002. Assessing the Applicability of Fault-Proneness Models Across Object-Oriented Software Projects. IEEE Transaction on Software Engineering, 28, 7 (Jul. 2002), 706–720.

[47] Briand, L.C. and Morasca, S. 1997. Software Measurement and Formal Methods: A Case Study Centered on TRIO+ Specifications. In Proceedings of the 1st International Conference on Formal Engineering Methods (Nov. 12–14, 1997). IEEE Computer Society, Washington, DC, 315.

[48] Briand, L.C., Morasca, S., and Basili, V.R. 2002. An Operational Process for Goal-Driven Definition of Measures. IEEE Transaction on Software Engineering, 28, 12 (Dec. 2002), 1106–1125.

[49] Briand, L.C., Morasca, S., and Basili, V.R. 1996. Property-Based Software Engineering Measurement. IEEE Transaction on Software Engineering, 22, 1 (Jan. 1996), 68–86.

[50] Briand, L.C., Morasca, S., and Basili, V.R. 1999. Defining and Validating Measures for Object-Based High-Level Design. IEEE Transaction on Software Engineering, 25, 5 (Sep. 1999), 722–743.

[51] Briand, L.C. and Wüst, J. 2001. The Impact of Design Properties on Development Cost in Object-Oriented Systems. In Proceedings of the 7th International Symposium on Software Metrics (Apr. 04–06, 2001). IEEE Computer Society, Washington, DC, 260–71.

[52] Briand, L.C. and Wüst, J. 2001. Modeling Development Effort in Object-Oriented Systems Using Design Properties. IEEE Transaction on Software Engineering, 27, 11 (Nov. 2001), 963–986.

[53] Briand, L.C. and Wüst, J. 2002. Empirical Studies of Quality Models in Object-Oriented Systems. Chapter in Advances in Computers. Academic Press, Inc., USA, 2002.

[54] Briand, L.C., Wüst, J., Daly, J.W., and Porter, D.V. 2000. Exploring the Relationship between Design Measures and Software Quality in Object-Oriented Systems. Journal of Systems and Software, 51, 3 (May 2000), 245–273.

[55] Briand, L.C., Wüst, J., Ikonomovski, S.V., and Lounis, H. 1999. Investigating Quality Factors in Object-Oriented Designs: An Industrial Case Study. In Proceedings of the 21st International Conference on Software Engineering (Los Angeles, California, May 16–22, 1999). IEEE Computer Society Press, Los Alamitos, CA, 345–354.

[56] Briand, L.C., Wüst, J., and Lounis, H. 1999. Using Coupling Measurement for Impact Analysis in Object-Oriented Systems. In Proceedings of the IEEE International Conference on Software Maintenance (Aug. 30–Sep. 03, 1999). IEEE Computer Society, Washington, DC, 475.

[57] Briand, L.C., Wüst, J., and Lounis, H. 2001. Replicated Case Studies for Investigating Quality Factors in Object-Oriented Designs. Empirical Software Engineering, 6, 1 (Mar. 2001), 11–58.


[58] Brito e Abreu, F. and Melo, W.L. 1996. Evaluating the Impact of Object-Oriented Design on Software Quality. In Proceedings of the Third International Software Metrics Symposium (Mar. 1996).

[59] Calero, C., Piattini, M., and Genero, M. 2001. A Case Study with Relational Database Metrics. In Proceedings of the ACS/IEEE International Conference on Computer Systems and Applications (Jun. 25–29, 2001). IEEE Computer Society, Washington, DC, 485.

[60] Calero, C., Piattini, M., and Genero, M. 2001. Empirical Validation of Referential Integrity Metrics. Information and Software Technology, 43, 15 (Dec. 2001), 949–957.

[61] Calero, C., Piattini, M., Pascual, C., and Serrano, M.A. 2001. Towards Data Warehouse Quality Metrics. In Proceedings of the 3rd Workshop on Design and Management of Data Warehouses, Interlaken, Switzerland (Jun. 2001).

[62] Calero, C., Sahraoui, H.A., and Piattini, M. 2002. An Empirical Study with Metrics for Object-Relational Databases. In Proceedings of the 7th International Conference on Software Quality (Jun. 09–13, 2002). J. Kontio and R. Conradi, Eds. Lecture Notes in Computer Science, 2349. Springer-Verlag, London, 298–309.

[63] Calero, C., Sahraoui, H.A., Piattini, M., and Lounis, H. 2001. Estimating Object-Relational Database Understandability Using Structural Metrics. In Proceedings of the 12th International Conference on Database and Expert Systems Applications (Sep. 03–05, 2001). H.C. Mayr, J. Lazanský, G. Quirchmayr, and P. Vogel, Eds. Lecture Notes in Computer Science, 2113. Springer-Verlag, London, 909–922.

[64] Canfora, G., García, F., Piattini, M., Ruiz, F., and Visaggio, C.A. 2005. A Family of Experiments to Validate Metrics for Software Process Models. Journal of Systems and Software, 77, 2 (Aug. 2005), 113–129.

[65] Cartwright, M. and Shepperd, M. 2000. An Empirical Investigation of an Object-Oriented Software System. IEEE Transaction on Software Engineering, 26, 8 (Aug. 2000), 786–79.

[66] Carver, J., Jaccheri, L., Morasca, S., and Shull, F. 2003. Issues in Using Students in Empirical Studies in Software Engineering Education. In Proceedings of the 9th International Symposium on Software Metrics (Sep. 03–05, 2003). IEEE Computer Society, Washington, DC, 239.

[67] Chen, Y., Boehm, B.W., Madachy, R., and Valerdi, R. 2004. An Empirical Study of eServices Product UML Sizing Metrics. In Proceedings of the 2004 International Symposium on Empirical Software Engineering (Aug. 19–20, 2004). IEEE Computer Society, Washington, DC, 199–206.

[68] Chidamber, S.R., Darcy, D.P., and Kemerer, C.F. 1998. Managerial Use of Metrics for Object-Oriented Software: An Exploratory Analysis. IEEE Transaction on Software Engineering, 24, 8 (Aug. 1998), 629–639.

[69] Chidamber, S.R. and Kemerer, C.F. 1994. A Metrics Suite for Object Oriented Design. IEEE Transaction on Software Engineering, 20, 6 (Jun. 1994), 476–493.


[70] Chidamber, S.R. and Kemerer, C.F. 1991. Towards a Metrics Suite for Object Oriented Design. In Proceedings on Object-Oriented Programming Systems, Languages, and Applications (Phoenix, Arizona, Oct. 06–11, 1991). A. Paepcke, Ed. ACM Press, New York, NY, 197–211.

[71] Chrissis, M.B., Konrad, M., and Shrum, S. 2003. CMMI Guidelines for Process Integration and Product Improvement. Addison-Wesley Longman Publishing Co., Inc.

[72] Christel, M.G. and Kang, K.C. 1992. Issues in Requirements Elicitation. Carnegie Mellon University, Pittsburgh, Technical Report CMU/SEI-92-TR-12 (Sep. 1992).

[73] Costagliola, G., Ferrucci, F., Tortora, G., and Vitiello, G. 2005. Class Point: An Approach for the Size Estimation of Object-Oriented Systems. IEEE Transaction on Software Engineering, 31, 1 (Jan. 2005), 52–74.

[74] Costagliola, G., Ferrucci, F., Tortora, G., and Vitiello, G. 2000. A Metric for the Size Estimation of Object-Oriented Graphical User Interfaces. International Journal of Software Engineering and Knowledge Engineering, 10, 5 (Oct. 2000), 581–603.

[75] Costello, R.J. and Liu, D. 1995. Metrics for Requirements Engineering. In Selected Papers of the Sixth Annual Oregon Workshop on Software Metrics (Silver Falls, Oregon). W. Harrison and R.L. Glass, Eds. Elsevier Science, New York, NY, 39–63.

[76] Dagpinar, M. and Jahnke, J.H. 2003. Predicting Maintainability with Object-Oriented Metrics - An Empirical Comparison. In Proceedings of the 10th Working Conference on Reverse Engineering (Nov. 13–17, 2003). IEEE Computer Society, Washington, DC, 155.

[77] Damian, D., Chisan, J., Vaidyanathasamy, L., and Pal, Y. 2003. An Industrial Case Study of the Impact of Requirements Engineering on Downstream Development. In Proceedings of the International Symposium on Empirical Software Engineering (Sep. 30–Oct. 01, 2003). IEEE Computer Society, Washington, DC, 40.

[78] Davis, A.M. 1993. Software Requirements: Objects, Functions, and States. Prentice-Hall, Inc.

[79] Davis, A.M., Overmyer, S., Jordan, K., Caruso, J., Dandashi, F., Dinh, A., Kincaid, G., Ledeboer, G., Reynolds, P., Sitaram, P., Ta, A., and Theofanos, M. 1993. Identifying and Measuring Quality in a Software Requirements Specification. In Proceedings of the First International Software Metrics Symposium (May 1993), Baltimore, MD, USA, 141–152. ISBN: 0-8186-3740-4.

[80] Davis, A.M. and Leffingwell, D.A. 1999. Making Requirements Management Work for You. CrossTalk - The Journal of Defense Software Engineering (Apr. 1999), 10–13.

[81] De Lucia, A., Pompella, E., and Stefanucci, S. 2005. Assessing Effort Estimation Models for Corrective Maintenance through Empirical Studies. Information and Software Technology, 47, 1 (Jan. 2005), 3–15.

[82] Dekkers, C.A. 2005. Creating Requirements-Based Estimates Before Requirements Are Complete. CrossTalk - The Journal of Defense Software Engineering (Apr. 2005), 13–15.

[83] Devanbu, P., Karstu, S., Melo, W., and Thomas, W. 1996. Analytical and Empirical Evaluation of Software Reuse Metrics. In Proceedings of the 18th International Conference on Software Engineering. IEEE, Berlin, 1996, 189.

[84] Díaz, O., Piattini, M., and Calero, C. 2001. Measuring Triggering-Interaction Complexity on Active Databases. Information Systems, 26, 1 (Mar. 2001), 15–34. Elsevier Science Ltd., Oxford, UK.

[85] Ebert, C. 2006. Understanding the Product Life Cycle: Four Key Requirements Engineering Techniques. IEEE Software, 23, 3 (May 2006), 19–25.

[86] Ebert, C. and De Man, J. 2005. Requirements Uncertainty: Influencing Factors and Concrete Improvements. In Proceedings of the 27th International Conference on Software Engineering (St. Louis, MO, May 15–21, 2005). ACM Press, New York, NY, 553–560.

[87] El Emam, K. 2000. A Methodology for Validating Software Product Metrics. National Research Council of Canada, Ottawa, Ontario, Canada, NCR/ERC-1076 (Jun. 2000).

[88] El Emam, K., Benlarbi, S., Goel, N., and Rai, S.N. 2001. The Confounding Effect of Class Size on the Validity of Object-Oriented Metrics. IEEE Transactions on Software Engineering, 27, 7 (Jul. 2001), 630–650.

[89] El Emam, K., Benlarbi, S., Melo, W., Lounis, H., and Rai, S.N. 2000. The Optimal Class Size for Object-Oriented Software: A Replicated Case Study. NRC/ERB-1074 (Mar. 2000), NRC 43653, National Research Council of Canada.

[90] El Emam, K., Benlarbi, S., Goel, N., and Rai, S. 1999. A Validation of Object-Oriented Metrics. Technical Report ERB-1063, NRC, 1999.

[91] El Emam, K. and Birk, A. 2000. Validating the ISO/IEC 15504 Measure of Software Requirements Analysis Process Capability. IEEE Transactions on Software Engineering, 26, 6 (Jun. 2000), 541–566.

[92] El Emam, K. and Hoeltje, D. 1997. Qualitative Analysis of a Requirements Change Process. Empirical Software Engineering, 2, 2 (Feb. 1997), 143–152.

[93] El Emam, K., Melo, W., and Machado, J.C. 2001. The Prediction of Faulty Classes Using Object-Oriented Design Metrics. Journal of Systems and Software, 56, 1 (Feb. 2001), 63–75.

[94] El Emam, K., Quintin, S., and Madhavji, N.H. 1996. User Participation in the Requirements Engineering Process: An Empirical Study. Requirements Engineering Journal, Springer-Verlag, 1, 1 (Mar. 1996), 4–26.

[95] Etzkorn, L., Davis, C., and Li, W. 1998. A Practical Look at the Lack of Cohesion in Methods Metric. JOOP (Sep. 1998), 27–34.

[96] Fenton, N. 1994. Software Measurement: A Necessary Scientific Basis. IEEE Transactions on Software Engineering, 20, 3 (Mar. 1994), 199–206.

[97] Fenton, N., Pfleeger, S.L., and Glass, R.L. 1994. Science and Substance: A Challenge to Software Engineers. IEEE Software, 11, 4 (Jul. 1994), 86–95.

[98] Fenton, N. and Pfleeger, S.L. 1997. Software Metrics (2nd Ed.): A Rigorous and Practical Approach. PWS Publishing Co.

[99] Fernández, L. and Dolado, J.J. 1999. Measurement and Prediction of the Verification Cost of the Design in a Formalized Methodology. Information and Software Technology, 41, 7 (May 15, 1999), 421–434. Elsevier Science.

[100] Ferneley, E.H. 1999. Design Metrics as an Aid to Software Maintenance: An Empirical Study. Journal of Software Maintenance, 11, 1 (Jan. 1999), 55–72.

[101] Fioravanti, F. and Nesi, P. 2001. A Study on Fault-Proneness Detection of Object-Oriented Systems. In Proceedings of the Fifth European Conference on Software Maintenance and Reengineering (Mar. 14–16, 2001). IEEE Computer Society, Washington, DC, 121.

[102] Fioravanti, F. 2001. A Metric Framework for the Assessment of Object-Oriented Systems. In Proceedings of the IEEE International Conference on Software Maintenance (Nov. 07–09, 2001). IEEE Computer Society, Washington, DC, 557.

[103] Futrell, R.T., Shafer, L.I., and Shafer, D.F. 2001. Quality Software Project Management. Prentice Hall PTR, Upper Saddle River, NJ, USA.

[104] Garcia, F., Bertoa, M.F., Calero, C., Vallecillo, A., Ruiz, F., Piattini, M., and Genero, M. 2006. Towards a Consistent Terminology for Software Measurement. Information and Software Technology, 48, 8 (Aug. 2006), 631–644.

[105] Garcia, F., Piattini, M., Ruiz, F., and Visaggio, C.A. 2005. Maintainability of Software Process Models: An Empirical Study. In Proceedings of the Ninth European Conference on Software Maintenance and Reengineering (Mar. 21–23, 2005). IEEE Computer Society, Washington, DC, 246–255.

[106] Genero, M. 2002. Defining and Validating Metrics for Conceptual Models. Ph.D. Thesis, University of Castilla-La Mancha, 2002.

[107] Genero, M., Piattini, M., and Calero, C. 2001. Assurance of Conceptual Data Model Quality Based on Early Measures. In Proceedings of the Second Asia-Pacific Conference on Quality Software (Dec. 10–11, 2001). IEEE Computer Society, Washington, DC, 97.

[108] Genero, M., Piattini, M., and Calero, C. 2002. Empirical Validation of Class Diagram Metrics. In Proceedings of the 2002 International Symposium on Empirical Software Engineering (Oct. 03–04, 2002). IEEE Computer Society, Washington, DC, 195–203.

[109] Genero, M., Piattini, M., and Calero, C. 2005. A Survey of Metrics for UML Class Diagrams. Journal of Object Technology, 4, 9 (Nov.–Dec. 2005), 59–92.

[110] Genero, M., Piattini, M., and Calero, C. (Eds.) 2004. Metrics for Software Conceptual Models. Imperial College Press, UK.

[111] Genero, M., Poels, G., and Piattini, M. 2002. Defining and Validating Measures for Conceptual Data Model Quality. In Proceedings of the 14th International Conference on Advanced Information Systems Engineering (May 27–31, 2002). A.B. Pidduck, J. Mylopoulos, C.C. Woo, and M.T. Özsu, Eds. Lecture Notes in Computer Science, 2348. Springer-Verlag, London, 724–727.

[112] Genero, M., Piattini, M., and Manso, E. 2004. Finding "Early" Indicators of UML Class Diagrams Understandability and Modifiability. In Proceedings of the 2004 International Symposium on Empirical Software Engineering (Aug. 19–20, 2004). IEEE Computer Society, Washington, DC, 207–216.

[113] Genero, M., Piattini, M., and Jimenez, L. 2001. Empirical Validation of Class Diagram Complexity Metrics. In Proceedings of the XXI International Conference of the Chilean Computer Science Society. IEEE Computer Society Press, Chile, 2001.

[114] Genero, M. and Piattini, M. 2001. Empirical Validation of Measures for Class Diagram Structural Complexity through Controlled Experiments. 5th International ECOOP Workshop on Quantitative Approaches in Object-Oriented Software Engineering (Budapest, Hungary, Jun. 2001).

[115] Genero, M., Piattini, M., and Calero, C. 2002. An Empirical Study to Validate Metrics for Class Diagrams. In Proceedings of the International Database Engineering and Applications Symposium (Edmonton, Canada, Jul. 17–19, 2002).

[116] Genero, M., Piattini, M., Manso, E., and Cantone, G. 2003. Building UML Class Diagram Maintainability Prediction Models Based on Early Metrics. In Proceedings of the 9th International Symposium on Software Metrics (Sep. 03–05, 2003). IEEE Computer Society, Washington, DC, 263.

[117] Glasberg, D., El Emam, K., Melo, W., and Madhavji, N. 2000. Validating Object-Oriented Design Metrics on a Commercial Java Application. TR ERB-1080, NRC (Sep. 2000).

[118] Glass, R.L., Vessey, I., and Ramesh, V. 2002. Research in Software Engineering: An Analysis of the Literature. Information and Software Technology, 44, 491–506.

[119] Gopal, A., Mukhopadhyay, T., and Krishnan, M.S. 2002. The Role of Software Processes and Communication in Offshore Software Development. Communications of the ACM, 45, 4 (Apr. 2002), 193–200.

[120] Gursaran, 2001. Viewpoint Representation Validation: A Case Study on Two Metrics from the Chidamber and Kemerer Suite. Journal of Systems and Software, 59, 1 (Oct. 2001), 83–97. Elsevier Science Inc., New York, NY.

[121] Gursaran and Roy, G. 2001. On the Applicability of Weyuker Property 9 to Object-Oriented Structural Inheritance Complexity Metrics. IEEE Transactions on Software Engineering, 27, 4 (Apr. 2001), 381–384.

[122] Gyimothy, T., Ferenc, R., and Siket, I. 2005. Empirical Validation of Object-Oriented Metrics on Open Source Software for Fault Prediction. IEEE Transactions on Software Engineering, 31, 10 (Oct. 2005), 897–910.

[123] Hall, G.A., Tao, W., and Munson, J.C. 2005. Measurement and Validation of Module Coupling Attributes. Software Quality Journal, 13, 3 (Sep. 2005), 281–296.

[124] Harrison, R., Badoo, N., Barry, E., Biffl, S., Parra, A., Winter, B., and Wuest, J. 1999. Directions and Methodologies for Empirical Software Engineering Research. Empirical Software Engineering, 4, 4 (Dec. 1999), 405–410.

[125] Harrison, R. and Counsell, S.J. 1998. The Role of Inheritance in the Maintainability of Object-Oriented Systems. In Proceedings of ESCOM '98, 449–457.

[126] Harrison, R., Counsell, S.J., and Nithi, R.V. 2000. Experimental Assessment of the Effect of Inheritance on the Maintainability of Object-Oriented Systems. Journal of Systems and Software, 52, 2–3 (Jun. 2000), 173–179.

[127] Harrison, R., Counsell, S.J., and Nithi, R.V. 1998. An Evaluation of the MOOD Set of Object-Oriented Software Metrics. IEEE Transactions on Software Engineering, 24, 6 (Jun. 1998), 491–496.

[128] Harrison, R., Counsell, S.J., and Nithi, R.V. 1998. Coupling Metrics for Object-Oriented Design. In Proceedings of the 5th International Symposium on Software Metrics (Mar. 20–21, 1998). IEEE Computer Society, Washington, DC, 150.

[129] Harrison, R., Samaraweera, L.G., Dobie, M.R., and Lewis, P.H. 1996. An Evaluation of Code Metrics for Object-Oriented Programs. Information and Software Technology, 38, 7 (Jul. 1996), 443–450. Elsevier Science.

[130] Hastings, T.E. and Sajeev, A.S. 2001. A Vector-Based Approach to Software Size Measurement and Effort Estimation. IEEE Transactions on Software Engineering, 27, 4 (Apr. 2001), 337–350.

[131] Heales, J. 2000. Factors Affecting Information System Volatility. In Proceedings of the Twenty-First International Conference on Information Systems (Brisbane, Queensland, Australia). Association for Information Systems, Atlanta, GA, 70–83.

[132] Hitz, M. and Montazeri, B. 1996. Chidamber and Kemerer's Metrics Suite: A Measurement Theory Perspective. IEEE Transactions on Software Engineering, 22, 4 (Apr. 1996), 267–271.

[133] Hsia, P., Davis, A., and Kung, D. 1993. Status Report: Requirements Engineering. IEEE Software, 10, 6 (Nov. 1993), 75–79.

[134] Hyatt, L.E. and Rosenberg, L.H. 1996. A Software Quality Model and Metrics for Identifying Project Risks and Assessing Software Quality. In ESA 1996 Product Assurance Symposium and Software Product Assurance Workshop, ESTEC, Noordwijk, The Netherlands. European Space Agency, 209–212.

[135] Höst, M., Regnell, B., and Wohlin, C. 2000. Using Students as Subjects—A Comparative Study of Students and Professionals in Lead-Time Impact Assessment. Empirical Software Engineering, 5, 3 (Nov. 2000), 201–214.

[136] Idri, A. and Abran, A. 2001. A Fuzzy Logic Based Set of Measures for Software Project Similarity: Validation and Possible Improvements. In Proceedings of the 7th International Symposium on Software Metrics (Apr. 04–06, 2001). IEEE Computer Society, Washington, DC, 85.

[137] Idri, A., Abran, A., and Khoshgoftaar, T.M. 2002. Estimating Software Project Effort by Analogy Based on Linguistic Values. In Proceedings of the 8th International Symposium on Software Metrics (Jun. 04–07, 2002). IEEE Computer Society, Washington, DC, 21.

[138] Ivory, M.Y., Sinha, R.R., and Hearst, M.A. 2001. Empirically Validated Web Page Design Metrics. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Seattle, Washington). ACM Press, New York, NY, 53–60.

[139] Javed, T., Maqsood, M.E., and Durrani, Q.S. 2004. A Study to Investigate the Impact of Requirements Instability on Software Defects. SIGSOFT Software Engineering Notes, 29, 3 (May 2004), 1–7.

[140] Jay, F. and Mayer, R. 1990. IEEE Standard Glossary of Software Engineering Terminology. IEEE Std. 610.12-1990.

[141] Jung, H.W., Lim, Y., and Chung, C.S. 2004. Modeling Change Requests Due to Faults in a Large-scale Telecommunication System. The Journal of Systems and Software, 72, 2 (Jul. 2004), 235–247. Elsevier.

[142] Juristo, N. and Moreno, A.M. 2001. Basics of Software Engineering Experimentation. Kluwer Academic Publishers, Boston, 2001.

[143] Jørgensen, M. and Sjøberg, D. 2004. Generalization and Theory-Building in Software Engineering Research. 8th International Conference on Empirical Assessment in Software Engineering, Edinburgh, Scotland, UK (May 24–25, 2004), 29–35.

[144] Jørgensen, M. 1995. Experience With the Accuracy of Software Maintenance Task Effort Prediction Models. IEEE Transactions on Software Engineering, 21, 8 (Aug. 1995), 674–681.

[145] Kitchenham, B.A., Pfleeger, S.L., Pickard, L.M., Jones, P.W., Hoaglin, D.C., Emam, K.E., and Rosenberg, J. 2002. Preliminary Guidelines for Empirical Research in Software Engineering. IEEE Transactions on Software Engineering, 28, 8 (Aug. 2002), 721–734.

[146] Kitchenham, B.A., Pfleeger, S.L., and Fenton, N. 1995. Towards a Framework for Software Measurement Validation. IEEE Transactions on Software Engineering, 21, 12 (Dec. 1995), 929–944.

[147] Kitchenham, B.A. 1996–2002. Evaluating Software Engineering Methods and Tools, Parts 1 to 12. ACM SIGSOFT Software Engineering Notes.

[148] Kitchenham, B.A., Pfleeger, S.L., and Fenton, N. 1997. Reply to: Comments on "Towards a Framework for Software Measurement Validation". IEEE Transactions on Software Engineering, 23, 3 (Mar. 1997), 189.

[149] Khoshgoftaar, T.M. and Allen, E.B. 1998. Classification of Fault-Prone Software Modules: Prior Probabilities, Costs, and Model Evaluation. Empirical Software Engineering, 3, 3 (Sep. 1998), 275–298.

[150] Klir, G.J. and Elias, D. 2002. Architecture of Systems Problem Solving. 2nd Ed. Da Capo Press, Incorporated.

[151] Koru, A.G. and Liu, H. 2005. An Investigation of the Effect of Module Size on Defect Prediction Using Static Measures. In Proceedings of the 2005 Workshop on Predictor Models in Software Engineering (St. Louis, Missouri, May 15, 2005). ACM Press, New York, NY, 1–5.

[152] Kotonya, G. and Sommerville, I. 2002. Requirements Engineering: Processes and Techniques. John Wiley & Sons, Chichester, UK, 2002.

[153] Lakshmanan, K.B., Jayaprakash, S., and Sinha, P.K. 1991. Properties of Control-Flow Complexity Measures. IEEE Transactions on Software Engineering, 17, 12 (Dec. 1991), 1289–1295.

[154] Lam, W. and Shankararaman, V. 1998. Managing Change in Software Development Using a Process Improvement Approach. In Proceedings of the 24th Euromicro Conference, 2 (Aug. 25–27, 1998). IEEE Computer Society, Washington, DC, 779–786.

[155] Lam, W., Loomes, M., and Shankararaman, V. 1999. Managing Requirements Change Using Metrics and Action Planning. In Proceedings of the Third European Conference on Software Maintenance and Reengineering (Mar. 03–05, 1999). IEEE Computer Society, Washington, DC, 122.

[156] Lam, W. and Shankararaman, V. 1999. Requirements Change: A Dissection of Management Issues. In Proceedings of the 25th Euromicro Conference, 2 (Sep. 1999), 244–251.

[157] van Lamsweerde, A. 2000. Requirements Engineering in the Year 00: A Research Perspective. In Proceedings of the 22nd International Conference on Software Engineering (Limerick, Ireland, Jun. 04–11, 2000). ACM Press, New York, NY, 5–19.

[158] Laranjeira, L.A. 1990. Software Size Estimation of Object-Oriented Systems. IEEE Transactions on Software Engineering, 16, 5 (May 1990), 510–522.

[159] Lauesen, S. 2002. Software Requirements - Styles and Techniques. Addison-Wesley, London.

[160] Li, W. and Henry, S. 1993. Object-Oriented Metrics that Predict Maintainability. Journal of Systems and Software, 23, 2 (Nov. 1993), 111–122.

[161] Li, W., Henry, S., Kafura, D., and Schulman, R. 1995. Measuring Object-Oriented Design. JOOP, 8, 4 (Jul./Aug. 1995), 48–55.

[162] Loconsole, A. 2002. Non-Empirical Validation of Requirements Management Measures. In Proceedings of WoSQ - Workshop on Software Quality, ICSE, Orlando, FL (May 24, 2002), 40–42.

[163] Loconsole, A. and Börstler, J. 2003. Theoretical Validation and Case Study of Requirements Management Measures. Department of Computing Science, Umeå University, Technical Report UMINF 03.02 (Jul. 2003).

[164] Loconsole, A. and Börstler, J. 2005. An Industrial Case Study on Requirements Volatility Measures. In Proceedings of the 12th Asia-Pacific Software Engineering Conference (Dec. 15–17, 2005). IEEE Computer Society, Washington, DC, 249–256.

[165] Loconsole, A. and Börstler, J. 2007. Construction and Validation of Prediction Models for Changes to Requirements. Department of Computing Science, Umeå University, Technical Report UMINF 2007-03 (Mar. 2007).

[166] Loconsole, A. and Börstler, J. 2007. A Correlational Study on Four Size Measures as Predictors of Requirements Volatility. Submitted to the Journal of Software Measurement (Dec. 2007).

[167] Loconsole, A. and Börstler, J. 2007. Are Size Measures Better Than Expert Opinion? An Industrial Case Study on Requirements Volatility. In Proceedings of the 14th Asia-Pacific Software Engineering Conference (Dec. 5–7, 2007). IEEE Computer Society, Washington, DC, 238–245.

[168] Loconsole, A., Rodriguez, D., Börstler, J., and Harrison, R. 2001. Report on Metrics 2001: The Science & Practice of Software Metrics Conference. SIGSOFT Software Engineering Notes, 26, 6 (Nov. 2001), 52–57.

[169] Lormans, M., van Dijk, H., van Deursen, A., Nocker, E., and de Zeeuw, A. 2004. Managing Evolving Requirements in an Outsourcing Context: An Industrial Experience Report. In Proceedings of the 7th International Workshop on Principles of Software Evolution (Sep. 06–07, 2004). IEEE Computer Society, Washington, DC, 149–158.

[170] Lott, C.M. 1996. Measurement-Based Feedback in a Process-Centered Software Engineering Environment. Ph.D. Thesis, Internal Report 283/96, Kaiserslautern (May 1996).

[171] MacDonell, S. 1997. Metrics for Database Systems: An Empirical Study. In Proceedings of the 4th International Symposium on Software Metrics (Nov. 05–07, 1997). IEEE Computer Society, Washington, DC, 99–107.

[172] Malaiya, Y.K. and Denton, J. 1999. Requirements Volatility and Defect Density. In Proceedings of the 10th International Symposium on Software Reliability Engineering (Nov. 01–04, 1999). IEEE Computer Society, Washington, DC, 285–294.

[173] Manso, M.E., Genero, M., and Piattini, M. 2003. No-redundant Metrics for UML Class Diagram Structural Complexity. CAiSE 2003, Lecture Notes in Computer Science, 2681/2003. Springer, Berlin/Heidelberg, 127–142.

[174] McCabe, T.J. 1976. A Complexity Measure. In Proceedings of the 2nd International Conference on Software Engineering (San Francisco, California, Oct. 13–15, 1976). IEEE Computer Society Press, Los Alamitos, CA, 407.

[175] McEneaney, J.E. 2001. Graphic and Numerical Methods to Assess Navigation in Hypertext. International Journal of Human-Computer Studies, 55, 5 (Nov. 2001), 761–786. Academic Press, Inc., Duluth, MN, USA.

[176] Melton, A.C., Gustafson, D.A., Bieman, J.M., and Baker, A.L. 1990. A Mathematical Perspective for Software Measures Research. Software Engineering Journal, 5, 5 (Sep. 1990), 246–254.

[177] Mendes, E. 2000. Investigating Metrics for a Development Effort Prediction Model of Web Applications. Australian Software Engineering Conference. IEEE Computer Society, Washington, DC, 31.

[178] Mendes, E., Hall, W., and Harrison, R. 1998. Applying Metrics to the Evaluation of Educational Hypermedia Applications. Journal of Universal Computer Science, 4, 4 (Apr. 1998), 382–403.

[179] Miranda, D., Genero, M., and Piattini, M. 2003. Empirical Validation of Metrics for UML Statechart Diagrams. Fifth International Conference on Enterprise Information Systems, 1, 87–95.

[180] Mišic, V.B. and Tešic, D.N. 1998. Estimation of Effort and Complexity: An Object-Oriented Case Study. Journal of Systems and Software, 41, 2 (May 1998), 133–143.

[181] Mohagheghi, P. and Conradi, R. 2004. An Empirical Study of Software Change: Origin, Acceptance Rate, and Functionality vs. Quality Attributes. In Proceedings of the 2004 International Symposium on Empirical Software Engineering (Aug. 19–20, 2004). IEEE Computer Society, Washington, DC, 7–16.

[182] Morasca, S. 1999. Measuring Attributes of Concurrent Software Specifications in Petri Nets. In Proceedings of the 6th International Symposium on Software Metrics (Nov. 04–06, 1999). IEEE Computer Society, Washington, DC, 100.

[183] Morasca, S. 2001. Software Measurements. In Handbook of Software Engineering and Knowledge Engineering, 1. World Scientific Publishing Co. Pte. Ltd.

[184] Morasca, S., Briand, L.C., Weyuker, E.J., Basili, V.R., and Zelkowitz, M.V. 1997. Comments on "Towards a Framework for Software Measurement Validation". IEEE Transactions on Software Engineering, 23, 3 (Mar. 1997), 187–188.

[185] Morasca, S. and Russo, G. 2001. An Empirical Study of Software Productivity. In Proceedings of the 25th International Computer Software and Applications Conference on Invigorating Software Development (Oct. 08–12, 2001). IEEE Computer Society, Washington, DC, 317–322.

[186] Moser, S., Henderson-Sellers, B., and Mišic, V.B. 1999. Cost Estimation Based on Business Models. Journal of Systems and Software, 49, 1 (Dec. 1999), 33–42.

[187] Moses, J., Farrow, M., and Smith, P. 2002. Statistical Methods for Predicting and Improving Cohesion Using Information Flow: An Empirical Study. Software Quality Journal, 10, 1 (Jul. 2002), 11–46.

[188] Nagappan, N. 2004. Toward a Software Testing and Reliability Early Warning Metric Suite. In Proceedings of the 26th International Conference on Software Engineering (May 23–28, 2004). IEEE Computer Society, Washington, DC, 60–62.

[189] Nesi, P. and Querci, T. 1998. Effort Estimation and Prediction of Object-Oriented Systems. Journal of Systems and Software, 42, 1 (Jul. 1998), 89–102.

[190] Nidumolu, S. 1996. Standardization, Requirements Uncertainty and Software Project Performance. Information & Management, 31, 3 (Dec. 1996), 135–150.

[191] Nikora, A.P. and Munson, J.C. 2005. An Approach to the Measurement of Software Evolution. Journal of Software Maintenance and Evolution, 17, 1 (Jan. 2005), 65–91.

[192] Nurmuliani, N., Zowghi, D., and Williams, S.P. 2004. Using Card Sorting Technique to Classify Requirements Change. In Proceedings of the 12th IEEE International Requirements Engineering Conference (Sep. 06–10, 2004). IEEE Computer Society, Washington, DC, 240–248.

[193] Nurmuliani, N., Zowghi, D., and Fowell, S. 2004. Analysis of Requirements Volatility During Software Development Life Cycle. In Proceedings of the 2004 Australian Software Engineering Conference (Apr. 13–16, 2004). IEEE Computer Society, Washington, DC, 28.

[194] Offen, R.J. and Jeffery, R. 1997. Establishing Software Measurement Programs. IEEE Software, 14, 2 (Mar. 1997), 45–53.

[195] Olsson, T. and Runeson, P. 2001. V-GQM: A Feed-Back Approach to Validation of a GQM Study. In Proceedings of the 7th International Symposium on Software Metrics (Apr. 04–06, 2001). IEEE Computer Society, Washington, DC, 236.

[196] Orme, A.M., Yao, H., and Etzkorn, L.H. 2006. Coupling Metrics for Ontology-Based Systems. IEEE Software, 23, 2 (Mar. 2006), 102–108.

[197] Ott, L.M., Kinnula, A., Seaman, C., and Wohlin, C. 1999. The Role of Empirical Studies in Process Improvement. Empirical Software Engineering, 4, 4 (Dec. 1999), 381–386.

[198] Paulk, M.C., Curtis, B., Chrissis, M.B., and Weber, C.V. 1993. Capability Maturity Model for Software Version 1.1. Software Engineering Institute Technical Report, CMU/SEI-93-TR-24, ESC-TR-93-177, Pittsburgh, PA.

[199] Perry, D.E., Porter, A.A., and Votta, L.G. 2000. Empirical Studies of Software Engineering: A Roadmap. In Proceedings of the Conference on the Future of Software Engineering (Limerick, Ireland, Jun. 04–11, 2000). ACM Press, New York, NY, 345–355.

[200] Pfleeger, S.L., Jeffery, R., Curtis, B., and Kitchenham, B. 1997. Status Report on Software Measurement. IEEE Software, 14, 2 (Mar. 1997), 33–43.

[201] Pfleeger, S.L. 1999. Albert Einstein and Empirical Software Engineering. Computer, 32, 10 (Oct. 1999), 32–38.

[202] Pfleeger, S.L. 2005. Soup or Art? The Role of Evidential Force in Empirical Software Engineering. IEEE Software, 22, 1 (Jan. 2005), 66–73.

[203] Pfleeger, S.L. and Atlee, J.M. 2006. Software Engineering: Theory and Practice. 3rd Ed. Prentice-Hall, Inc.

[204] Phalp, K., Counsell, S., Chen, L., and Shepperd, M. 1997. Using Counts as Heuristics for the Analysis of Static Models. ICSE Workshop on Process Modeling and Empirical Studies of Software Engineering.

[205] Piattini, M., Genero, M., and Calero, C. 2002. Data Model Metrics. In Handbook of Software Engineering & Knowledge Engineering, 2, Emerging Technologies. Edited by S.K. Chang. World Scientific.

[206] Piattini, M., Genero, M., and Jiménez, L. 2001. A Metric-based Approach for Predicting Conceptual Data Models Maintainability. International Journal of Software Engineering and Knowledge Engineering, 11, 6 (Dec. 2001), 703–729.

[207] Piattini, M., Calero, C., and Genero, M. 2001. Table Oriented Metrics for Relational Databases. Software Quality Journal, 9, 2 (Jun. 2001), 79–97.

[208] Poels, G. and Dedene, G. 2000. Distance-based Software Measurement: Necessary and Sufficient Properties for Software Measures. Information and Software Technology, 42, 1 (Jan. 2000), 35–46.

[209] Pons, C., Prieto, M., and Olsina, L. 2000. A Formal Mechanism for Assessing Polymorphism in Object-Oriented Systems. In Proceedings of the First Asia-Pacific Conference on Quality Software (Oct. 30–31, 2000). IEEE Computer Society, Washington, DC, 53.

[210] Port, D. and Klappholz, D. 2004. Empirical Research in the Software Engineering Classroom. In Proceedings of the 17th Conference on Software Engineering Education and Training (Mar. 01–03, 2004). IEEE Computer Society, Washington, DC, 132–137.

[211] Pour, G., Griss, M.L., and Lutz, M. 2000. The Push to Make Software Engineering Respectable. Computer, 33, 5 (May 2000), 35–43.

[212] Procaccino, J.D., Verner, J.M., Shelfer, K.M., and Gefen, D. 2005. What do Software Practitioners Really Think About Project Success: An Exploratory Study. Journal of Systems and Software, 78, 2 (Nov. 2005), 194–203.

[213] Ragu-Nathan, B., Ragu-Nathan, T.S., Tu, Q., and Shi, Z. 2001. Information Management (IM) Strategy: The Construct and its Measurement. Journal of Strategic Information Systems, 10, 4, 265–289.

[214] Raynus, J. 1999. Software Process Improvement with CMM. Artech House Publishers, Boston.

[215] Rajaraman, C. and Lyu, M. 1992. Reliability and Maintainability Related Software Coupling Metrics in C++ Programs. In Proceedings of the 3rd IEEE International Symposium on Software Reliability Engineering (Oct. 7–10, 1992). IEEE Computer Society, Washington, DC, 303–311.

[216] Rauterberg, M. 1996. An Empirical Validation of Four Different Measures to Quantify User Interface Characteristics Based on a General Descriptive Concept for Interaction Points. In Proceedings of the IEEE Symposium and Workshop on Engineering of Computer Based Systems (Mar. 11–15, 1996). IEEE Computer Society, Washington, DC, 435.

[217] Rauterberg, M. 1995. Usability Evaluation: An Empirical Validation of Different Measures to Quantify Interface Attributes. In T. Sheridan (Ed.), Analysis, Design and Evaluation of Man-Machine Systems, 2. Pergamon, Oxford, 467–472.

[218] Reynoso, L., Genero, M., Piattini, M., and Manso, E. 2005. Assessing the Impact of Coupling on the Understandability and Modifiability of OCL Expressions within UML/OCL Combined Models. In Proceedings of the 11th IEEE International Software Metrics Symposium (Sep. 2005). IEEE Computer Society, Washington, DC, 14.

[219] Rising, L. and Janoff, N.S. 2000. The Scrum Software Development Process for Small Teams. IEEE Software, 17, 4 (Jul. 2000), 26–32.

[220] Robertson, J. and Robertson, S. 2000. Requirements Management: A Cinderella Story. Requirements Engineering Journal, 5, 2 (Sep. 2000), 134–136.

[221] Rosenberg, L.H. and Hyatt, L. 1996. Developing an Effective Metrics Program. European Space Agency Software Assurance Symposium, The Netherlands (Mar. 1996).

[222] Rossi, P. and Fernandez, G. 2003. Definition and Validation of Design Metrics for Distributed Applications. In Proceedings of the 9th International Symposium on Software Metrics (Sep. 03–05, 2003). IEEE Computer Society, Washington, DC, 124.

[223] Ruhe, M., Jeffery, R., and Wieczorek, I. 2003. Using Web Objects for Estimating Software Development Effort for Web Applications. In Proceedings of the 9th International Symposium on Software Metrics (Sep. 03–05, 2003). IEEE Computer Society, Washington, DC, 30.

[224] Saward, G., Hall, T., and Barker, T. 2004. Assessing Usability through Perceptions of Information Scent. In Proceedings of the 10th International Symposium on Software Metrics (Sep. 11–17, 2004). IEEE Computer Society, Washington, DC, 337–346.

[225] Schneider, G. and Winters, J.P. 1998. Applying Use Cases: A Practical Guide. Addison-Wesley Longman Publishing Co., Inc.

[226] Schneidewind, N.F. 2002. Body of Knowledge for Software Quality Measurement. Computer, 35, 2 (Feb. 2002), 77–83.

[227] Schneidewind, N.F. 1992. Methodology for Validating Software Metrics. IEEE Transactions on Software Engineering, 18, 5 (May 1992), 410–422.

[228] Serrano, M., Calero, C., and Piattini, M. 2003. Experimental Validation of Multidimensional Data Models Metrics. In Proceedings of the 36th Annual Hawaii International Conference on System Sciences, 9 (Jan. 06–09, 2003). IEEE Computer Society, Washington, DC, 327.2.

[229] Sharma, N., Joshi, P., and Joshi, R.K. 2006. Applicability of Weyuker's Property 9 to Object Oriented Metrics. IEEE Transactions on Software Engineering, 32, 3 (Mar. 2006), 209–211.

[230] Shaw, M. 1990. Prospects for an Engineering Discipline of Software. IEEE Software, 7, 6 (Nov. 1990), 15–24.

[231] Sjøberg, D.I.K., Hannay, J.E., Hansen, O., By Kampenes, V., Karahasanovic, A., Liborg, N., and Rekdal, A.C. 2005. A Survey of Controlled Experiments in Software Engineering. IEEE Transactions on Software Engineering, 31, 9 (Sep. 2005), 733–753.

[232] van Solingen, R. and Berghout, E. 1999. The Goal/Question/Metric Method: A Practical Guide for Quality Improvement of Software Development. McGraw-Hill, Cambridge, UK.

[233] Sommerville, I. and Ransom, J. 2005. An Empirical Study of Industrial Requirements Engineering Process Assessment and Improvement. ACM Transactions on Software Engineering and Methodology, 14, 1 (Jan. 2005), 85–117.

[234] Sommerville, I. and Sawyer, P. 2000. Requirements Engineering: A Good Practice Guide. John Wiley and Sons.

[235] Stark, G.E. 1996. Measurements for Managing Software Maintenance. In Proceedings of the 1996 International Conference on Software Maintenance (Nov. 04–08, 1996). IEEE Computer Society, Washington, DC, 152.

[236] Stark, G.E., Oman, P., Skillicorn, A., and Ameele, A. 1999. An Examination of the Effects of Requirements Changes on Software Maintenance Releases. Journal of Software Maintenance, 11, 5 (Sep. 1999), 293–309.

[237] Stensrud, E., Foss, T., Kitchenham, B., and Myrtveit, I. 2002. An Empirical Validation of the Relationship Between the Magnitude of Relative Error and Project Size. In Proceedings of the 8th International Symposium on Software Metrics (Jun. 04–07, 2002). IEEE Computer Society, Washington, DC, 3.

[238] Subramanyam, R. and Krishnan, M.S. 2003. Empirical Analysis of CK Metrics for Object-Oriented Design Complexity: Implications for Software Defects. IEEE Transactions on Software Engineering, 29, 4 (Apr. 2003), 297–310.

[239] Symons, C.R. 1988. Function Point Analysis: Difficulties and Improvements. IEEE Transactions on Software Engineering, 14, 1 (Jan. 1988), 2–11.

[240] Tang, M., Kao, M., and Chen, M. 1999. An Empirical Study on Object-Oriented Metrics. In Proceedings of the 6th International Symposium on Software Metrics (Nov. 04–06, 1999). IEEE Computer Society, Washington, DC, 242.

[241] The Standish Group. 1994. The CHAOS Report. Dennis, MA: The Standish Group.

[242] Tichy, W.F., Lukowicz, P., Prechelt, L., and Heinz, E.A. 1995. Experimental Evaluation in Computer Science: A Quantitative Study. Journal of Systems and Software, 28, 1 (Jan. 1995), 9–18.

[243] Tichy, W.F. 1998. Should Computer Scientists Experiment More? Computer, 31, 5 (May 1998), 32–40.

[244] Wells, D. 1999. When Should Extreme Programming be Used? http://www.extremeprogramming.org/when.html

[245] Weyuker, E.J. 1988. Evaluating Software Complexity Measures. IEEE Transactions on Software Engineering, 14, 9 (Sep. 1988), 1357–1365.

[246] Wiegers, K.E. 2003. Software Requirements. 2nd Ed. Microsoft Press.

[247] Wilkie, F.G. and Hylands, B. 1998. Measuring Complexity in C++ Application Software. Software: Practice and Experience, 28, 5 (Apr. 1998), 513–546.

[248] Wilkie, F.G. and Kitchenham, B.A. 2000. Coupling Measures and Change Ripples in C++ Application Software. Journal of Systems and Software, 52, 2–3 (Jun. 2000), 157–164.

[249] Wohlin, C., Runeson, P., Höst, M., Ohlsson, M.C., Regnell, B., and Wesslén, A. 2000. Experimentation in Software Engineering: An Introduction. Kluwer Academic Publishers.

[250] Wood, M., Daly, J., Miller, J., and Roper, M. 1999. Multi-method Research: An Empirical Investigation of Object-Oriented Technology. Journal of Systems and Software, 48, 1 (Aug. 1999), 13–26.

[251] Xia, F. 1999. Look Before You Leap: On Some Fundamental Issues in Software Engineering Research. Information and Software Technology, 41, 10 (Jul. 15, 1999), 661–672.

[252] Yu, P., Systa, T., and Muller, H. 2002. Predicting Fault-Proneness Using OO Metrics: An Industrial Case Study. In Proceedings of the Sixth European Conference on Software Maintenance and Reengineering (Mar. 2002). IEEE Computer Society, Washington, DC, 99.

[253] Zelkowitz, M.V. and Wallace, D.R. 1997. Experimental Validation in Software Engineering. Information and Software Technology, 39, 11 (Nov. 1997), 735–743.

[254] Zelkowitz, M.V. and Wallace, D.R. 1998. Experimental Models for Validating Technology. Computer, 31, 5 (May 1998), 23–31.

[255] Zelkowitz, M.V., Wallace, D.R., and Binkley, D. 1998. Culture Conflicts in Software Engineering Technology Transfer. In Proceedings of the 23rd NASA Goddard Space Flight Center Software Engineering Workshop (Maryland, Dec. 1998), 52–62.

[256] Zhang, L. and Xie, D. 2002. Comments on "On the Applicability of Weyuker Property 9 to Object-Oriented Structural Inheritance Complexity Metrics". IEEE Transactions on Software Engineering, 28, 5 (May 2002), 526–527.

[257] Zowghi, D. and Offen, R. 1997. A Logical Framework for Modeling and Reasoning About the Evolution of Requirements. In Proceedings of the 3rd IEEE International Symposium on Requirements Engineering (Jan. 05–08, 1997). IEEE Computer Society, Washington, DC, 247.

[258] Zowghi, D. and Nurmuliani, N. 2002. A Study of the Impact of Requirements Volatility on Software Project Performance. In Proceedings of the Ninth Asia-Pacific Software Engineering Conference (Dec. 2002). IEEE Computer Society, Washington, DC, 3.

[259] Zuse, H. 1997. Reply to: "Property-Based Software Engineering Measurement". IEEE Transactions on Software Engineering, 23, 8 (Aug. 1997), 533.

[260] Zuse, H. 1997. A Framework of Software Measurement. Walter de Gruyter & Co.

Appendix A.

TABLE 15: Theoretical and empirical validations of measures in software engineering

Reference | Measures | Attributes | Entity | Theoretical valid. | Empirical valid. | Environment

Abrahao et al. [1] | Measures of size and structural complexity of web applications: 8 measures for navigational maps, 5 measures for navigational contexts | Maintainability of web applications (analysability, changeability, stability) | Navigational models | Distance framework of Poels and Dedene | Spearman correlation | Academic experiment

Abrahao et al. [2] | Functional size (OO-method function points for the web) | Efficiency and effectiveness (measurement time, reproducibility), perceived ease of use, usefulness and intention to use | Web applications | — | One-tailed t-test | Academic experiment

Abran and Robillard [3] | Function points | Work-effort | Software application | Analysis of FP (scale perspective) | Linear regression | 37 industrial projects

Alagar et al. [4] | Architectural complexity measures | Complexity, reliability | Maintenance process | Representational theory [98] | — | —

Allen et al. [6] | 5 system-level and 5 module-level measures | Measures of size, length, complexity, coupling, cohesion | Software system in form of a graph | Briand properties [49] and representational theory of Kitchenham [146] | — | —

de Almeida et al. [7] | Product measures (number of operands, operators, declarative and inline comments, distinct operators, blank lines, comments/size, cyclomatic complexity, lines of comments, LOC, executable statements) | Effort to isolate and correct a faulty component | Ada software components | — | ML algorithms (newID, CN2, C4.5, FOIL) | GSS reuse asset library (NASA)

Alshayeb and Li [8] | CK suite | LOC added, deleted, changed; maintenance and refactoring effort | Short- and long-cycled processes | — | Multivariate linear regression | Industrial dataset, laboratory analysis

Ambriola and Gervasi [9] | Measures of stability (amount of information contained in requirements at time t) and efficiency | Amount of rework, perceived work efficiency | Requirements analysis process | — | Plot of data | Academic experiment

Antoniol et al. [11] | Classes, methods, associations, inheritance, a combination of them, OOFP-related measures | Size | OO software | — | Univariate linear regression | 4 industrial projects

Arisholm et al. [12] | Size, static coupling, 12 dynamic coupling measures (import and export coupling) | Change proneness | OO software | Briand properties [41] | PCA, univariate/multivariate linear regression | Large open source Java system

Bahli and Rivard [13] | 14 measures of risk factors (uncertainty, internal/external relatedness, degree of expertise with the IT operation and outsourcing, asset specificity, number of suppliers) | Risk factors | Information technology outsourcing | — | Expert opinion, PLS | Database of 3000 industrial organisations

Bandi et al. [15] | Design complexity, maintenance task, programmer ability (interaction level, interface size, and operation argument complexity) | Maintenance performance (time) | OO design | Previously validated with Weyuker properties in Bandi's PhD thesis | ANOVA, correlation, linear regression | Academic experiment

Barnard [16] | CK suite, LOC, methods, attributes, meaningfulness of variable names | Subjective perception of reusability | OO code classes | — | Ad hoc comparison | Academic experiment with expert developers; second experiment with classes from known libraries

Basili et al. [17] | Adjusted measures of CK suite (weighted methods per class, depth of inheritance tree, number of children of a class, coupling between object classes, response for a class, lack of cohesion in methods), code measures (nesting level, function definitions, function calls) | Fault proneness | OO design classes | Performed in [49] and [132] with Briand and Weyuker properties | Univariate/multivariate logistic regression | 8 C++ information management systems, academic

Baudry et al. [23] | Robustness and diagnosability measures | — | OO systems designed by contracts | Axiomatic approach of Shepperd 93 | Application of measures to 3 systems | 3 industrial case studies

Benlarbi et al. [25] | CK suite without LCOM | Fault proneness | OO classes | — | Univariate logistic regression | Commercial application, C++ telecommunication system

Benlarbi and Melo [26] | Measures of polymorphism based on compile-time linking decisions and on run-time binding decisions | Fault proneness | OO design classes | — | Univariate/multivariate logistic regression | Industrial OO software system

Bieman et al. [27] | Patterns, design attributes, class size (total number of attributes, operations, number of friend methods, of methods that are overridden, depth of inheritance, number of direct child classes, number of descendents) | Number of changes | OO classes | Discussion of construct validity only for the dependent variable | Spearman and Pearson correlations, linear regression | Industrial case study

Binkley and Schach [29] | Coupling measures | Run-time failures, maintenance time, faults due to maintenance activity | Design modules | — | Spearman correlation | 4 industrial case studies

Binkley and Schach [30] | 16 measures: coupling, inheritance tree, size of response, data abstraction, permitted interaction, coupling dependency | Quality (impact on predicted implementation, maintenance effort) | OO design | — | Expert opinion evaluation | Industrial experts

Briand et al. [37] | Procedural vs. OO design, adherence to common principles of good design (coupling, cohesion, clarity of design, generalisation/specialisation depth, keeping objects and classes simple) | Understandability (Que_%), effectiveness of modification (Mod_%), modification rate of impact analysis (Mod_rate) | OO and structured design documents | Discussion of construct validity | 2x2 factorial design, ANOVA, paired t-test | Academic experiment

Briand et al. [38] |  | Time and correctness of understandability; time, completeness, correctness and rate of modification | OO and structured design documents | Discussion of construct validity |  | Academic experiment, replication of [37]

Briand et al. [39] | Measures of coupling, cohesion, and inheritance | Probability of fault detection | OO design system classes | Performed in [41] and [46] with Briand properties (except inheritance measures) | Univariate/multivariate logistic regression | Academic, 8 systems, 180 classes (correlational study)

Briand et al. [40] | Several kinds of LCOM (lack of cohesion in methods), connectivity, tight and loose class cohesion, information flow based cohesion, ratio of cohesive interaction (RCI), neutral, pessimistic, and optimistic RCI |  | OO systems | Briand properties | — | —

Briand et al. [41] | Coupling between objects (CBO), response for class (RFC), message passing coupling (MPC), data abstraction coupling (DAC), efferent and afferent coupling (CE, CA), coupling factor (COF), information flow based coupling (ICP), plus coupling measures defined in [43] |  | OO systems | Briand properties | — | —

Briand et al. [42] | Measures of coupling, cohesion, inheritance | Probability of fault detection | OO design system classes | Performed in [41] and [46] with Briand properties | PCA, univariate logistic regression | Academic experiment

Briand et al. [43] | Relationship, locus, and type coupling measures (the last can be class-attribute, class-method, method-method interactions) | Fault proneness | C++ classes | — | Logistic regression | Academic case study

Briand et al. [46] | Coupling, polymorphism measures, subset of CK suite and size measures (22 measures in total) | Fault proneness | OO design | — | Univariate/multivariate logistic regression, multivariate adaptive regression splines | Industrial software systems

Briand and Morasca [47] | Axiom size measure and item size measures | Quality indicators of specification change and effort | Formal specification language (TRIO+) | — | Univariate/multivariate logistic/linear regression | Industrial case study

Briand et al. [50] | Interaction-based measures for cohesion and coupling, measures based on USES and IS_COMPONENT_OF relationships | Fault proneness | Object-based high-level design | Briand properties | Pearson correlation, univariate/multivariate logistic regression | Real scale industrial case study

Briand and Wüst [51, 52] | Size (classes, attributes and methods), cohesion (relationship between methods and class attributes), coupling and inheritance (relationship between classes, and between methods), complexity (~50 measures including CK suite and CFOOD) | Development effort | Class (source code) | Coupling and cohesion measures validated in [40] and [41] | Negative binomial and Poisson regression, hybrid with CART regression trees | C++ system, university setting

Briand et al. [54] | All the design measures proposed in the literature up to 1999 including CK suite and CFOOD (28 coupling, 10 cohesion, 11 inheritance), size measures (50 measures) | Fault proneness (acceptance testing) | OO system classes | Performed in [41] and [46] with Briand properties; inheritance measures not validated | PCA, univariate/multivariate logistic regression | 8 C++ student systems

Briand et al. [55] | 28 measures of coupling, 10 of cohesion | Fault proneness | OO design system classes | Performed in [41] and [46] with Briand properties | PCA, univariate/multivariate logistic regression | Industrial system by professionals

Briand et al. [56] | Measures from [41] plus 3 more measures based on counting direct and indirect method invocation and aggregation relationships | Likelihood of ripple changes | OO systems | — | PCA, LR ranking-based model | Commercial C++ system

Briand et al. [57] | ~50 measures including CK suite and CFOOD | Fault proneness | OO systems | Partially performed in [40, 41] with Briand properties | Univariate/multivariate logistic regression | Commercial system

Brito e Abreu et al. [58] | MOOD set | Maintainability (normalised rework), reliability (defect density), fault density | OO design | — | Pearson correlation, multivariate linear regression | Academic, correlational study

Calero et al. [59] | Number of tables in a database schema, number of attributes, number of foreign keys | Analysability, changeability, stability, testability | Database schema | Performed previously with a measurement theory and a property-based approach | Spearman correlation | 4 large databases from a research centre

Calero et al. [60] | Referential integrity measures (number of foreign keys, depth of referential tree) | Analysability | Relational database | Performed in [207] with Briand properties | F statistic | Industrial experiment

Calero et al. [61] | Table, star, schema | — | Data warehouse | Zuse framework | — | —

Calero et al. [62] | Table size, number of involved classes, number of shared classes, percentage of complex columns | Complexity | Object-relational database | Discussion of construct validity | Pearson and Spearman correlation | Academic experiment

Calero et al. [63] | Measures from [62] plus referential degree (number of foreign keys) and depth of relational tree | Understandability |  | ts, drt and rd are validated in [117] | ML algorithm (C4.5), top-down induction of decision trees, ROC | Academic experiment

Canfora et al. [64] | Structural properties measures (size, complexity, coupling) | Maintainability (analysability, understandability, modifiability) | Process models | Distance framework of Poels and Dedene | Spearman correlation | 5 experiments, academic and professionals

Cartwright and Shepperd [65] | 13 measures (depth of inheritance tree, number of: children, attributes, states, events, reads, writes, deletes, LOCs, defects) | Defect proneness, size | OO classes | — | Linear regression (univariate, multivariate) | Industrial software system

Chidamber et al. [68] | CK suite | Productivity, reuse effort, design effort | OO design classes | — | Linear regression | 3 OO systems from a European bank

Chidamber and Kemerer [69], [70] | 6 design measures (CK suite): weighted methods per class, depth of inheritance tree, number of children of a class, coupling between object classes, response for a class, lack of cohesion in methods | — | OO design | Weyuker complexity properties | — | 2 industrial projects

Chidamber and Kemerer [69], [70] (cont.) |  | Size and complexity | OO design | Weyuker complexity properties | — | —

Costagliola et al. [73] | Class points | Development effort | OO products | Briand properties | Univariate linear regression | 40 student projects

Costagliola et al. [74] | Class points | Size in LOC | OO GUI | Briand properties | Univariate linear regression | 35 Java student systems

Dagpinar et al. [76] | Measures of size, coupling, cohesion, inheritance | Maintainability | OO classes | — | Univariate/multivariate linear regression | Academic software system

De Lucia et al. [81] | Size, number of maintenance tasks split into 3 categories | Effort | Software maintenance process | — | Multivariate linear regression | Industrial

Devanbu et al. [83] | Reuse level, reuse frequency, size and frequency, reused source instructions | Productivity, defect density, rework effort | Software reuse | Modification of Weyuker's properties | Linear regression | Academic

Diaz et al. [84] | Trigger complexity (triggering potential, number of anchors, distance of a trigger) | Understandability (measured with time) | Active DBMS | Zuse framework [260] | F-statistic | Academic and industrial experiment

El Emam et al. [88] | CK suite and a subset of Lorenz and Kidd measures (18 in total) | Fault proneness and class size (confounding effect of size) | OO design classes | Performed in [41] and [46] with Briand properties, in [69] with Weyuker properties | Spearman correlation, logistic regression | Industrial software

El Emam et al. [89] | Stmts (number of declaration and executable statements in the methods), number of methods, number of attributes | Fault proneness | C++ classes | — | Univariate logistic regression | 2 C++ systems, 1 Java, industrial

El Emam et al. [90] | 24 measures (CK suite and measures from Briand et al. [43]) | Fault proneness | C++ classes | — | Univariate/multivariate logistic regression | Telecommunication system (C++)

El Emam and Birk [91] | Software Requirements Analysis process capability measure (expert opinion) | Project performance | ISO/IEC 15504 | Cronbach's alpha | Pearson correlation, linear regression | Database, industrial

El Emam et al. [93] | 10 measures, subset of CK suite and Briand (part of CFOOD, DIT, NOC, Attrs) | Fault proneness | Java classes | — | Univariate/multivariate logistic regression | Commercial Java application

Etzkorn et al. [95] | LCOM (lack of cohesion in methods), two versions [69, 160] | Perceived class cohesion | OO classes | — | Linear regression | C++ GUI packages (commercial)

Fernandez and Dolado [99] | MACOVs and MACOVt: space and time (number of instructions) and (binary storage) | Algorithmic cost of verification | Design | Briand properties | Logical consequences of relationships in the models presented | 2 commercial systems, 3 academic case studies

Ferneley [100] | AIMS set (automated information flow measurement set) based on coupling and control flow | Corrective and preventive maintenance (error rate) | Design stage of software development | — | Spearman correlation | Medium-size industrial project

Fioravanti and Nesi [101] | 226 OO measures | Fault proneness | Classes | — | PCA, multivariate logistic regression | 8 academic systems
Fioravanti [102] | Class complexity | Development and maintenance effort | OO system | Briand properties | Linear regression | 18 projects, academic
Garcia et al. [105] | Structural properties measures (size, complexity, coupling) | Maintainability (understandability, modifiability) | Process models | — | Spearman correlation | 2 academic experiments
Genero et al. [107] | Structural complexity measures (entity, attribute and relationship measures) | Maintainability (analysability, understandability, modifiability) | Conceptual data model (ERD) | Discussion in construct validity | Spearman correlation | Academic experiment
Genero et al. [108] | Structural complexity measures | Maintainability (analysability, understandability, modifiability) | Class diagrams | Performed in [60] with the distance framework of Poels and Dedene | Spearman correlation | Academic experiment
Genero et al. [111] | 6 measures of structural complexity | Understandability, modifiability | Conceptual data model (ERD) | Performed in [106] with the distance framework of Poels and Dedene | Pearson correlation | Academic experiment

Genero et al. [112] | Structural complexity measures (number of classes, attributes, methods, associations, aggregations, dependencies, generalisations, generalisation hierarchies, maximum depth of the generalisation hierarchies, maximum height of the aggregation hierarchies) | Understandability, modifiability | UML class diagram | — | Pearson correlation, multivariate linear regression | Academic experiment
Genero et al. [113] |  | Complexity | UML class diagrams | Discussion in construct validity | Fuzzy rules method | Academic experiment
Genero and Piattini [114] |  | Maintainability (analysability, understandability, modifiability) | UML class diagram | — | Spearman correlation, fuzzy rules method | Academic experiment
Genero et al. [115] |  | Maintenance time | UML class diagram | Performed in [106] with the distance framework of Poels and Dedene | Spearman correlation | Academic experiment
Genero et al. [116] | 8 structural complexity measures, 3 size measures | Maintainability (understandability time, modifiability time, correctness and completeness) | UML class diagram | Performed in [106] | PCA, univariate linear regression | 2 academic studies
Glasberg et al. [117] | Subset of CK suite and Briand [157] (DIT, NOC, Mthds, part of CFOOD) | Fault proneness | Java classes | Cognitive theory | Univariate/multivariate logistic regression | Xpose, commercial Java
Gursaran [120] | DIT, NOC (2 measures from CK suite) | — | C++ classes | Representational theory, Fenton [98] | — | Academic survey
Gursaran and Roy [121] | Inheritance measures from CK suite and from Brito e Abreu and Carapuca | Complexity | OO classes | Weyuker property 9 | — | —
Gyimothy et al. [122] | CK suite | Fault proneness | Open source software | — | Multivariate logistic and linear regression, and ML | Mozilla source code

Hall et al. [123] | Parameter (size, structure, computation uses, predicate uses), passed arguments, global variables (Glbv), Glbv (changed, computation uses, predicate uses), function (calls, returns, void uses), returned structures, plus 15 size measures | Coupling complexity (size of interface, type of information flow, type of passed data, global connection, interaction with other modules) | Design modules | — | PCA | Clone of the Unix "spell" utility
Harrison and Counsell [125] | CK suite inheritance (DIT, NOC), NMO, NMI | Fault density, non-comment source lines, software understanding (subjective complexity) | C++ classes | — | Spearman correlation | Industrial and academic systems
Harrison et al. [126] | DIT (flat vs deep inheritance structure) | Maintainability, understandability | OO software | — | 4x12 between-subject design, chi square | Academic experiment, 2 C++ systems
Harrison et al. [127] | MOOD set (6 measures) | Inheritance, coupling, encapsulation, polymorphism | OO classes | Representational theory of Kitchenham [146] | — | Commercial software
Harrison et al. [128] | Coupling between objects, number of associations | Understandability, number of errors, error density | OO class design | — | Spearman correlation | 5 (commercial) systems
Harrison et al. [129] | Non-comment source lines, lib/non-lib functions called, depth in call graph, #function declarations/definitions, #domain-specific functions | Subjective complexity, #faults in testing, time to fix faults, #modification requests, time to modify | OO source code | — | Spearman and Pearson correlation, Kendall | C++ systems developed by the researchers
Hastings and Sajeev [130] | Functionality (number of atomic units in the signature section of the specification), problem complexity (number of atomic units in the semantic section of the specification) | Size | Software (specified with an algebraic specification language) | Representational theory of Kitchenham [146] | — | —

Hitz and Montazeri [132] | CK suite |  | OO design | Representation condition [14] and Weyuker properties | — | —
Idri and Abran [136] | Fuzzy logic measures | Similarity | Software project | Their own axiomatic approach | — | Cocomo projects
Idri et al. [137] | Fuzzy logic measures (distance between projects) based on numerical and linguistic values | Project effort | Software development | Performed in [136] | Fuzzy analogy | Cocomo 81 dataset
Ivory et al. [138] | Page composition (word, link and graphic count), page formatting (emphasised text, text positioning, text clusters), overall page characteristics (page size, download speed) | Content, structure and navigation, visual design, functionality, interactivity and overall experience (judge ratings) | Web page design | — | Linear regression, linear discriminant analysis, t-test | 1898 pages from the Webby Awards category
Jung et al. [141] | LOC, Halstead software science (number of unique operators and operands, total number of operators and operands), McCabe's cyclomatic number, number of signals, number of database accesses, number of library calls | Number of change requests due to faults in testing | Modules in testing | — | Negative binomial regression | Large-scale telecommunication system (816 modules in Chill)
Jørgensen [144] | Cause of task, change degree on code, type of operation on code, and confidence of the maintainer | Maintenance task effort | Maintenance process | — | Linear regression, neural network, pattern recognition | 109 maintenance tasks in a large organisation with 20,000 employees

Khoshgoftaar and Allen [149] | Modules used, total and unique calls, if-then conditional arcs, loops, nesting level, span of conditional arcs, span of loops, McCabe cyclomatic complexity, indicator of reuse from the prior release and prior prototypes, measures of development process prior to integration | Fault proneness | Module | — | Non-parametric discriminant analysis, PCA, univariate and multivariate regression | 2 industrial case studies
Koru and Liu [151] | Static measures of size, class- and method-level measures for one of the products | Defect | Modules and classes | — | ML algorithms (J48 and KStar) | 5 NASA products
Lakshmanan et al. [153] | Cyclomatic number, total adjusted complexity, scope ratio, NPATH, mebow | Complexity | Control flow (source code) | Weyuker properties | — | —
Li and Henry [160] | 5 CK suite measures (out of 6), 3 coupling measures, a class interface increment, and 2 size measures | Maintenance effort (number of lines changed per class) | OO systems | — | Multivariate linear regression | 2 commercial systems
Li et al. [161] | CK suite - CBO, MPC, DAC, 2 x size (Stmts, Mth+Attr) | Maintenance effort surrogate | OO classes | — | Multivariate linear regression | 2 Ada commercial systems
Loconsole and Börstler [163] | Total number of requirements; number of initial, current, and final requirements; status of requirements; number of changes per requirement; status, type, reason, and cost of change to requirements | Size and status of requirements specifications, size and status of changes to requirements | Requirements management | Representational theory of Kitchenham [146] | — | Academic case study
Loconsole and Börstler [164] | Number of lines, words, actors, use cases, subjective volatility | Volatility (number of changes: major, moderate, minor, revisions) | Requirements specifications | — | Spearman correlation | Industrial software
Loconsole and Börstler [165] | Number of lines, words, actors, use cases | Volatility (number of changes) |  |  | Linear regression | 
Loconsole and Börstler [166] | NACTOR, NUC, NLINE, NWORD | Volatility | Requirements specifications | — | Linear regression | Industrial software
Loconsole and Börstler [167] | NACTOR, NLINE, NWORD, NSCENARIO, subjective volatility | Total NCHANGE, VOLATILITY | Requirements specifications | Briand properties | Spearman correlation | Industrial case study

MacDonell [171] | 5 functional specification measures, 3 data model measures | Project size, development effort | 4GL database systems | — | Pearson and Spearman correlation, stepwise linear regression | 74 systems by senior students
Manso et al. [173] | 8 structural complexity and 3 size measures (same set used in [113]) | Maintainability (analysability, understandability, modifiability) | UML class diagrams | Performed in [106] with the distance framework of Poels and Dedene | PCA, Spearman and Pearson correlation | 3 academic experiments
McEneaney [175] | Path measures (compactness, stratum) | Performance in hypertext search task (print ability, computer experience, pages viewed, order of path matrix) | Hypertext network | — | Pearson correlation, multivariate linear regression | 2 academic studies
Mendes [177] | Hyperdocument size, connectivity, perceived compactness and stratum | Reusability, maintainability, development effort (link generality) | Web applications | Representational theory of Kitchenham | Spearman correlation | Academic case study
Mendes et al. [178] | Highlighting of anchors, representation and type of links, size and structure of the application, connectivity, compactness, stratum, role and experience of the author |  | Educational hypermedia applications | — | Gamma correlation | Academic survey
Miranda et al. [179] | Number of transitions, number of states, number of activities, number of entry and exit actions | Understandability (structural complexity) | Behavioural diagrams (UML statecharts) | Performed in [106] with the distance framework of Poels and Dedene | Spearman correlation | Academic experiment, replication of [106]
Misic and Tesic [180] | 5 class and 7 source code structural measures | Effort (project-wide) and source code complexity | OO late design phase | — | Pearson correlation, linear regression | 7 projects from a small software company
Morasca [182] | 3 size, 1 length, 3 structural complexity, 2 coupling | — | Concurrent software specifications in Petri nets | Briand properties | — | —

Morasca and Russo [185] | FP, LOC, number of modules (NOM), number of employees, relative experience on the application, start date of activities | Productivity of FP, LOC, NOM | Software development and maintenance | — | Univariate logistic regression | Italian public administration environment
Moser et al. [186] | Dome System Meter Function Points and Function Points | Effort (project-wide) | Information systems | — | Univariate quadratic regression | 36 industrial projects
Moses et al. [187] | Information flow architectural and detailed design (IFAD, IFDD), fan-in, fan-out | Cohesion | Modules at architectural and detailed design levels | Briand properties | Binary logistic regression | Academic experiment
Nagappan [188] | STREW suite (number of test cases/LOC, number of test cases/number of requirements, test lines of code/LOC, number of asserts/LOC, code coverage) | Software reliability, quality of testing effort | OO language | — | Linear regression | 13 senior students' programs
Nesi and Querci [189] | Complexity and size (method-level, class-level, system-level measures) | Class effort | OO C++ systems | — | Multivariate linear regression | 3 industrial software systems
Orme et al. [196] | Number of external classes, references to external classes, referenced includes | Coupling (subjects' opinion) | Ontology-based system (written in XML) | Representational theory of Kitchenham | Pearson correlation | Commercial software
Phalp et al. [204] | Coupling within RAD (role coupling factor and system coupling factor) | Project characteristics (development team size, project size) | Static business process models (role activity diagrams) | Briand properties [41] | Spearman correlation | 2 case studies, large telecommunications system
Piattini et al. [206] | Structural complexity (number of entities, attributes, derived, composite and multivalued attributes, 1-ary, 2-ary, M-ary, N-ary relationships, IS_A, reflexive, and redundant relationships) | Subjects' rating of maintainability (simplicity, analysability, understandability, modifiability, stability, and testability) | ERD | Briand properties | Induction of fuzzy rules | Academic experiment

Piattini et al. [207] | Number of attributes, depth of the referential tree, referential degree, combination of DRT and RD | Subjects' rating of maintainability (analysability, changeability, stability, compliance and testability) | Relational databases | Zuse framework | F-statistic | Academic experiment
Pons et al. [209] | Polymorph_measure based on hierarchies, classes, methods, core hierarchies, width, and children | Polymorphism | Conceptual model of a system in UML | Representational theory of Kitchenham and Zuse framework | — | —
Ragu-Nathan et al. [213] | Aggressive promotion, analysis-based development, defensive management, future-oriented development, proactive management, conservative management of information systems | IS performance effectiveness (measured on a five-point Likert scale) | Information management strategy | Content, unidimensionality, reliability, discriminant validity with guidelines from the IS area | T-test | 231 IS executives, industrial
Rajaraman and Lyu [215] | Class inheritance and non-inheritance related coupling, class coupling, average method coupling | Perceived complexity | C++ software | — | Pearson correlation | 5 C++ systems, professional researchers
Rauterberg [216, 217] | Functional feedback, interactive directness, application flexibility, dialogue flexibility | Usability (task solving time, target discrepancy) | User interface characteristics | — | ANOVA | Laboratory experiment
Reynoso et al. [218] | Measure for the UML/OCL language | Understandability, modifiability | Object Constraint Language (OCL) expressions | Briand properties | Spearman correlation, Kendall's tau | 3 academic experiments
Rossi and Fernandez [222] | 8 structural, 6 behavioural | Structural and behavioural coupling | Distributed applications | Distance framework of Poels and Dedene | — | —
Ruhe et al. [223] | Size measures (function points, web objects) | Subjective and measured estimation of development effort | Web applications | — | Linear regression | 12 applications from a small company

Saward et al. [224] | Perceived information scent (age, gender, internet experience, experience with retailer online and with retailer stores, preferred navigation method) | System usability (task completion, task time, user perception of ease of use, target item in expected place) | Web information retrieval systems | — | Kendall's tau | E-commerce application, academic study
Serrano et al. [228] | Number of fact and dimensional tables | Understandability | Multidimensional data models | Performed in [61] with the Zuse framework | F-statistic | Academic experiment
Sharma et al. [229] | LCOM, mult (number of classes that use multiple inheritance), number of polymorphic dispatches (NOPD) | Complexity | OO systems | Weyuker's property 9 | — | —
Stensrud et al. [237] | MMRE | Project size | Prediction systems accuracy | — | T-test, Mann-Whitney, scatter plots | 5 large datasets
Subramanyam and Krishnan [238] | Complexity measures (subset of CK suite: WMC, CBO, DIT) | Fault proneness | OO applications | — | Weighted linear regression | Industrial data
Tang et al. [240] | CK suite, inheritance coupling, coupling between methods, number of object memory allocations, average method complexity | Fault proneness | OO classes | — | Logistic regression | Industrial software
Wilkie and Hylands [247] | CK suite, various interpretations for WMC (McCabe, Halstead) | Effort for field fault fixes, effort for functional enhancements | C++ classes | — | Univariate/multivariate linear regression | Conferencing system, 114 C++ classes
Wilkie and Kitchenham [248] | CBO, #public functions per class, #functions per class | Number and proportion of ripple changes a class participates in, changes in a class | C++ classes | — | Kruskal-Wallis, Kendall's tau | 
Wood et al. [250] | Flat vs deep inheritance structure | Maintainability (time for maintenance task) | OO software | — | Wilcoxon | 3 academic experiments

Yu et al. [252] | LOC, fan-in, and CK suite | Fault proneness (number of faults in a class) | Java classes | — | Regression and discriminant analysis | Large network service management system
Zhang and Xie [256] | Inheritance measures from CK suite and from Brito e Abreu and Carapuca | Complexity | OO classes | Weyuker property 9 | — | —
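The empirical validation techniques listed in the table are, for the most part, standard correlation and regression analyses. As an illustration only, and not part of any study listed above, the minimal Python sketch below shows how a Spearman correlation and a univariate linear regression between a size measure and a volatility measure could be computed; the variable names NWORD and NCHANGE follow the Loconsole and Börstler rows, but the data values are invented placeholders.

# Minimal sketch (illustrative only): Spearman rank correlation and a
# univariate linear regression between a candidate size measure (NWORD,
# number of words in a use case model) and observed requirements volatility
# (NCHANGE, number of changes). The data below are invented placeholders.
from scipy.stats import spearmanr, linregress

nword   = [120, 340, 210, 505, 98, 660, 275, 430]   # hypothetical size values per use case model
nchange = [  3,   9,   5,  14,  2,  18,   6,  11]   # hypothetical numbers of changes (volatility)

rho, p_value = spearmanr(nword, nchange)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

# Univariate linear regression, as used in several of the studies listed above.
fit = linregress(nword, nchange)
print(f"NCHANGE ~ {fit.slope:.3f} * NWORD + {fit.intercept:.2f} (R^2 = {fit.rvalue**2:.2f})")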


Some abbreviations used

Acronyms Explanation

ACM Association for Computing Machinery

CMM Capability Maturity Model

CMMI Capability Maturity Model Integration

CK Chidamber and Kemerer

DB DataBase

DL Digital Libraries

ESE Empirical Software Engineering

FP Function Points

GQM Goal Question Metrics

ICSE International Conference on Software Engineering

IEEE Institute of Electrical and Electronics Engineers

ISO International Organization for Standardization

KLOC Kilo Lines Of Code

KPA Key Process Area

LOC Lines Of Code

MMRE Mean Magnitude of Relative Error (defined after this list)

MdMRE Median of the Magnitude of Relative Error

OO Object-Oriented

PM Project Management

QIP Quality Improvement Paradigm

RE Requirements Engineering

RM Requirements Management

SEL Software Engineering Laboratories

SPI Software Process Improvement

SPICE Software Process Improvement and Capability dEtermination

SRS Software Requirements Specifications

TBD To Be Determined

TSE Transactions on Software Engineering

UCP Use Case Points

UML Unified Modelling Language

WMC Weighted Methods per Class

XP eXtreme Programming
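
For reference, MMRE and MdMRE above are conventionally defined as follows (standard definitions from the estimation-accuracy literature, with $y_i$ the actual and $\hat{y}_i$ the predicted value of observation $i$ out of $n$):

\[
\mathrm{MRE}_i = \frac{\lvert y_i - \hat{y}_i \rvert}{y_i}, \qquad
\mathrm{MMRE} = \frac{1}{n} \sum_{i=1}^{n} \mathrm{MRE}_i, \qquad
\mathrm{MdMRE} = \operatorname{median}_{i}\, \mathrm{MRE}_i
\]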
