
[IEEE 36th Annual Hawaii International Conference on System Sciences, 2003. Proceedings - Big Island, HI, USA (2003.01.9-2003.01.9)]

An Evaluation of Applying Use Cases to Construct Design versus Validate Design

Erik Syversen1, Bente Anda2 and Dag I.K. Sjøberg2

1 Department of Informatics, University of Oslo
P.O. Box 1080 Blindern, NO–0316 Oslo, NORWAY

[email protected]

2 Simula Research Laboratory
P.O. Box 134, NO–1325 Lysaker, NORWAY
Tel. +47 67828200, Fax. +47 67828201

{bentea,dagsj}@simula.no

Abstract

Use case models capture and describe the functional requirements of a software system. A use case driven development process, where a use case model is the principal basis for constructing an object-oriented design, is recommended when applying UML. There are, however, some problems with use case driven development processes, and alternative ways of applying a use case model have been proposed. One alternative is to apply the use case model in a responsibility-driven process as a means to validate the design model. We wish to study how a use case model can best be applied in an object-oriented development process, and have conducted a pilot experiment with 26 students as subjects to compare a use case driven process against a responsibility-driven process in which a use case model is applied to validate the design model. Each subject was given detailed guidelines on one of the two processes, and used those to construct design models consisting of class and sequence diagrams. The resulting class diagrams were evaluated with regard to realism, that is, how well they satisfied the requirements, as well as size and number of errors. The results show that the validation process produced more realistic class diagrams, but with a larger variation in the number of classes. This indicates that the use case driven process gave more, but not always more appropriate, guidance on how to construct a class diagram. The experiences from this pilot experiment were also used to improve the experimental design, and the design of a follow-up experiment is presented.

1. Introduction

The authors of the Unified Modeling Language (UML) recommend a use case driven process for developing object-oriented software with UML [4,8,9,15]. In a use case driven development process, a use case model, possibly in combination with a domain model, serves as the basis for deriving a design model. Use case driven development processes have, however, been criticized for not providing a sufficient basis for the construction of a design model. For example, it is claimed that such a development process leads to:

• a too wide gap between the use case model and the class diagram [13],
• missing classes, as the use case model is insufficient for deriving all necessary classes, and
• the developers mistaking requirements for design [15].

An alternative to a use case driven process is to use another development process, for example, a responsibility driven process [12], and subsequently apply a use case model to validate the design. In the following, the term validation process is used to denote such development processes.

We are interested in investigating how a use case model can best be applied in an object-oriented design process, and have conducted a pilot experiment to investigate differences imposed by the use case driven process and the validation process on the resulting design models. This may influence the choice of development process, and also how to teach object-oriented design, even though the choice of development process in a development project is typically determined by characteristics of that project, for example the experience and skill of the developers, the problem domain and existing architecture.

Proceedings of the 36th Hawaii International Conference on System Sciences (HICSS’03) 0-7695-1874-5/03 $17.00 © 2002 IEEE

The pilot experiment had 26 undergraduate students as subjects, and the task was to construct a design model, consisting of class and sequence diagrams, for a library system. The resulting design models were evaluated relative to how well they implemented the requirements and to what extent they were specified at an appropriate level of syntactic granularity. We also investigated differences in number of classes and number of errors between the class diagrams constructed respectively by the two processes.

The results show that the validation process resulted in design models that better described the requirements. The subjects following the use case driven process mostly mapped the steps of the use case descriptions directly onto methods in the class diagram, while those following the validation process were more successful in deriving appropriate methods from the written requirements document. The results also show that the validation process led to class diagrams with larger variation regarding number of classes. The use case driven process led to more errors in the class diagrams. This indicates that the use case driven process provides more, but not necessarily better, guidance. In our opinion, the results support the claims that a use case model is insufficient for deriving all necessary classes and may lead the developers to mistake requirements for design.

We will use the experiences from the pilot experiment to improve the experimental design, and the design of a follow-up experiment is presented.

The remainder of this paper is organized as follows. Section 2 describes the two processes investigated in this experiment. Section 3 discusses evaluation of the processes. Section 4 presents the measurement framework used in the experiment. Section 5 describes the experimental design, results, and analysis. Section 6 describes an improved experimental design. Section 7 concludes and suggests future work.

2. Two Processes for Applying Use Case Models in Object-Oriented Software Development

The UML meta-model defines a use case as a subclass of the meta-model element Classifier [16]. This implies that a use case contains a state and behaviour from which classes, attributes, and methods can be derived. A use case driven development process prescribes a stepwise and iterative transitioning of a use case model to a design model [3,9]. The use case model, possibly together with a domain model, serves as a basis for deriving the classes necessary for implementing the system. Sequence diagrams are used for identifying the system classes necessary to realize the use cases and for allocating responsibilities to the classes. The result is a complete class diagram for the system. A design model is thus a refinement of the analysis model. In our opinion, a use case driven development process implicitly assumes the definition of a use case as a classifier. The steps of a use case driven process are outlined in Figure 1 and described in Table 1.

In practice, use cases often do not describe a state that can be represented by classes. Hence, an alternative definition of use case has been proposed where a use case is a behavioural feature of a system and thus only describes entities of behaviour of a system. Several problems with use case driven development processes have also been identified, as described in the previous section.

A proposed solution to these problems is to apply the use case driven process in combination with another process, for example, a responsibility driven process [12]. A responsibility driven process starts by identifying responsibilities in a textual requirements specification. The responsibilities of the system are the high-level operations that the system must be able to perform. An initial class diagram with system classes, to which the responsibilities can be allocated, is thus derived in parallel with the development of a use case model. A use case model can then be used to validate the class diagram. A set of reading techniques for inspecting the quality and consistency of diagrams and textual descriptions, and for validating them according to the requirements specification, is presented in [14]. In our opinion, the validation process supports the definition of a use case as a behavioural feature of a system. Figure 2 shows the steps of a responsibility driven process where a use case model is applied in validating the design, and the steps are described in Table 2.

In our experience, few organisations apply a use case model in a completely systematic way in their development process. Therefore, the processes described and evaluated in this paper represent recommended practice more than actual practice, but we believe that recommended practice should be subject to evaluation before becoming actual practice. The evaluation of a recommended software development process will give indications about its strengths and weaknesses, and about when the process is particularly suitable, something which may facilitate its transfer into actual practice.


Table 1. The steps in the use case driven process

1. Identify the use cases of system behaviour.
2. Construct a domain model showing real-world objects in the application domain.
3. Describe each use case in detail, for example, using a template format.
4. Define a scenario for each “interesting path” through the use case, and draw a sequence diagram for each scenario. Use objects from the domain model, and add new objects where necessary.
5. Identify the methods needed in every scenario, thus all the methods needed for the realization of the use cases.
6. Transfer the objects and methods from the sequence diagrams to a class diagram.
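Steps 4–6 can be sketched in code: if each sequence-diagram message is recorded as a (caller, receiver, method) triple, step 6 amounts to collecting, for every receiving class, the methods called on it. The classes and calls below are hypothetical illustrations for a library case, not material from the experiment:

```python
# Each sequence-diagram message: (caller class, receiver class, method name).
# The class and method names are hypothetical illustrations.
checkout_scenario = [
    ("Librarian", "Library", "checkOutItem"),
    ("Library", "Item", "setBorrower"),
]
checkin_scenario = [
    ("Librarian", "Library", "checkInItem"),
    ("Library", "Item", "clearBorrower"),
]

def to_class_diagram(scenarios):
    """Step 6: transfer objects and methods from the sequence
    diagrams into a class diagram (class name -> set of methods)."""
    diagram = {}
    for scenario in scenarios:
        for caller, receiver, method in scenario:
            diagram.setdefault(caller, set())           # callers appear as classes too
            diagram.setdefault(receiver, set()).add(method)
    return diagram

diagram = to_class_diagram([checkout_scenario, checkin_scenario])
# e.g. the Library class ends up with checkOutItem and checkInItem
```

The point of the sketch is that, in the use case driven process, the class diagram's methods come entirely from the sequence diagrams, which is exactly the coupling the paper later examines.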

Table 2. The steps in the validation process

1. Identify the classes in the system using abstractions or noun phrases in requirements documents.
2. Make a list of the responsibilities of the objects of each class.
3. Draw a class diagram to show the classes and their responsibilities, that is, their methods.
4. Identify the use cases of system behaviour.
5. Describe each use case in detail, for example, using a template format.
6. Define a scenario for each “interesting path” through the use case, and draw a sequence diagram for each scenario. Use objects from the class diagram.
7. Identify the methods needed in every scenario, thus all the methods needed for the realization of the use cases.
8. Use the sequence diagrams to validate that the class diagram contains all the necessary methods. Add or rearrange methods if necessary.
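Step 8 lends itself to a mechanical sketch: every method called on an object in a sequence diagram must be offered by that object's class in the class diagram, and any method that is not is a candidate addition. The class and method names below are hypothetical:

```python
def missing_methods(class_diagram, calls):
    """Return the (class, method) pairs that the sequence diagrams
    require but the class diagram does not yet contain (step 8)."""
    return [(receiver, method)
            for _caller, receiver, method in calls
            if method not in class_diagram.get(receiver, set())]

# Hypothetical initial class diagram from the responsibility-driven steps 1-3.
class_diagram = {"Library": {"checkOutItem"}, "Item": set()}
calls = [("Librarian", "Library", "checkOutItem"),
         ("Library", "Item", "setBorrower")]

gaps = missing_methods(class_diagram, calls)
# gaps lists ("Item", "setBorrower") -> add or rearrange methods as step 8 says
```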

Figure 1. The use case driven process

Figure 2. The validation process


3. Comparing the Processes

The two design processes can be evaluated in terms of quality attributes of the resulting design models, and in terms of direct quality attributes of the processes themselves. The following quality attributes of design models were used in our pilot experiment:

• The realism of the class diagrams, that is, to what extent the processes are successful in constructing design models that implement the requirements.
• The correspondence between the class and the sequence diagrams.
• The level of syntactic granularity in the class diagrams.
• The size of the class diagrams, that is, the number of classes, attributes, methods and associations.
• The number of false classes, attributes, methods and associations in the class diagrams.
• The number of superfluous classes, attributes, methods and associations in the class diagrams.

The obvious direct process quality attribute is

• the time spent on creating the design models.

To compare the two development processes, we tested the following hypotheses:

H1₀: There is no difference in the realism of the design models.
H2₀: There is no difference in the correspondence between the class and sequence diagrams.
H3₀: There is no difference in the level of detail in the class diagrams.
H4₀: There is no difference in the size of the class diagrams.
H5₀: There is no difference in the number of false elements in the class diagrams.
H6₀: There is no difference in the number of superfluous elements in the class diagrams.
H7₀: There is no difference in time spent creating the design models.

4. Measurement Framework

This section describes both qualitative and quantitative metrics that can be used to measure the quality attributes introduced in Section 3. The qualitative metrics are used to test hypotheses H1-H3. The quantitative metrics are used to test hypotheses H4-H7.

4.1. Qualitative Metrics

Our qualitative metrics are based on the marking scheme for evaluating quality properties of a use case description presented in [1].

The realism of the class diagrams, that is, to what extent the processes are successful in constructing design models that implement the requirements, can be measured along three dimensions:

• Realism in class abstractions – to what extent the class diagrams contain the necessary classes.
• Realism in class attributes – to what extent the necessary attributes are identified, and whether they are specified as attributes of the correct classes.
• Realism in class methods – to what extent the necessary methods are identified, and whether they are specified as methods of the correct classes.

The realism of sequence diagrams is similar to the realism of class methods. The advantage of measuring sequence diagrams is that it is easier to follow the flow of successive method calls and the flow of parameters than it is in class diagrams.

Correspondence between class and sequence diagrams is measured by verifying that:

• For the use case driven process, the objects used in the sequence diagrams are derived from the domain model, and the class diagram contains exactly the methods found in the sequence diagrams.
• For the validation process, the objects used in the sequence diagrams are derived from the class diagram, and the class diagram contains complementary methods to those found in the sequence diagrams.
• For both processes, the direction of method calls between objects in the sequence diagrams should be consistent with the way the methods are defined in the class diagrams.

The level of syntactic granularity is measured relative to the syntactic elements used in the resulting class diagrams. The class diagrams should show the

• visibility of attributes and methods,
• type and names of attributes, methods and parameters, and
• navigation and cardinality of associations.

Table 3 summarizes the qualitative metrics described above. The score of each metric is made according to a checklist¹ with yes/no questions for each property. The marking scales match the number of questions (between 6 and 9) in the checklists.

¹ The checklist and the other experimental material can be found at http://www.ifi.uio.no/~isu/forskerbasen.
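The scoring itself reduces to a simple count: a diagram's mark on a metric is the number of checklist questions answered "yes". A minimal sketch, assuming each completed checklist is represented as a list of booleans:

```python
def mark(answers):
    """Score one checklist: the mark is the number of 'yes' answers,
    so a 6-question checklist yields marks on a 0-6 scale."""
    return sum(1 for answer in answers if answer)

# Hypothetical 6-question checklist for realism in class abstractions:
# five properties satisfied, one not.
print(mark([True, True, False, True, True, True]))  # 5
```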


Table 4. Quantitative metrics

Hypothesis  Property  Description
H4          NC        Total Number of Classes
            NA        Total Number of Attributes
            NM        Total Number of Methods
            NAssoc    Total Number of Associations
H5          NFC       Total Number of False Classes
            NFA       Total Number of False Attributes
            NFM       Total Number of False Methods
            NFAssoc   Total Number of False Associations
H6          NSC       Total Number of Superfluous Classes
            NSA       Total Number of Superfluous Attributes
            NSM       Total Number of Superfluous Methods
            NSAssoc   Total Number of Superfluous Associations
H7          Time      Time used to develop a design model

4.2. Quantitative Metrics

Many design metrics for object-oriented code have been proposed, but only a few are applicable to high-level design [11]. The metrics suggested in [6,7] are adapted to UML class diagrams and have been empirically validated [7]. To examine the size of the class diagrams, we use a subset of those metrics: the total number of classes, attributes, methods and associations. Faults in class diagrams are measured in terms of the number of classes, attributes, methods and associations that are false relative to the problem domain, and those that are superfluous, that is, those that do not contribute to the implementation of the requirements. Time measures the total time spent on constructing the design model. The quantitative metrics are summarized in Table 4.
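The size counts can be computed mechanically from any structured representation of a class diagram. The representation below (classes mapping to attribute and method lists, associations as pairs) is a hypothetical sketch, not the notation used in [6,7]:

```python
# Hypothetical class-diagram representation for the library case.
model = {
    "classes": {
        "Library": {"attributes": ["name"],
                    "methods": ["checkOutItem", "checkInItem"]},
        "Item":    {"attributes": ["title", "borrower"],
                    "methods": ["status"]},
    },
    "associations": [("Library", "Item")],
}

def size_metrics(model):
    """NC, NA, NM, NAssoc: total numbers of classes, attributes,
    methods and associations (the H4 size metrics)."""
    classes = model["classes"].values()
    return {
        "NC": len(model["classes"]),
        "NA": sum(len(c["attributes"]) for c in classes),
        "NM": sum(len(c["methods"]) for c in classes),
        "NAssoc": len(model["associations"]),
    }

# size_metrics(model) -> NC=2, NA=3, NM=3, NAssoc=1
```

The false and superfluous counts (NFC, NSC, and so on) cannot be computed this way: they require a judgement of each element against the problem domain and the requirements, which in the experiment was done manually.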

5. An Experiment to Evaluate the Two Processes

This section describes the design, results and analysis of the pilot experiment conducted to evaluate the two processes. To the authors’ knowledge, no empirical studies have been conducted to compare alternative ways of applying use case models in a development process. This study is therefore explorative; the goal of the evaluation is to get indications of differences between the two processes.

5.1. Experimental Design

5.1.1. Subjects. The subjects were 26 undergraduate students following a course in software engineering. Half of the subjects received guidelines for the use case driven process; the other half received guidelines for the validation process. The assignment of guidelines was done at random. The experiment was voluntary, and the subjects were paid for their participation. The subjects had learned the basics of UML and had constructed an object-oriented design as part of a compulsory assignment in the course.

Table 3. Qualitative metrics

Hypothesis  Property                                   Mark  Comment
H1          Realism in class abstractions              0-6   0 = all wrong, 6 = all correct
            Realism in class attributes                0-7   0 = all wrong, 7 = all correct
            Realism in class methods                   0-8   0 = all wrong, 8 = all correct
            Realism in sequence diagrams               0-8   0 = all wrong, 8 = all correct
H2          Class and sequence diagram correspondence  0-9   0 = all wrong, 9 = all correct
H3          Level of detail in class diagram           0-6   0 = all wrong, 6 = all correct

5.1.2. Procedure of the Experiment. The experiment lasted for three hours. The subjects used pen and paper. They wrote down the exact time they started and finished each exercise. The experiment consisted of two parts. The first part contained three exercises guiding the subjects in developing a design model with a class diagram and three sequence diagrams, modelling three functional services of a library system. The second part was not included in our analysis, but was part of the experiment to make sure that all the subjects had enough to do for three hours. The subjects had no training in either of the two alternative development processes, so detailed guidelines were given.

5.1.3. Experimental Material. The task of the experiment was to construct a design model for three functional services of a library system. This case is described in many books on UML, for example [12,15]. It was chosen because it is a well-known domain and simple enough for students just introduced to UML. The subjects were given a use case model with the following use cases:

• Checking out an item.
• Checking in an item.
• Checking the status of an item.

The use cases were described using a template format based on those given in [5]. Those following the validation process also received a textual requirements document of the system. The guidelines were based on the descriptions presented in Tables 1 and 2. In order to provide complete design processes, some additions were made to the original descriptions of the validation process regarding how to identify classes and responsibilities and regarding how to validate the class diagram with regards to the sequence diagram. These additions were based on the reading techniques for object-oriented design described in [14]. Because of the time constraint on the experiment, some steps of the original descriptions were also removed. The subjects were given use cases instead of being asked to make them themselves, and in the validation approach the listing of responsibilities for each class was excluded. Table 5 shows the detailed guidelines used in the experiment.

5.2. Results and Analysis

The design models were analysed according to the measurement framework presented in Section 4. The Kruskal-Wallis statistical test was performed on the results. This non-parametric test was chosen because the data distributions were non-normal. A p-value of 0.1 was chosen as the level of significance for all the tests due to the explorative nature of the experiment. Some of the subjects did not complete a design model and were therefore discarded from the analysis. In total, 10 subjects following the use case driven process and 11 following the validation process were included. Table 6 shows the results of the statistical tests.
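For reference, the Kruskal-Wallis test ranks the pooled observations and compares rank sums between groups. A minimal sketch of the H statistic in Python, omitting the tie-correction factor that full implementations (for example scipy.stats.kruskal) apply; the sample data are illustrative, not the experiment's measurements:

```python
def kruskal_h(*groups):
    """H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1), where R_i is the
    rank sum of group i over the pooled sample; ties get average ranks."""
    pooled = sorted(v for g in groups for v in g)
    # Assign each distinct value the average of the ranks it occupies.
    rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    n = len(pooled)
    h = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12.0 * h / (n * (n + 1)) - 3 * (n + 1)

# Completely separated ranks give a large H; identical groups give 0.
print(kruskal_h([1, 2, 3], [4, 5, 6]))  # ~3.857
print(kruskal_h([1, 2], [1, 2]))        # 0.0
```

The p-value is then obtained from the chi-squared distribution of H; a statistics library would normally be used for that step.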

5.2.1. Assessment of realism, Hypothesis H1. The test on realism in class methods shows a significant difference in favour of the validation process. We believe that the reason for this result was that the subjects mostly mapped the steps in the use case descriptions directly onto methods when creating sequence diagrams. This led to problems both with inappropriate names of the methods and flaws in the order of the method calls. In the use case driven process the sequence diagrams are used to identify all the methods needed in the class diagram, while in the validation process the sequence diagrams are only used to detect any necessary functionality overlooked in the first attempt at a class diagram. Therefore, the problems concerning the direct mapping of methods had a much larger impact on the design models constructed with the use case driven process. The tests on realism in class abstractions, class attributes, and sequence diagrams showed no difference with regards to realism between the two processes.

5.2.2. Assessment of correspondence, Hypothesis H2. The test on correspondence showed no significant difference, but there is a difference in the median in favour of the use case driven process. We expected a difference in favour of the use case driven process since a claimed strength of this process is that it assures traceability between the diagrams in a design model.

5.2.3. Assessment of level of syntactic granularity, Hypothesis H3. We neither expected nor found a difference in the level of syntactic granularity of the class diagrams.


Table 5. The exercise guidelines

Guidelines for the use case driven process:

Exercise 1: Domain model
1. Underline each noun phrase in the use case descriptions. Decide for each noun phrase if it is a concept that should be represented by a class candidate in the domain model.
2. For the noun phrases that do not represent class candidates, decide if these concepts should be represented as attributes in a domain model instead. (Not all attributes are necessarily found this way.)

Exercise 2: Sequence diagrams
1. Create one sequence diagram for each use case.
2. Study each use case description carefully, and underline the verbs or sentences describing an action. Decide for each action if it should be represented by one or more methods in the sequence diagrams.
3. The sequence diagrams should contain only the methods derived from the use case descriptions, and the objects from the domain model from exercise 1. (Note! Not all methods needed are necessarily identified this way.)

Exercise 3: Class diagram
1. Transfer the domain model from exercise 1 into a class diagram.
2. For each method in the sequence diagram:
   1. If an object of class A receives a method call M, the class A should contain the method M in the class diagram.
   2. If an object of class A calls a method of class B, there should be an association between the classes A and B.

Guidelines for the validation process:

Exercise 1: Class diagram
1. Underline all noun phrases in the requirements document. Decide for each noun phrase if it is a concept that should be represented by a class in the class diagram.
2. For the noun phrases that do not represent classes, decide if these concepts should be represented as attributes in the class diagram instead. (Not all attributes are necessarily found this way.)
3. Find the verbs or other sentences that represent actions performed by the system or system classes. Decide if these actions should be represented by one or more methods in the class diagram. (Not all methods needed are necessarily identified this way.)

Exercise 2: Sequence diagrams
1. Create one sequence diagram for each use case.
2. Study each use case description carefully, and underline the verbs or sentences describing an action. Decide for each action if it should be represented by one or more methods in the sequence diagrams.
3. The sequence diagrams should contain only the methods derived from the use case descriptions, and the objects from the class diagram from exercise 1.

Exercise 3: Validation of the class diagram
1. For each method in the sequence diagram, draw a circle around it. If several methods together form a system service, treat them as one service.
2. For each method or service circled out:
   1. Validate that the class that receives the method call contains the same or matching functionality.
   2. If an object of class A calls a method of class B, there should be an association between the classes A and B in the class diagram. If the class diagram contains any hierarchies, remember that it may be necessary to trace the hierarchy upwards when validating it.
3. If the validation in the previous steps failed, make the necessary updates in the class diagram.
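The hierarchy note in the validation guidelines can be made concrete: a called method may be inherited, so the check in exercise 3 should walk the superclass chain before reporting a mismatch. A sketch with hypothetical classes:

```python
def offers_method(cls, method, methods, superclass):
    """True if `cls` or any ancestor declares `method` (exercise 3,
    step 2.1, tracing the hierarchy upwards)."""
    while cls is not None:
        if method in methods.get(cls, set()):
            return True
        cls = superclass.get(cls)   # None at the top of the hierarchy
    return False

# Hypothetical hierarchy: Book is a subclass of Item.
methods = {"Item": {"status"}, "Book": {"renew"}}
superclass = {"Book": "Item"}

print(offers_method("Book", "status", methods, superclass))   # True, inherited
print(offers_method("Book", "reserve", methods, superclass))  # False -> update diagram
```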


Table 6. Statistical results

Hypothesis                   Median (use case driven)  Median (validation)  P-value  Reject
H1₀ - Realism                4.0                       6.0                  0.03     Yes
H2₀ - Correspondence         7.0                       6.0                  0.12     No
H3₀ - Syntactic granularity  5.0                       4.0                  0.40     No
H4₀ - Size                   6.0                       6.0                  0.68     No
H5₀ - False elements         1.0                       0.0                  0.06     Yes
H6₀ - Superfluous elements   0                         0                    0.43     No
H7₀ - Time                   113.5                     125                  0.16     No

5.2.4. Assessment of size, Hypothesis H4. We expected a difference in the size of the class diagrams, with larger class diagrams produced with the validation process, because the use case driven process provides stricter guidance on how to identify classes, attributes, methods and associations. However, the tests regarding size did not show any difference. There is, however, a larger variance in the number of classes obtained using the validation process as opposed to using the use case driven process. In our opinion this indicates that the use case driven process gives stricter guidance on how to identify classes. No difference was found in number of attributes, methods or associations.

5.2.5. Assessment of false elements, Hypothesis H5. The test showed a significant difference in favour of the validation process, that is, the use case driven process led to more false classes. Combined with the assessment of hypothesis H4, this indicates that the use case driven process gave stricter, but not better, guidance on how to identify classes. No difference was found in the number of false attributes, methods or associations.

5.2.6. Assessment of superfluous elements, Hypothesis H6. We expected a difference in superfluous elements in the class diagrams created by the two processes, for the same reason as we expected a difference in size. However, none of the tests on superfluous elements showed any difference.

5.2.7. Assessment of time spent, Hypothesis H7. A difference in time spent creating the design models was expected, as exercises 1 and 3 (Table 5) were more comprehensive for the validation process. Those following the validation process spent more time than did those following the use case driven process, but the difference was not significant.

6. Improving the Experimental Design

This pilot experiment was exploratory: a first study aimed at investigating strengths and weaknesses of different ways of applying a use case model in an object-oriented design process. The threats to the validity of the results from the pilot experiment and the design of an improved follow-up experiment are discussed below.

6.1. Subjects

A threat to the validity of our results is that the subjects were novices at modelling with UML. More experienced subjects, or training in the processes in advance, might therefore have led to different results. Our follow-up experiment will be conducted with twice as many students as subjects. These subjects will be more experienced with UML, and when we have gained sufficient experience from conducting the experiment with students, we will replicate it with professional software developers as subjects.

6.2. Procedure of the Experiment

Another threat to the validity of our results is that the procedure of the experiment differed in several ways from how software developers actually work when designing with UML:

• The experiment was conducted with pen and paper, instead of using a UML case tool.

• The subjects worked individually, while design istypically done in teams.

• The guidelines led the subjects to work linearly and construct the diagrams one after another, while design is usually done iteratively, developing the diagrams in parallel.

• There was a time constraint on the experiment, so the subjects were not permitted to spend as much time as they wanted. Actual software development projects also have time limits, but since almost 20% of the subjects did not finish, the experiment may have been too comprehensive for the three hours allocated.

Some of these problems will be remedied in the follow-up experiment. The subjects will work on a computer using a UML case tool with which they are familiar, and an experiment support environment, described in [2], will be used in the data collection. The use of a case tool will make the procedure of the experiment more realistic, and it will permit the subjects to work in a more iterative manner.

In later experiments we will let some subjects work in teams, and also let them spend as much time as they need on the experiment.

6.3. Experimental material

Other threats to the validity of our results are caused bythe experimental material:

• The problem domain was very small because of the time limit on the experiment. This resulted in small design models, leaving little room for variations in structure.

• The guidelines were quite detailed and restricted the subjects in several ways. This was done to ensure that the subjects actually followed the process descriptions. One example of a restriction was that only the subjects following the validation process received the textual requirements document and performed a validation step; another was that the subjects were asked to make only one sequence diagram for each use case.

• We provided the subjects with a use case model instead of letting them construct one themselves. The analysis showed that the use case descriptions were important in the use case driven process because the subjects often assumed a direct mapping from the steps in the use case descriptions to methods in the class diagram.

We will attempt to remedy some of these problems in the follow-up study:

• the problem domain will be larger;

• the guidelines will be less strict;

• all the subjects will receive the same material;

• the use case driven process will also include a validation step;

• the subjects will decide for themselves how many sequence diagrams to make for each use case.

To ensure process conformance even with less strict guidance, we will log the subjects' actions. At regular intervals, e.g. every 15 minutes, a small screen will pop up and ask the subject to write down exactly what he/she is doing. This has been done in a previous experiment described in [10]. Logging the subjects' actions will give us an idea of how the subjects actually worked while solving the task. The pop-up screen will also remind the subjects of what they should be working on at that point in the experiment.
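The periodic logging idea can be sketched as follows. This is an illustrative sketch only — the class name, the console-based prompt, and the 15-minute default are assumptions, not a description of the support environment from [2]:

```python
# Sketch of periodic activity logging: every `interval_s` seconds a prompt
# fires and the subject's free-text answer is appended to a log.
import time
import threading

class ActivityLogger:
    def __init__(self, interval_s=15 * 60, prompt=input, clock=time.time):
        self.interval_s = interval_s
        self.prompt = prompt   # swappable for a GUI pop-up or a test stub
        self.clock = clock
        self.entries = []      # (timestamp, answer) pairs

    def tick(self):
        """Ask the subject what they are doing and record the answer."""
        answer = self.prompt("What are you working on right now? ")
        self.entries.append((self.clock(), answer))

    def run(self, stop_event):
        """Fire tick() every interval until the experiment session ends."""
        while not stop_event.wait(self.interval_s):
            self.tick()

# In a session, a background thread would drive the prompts:
# stop = threading.Event()
# logger = ActivityLogger()
# threading.Thread(target=logger.run, args=(stop,), daemon=True).start()
# ... experiment runs ...
# stop.set()
```

Injecting the prompt and clock keeps the logger testable and lets the same logic back either a console prompt or a pop-up window.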

We will, however, provide the subjects with use casemodels in the follow-up study, since the format used is atypical one and there will be time constraints. In laterexperiments we may ask the subjects to construct the usecase model themselves.

6.4. Measurements

We were not able to measure all the quality attributes of the design models, or of the process, that we wanted to in the pilot experiment. In the follow-up experiment, a larger problem will make it possible to measure complexity of the design models in terms of coupling and cohesion, in addition to size. Logging what the subjects are doing at regular intervals will let us assess process conformance better than was possible in the pilot experiment.
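As one example of such a complexity measure, coupling can be counted directly from the associations in a class diagram. The metric choice (a CBO-style count) and the data shape below are assumptions for illustration; the paper does not specify which coupling measure will be used:

```python
# Minimal sketch: count, per class, how many distinct other classes it is
# coupled to, given the class diagram's association pairs.
from collections import defaultdict

def coupling_counts(associations):
    """Return {class name: number of distinct classes it is associated with}."""
    coupled = defaultdict(set)
    for a, b in associations:
        coupled[a].add(b)
        coupled[b].add(a)
    return {cls: len(partners) for cls, partners in coupled.items()}

diagram = [("Order", "Customer"), ("Order", "Item"), ("Item", "Product")]
print(coupling_counts(diagram))  # {'Order': 2, 'Customer': 1, 'Item': 2, 'Product': 1}
```

On a larger problem such counts can be compared between the two processes, just as size and error counts were compared in the pilot.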

7. Conclusions and Future Work

We conducted a pilot experiment to evaluate two different approaches to applying use case models in an object-oriented design process. One approach was use case driven: in such processes, a use case model serves as the basis for constructing a design model. The other approach was a validation process, where a use case model was applied in validating the design. The aim of the experiment was to investigate differences between the two approaches with regard to the quality of the resulting design models, defined in terms of realism as well as size and number of errors in the class diagrams.

The results show that the validation process led to design models with greater realism in the method composition of the class diagrams. No significant difference regarding size was found, but there was a larger variance in the number of classes in the class diagrams constructed with the validation process than with the use case driven process. The use case driven process led to more classes that were erroneous relative to the problem domain than did the validation process. This indicates that the use case driven process gives more, but not necessarily better, guidance on how to identify classes and their attributes and methods. In our opinion, the results support the claims that a use case model is insufficient for deriving all necessary classes and may lead the developers to mistake requirements for design [15]. The results also indicate that it may be more appropriate to consider a use case as a behavioural feature of the system against which class diagrams can be validated, rather than as something with a state and methods from which the design can be derived.

This study was exploratory. One experiment can only provide insight into how the alternative processes perform in a limited context. To gain knowledge that is transferable to actual software development practice, it is therefore necessary to conduct a series of experiments in different environments. An experiment permits the in-depth study of some aspects of a development process, but an experimental context will necessarily differ from actual work practice, so experiments should be combined with other types of studies, for example, case studies. We plan to conduct further studies investigating how a use case model best can be applied in an object-oriented design process. A follow-up experiment is planned which will incorporate the modifications described in Section 6. We also plan to conduct an experiment with professional software developers.

Acknowledgement

We acknowledge all the students at the University of Oslo who took part in the experiment. We thank Ray Welland and the anonymous referees for their constructive comments on this paper.

References

1. Anda, B., Sjøberg, D. and Jørgensen, M. Quality and Understandability in Use Case Models. 13th European Conference on Object-Oriented Programming (ECOOP 2001), June 18-22, Budapest, Hungary, LNCS 2072, Springer Verlag, pp. 402-428, 2001.

2. Arisholm, E., Sjøberg, D., Carelius, G.J. and Lindsjørn, Y. SESE – an Experiment Support Environment for Evaluating Software Engineering Technologies. NWPER'2002 (Tenth Nordic Workshop on Programming and Software Development Tools and Techniques), pp. 81-98, Copenhagen, Denmark, 18-20 August, 2002.

3. Arlow, J. and Neustadt, I. UML and the Unified Process. Practical Object-Oriented Analysis and Design. Addison-Wesley, 2002.

4. Booch, G., Rumbaugh, J. and Jacobson, I. The Unified Modeling Language User Guide. Addison-Wesley, 1999.

5. Cockburn, A. Writing Effective Use Cases. Addison-Wesley, 2000.

6. Genero, M., Piattini, M. and Calero, C. Early Measures for UML Class Diagrams. L'Objet, Vol. 6, No. 4, Hermes Science Publications, pp. 489-515, 2000.

7. Genero, M. and Piattini, M. Empirical Validation of Measures for Class Diagram Structural Complexity through Controlled Experiments. 5th International ECOOP Workshop on Quantitative Approaches in Object-Oriented Software Engineering, June 2001.

8. Jacobson, I., Christerson, M., Jonsson, P. and Overgaard, G. Object-Oriented Software Engineering: A Use Case Driven Approach. Addison-Wesley, 1992.

9. Jacobson, I., Booch, G. and Rumbaugh, J. The Unified Software Development Process. Addison-Wesley, 1999.

10. Karahasanovic, A. and Sjøberg, D. Visualizing Impacts of Database Schema Changes – A Controlled Experiment. In 2001 IEEE Symposium on Visual/Multimedia Approaches to Programming and Software Engineering, Stresa, Italy, September 5-7, 2001, pp. 358-365, IEEE Computer Society.

11. Reissing, R. Towards a Model for Object-Oriented Design Measurement. 5th International ECOOP Workshop on Quantitative Approaches in Object-Oriented Software Engineering, June 2001.

12. Richter, C. Designing Flexible Object-Oriented Systems with UML. Macmillan Technical Publishing, 1999.

13. Rosenberg, D. and Scott, K. Applying Use Case Driven Object Modeling with UML. An Annotated E-commerce Example. Addison-Wesley, 2001.

14. Shull, F., Travassos, G., Carver, J. and Basili, V. Evolving a Set of Techniques for OO Inspections. University of Maryland Technical Report CS-TR-4070, October 1999.

15. Stevens, P. and Pooley, R. Using UML. Software Engineering with Objects and Components. Addison-Wesley, 2000.

16. The UML meta-model, version 1.3. www.omg.org, 2001.
