The Journal of Systems and Software 85 (2012) 227–243
Contents lists available at SciVerse ScienceDirect
Journal homepage: www.elsevier.com/locate/jss

Applying and evaluating concern-sensitive design heuristics

Eduardo Figueiredo a,*, Claudio Sant'Anna b, Alessandro Garcia c, Carlos Lucena c
a Computer Science Department, Federal University of Minas Gerais – UFMG, Brazil
b Software Engineering Lab (LES), Computer Science Department, Federal University of Bahia – UFBA, Brazil
c Opus Research Group, Computer Science Department, Pontifical Catholic University of Rio de Janeiro – PUC-Rio, Brazil

Article history: Received 1 March 2010; received in revised form 7 September 2011; accepted 29 September 2011; available online 20 October 2011.

Keywords: Design heuristics; Modularity; Crosscutting concerns; Software metrics; Aspect-oriented software development

Abstract

Manifestation of crosscutting concerns in software systems is often indicative of design modularity flaws and of further design instabilities as those systems evolve. Without proper design evaluation mechanisms, the identification of harmful crosscutting concerns can become counter-productive and impractical. Nowadays, metrics and heuristics are the basic mechanisms to support their identification and classification in either object-oriented or aspect-oriented programs. However, conventional mechanisms have a number of limitations that prevent an effective identification and classification of crosscutting concerns in a software system. In this paper, we claim that those limitations are mostly caused by the fact that existing metrics and heuristics are not sensitive to primitive concern properties, such as their degree of tangling and scattering or their specific structural shapes. This means that modularity assessment is rooted only in conventional attributes of modules, such as module cohesion, coupling, and size. This paper proposes a representative suite of concern-sensitive heuristic rules, supported by a prototype tool. The paper also reports an exploratory study to evaluate the accuracy of the proposed heuristics by applying them to seven systems. The results of this exploratory analysis give evidence that the heuristics offer support for: (i) addressing the shortcomings of conventional metrics-based assessments, (ii) reducing the manifestation of false positives and false negatives in modularity assessment, (iii) detecting sources of design instability, and (iv) finding the presence of design modularity flaws in both object-oriented and aspect-oriented programs. Although our results are limited by a number of decisions we made in this study, they indicate a promising research direction. Further analyses are required to confirm or refute our preliminary findings and, therefore, this study should be seen as a stepping stone towards understanding how concerns can be useful assessment abstractions. We conclude this paper by discussing the limitations of this exploratory study, focusing on some situations which hinder the accuracy of concern-sensitive heuristics.

© 2011 Elsevier Inc. All rights reserved.

* Corresponding author. Tel.: +55 3134095878. E-mail addresses: [email protected], [email protected] (E. Figueiredo), [email protected] (C. Sant'Anna), [email protected] (A. Garcia), [email protected] (C. Lucena).

0164-1212/$ – see front matter © 2011 Elsevier Inc. All rights reserved. doi:10.1016/j.jss.2011.09.060

1. Introduction

Software systems are always changing to address new stakeholders' concerns. Design modularity improves the stability of software by decoupling design concerns that are likely to change so that they can be maintained independently. A concern is any consideration that can impact the design and maintenance of program modules (Robillard and Murphy, 2007). According to this general definition, concerns can be, for instance, non-functional requirements, domain-specific features, design patterns, or implementation idioms (Hannemann and Kiczales, 2002; Kiczales et al., 1997). Despite the efforts of modern programming languages, some concerns cannot be well modularised in the system design and implementation (Hannemann and Kiczales, 2002). These concerns, called crosscutting concerns, are often blamed for hindering design modularity and stability (Figueiredo et al., 2008; Greenwood et al., 2007). Inaccurate concern modularisations can lead to a wide range of design flaws (Filho et al., 2006; Greenwood et al., 2007), ranging from concern tangling and scattering (Figueiredo et al., 2009; Kiczales et al., 1997) to specific code smells, such as feature envies or god classes (Monteiro and Fernandes, 2005).

Without proper design evaluation mechanisms, the identification of harmful categories of crosscutting concerns can become counter-productive and impractical. Nowadays, metrics and heuristics are the basic mechanisms to support the identification and classification of crosscutting concerns (Figueiredo et al., 2009). In fact, metrics and heuristics are traditionally the fundamental mechanisms for assessing design modularity (Chidamber and Kemerer, 2004; Lanza and Marinescu, 2006; Marinescu, 2004; Riel,



Table 1
Examples of traditional software metrics (Ceccato and Tonella, 2004; Chidamber and Kemerer, 2004; Sant'Anna et al., 2003).

Number of Components (NC) (Sant'Anna et al., 2003): Counts the number of classes, interfaces, and aspects of the system.
Number of Attributes (NOA) (Sant'Anna et al., 2003): Counts the number of attributes of each class, interface, or aspect of the system.
Number of Operations (NOO) (Sant'Anna et al., 2003): Counts the number of methods, constructors, advice, and intertype methods of each class, interface, or aspect.
Coupling between Components (CBC) (Ceccato and Tonella, 2004; Chidamber and Kemerer, 2004; Sant'Anna et al., 2003): Counts the number of other classes, interfaces, and aspects to which a component is coupled.


1996). To date, design modularity assessment in either object-oriented or aspect-oriented designs has been mostly rooted in extensions of module-level metrics (Chidamber and Kemerer, 2004; Sant'Anna et al., 2003; Zhao, 2002) that have been historically explored in the software engineering literature. For instance, Sant'Anna (Sant'Anna et al., 2003), Ceccato (Ceccato and Tonella, 2004) and their colleagues defined metrics for aspectual coupling and cohesion based on Chidamber and Kemerer's metrics (Chidamber and Kemerer, 2004). However, these measures do not treat concerns as first-class assessment abstractions. Similarly, existing heuristic rules (Lanza and Marinescu, 2006; Marinescu, 2004) are also based on such traditional modularity metrics and do not promote concern-sensitive design evaluation either.

The recognition that concern identification and analysis are important through software design activities is not new. In fact, with the emergence of aspect-oriented software development (AOSD) (Kiczales et al., 1997), there is a growing body of relevant work in the software engineering literature focusing either on concern representation and identification techniques (Ducasse et al., 2006; Figueiredo et al., 2009; Robillard and Murphy, 2007) or on concern analysis tools (Robillard and Murphy, 2007). However, there is not much knowledge on the efficacy of concern-driven assessment mechanisms for design modularity and stability. Even though we can qualify some recently-proposed metrics as "concern-oriented" (Ducasse et al., 2006; Sant'Anna et al., 2003), there is a lack of design heuristics to support concern-sensitive assessment. More fundamentally, there is no systematic study that investigates whether this category of heuristic rules addresses limitations of conventional metrics and enhances the processes not only of classifying crosscutting concerns but also of evaluating design modularity and stability.

In this context, the contributions of this paper are fourfold. First, after revisiting existing assessment mechanisms, it discusses the limitations of conventional metrics-based heuristics (Section 2). Second, it presents a suite of heuristic rules with the distinguishing characteristic of exploiting concerns as explicit abstractions in the design assessment process (Section 4). We do not claim that the suite of heuristics is complete; its purpose is to demonstrate the key concepts behind the general idea of concern-sensitive design analysis. The heuristic rules rely on a set of concern-driven metrics (Section 3) and target complementary concern-driven design analyses:

1. Identification and classification of crosscutting concerns according to their primitive properties, such as tangling and scattering. These heuristics are fundamental as concern scattering tends to both destabilise software modularity properties and facilitate the propagation of changes, according to previous empirical assessments (Figueiredo et al., 2008; Greenwood et al., 2007).

2. Identification and classification of specific crosscutting patterns according to more sophisticated properties of crosscutting concerns in artefacts. These heuristics are complementary to the first group of heuristics since it has been recently found that certain recurring forms of crosscutting concerns may not be harmful to design stability (Figueiredo et al., 2009). Therefore, one may want to know which crosscutting concerns are actually harmful.

3. Detection of classical bad smells (Fowler, 1999; Riel, 1996) that are often sensitive to the way concern realisations are organised in the source code, such as feature envies and god classes.

Third, this paper also presents a prototype tool, called ConcernMorph (Section 5), which supports the proposed heuristics.

Finally, it provides an exploratory evaluation of the accuracy of the concern-sensitive heuristics in the context of seven applications. These applications were selected after revisiting previous studies (Cacho et al., 2006; Filho et al., 2006; Garcia et al., 2004, 2006; Greenwood et al., 2007) which investigated their design modularity and stability issues by using metrics. We have analysed both OO and aspectual designs of such systems, which encompass heterogeneous forms of crosscutting and non-crosscutting concerns (Section 6). The overall results of our exploratory evaluation give evidence that concern-sensitive design heuristics (Section 7): (i) address most of the shortcomings of conventional assessment mechanisms, (ii) support effective identification and classification of crosscutting concerns with around 80% accuracy, and (iii) present superior identification rates of crosscutting anomalies (compared to conventional heuristics) that are harmful to modularity and stability in both object-oriented and aspect-oriented programs. Our results are limited by a number of decisions we made in this study (Section 8). After discussing these limitations, typical of exploratory studies, we point out directions for future work (Section 9).

2. Design modularity assessment

This section discusses aspect-oriented metrics (Section 2.1) and conventional heuristics (Section 2.2) for modularity evaluation of aspect-oriented (AO) and object-oriented (OO) systems, respectively. It also points out the limitations of these conventional assessment mechanisms (Section 2.3). Studies on aspect-oriented measurement and conventional heuristics represent work related to the proposed concern-sensitive heuristics. To the best of our knowledge, no work on concern-sensitive heuristics has been carried out before.

2.1. Metrics for aspect-oriented designs

A number of metrics (Ceccato and Tonella, 2004; Sant'Anna et al., 2003; Zhao, 2002) have been recently defined to quantify design modularity properties of AO systems. However, most of these metrics, named AO metrics for short, are based on extensions of metrics for OO design assessment (Briand et al., 1999), such as Chidamber and Kemerer's Coupling between Objects (CBO) (Chidamber and Kemerer, 2004). Table 1 presents four conventional AO metrics used later in this paper (Section 4). Table 1 also includes references (1st column) and a short description of each metric (2nd column). This table includes metrics to quantify the number of components (NC), attributes (NOA), and operations (NOO) of a software design. A component is a unified abstraction for both aspectual and non-aspectual modules. For instance, a component represents either a class or an interface in OO designs, whereas a component is a class, an interface, or an aspect in AO designs (Kiczales et al., 1997). Similarly, operations can be methods, constructors, advice, or intertype methods in AO designs.
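The unified component abstraction behind these metrics can be sketched in code. The following is an illustrative Python sketch, not part of the paper's tooling; the component names and members are invented for the example.

```python
# A minimal model of the unified "component" abstraction used by the AO
# metrics in Table 1: a component is a class, interface, or aspect, and its
# operations include methods, constructors, advice, and intertype methods.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    kind: str                                        # "class", "interface", or "aspect"
    attributes: list = field(default_factory=list)
    operations: list = field(default_factory=list)   # methods, advice, ...

def nc(components):   # Number of Components
    return len(components)

def noa(component):   # Number of Attributes of one component
    return len(component.attributes)

def noo(component):   # Number of Operations of one component
    return len(component.operations)

# Hypothetical design fragment (names are invented for illustration):
design = [
    Component("PersistenceMechanism", "class",
              attributes=["connection"], operations=["connect()", "commit()"]),
    Component("TransactionAspect", "aspect",
              operations=["around_advice()", "log()"]),
]
print(nc(design), noa(design[0]), noo(design[1]))  # 2 1 2
```

The point of the unified abstraction is that the same counting functions apply unchanged to OO and AO designs.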

There are also some AO metrics based on dependency graphs (Zhao, 2002) which focus on specific mechanisms available in AO languages. For instance, Zhao (Zhao, 2002) and Ceccato (Ceccato and Tonella, 2004) propose suites of metrics that measure coupling and cohesion based on mechanisms and abstractions available in AspectJ (Kiczales et al., 2001), such as pointcut, advice, and intertype declarations. We focus on more general AO metrics, such as those presented in Table 1, in order to allow our approach to be applied to a variety of AO languages, such as AspectJ and CaesarJ (Mezini and Ostermann, 2003), in addition to OO systems. Note that all these AO metrics are rooted in attributes of modules, such as syntax-based module cohesion, coupling between modules, and module interface complexity.

2.2. Conventional heuristic assessment

Despite the extensive use of metrics, if used in isolation metrics are often too fine grained to comprehensively quantify the investigated modularity flaw (Marinescu, 2002, 2004). In order to overcome this limitation of metrics, some researchers (Lanza and Marinescu, 2006; Marinescu, 2002, 2004; Sahraoui et al., 2000) proposed a mechanism called design heuristic rule (or detection strategy) for formulating metrics-based rules that capture deviations from good design principles. A heuristic rule is a composed logical condition, based on metrics, which detects design fragments with specific problems.

For instance, we present below a heuristic rule proposed by Marinescu (2002). This rule aims at detecting a specific kind of modularity flaw, namely the Shotgun Surgery bad smell (Fowler, 1999). Bad smells were proposed by Kent Beck in Fowler's book (Fowler, 1999) to diagnose symptoms that may be indicative of something wrong in the design. Shotgun Surgery occurs when a change in a characteristic (or concern) of the system implies many changes in a lot of different places (Fowler, 1999). The reason for choosing Shotgun Surgery as illustrative is that it is believed to be a symptom of design flaws caused by poor modularisation of concerns (Monteiro and Fernandes, 2005). Therefore, it might be avoided with the use of aspects.

Shotgun Surgery := ((CM, TopValues(20%)) and (CM, HigherThan(10))) and (CC, HigherThan(5))

Marinescu's heuristic rule for detecting Shotgun Surgery is based on two conventional coupling metrics, namely CM and CC. CM stands for the Changing Method metric (Lanza and Marinescu, 2006; Marinescu, 2002), which counts the number of distinct methods that access an attribute or call a method of the given class. CC stands for the Changing Classes metric (Lanza and Marinescu, 2006; Marinescu, 2002), which counts the number of classes that access an attribute or call a method of the given class. To avoid hand counting, we rely on the implementation of these metrics available in Together Architect 2006. TopValues and HigherThan are filtering mechanisms which can be parameterised with a value representing the threshold. TopValues selects the percentage of elements with the highest values for a metric, and HigherThan selects all elements with values above a given threshold. For instance, the Shotgun Surgery heuristic above says that a class should not have CC higher than 5 and should not have methods within the top 20% highest CM values if these methods have CM higher than 10. Otherwise, the class is a Shotgun Surgery candidate. To the best of our knowledge, all current design heuristic rules, such as the one presented above, are based on conventional module-driven metrics. Therefore, a common characteristic of all those heuristics is that they are restricted to properties of modularity units explicitly defined in AO or OO languages, such as classes, aspects, and operations.
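Under one plausible reading of the rule (all three conditions checked per class, with the thresholds given above), the detection strategy can be sketched in Python. The metric values below are illustrative, loosely inspired by the Health Watcher example; they are not measured data.

```python
# Sketch of Marinescu's Shotgun Surgery detection strategy over per-class
# CM/CC measurements. Class names and values are illustrative only.

def top_values(metric_by_class, fraction):
    """Return the classes whose metric value falls in the top `fraction`."""
    ranked = sorted(metric_by_class, key=metric_by_class.get, reverse=True)
    cutoff = max(1, int(len(ranked) * fraction))
    return set(ranked[:cutoff])

def shotgun_surgery_suspects(cm, cc):
    """Shotgun Surgery := ((CM, TopValues(20%)) and (CM, HigherThan(10)))
                          and (CC, HigherThan(5))"""
    top20 = top_values(cm, 0.20)
    return {c for c in cm if c in top20 and cm[c] > 10 and cc[c] > 5}

# Illustrative values (assumed, not taken from the actual system):
cm = {"PersistenceMechanism": 25, "HealthWatcherFacade": 17,
      "ComplaintRepository": 5, "RDBRepositoryFactory": 1,
      "HealthUnitRepository": 1}
cc = {"PersistenceMechanism": 8, "HealthWatcherFacade": 2,
      "ComplaintRepository": 1, "RDBRepositoryFactory": 2,
      "HealthUnitRepository": 2}

print(shotgun_surgery_suspects(cm, cc))  # {'PersistenceMechanism'}
```

Note how the rule flags an individual class, not the concern whose scattering causes the symptom; this is exactly the limitation discussed in Section 2.3.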


2.3. Limitations of conventional heuristics

Although many design modularity flaws are related to the inadequate modularisation of concerns (Cacho et al., 2006; Garcia et al., 2006; Greenwood et al., 2007), most current quantitative assessment approaches, such as AO metrics (Section 2.1) and heuristic rules (Section 2.2), do not explicitly consider concern as a measurement abstraction. This imposes certain shortcomings on the effective detection and correction of design impairments. This limitation becomes more apparent in the age of AOSD, since different forms of design composition and decomposition have been introduced to separate crosscutting concerns. To illustrate the limitations of conventional metrics-based heuristic rules, we analyse the effectiveness of Marinescu's rule (Marinescu, 2002) presented in Section 2.2. This heuristic rule is applied to the partial design shown in Fig. 1 with the purpose of detecting the Shotgun Surgery bad smell. This figure shows a partial class diagram realising the Persistence concern in a Web-based system, called Health Watcher (Greenwood et al., 2007). Elements of the design are shadowed to indicate that they implement this concern.

By applying Together Architect 2006 to measure CC and CM, we obtain the CC and CM values for each Health Watcher class as indicated in Fig. 1. Based on these values and computing Marinescu's heuristic, the PersistenceMechanism class is regarded as a suspect of Shotgun Surgery. This occurs because its values for CC and CM are 8 and 25, respectively. Nevertheless, developers might find it difficult to understand why this class suffers from Shotgun Surgery (and others do not) just by looking at its design and code. That is, why would changing this class trigger changes in many other classes of the system? The main reason is that the PersistenceMechanism class implements Persistence, which is a crosscutting concern in Health Watcher. Therefore, we claim that the problem is not with the PersistenceMechanism class but with the Persistence concern.

This example illustrates how conventional heuristic rules are limited in pointing out the overall influence of one concern, Persistence in this case, on other parts of the design. Particularly, although Marinescu's rule was able to detect an instance of Shotgun Surgery in this design slice, it was unable to point out the actual cause of the design flaw. In fact, this rule could not highlight the complete impact of the Persistence concern because it considers only measures based on the module and method abstractions. Therefore, it fails to detect all the classes which realise Persistence and, consequently, also suffer from the Shotgun Surgery bad smell.

3. Concern-driven metrics

This section defines the concern metrics used by the suite of heuristics presented in Section 4. All metrics defined in this section have a common underlying characteristic that distinguishes them from conventional metrics (Section 2.1): they capture information about concerns traversing one or more design modularity units. A concern is often not defined by the "boundaries" of modules in modelling or programming languages (Robillard and Murphy, 2007), such as components (e.g., classes or aspects) and operations (e.g., methods or advice). Instead, a concern spreads over the design elements and is realised by a heterogeneous set of these elements.

Metrics definition: Each concern metric in this section is presented in terms of a definition, the measurement purpose, and an example. As far as the example is concerned, we rely on an object-oriented design slice of a middleware system (Cacho et al., 2006) presented in Fig. 2. This figure shows a partial class diagram realising both the Factory Method and Observer design patterns (Gamma
Page 4: Applying and evaluating concern-sensitive design heuristics

230 E. Figueiredo et al. / The Journal of Systems and Software 85 (2012) 227– 243

Fig. 1. Scattering of the Persistence concern over Health Watcher classes. (The figure shows the Health Watcher classes annotated with their CM and CC values; among them, PersistenceMechanism has CM = 25 and CC = 8.)

et al., 1995). Elements of the design are shadowed to indicate which design pattern they implement. The concern metrics are computed based on the mapping of concerns to design elements. We used ConcernMapper (Robillard and Weigand-Warr, 2005) for supporting the mapping of concerns and ConcernMorph (Section 5) for concern measurement. Although these tools were used, the mappings of concerns to design elements depend on the developers' skills (Figueiredo et al., 2011). The choice of the concerns to be measured depends on the nature of the assessment goals; some examples will be given in this section and throughout our evaluation (Sections 6 and 7).

Fig. 2. Concern metrics applied to the Observer and Factory Method patterns. (Table (a) of the figure reports, per concern: Factory Method has NOCA = 4, NOCO = 10, and CDC = 6; Observer has NOCA = 2, NOCO = 12, and CDC = 6. Table (b) reports, for the MetaObjComposite component, ICSC = 0 and CSC = 6 for Factory Method, and NCC = 2.)

3.1. Concern scattering and tangling

The metric Concern Diffusion over Components (CDC) (Sant'Anna et al., 2003) counts the number of classes and aspects that have some influence of a certain concern, thereby enabling the designer to assess the degree of concern scattering. For instance, Fig. 2 shows that there is behaviour related to the Factory Method design pattern in six components (MetaObject, MetaObjFactory, and respective subclasses). Therefore, the value of the CDC metric for the Factory Method concern is six (Fig. 2, table (a)). Unlike CDC, the metric Number of Concerns per Component (NCC) quantifies the concern tangling



from the system components' point of view. It counts the number of concerns each class or aspect implements. Its goal is to support designers in observing the intra-component tangling degree. The value of the NCC metric for MetaObjComposite is two (Fig. 2, table (b)), since this component implements the concerns of both the Factory Method and Observer design patterns.
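Both CDC and NCC can be derived from a concern-to-component mapping, such as one produced with ConcernMapper. The following Python sketch is illustrative; the mapping is a hypothetical fragment inspired by Fig. 2, not the complete design.

```python
# CDC (scattering) and NCC (tangling) computed from a concern mapping.
# The component sets below are assumed for illustration.
concern_map = {  # concern -> components it touches
    "Factory Method": {"MetaObject", "MetaObjEncapsule", "MetaObjComposite",
                       "MetaObjFactory", "FactoryEncapsule", "FactoryComposite"},
    "Observer": {"MetaSubject", "MetaObserver", "Component",
                 "ConcreteBind", "MetaObjEncapsule", "MetaObjComposite"},
}

def cdc(concern):
    """Concern Diffusion over Components: how many components a concern touches."""
    return len(concern_map[concern])

def ncc(component):
    """Number of Concerns per Component: how many concerns a component tangles."""
    return sum(1 for comps in concern_map.values() if component in comps)

print(cdc("Factory Method"))    # 6: the concern is scattered over six components
print(ncc("MetaObjComposite"))  # 2: the component tangles two concerns
```

The two metrics are duals of the same mapping: CDC reads it per concern, NCC per component.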

3.2. Concern materialisation and coupling

The metric Number of Concern Attributes (NOCA) counts the attributes (including intertype attributes in aspects) that contribute to the realisation of a certain concern. Similarly, Number of Concern Operations (NOCO) counts the methods, constructors, and advices that realise the concern. The goal of NOCA and NOCO is to quantify how many internal component members are necessary for the materialisation of a specific concern. The example in Fig. 2 shows that the value of NOCO is ten for the Factory Method design pattern, since this is the number of operations implementing it. In the same example we see that four attributes realise this design pattern (NOCA = 4).

Concern-Sensitive Coupling (CSC) quantifies the number of components to which a given class or aspect realising the concern is coupled. In other words, CSC counts the number of coupling connections associated with the concern of interest in a specific component. Similarly, the metric Intra-Component Concern-Sensitive Coupling (ICSC) counts the number of internal attributes accessed and internal methods called by a concern in a given component. Additionally, these attributes and methods (i) have to implement a concern other than the assessed one and (ii) cannot be inherited nor introduced in the class by intertype declaration. Fig. 2 shows that the MetaObjComposite class is coupled to six components due to Factory Method (CSC = 6). Moreover, members of MetaObjComposite realising Factory Method do not use members of the same class realising the Observer design pattern (ICSC = 0). Sequence diagrams are required to count CSC and ICSC at the design level, but we omitted them for the sake of simplicity.
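NOCA and NOCO require a finer, member-level concern mapping than CDC and NCC. The sketch below computes them over a hypothetical member mapping; the member names and their concern assignments are assumptions for illustration, so the resulting counts differ from the full Fig. 2 values.

```python
# NOCA and NOCO over a member-level concern mapping. The member lists are
# an invented toy fragment, not the actual members of the Fig. 2 design.
members = {  # component -> {member: (kind, concern)}
    "MetaObjComposite": {
        "graph":             ("attribute", "Factory Method"),
        "createGraph()":     ("operation", "Factory Method"),
        "rebind()":          ("operation", "Factory Method"),
        "observers":         ("attribute", "Observer"),
        "notifyObservers()": ("operation", "Observer"),
    },
}

def noca(concern):
    """Number of Concern Attributes realising `concern` across all components."""
    return sum(1 for ms in members.values()
               for kind, c in ms.values() if kind == "attribute" and c == concern)

def noco(concern):
    """Number of Concern Operations (methods, constructors, advice) for `concern`."""
    return sum(1 for ms in members.values()
               for kind, c in ms.values() if kind == "operation" and c == concern)

print(noca("Factory Method"), noco("Factory Method"))  # 1 2 in this toy fragment
```

CSC and ICSC would extend the same mapping with the access and call relations between members, which is why sequence diagrams are needed at the design level.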

4. Concern-sensitive heuristics

This section presents heuristic rules to address the limitations of conventional heuristic rules discussed in Section 2. Our main goal is to define heuristics that make the evaluation process sensitive to the design concerns. The proposed heuristics, called concern-sensitive heuristic rules, are defined in terms of combined information collected from both concern metrics (Section 3) and conventional modularity metrics. Each heuristic expression embodies modularity knowledge about the realisation of a concern in the respective OO or AO designs. The motivation of concern-sensitive heuristics is to minimise the shortcomings of conventional metrics-based heuristics illustrated in Section 2. Our hypothesis, to be tested in Section 6, is that most of these problems can be ameliorated through the application of concern-aware heuristics.

Heuristics structure. All heuristic rules are expressed using conditional statements in the form: IF <condition> THEN <consequence>. The condition part encompasses one or more metrics' outcomes related to the design concern under analysis. One concern is analysed at a time. If the condition is not satisfied, the analysis of that concern is concluded and, as a result, its classification is not refined. In case the condition holds, the role of the consequence part is to describe a change or refinement of the target concern classification. The heuristic rules were structured in such a way that the classification is systematically refined into a more specialised category. In other words, when used to find bad symptoms they gradually generate warnings with higher


gravity. The generated warnings encompass information that helps the designers to concentrate on certain concerns or parts of the design which are potentially problematic. The proposed heuristics suite is structured to detect three major groups of concern-related flaws: (i) concern diffusion (Section 4.1), (ii) patterns of crosscutting concerns (Section 4.2), and (iii) specific design flaws (Section 4.3).
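To make this rule structure concrete, the refinement process can be sketched in Python. This is an illustration of ours, with invented function names and a dict-based data layout; it is not ConcernMorph's actual implementation. Rules R01 and R02 (defined in Section 4.1) appear as condition/consequence pairs that progressively refine a concern's classification:

```python
# Hypothetical sketch of the IF <condition> THEN <consequence> rule chain.
# A concern is modelled as a list of per-component metric dicts; "ncc" is
# the Number of Concerns per Component metric from the paper.

def apply_rules(rules, concern, classification):
    """Refine a concern's classification by firing matching rules in order."""
    for condition, consequence in rules:
        if condition(concern):
            classification = consequence  # refine into a more specialised category
    return classification

concern = [
    {"ncc": 2},  # this component realises 2 concerns -> tangling
    {"ncc": 1},
]

rules = [
    (lambda c: all(m["ncc"] == 1 for m in c), "isolated"),  # R01
    (lambda c: any(m["ncc"] > 1 for m in c), "tangled"),    # R02
]

print(apply_rules(rules, concern, "unclassified"))  # -> tangled
```

In this sketch, later rules can only fire if their (more specific) conditions hold, mirroring the gradual refinement described above.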

4.1. Concern diffusion

Heuristic rules of this section, named concern diffusion heuristics, classify the way each concern manifests itself through the software modularity units. A concern can be classified into one (or more) of six categories: Isolated, Tangled, Little Scattered, Highly Scattered, Well Localised, and Crosscutting. A tangled concern is interleaved with other concerns in at least one component (i.e., class or aspect). If the concern is not tangled in any component, it is considered isolated. A scattered concern spreads over multiple components. Our classification makes a distinction between highly scattered and little scattered concerns based on the number of affected components. A concern is crosscutting only if it is both tangled with other concerns and scattered over multiple system components. As we present later, even little scattered concerns might be considered crosscutting in some scenarios. Crosscutting concerns generate warnings of inadequate separation of concerns and, consequently, indicate opportunities for refactoring (Fowler, 1999; Monteiro and Fernandes, 2005).

Fig. 3 presents the definitions of the concern diffusion heuristic rules (left), denoting transitions between two concern classifications (diagram on the right). The diagram makes explicit the application order of the concern diffusion heuristics. This order was defined based on our empirical knowledge on analysing concern metrics (Figueiredo et al., 2009; Garcia et al., 2004, 2006; Sant'Anna et al., 2003). For instance, we usually check whether a concern is tangled or not (by means of NCC) before proceeding with further concern scattering analysis (by means of CDC). The first two rules, R01 and R02, use the metric Number of Concerns per Component (NCC) to classify the concern as isolated or tangled. If the NCC value is one for every component realising the analysed concern, it means that there is only the analysed concern in these components and, therefore, the concern is isolated. However, if NCC is higher than one in at least one component, it means that the concern is tangled with other concerns in that component. For instance, the Factory Method and Observer design patterns are both implemented in the MetaObjComposite class (Fig. 2). Therefore, these patterns are tangled in that system.

Rules R03 and R04 (Fig. 3) verify how scattered a tangled concern is. These heuristics use the metrics Concern Diffusion over Components (CDC) and Number of Components (NC) to calculate the percentage of system components affected by the concern of interest. Based on this percentage, the concern is classified as highly scattered or little scattered. As you might have already noticed, one of the most sensitive parts of a heuristic rule is the selection of threshold values. Our strategy for these rules is to use 50% as default, but our tool (Section 5) allows customised threshold values. Note that developers should be aware of highly scattered concerns because they can potentially cause design flaws, such as Shotgun Surgery (Fowler, 1999), further discussed in Section 4.3.
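Under the same illustrative assumptions as before (our own sketch, not the tool's code), rules R03 and R04 reduce to a single ratio check against the configurable threshold, with the 50% default quoted above:

```python
# Sketch of R03/R04: classify a tangled concern by the fraction of system
# components it touches (CDC / NC). The 0.5 default follows the text; the
# function name and signature are our own illustration.

def scattering(cdc, nc, threshold=0.5):
    ratio = cdc / nc
    return "highly scattered" if ratio >= threshold else "little scattered"

print(scattering(cdc=3, nc=10))  # 30% of components -> little scattered
print(scattering(cdc=6, nc=10))  # 60% of components -> highly scattered
```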

As tangling and scattering might mean different levels of crosscutting, both highly scattered and little scattered concerns can be classified as well-localised or crosscutting concerns. Rules R05 and R06 decide whether a little scattered concern is either well-localised or crosscutting. R07 and R08 perform similar analyses for a highly scattered concern. These four rules use the metrics Number of Concern Attributes (NOCA) and Number of Concern Operations (NOCO) and two size metrics (Table 1): Number of Attributes (NOA)
R01 - Isolated: if NCC = 1 for every component realising CONCERN then CONCERN is ISOLATED
R02 - Tangled: if NCC > 1 for at least one component then CONCERN is TANGLED
R03 - Little Scattered: if (CDC / NC) of CONCERN < 0.5 then TANGLED CONCERN is LITTLE SCATTERED
R04 - Highly Scattered: if (CDC / NC) of CONCERN ≥ 0.5 then TANGLED CONCERN is HIGHLY SCATTERED
R05 - Well Encapsulated: if (NOCA / NOA ≥ 0.5) and (NOCO / NOO ≥ 0.5) for every component then LITTLE SCATTERED CONCERN is WELL-LOCALISED
R06 - Crosscutting: if (NOCA / NOA < 0.5) and (NOCO / NOO < 0.5) for at least one component then LITTLE SCATTERED CONCERN is CROSSCUTTING
R07 - Well Encapsulated: if (NOCA / NOA ≥ 0.5) and (NOCO / NOO ≥ 0.5) for every component then HIGHLY SCATTERED CONCERN is WELL-LOCALISED
R08 - Crosscutting: if (NOCA / NOA < 0.5) and (NOCO / NOO < 0.5) for at least one component then HIGHLY SCATTERED CONCERN is CROSSCUTTING

Fig. 3. Concern classification and heuristics for concern diffusion.

and Number of Operations (NOO). They calculate for each component the percentage of attributes and operations which implement the concern being analysed.

The role of the heuristics R05 and R07 is to detect components that dedicate a large number of attributes and operations (more than 50%) to realise the dominant concern. If a concern is dominant in all components where it is located, this concern is classified as well-localised. The reasoning behind this classification is that a concern is not harmful to classes or aspects in which it is dominant and, therefore, it does not need to be removed. Similarly, a concern is classified as crosscutting (rules R06 and R08) if the percentage of attributes and the percentage of operations related to the concern are low (less than 50%) in at least one component. Hence, a concern is classified as crosscutting if it is located in at least one component which has another concern as dominant.
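The dominance check behind R05-R08 can be sketched as follows. This is our own illustration under an assumed dict layout for per-component metrics, not the tool's implementation; note that the two rule conditions are not exact complements, so a concern may remain unrefined:

```python
# Sketch of R05/R07 (well-localised) and R06/R08 (crosscutting) using the
# 50% dominance threshold from the text.

def classify_dominance(components, t=0.5):
    # R05/R07: dominant (>= 50% of attributes AND operations) everywhere.
    if all(c["noca"] / c["noa"] >= t and c["noco"] / c["noo"] >= t
           for c in components):
        return "well-localised"
    # R06/R08: non-dominant (< 50% on both counts) in at least one component.
    if any(c["noca"] / c["noa"] < t and c["noco"] / c["noo"] < t
           for c in components):
        return "crosscutting"
    return "unrefined"  # neither rule fires

concern = [{"noca": 4, "noa": 5, "noco": 6, "noo": 8},
           {"noca": 1, "noa": 6, "noco": 2, "noo": 10}]  # non-dominant here
print(classify_dominance(concern))  # -> crosscutting
```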

4.2. Patterns of crosscutting concerns

Inspired by Ducasse, Girba and Kuhn (Ducasse et al., 2006), we defined in previous work (Figueiredo et al., 2009) several patterns of crosscutting concerns (or crosscutting patterns). The identification of such crosscutting patterns complements the tangling and scattering analysis (Section 4.1). The reason for crosscutting pattern identification is that recent studies have pointed out that certain shapes of tangled and scattered concerns are more harmful to design stability than others (Figueiredo et al., 2009; Greenwood et al., 2007). Even though certain crosscutting patterns are recurrently found in software systems and are likely to negatively affect software maintainability (Figueiredo et al., 2009), there is no set of design heuristic rules defined for their identification. This section illustrates how the concern-sensitive heuristics can be used to identify five of these crosscutting patterns, namely Black Sheep, Octopus, God Concern, Data Concern, and Behavioural Concern. We have chosen them because they are the most primitive forms of crosscutting patterns (Ducasse et al., 2006; Figueiredo et al., 2009). Additional heuristics can be further defined to cover other crosscutting patterns through the specialisation of the primitive heuristics. The remainder of this section briefly describes each of these five selected crosscutting patterns and the associated heuristic rules to detect them.


Black Sheep: The Black Sheep crosscutting pattern is defined as a concern that crosscuts the system but is realised by very few elements in distinct components (Ducasse et al., 2006; Figueiredo et al., 2009). For instance, Black Sheep occurs when only a few scattered operations and attributes are required to realise a concern in design artefacts. Also, there is not a single component whose main purpose is to implement this concern. Fig. 4(a) presents an abstract representation of the Black Sheep concern. The shaded grey areas in this figure indicate elements of the components (represented by boxes) realising the concern under consideration. Taking these four components into account, the Black Sheep instance is realised by few elements of two components.

Fig. 5 shows five heuristic rules, R09 to R13, which aim at identifying each of the selected crosscutting patterns. This figure also defines two conditions, A (Little Dedication) and B (High Dedication), used by some of the rules. We explicitly separate the conditions from the rules not only to make the rules easier to understand but also to reuse the conditions in more than one heuristic rule. In these rules, a concern previously classified as crosscutting (Section 4.1) is thoroughly inspected in terms of its crosscutting patterns. A concern can be classified in more than one crosscutting pattern.

The heuristic rule R09 classifies a crosscutting concern as Black Sheep if all components which have this concern dedicate only a few percentage points of attributes and operations to the concern realisation (less than 33%). We chose the value of 33% for the little dedication condition based on (i) threshold values used by other authors (Ducasse et al., 2006; Lanza and Marinescu, 2006), and (ii) a meaningful ratio that represents the definition 'touches very few elements' of Black Sheep (Ducasse et al., 2006). In fact, Lanza and Marinescu (2006) suggest the use of meaningful threshold values, such as 0.33 (1/3), 0.5 (1/2), and 0.67 (2/3). Among these values, 1/3 seems the most appropriate one to match the Black Sheep definition.
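A minimal sketch of R09, assuming the same invented per-component dict layout used in our earlier illustrations (not the tool's code):

```python
# Sketch of R09 (Black Sheep): the Little Dedication condition (< 33% of
# attributes AND of operations) must hold in every affected component.

def little_dedication(c, t=0.33):
    return c["noca"] / c["noa"] < t and c["noco"] / c["noo"] < t

def is_black_sheep(components):
    return all(little_dedication(c) for c in components)

sheep = [{"noca": 1, "noa": 8, "noco": 2, "noo": 12},
         {"noca": 0, "noa": 5, "noco": 1, "noo": 9}]
print(is_black_sheep(sheep))  # -> True
```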

Octopus: Octopus is defined as a crosscutting concern which is partially well localised by one or more components, but also spreads across a number of other components (Ducasse et al., 2006; Figueiredo et al., 2009). Fig. 4(b) presents an illustrative abstract representation of the Octopus crosscutting pattern. Note that there is a component in this figure completely dedicated to the concern realisation (it represents the octopus' body). In addition, three other
Fig. 4. Abstract representation of crosscutting patterns: (a) Black Sheep, (b) Octopus, (c) God Concern, (d) Data Concern, (e) Behavioural Concern.

components (representing tentacles) dedicate only small parts to realise the same concern. The next rule presented in Fig. 5, R10, verifies if a crosscutting concern can be classified as Octopus. According to this heuristic, a concern is classified as Octopus if every component realising parts of this concern has either little dedication (condition A) or high dedication (condition B) to the concern. Additionally, at least two components have to be little dedicated to the concern (tentacles of the octopus) and at least one component has to be highly dedicated (body of the octopus). We define a component as highly dedicated to a concern when the percentages of attributes and operations are higher than 67%. Again, threshold values try to be a meaningful approximation of the octopus' definition (Ducasse et al., 2006; Figueiredo et al., 2009).
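Continuing our illustrative Python sketches (invented names and data layout, not the paper's tooling), R10 can be read as a census of tentacles and bodies:

```python
# Sketch of R10 (Octopus): every component must be either a tentacle
# (Little Dedication, < 33%) or a body (High Dedication, >= 67%), with at
# least two tentacles and at least one body.

def dedication(c):
    a, o = c["noca"] / c["noa"], c["noco"] / c["noo"]
    if a < 0.33 and o < 0.33:
        return "little"
    if a >= 0.67 and o >= 0.67:
        return "high"
    return "neither"

def is_octopus(components):
    kinds = [dedication(c) for c in components]
    return ("neither" not in kinds
            and kinds.count("little") >= 2
            and kinds.count("high") >= 1)

octo = [{"noca": 8, "noa": 10, "noco": 9, "noo": 12},  # body
        {"noca": 1, "noa": 10, "noco": 1, "noo": 8},   # tentacle
        {"noca": 0, "noa": 6, "noco": 2, "noo": 9}]    # tentacle
print(is_octopus(octo))  # -> True
```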

God Concern: Inspired by the God Class bad smell (Riel, 1996), the God Concern crosscutting pattern occurs when a concern encompasses too much scattered functionality of the system (Figueiredo et al., 2009). In other words, similarly to God Class, the elements realising God Concern address too much or do too much across the software system. The basic idea behind modular programming is that a widely-scoped concern should be analysed as smaller "modular" sub-parts (divide and conquer). However, God Concern instances do not adhere to this principle, thus hindering concern analysis. Fig. 4(c) presents the abstract structure

Condition A - Little Dedication: (NOCA / NOA < 0.33) and (NOCO / NOO < 0.33)
Condition B - High Dedication: (NOCA / NOA ≥ 0.67) and (NOCO / NOO ≥ 0.67)
R09 - Black Sheep: if (Little Dedication) for every component with CONCERN then CROSSCUTTING CONCERN is BLACK SHEEP
R10 - Octopus: if ((Little Dedication) or (High Dedication) for every component with CONCERN) and ((Little Dedication) for at least 2 components and (High Dedication) for at least 1 component with CONCERN) then CROSSCUTTING CONCERN is OCTOPUS
R11 - God Concern: if ((CDC / NC) > 0.33) and (High Dedication) for most components with CONCERN then CROSSCUTTING CONCERN is GOD CONCERN
R12 - Data Concern: if ((NOCA / NOA) > (NOCO / NOO)) for every component with CONCERN then CROSSCUTTING CONCERN is DATA CONCERN
R13 - Behavioural Concern: if (NOCA = 0) and (NOCO > 0) for every component with CONCERN then CROSSCUTTING CONCERN is BEHAVIOURAL CONCERN

Fig. 5. Heuristics for crosscutting patterns.


of this crosscutting pattern. This figure highlights that God Concern is a widely-scoped crosscutting concern and requires a lot of functionality in its realisation. In particular, there are at least two components whose main purpose is to realise this crosscutting concern.

The third heuristic rule in Fig. 5, R11, aims at finding God Concern. A crosscutting concern is classified as God Concern if two conditions are satisfied: (i) the given concern is realised by more than 33% of the system's components and (ii) most of these components are highly dedicated to the concern realisation (Condition B). To verify the first condition, R11 calculates the percentage of components realising the given concern (i.e., CDC/NC). With respect to the second condition, not all components in the system are highly dedicated to the concern realisation, since this concern was previously classified as crosscutting (Section 4.1). In other words, this concern was previously classified as crosscutting in at least one component (see rules R06 and R08 in Section 4.1). Therefore, we just need to verify if the concern is dominant (via the High Dedication condition) in more than 50% of the components where it is located.
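The two conditions of R11 can be sketched as follows, again under our own assumed data layout (the function names and dict keys are illustrative, not the tool's API):

```python
# Sketch of R11 (God Concern): the concern touches more than 33% of the
# system's components (CDC / NC), and more than half of the touched
# components are highly dedicated to it (Condition B).

def high_dedication(c, t=0.67):
    return c["noca"] / c["noa"] >= t and c["noco"] / c["noo"] >= t

def is_god_concern(components, nc_system):
    cdc = len(components)          # components realising the concern
    if cdc / nc_system <= 0.33:    # condition (i)
        return False
    dedicated = sum(high_dedication(c) for c in components)
    return dedicated > cdc / 2     # condition (ii): dominant in most of them

comps = [{"noca": 7, "noa": 9, "noco": 8, "noo": 10},
         {"noca": 6, "noa": 8, "noco": 7, "noo": 9},
         {"noca": 1, "noa": 10, "noco": 1, "noo": 10}]
print(is_god_concern(comps, nc_system=8))  # 3/8 > 33%, 2/3 dominant -> True
```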

Data Concern: The fourth crosscutting pattern, Data Concern, represents a crosscutting concern mainly composed of data (Figueiredo et al., 2009). Data may be represented, for instance, by



class variables and fields in an object-oriented design. Data Concern has no (or very little) assigned behaviour. Behaviour can be associated with methods and constructors in an object-oriented design. In fact, existing behaviour in a Data Concern instance is due to the existence of data accessors (e.g., get and set methods). Fig. 4(d) shows the abstract representation of this crosscutting pattern. Note that there is a logical division between data and behavioural elements of each box (i.e., component). Moreover, one component has a small piece of behaviour assigned to the analysed concern, which is accepted in the definition of this crosscutting pattern. However, most of the shadowed areas in the Data Concern instance represent data. The heuristic rule for Data Concern, R12 in Fig. 5, is based on two concern metrics (NOCA and NOCO) and two traditional size metrics (NOA and NOO). The heuristic strategy is then, for every component, to investigate if many attributes and few operations are realising the concern. Note that a concern can be classified as Data Concern even if some operations are used to realise it. That is, the Data Concern definition also allows some operations, such as getting and setting methods, to realise a concern matching this pattern. Moreover, we do not guarantee that the heuristic rule is 100% accurate and, therefore, some operations which are neither getting nor setting methods may mislead this heuristic.

Behavioural Concern: The last crosscutting pattern, Behavioural Concern, identifies a concern only composed of behaviour (e.g., methods and constructors in an object-oriented design) (Figueiredo et al., 2009). Behavioural Concern has no associated data, as illustrated by its abstract representation in Fig. 4(e). Fig. 5 also shows a concern-sensitive heuristic rule, R13, aiming to identify Behavioural Concern. A concern is classified by R13 in this crosscutting pattern if it is only composed of operations. For programs written in Java, for instance, Behavioural Concern indicates a concern which is only realised by methods and constructors. On the other hand, advice may also be considered as an operation in AspectJ systems (Kiczales et al., 2001). This decision, in fact, depends on the instantiation of specific language constructs to the metrics which compose a heuristic rule. To check whether a crosscutting concern is a Behavioural Concern, the heuristic rule R13 relies on the metrics NOCA and NOCO. This rule verifies if the NOCA value is zero and NOCO is greater than zero for the analysed concern. If so, this crosscutting concern is classified as Behavioural Concern.

4.3. Concern-aware design flaws

This section describes how the concern-sensitive heuristics can also be applied to detect well-known design flaws, such as bad smells (Fowler, 1999; Riel, 1996). We use four bad smells to illustrate this kind of concern-sensitive heuristics: Shotgun Surgery (Fowler, 1999), Feature Envy (Fowler, 1999), Divergent Change (Fowler, 1999), and God Class (Riel, 1996). We selected these four bad smells because (i) previous work related them to crosscutting concerns (Monteiro and Fernandes, 2005) and (ii) they are representatives of three different groups of flaws discussed by Lanza and Marinescu (2006). Hence, addressing different categories of flaws allows us to correlate our results with their studies (Lanza and Marinescu, 2006; Monteiro and Fernandes, 2005). These four bad smells can be briefly defined as follows.

Shotgun Surgery occurs when a change in a characteristic (or concern) of the system implies many changes to a lot of different places (Fowler, 1999).

Feature Envy is related to the misplacement of a piece of code, such as an operation or attribute, i.e., the feature seems more interested in a component other than the one it actually is in (Fowler, 1999).


Divergent Change occurs when one component commonly changes in different ways for several different reasons (Fowler, 1999). Depending on the number of concerns a given component realises, it can suffer unrelated changes.

God Class refers to a component (class or aspect) which performs most of the work, delegating only minor details to a set of trivial components and using data from them (Riel, 1996).

Fig. 6 shows concern-sensitive heuristic rules for detecting the four bad smells aforementioned. The first rule, R14, aims at detecting Feature Envy and uses a combination of concern metrics, coupling metrics, and size metrics. To be considered Feature Envy, a crosscutting concern has to satisfy two conditions: C (High Inter-Component Coupling) and D (Low Intra-Component Coupling). The concern under assessment has high inter-component coupling when its percentage of coupling (CSC/CBC) is higher than the percentage of internal component members ((NOCA + NOCO)/(NOA + NOO)) that realise this concern. In other words, the ratio of coupling to size (measured in terms of attributes and operations) of such a crosscutting concern is higher than the same ratio for all other concerns in the same component. A similar computation is performed for identifying low intra-component coupling.
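Conditions C and D can be sketched for a single component as follows; the metric names follow the paper, while the dict layout and function name are our own hypothetical illustration:

```python
# Sketch of R14 (Feature Envy) on one component: the concern's share of
# coupling (CSC/CBC) exceeds its share of members (Condition C), while its
# share of internal coupling falls below it (Condition D), and the
# component is tangled (NCC > 1).

def is_feature_envy(m):
    member_share = (m["noca"] + m["noco"]) / (m["noa"] + m["noo"])
    high_inter = m["csc"] / m["cbc"] > member_share            # Condition C
    other_members = (m["noa"] + m["noo"]) - (m["noca"] + m["noco"])
    low_intra = m["icsc"] / other_members < member_share       # Condition D
    return high_inter and low_intra and m["ncc"] > 1

comp = {"noca": 1, "noa": 6, "noco": 2, "noo": 10,
        "csc": 4, "cbc": 8, "icsc": 0, "ncc": 2}
print(is_feature_envy(comp))  # -> True
```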

The following two heuristics for bad smells, R15 and R16 (Fig. 6), are intended to detect Shotgun Surgery and Divergent Change, respectively. Differently from the previous rules, R15 is composed of the outcomes from other heuristics. More precisely, a concern is classified as Shotgun Surgery if it was previously identified as Tangled (R02), Highly Scattered (R04), and Crosscutting (R08). Unlike Shotgun Surgery, which is related to concern scattering, Divergent Change is related to concern tangling. The heuristic rule for Divergent Change, R16, detects components with this bad smell if they realise more than 3 concerns each (NCC > 3). That is, there is a high degree of tangling in these components. In addition, a component suffering from Divergent Change also depends on many other components (CBC > 5). These default threshold values can be adjusted for specific projects based on the designer's experience.
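Because R15 composes the outcomes of earlier rules rather than raw metrics, it reduces to a label check. This is a sketch of ours under the assumption that prior classifications are collected as a set of labels:

```python
# Sketch of R15 (Shotgun Surgery): fire when the concern was previously
# labelled Tangled (R02), Highly Scattered (R04), and Crosscutting (R08).

def shotgun_surgery(labels):
    required = {"tangled", "highly scattered", "crosscutting"}
    return required <= set(labels)

print(shotgun_surgery(["tangled", "highly scattered", "crosscutting"]))  # -> True
print(shotgun_surgery(["tangled", "little scattered"]))                  # -> False
```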

Finally, a component is flagged as God Class by the last heuristic rule (R17) if, besides a high number of internal members (attributes and operations), it implements many concerns. R17 distinguishes high and low numbers of internal members by comparing them with the average size of all other system components (NOAsystem/NC and NOOsystem/NC). This last rule uses a threshold value of 2 for the metric Number of Concerns per Component (NCC). The reasoning is that a component implementing more than two concerns tends to aggregate too much unrelated responsibility.
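The component-level rules R16 and R17 can be sketched with the default thresholds quoted in the text; as with our other illustrations, the names and data layout are assumptions, not ConcernMorph's API:

```python
# Sketch of R16 (Divergent Change) and R17 (God Class) on one component.

def divergent_change(c, ncc_t=3, cbc_t=5):
    # R16: heavily tangled (NCC > 3) and coupled to many components (CBC > 5).
    return c["ncc"] > ncc_t and c["cbc"] > cbc_t

def god_class(c, avg_noa, avg_noo):
    # R17: more than two concerns plus above-average size on both counts.
    return c["ncc"] > 2 and c["noa"] > avg_noa and c["noo"] > avg_noo

comp = {"ncc": 4, "cbc": 7, "noa": 12, "noo": 20}
print(divergent_change(comp))                  # -> True
print(god_class(comp, avg_noa=5, avg_noo=8))   # -> True
```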

5. ConcernMorph: extensible metrics-based heuristic analysis

Concern-sensitive heuristic analysis may be a tedious task without proper tool support. This section describes ConcernMorph,1 a tool to automatically apply the proposed concern-sensitive heuristic rules. ConcernMorph is implemented as an Eclipse plug-in and supports the heuristic rules presented in Section 4. The tool also implements a set of concern metrics (Section 3) used by the heuristic rules.

The ConcernMorph architecture: Concern analysis requires users to specify the projection of concerns into syntactic elements, such as classes, methods, and attributes. An existing tool, ConcernMapper (Robillard and Weigand-Warr, 2005), offers good support for this task and, so, it is used by ConcernMorph. In other words,

1 Available for download at http://www.dcc.ufmg.br/∼figueiredo/concern/plugin/.

Condition C - High Inter-Component Coupling: (CSC / CBC) > ((NOCA + NOCO) / (NOA + NOO))
Condition D - Low Intra-Component Coupling: (ICSC / ((NOA + NOO) - (NOCA + NOCO))) < ((NOCA + NOCO) / (NOA + NOO))
R14 - Feature Envy: if (High Inter-Component Coupling) and (Low Intra-Component Coupling) and (NCC > 1) then CROSSCUTTING CONCERN is FEATURE ENVY
R15 - Shotgun Surgery: if CONCERN is (Tangled) and (Highly Scattered) and (Crosscutting) then CROSSCUTTING CONCERN is SHOTGUN SURGERY
R16 - Divergent Change: if (NCC > 3) and (CBC > 5) then COMPONENT has DIVERGENT CHANGE
R17 - God Class: if (NCC > 2) and (NOA > (NOAsystem/NC)) and (NOO > (NOOsystem/NC)) then COMPONENT is a GOD CLASS

Fig. 6. Heuristic rules for bad smells.

our tool relies on the projection of concerns onto syntactic elements provided by ConcernMapper. Fig. 7 shows the architecture of ConcernMorph and its relationships with ConcernMapper and the Eclipse Platform. Both tools, ConcernMorph and ConcernMapper, are coupled to the Eclipse Platform. ConcernMorph defines two main modules (Fig. 7): (i) Metric Collector, which is responsible for computing concern metrics, and (ii) Rule Analyser, which applies heuristic rules to classify crosscutting concerns (Section 4.1) and identify specific crosscutting patterns (Section 4.2). Even though the current implementation of ConcernMorph does not support bad smell detection yet, this only requires the extension of the metric collector with new metrics and the specification of new design rules through the rule analyser.

The user interface: Each module of ConcernMorph presented in Fig. 7 adds a new view to the Eclipse Platform. Fig. 8 shows the two views of ConcernMorph: Concern Metrics (View A) and Crosscutting Patterns (View B). The Concern Metrics view shows concern measurements while the Crosscutting Patterns view shows all instances of crosscutting patterns in the target system. In addition to these two views, ConcernMorph extends ConcernMapper with an additional preference page. The ConcernMorph preference page supports the configuration of specific threshold values for the system under analysis. This page also allows the user to activate or deactivate a particular heuristic rule.

We use one of our case studies, called MobileMedia, in the illustrative example of Fig. 8. In the Concern Metrics view of this figure, we see that the Photo concern, for instance, is scattered over 18 components (CDC), 100 operations (NOCO), and 53 attributes (NOCA). The Crosscutting Patterns view shows not only all instances of crosscutting patterns but also the number of components (between brackets) taking part in each pattern instance.

Fig. 7. The ConcernMorph Architecture.


For example, the Album concern is classified as Octopus and 8 classes compose this Octopus instance (Fig. 8). ConcernMorph also allows users to investigate specific parts of a crosscutting pattern instance. For instance, the tool records which components are playing the role of the body and tentacles in an Octopus concern. This information can be observed by double-clicking a concern in the Crosscutting Patterns view. After this action is performed, a new window shows detailed information about the chosen concern.

6. Evaluation settings

This section describes the design and settings of an exploratory study with the goal of evaluating the concern-sensitive heuristic rules presented in Section 4. Exploratory studies are intended to lay the groundwork for further empirical work (Seaman, 1999; Wohlin et al., 2000). Section 6.1 formulates the hypothesis and research questions while Section 6.2 presents the seven applications used in our evaluation.

6.1. Hypothesis and research questions

We evaluated the accuracy and applicability of the heuristic assessment technique (Section 4) for identifying and classifying crosscutting concerns. In this particular case, we aimed to verify the following hypothesis.

Hypothesis: Heuristic assessment techniques which are based on concern properties enhance traditional quantitative analysis.

To test this hypothesis, we followed three steps. First, we selected 23 concerns projected into implementation artefacts of seven systems. Second, we used the concern-sensitive heuristics presented in Section 4.1 to heuristically identify crosscutting concerns. In addition to identifying crosscutting concerns, the motivation of this evaluation was to verify if concern-sensitive heuristics minimise the shortcomings of traditional metrics-based strategies (discussed in the literature review, Section 2). As stated in the hypothesis above, we aim to verify whether problems of traditional strategies can be ameliorated through the application of concern-sensitive analysis. Third, we evaluated whether the heuristic classification of crosscutting concerns indicates design flaws not reported by traditional software metrics. Additionally, we evaluated the accuracy of the heuristic identification of crosscutting concerns by comparing it to previous knowledge about the analysed concerns.

The following research questions are derived from the aforementioned hypothesis. Each subsection of Section 7 aims to answer one of these research questions.

Research Question 1. If concern-sensitive assessment techniques enhance traditional quantitative analysis, in which cases do they succeed or fail?

Fig. 8. Screenshot showing concern measurements (View A) and crosscutting patterns (View B).

Research Question 2. Do concern-sensitive heuristics provide more accurate results compared to traditional detection strategies?

Research Question 3. Does a high number of crosscutting patterns impact positively, negatively, or have no impact on design stability?

Research Question 4. Which bad smells can be detected by concern-sensitive heuristic rules?

6.2. Selection of the target applications

In previous comparative studies (Cacho et al., 2006; Filho et al., 2006; Garcia et al., 2004, 2006; Greenwood et al., 2007), we applied AO metrics (e.g., metrics in Table 1, Section 2) to a number of software systems. In this paper, we selected seven systems of our previous studies in order to apply and assess the accuracy of our concern-sensitive heuristics. Five of these systems (Cacho et al., 2006; Figueiredo et al., 2006, 2008; Garcia et al., 2004; Hannemann and Kiczales, 2002) are medium-sized academic prototypes carefully designed with modularity attributes as main drivers. The other two systems, namely Health Watcher (Greenwood et al., 2007) and the CVS plugin (Filho et al., 2006), are larger software projects. Since there are many releases available for some of these systems, we chose the 9th release of Health Watcher (Greenwood et al., 2007) and the 6th release of MobileMedia (Figueiredo et al., 2008). Similar analyses for additional Health Watcher and MobileMedia releases were also performed. However, since the results are similar, we only make them available in a Ph.D. dissertation (Figueiredo, 2009).

Table 2 summarises the seven systems analysed in our study, their size (2nd column), number of releases (3rd column), and number of developers (last column). We chose systems with different characteristics. For instance, the design pattern implementations

Table 2
Selected system releases.

Systems                                                    LOC                            No. of releases   No. of developers
(1) A Design Pattern library (Hannemann and Kiczales, 2002)
    Builder                                                168 (OO)/177 (AO)              2                 2
    Chain of Resp.                                         213 (OO)/234 (AO)              2                 2
    Factory Method                                         135 (OO)/146 (AO)              2                 2
    Mediator                                               253 (OO)/253 (AO)              2                 2
    Observer                                               265 (OO)/363 (AO)              2                 2
(2) OpenOrb Middleware (Cacho et al., 2006)                ∼1.5 KLOC (OO)/∼2 KLOC (AO)    1                 3
(3) AJATO (Cacho et al., 2006; Figueiredo et al., 2006)    ∼1 KLOC (OO)/∼1.1 KLOC (AO)    1                 1
(4) Eclipse CVS Plugin (Filho et al., 2006)                ∼19 KLOC (OO)/∼22 KLOC (AO)    5                 8
(5) Health Watcher, R9 (Greenwood et al., 2007)            ∼8 KLOC (OO)/∼7 KLOC (AO)      10                >10
(6) Portalware (Garcia et al., 2004)                       ∼1.4 KLOC (OO)/∼1.6 KLOC (AO)  1                 6
(7) MobileMedia, R6 (Figueiredo et al., 2008)              ∼2.2 KLOC (OO)/∼2.6 KLOC (AO)  9                 11


required just some hundreds of lines of code (LOC) in their implementations. On the other hand, the largest case study, Eclipse CVS Plugin, has almost 20 KLOC. The systems also vary in terms of releases available and developers involved in their implementations, as presented in Table 2. These studies were selected for several reasons. First, they encompass both AO and OO implementations and their assessments were previously based on metrics. They allowed us to evaluate whether concern-sensitive heuristics are helpful to enhance a design assessment exclusively based on metrics (Section 6.2). Furthermore, the modularity problems in all the selected systems have been correlated with external software attributes, such as reusability (Filho et al., 2006; Garcia et al., 2004; Hannemann and Kiczales, 2002), design stability (Cacho et al., 2006; Greenwood et al., 2007), and manifestation of ripple effects (Greenwood et al., 2007).

Table 3 presents the nature and a complete list of the con-cerns for each analysed system. A particular reason for selectingthese systems is the heterogeneity of concerns found (2nd column,Table 3), which include widely-scoped architectural concerns, suchas persistence and exception handling, and more localised ones,such as some design patterns. They also encompass different char-acteristics and different degrees of complexity. These systems arerepresentatives of different application domains, ranging from sim-ple design pattern instances (1st system) to reflective middlewaresystems (2nd), real-life Web-based applications (5th), multi-agentsystems (6th), and software product lines (7th). Finally, such sys-tems will also serve as effective benchmarks because they involve

scenarios where the “aspectisation” of certain concerns is far fromtrivial (Cacho et al., 2006; Filho et al., 2006; Garcia et al., 2004;Greenwood et al., 2007). We took the analysed concerns from pre-vious studies and the complete descriptions of such concerns and

LOC No. of releases No. of developers

168 (OO)/177 (AO) 2 2213 (OO)/234 (AO) 2 2135 (OO)/146 (AO) 2 2253 (OO)/253 (AO) 2 2265 (OO)/363 (AO) 2 2

∼1.5 KLOC (OO) ∼2 KLOC (AO) 1 3

∼1 KLOC (OO) ∼1.1 KLOC (AO) 1 1

∼19 KLOC (OO) ∼22 KLOC (AO) 5 8

∼8 KLOC (OO) ∼7 KLOC (AO) 10 >10

∼1.4 KLOC (OO)∼1.6 KLOC (AO) 1 6

∼2.2 KLOC (OO)∼2.6 KLOC (AO) 9 11


Table 3
Selected concerns per system.

Systems | Nature of the concerns | Concern names
(1) A Design Pattern library | Roles of design patterns | Director role (Builder), Handler role (Chain of Responsibility), Creator role (Factory Method), Subject and Observer roles (Observer), Colleague and Mediator roles (Mediator)
(2) OpenOrb Middleware and (3) AJATO | Design patterns and their compositions | Factory Method, Observer, Facade, Singleton, Prototype, State, Interpreter, and Proxy
(4) Eclipse CVS Plugin and (5) Health Watcher | Recurring architectural crosscutting and non-crosscutting concerns | Business, Concurrency, Distribution, Exception Handling, and Persistence
(6) Portalware | Domain-specific concerns | Adaptation, Autonomy, and Collaboration
(7) MobileMedia | Product line features | Controller, Sorting, Exception Handling, Favourites, Persistence, and Labelling

systems can be found in their respective publications (1st column of Table 2).

7. Results and discussion

This section introduces a systematic evaluation of the concern-sensitive heuristics in seven applications (Section 6.2) implemented in Java (OO) and AspectJ (AO). We applied the heuristic rules to 23 concern instances of both OO and AO versions of these applications. The concerns were selected because they exercise different heuristics. The heuristics evaluation is undertaken under

four dimensions: (i) applicability for addressing measurement limitations (Section 7.1); (ii) accuracy to identify crosscutting concerns in both OO and AO systems (Section 7.2); (iii) ability to pinpoint sources of design instability (Section 7.3); and (iv) the usefulness to detect design flaws in comparison with conventional heuristics (Section 7.4). Each dimension of the evaluation maps to one of the four research questions presented in Section 6.1.

7.1. Solving measurement shortcomings

The goal of this section is to discuss whether and how concern-sensitive heuristics address the limitations of conventional modularity assessment. To achieve this goal, we used our tool (Section 5) to apply the proposed heuristic rules to the selected systems (Section 6.1). Table 4 provides an overview of the results for the first two suites of rules, namely concern diffusion (Section 4.1) and crosscutting patterns (Section 4.2). The 1st and 2nd columns in this table show the seven systems and all analysed concerns, respectively. The columns labelled 01–13 present the answers 'yes' (y) or 'no' (n) of the heuristics for concern diffusion (rules 01–08) and crosscutting patterns (rules 09–13). Blank cells mean that the rule is not applicable. Finally, the last column indicates how the heuristics classify each concern. The obtained heuristic classification of concerns includes isolated, well-localised ('well-loc.'), crosscutting, Black Sheep (BS), Octopus (Oct), God Concern (GC), Data Concern (DC), and Behavioural Concern (BC). Based on these results, this section presents and discusses below six problems (labelled A–F) that affect not only conventional metrics but also the use of concern metrics (Section 3). It also classifies these problems into four main categories. For simplicity reasons, we do not show all the results of the conventional and concern metrics for all systems, but we made the complete raw data available in a supplementary

Table 4
Results of the heuristics for concern diffusion and crosscutting patterns in the OO systems.

Studies | Concerns | Rule answers 01–13 (in column order; non-applicable blanks omitted) | Concern classification
(1) Patterns library (Hannemann and Kiczales, 2002):
  Director role | n y y n y n | well-loc.
  Handler role | n y n y y n | well-loc.
  Creator role | n y n y n y n n n n y | BC
  Observer role | n y n y n y n y n n y | Oct + BC
  Subject role | n y n y n y n y n n n | Oct
  Colleague role | n y n y n y n y n n n | Oct
  Mediator role | n y n y y n | well-loc.
(2) OpenOrb Middleware (Cacho et al., 2006):
  Fact. Method | n y y n y n | well-loc.
  Observer | n y y n n y n y n n n | Oct
  Facade | y n | isolated
  Singleton | n y y n n y y n n n n | BS
(3) Measurement tool (Cacho et al., 2006; Figueiredo et al., 2006):
  Prototype | n y y n n y y n n n y | BS + BC
  State | n y y n y n | well-loc.
  Interpreter | n y y n n y n y n n n | Oct
  Proxy | n y y n n y n n n n n | crosscutting
(4) CVS plugin (Filho et al., 2006):
  Exc. Handling | n y n y n y n y n n y | Oct + BC
(5) Health Watcher, R9 (Greenwood et al., 2007):
  Business | n y n y y n | well-loc.
  Concurrency | n y n y n y n y n n n | Oct
  Distribution | n y n y n y n y n n n | Oct
  Persistence | n y n y n y n y y n n | Oct + GC
(6) Portalware (Garcia et al., 2004):
  Adaptation | n y y n n y n n n n n | crosscutting
  Autonomy | n y y n n y n n n n n | crosscutting
  Collaboration | n y n y n y n y n n n | Oct
(7) MobileMedia, R6 (Figueiredo et al., 2008):
  Controller | n y n y n y n y n n n | Oct
  Sorting | n y y n n y y n n n n | BS
  Exc. Handling | n y n y n y n y n n n | Oct
  Favourites | n y y n n y y n n n n | BS
  Persistence | n y n y n y n y y n n | Oct + GC
  Labelling | n y n y n y n y n n n | Oct
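The classification column of Table 4 can be aggregated mechanically. The sketch below (Python; it only transcribes the table's final classification column, as the yes/no semantics of rules 01–13 are defined in Section 4 and not repeated here) tallies how many of the 29 OO concern instances fall into some crosscutting category:

```python
from collections import Counter

# Final classification column of Table 4, one entry per analysed concern,
# in table order (compound entries such as "Oct + GC" kept verbatim).
classifications = [
    "well-loc.", "well-loc.", "BC", "Oct + BC", "Oct", "Oct", "well-loc.",  # Patterns library
    "well-loc.", "Oct", "isolated", "BS",                                   # OpenOrb Middleware
    "BS + BC", "well-loc.", "Oct", "crosscutting",                          # Measurement tool
    "Oct + BC",                                                             # CVS plugin
    "well-loc.", "Oct", "Oct", "Oct + GC",                                  # Health Watcher
    "crosscutting", "crosscutting", "Oct",                                  # Portalware
    "Oct", "BS", "Oct", "BS", "Oct + GC", "Oct",                            # MobileMedia
]

tally = Counter(classifications)
modular = tally["well-loc."] + tally["isolated"]  # no crosscutting symptom at all
crosscutting = len(classifications) - modular     # any crosscutting category

print(len(classifications), modular, crosscutting)  # → 29 7 22
```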


Table 5
Traditional and concern metrics for the Collaboration concern of Portalware.

Components | NOA | NOCA | NOO | NOCO
SearchingPlan | 0 | 0 | 2 | 0
SearchResultReceivingPlan | 0 | 0 | 2 | 0
AvailabilityPlan | 0 | 0 | 2 | 0
SearchAskAnsweringPlan | 0 | 0 | 2 | 0
ContentDistributionPlan | 0 | 0 | 2 | 0
ResponseReceivingPlan | 0 | 0 | 2 | 0
Collaboration | 0 | 0 | 7 | 7
CollaboratorCore | 2 | 2 | 11 | 11
CollaboratorRole | 1 | 1 | 8 | 8
CollaborativeAgent | 1 | 1 | 6 | 6
SharedObject | 4 | 4 | 6 | 6
Answerer | 1 | 1 | 4 | 4
Caller | 1 | 1 | 3 | 3
ContentSupplier | 1 | 1 | 2 | 2
Editor | 0 | 0 | 4 | 4
Summation | 11 | 11 | 63 | 51
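As a quick consistency check, the Summation row of Table 5 follows directly from the per-component values; a minimal sketch:

```python
# Portalware components from Table 5: (NOA, NOCA, NOO, NOCO) per component.
table5 = {
    "SearchingPlan": (0, 0, 2, 0), "SearchResultReceivingPlan": (0, 0, 2, 0),
    "AvailabilityPlan": (0, 0, 2, 0), "SearchAskAnsweringPlan": (0, 0, 2, 0),
    "ContentDistributionPlan": (0, 0, 2, 0), "ResponseReceivingPlan": (0, 0, 2, 0),
    "Collaboration": (0, 0, 7, 7), "CollaboratorCore": (2, 2, 11, 11),
    "CollaboratorRole": (1, 1, 8, 8), "CollaborativeAgent": (1, 1, 6, 6),
    "SharedObject": (4, 4, 6, 6), "Answerer": (1, 1, 4, 4),
    "Caller": (1, 1, 3, 3), "ContentSupplier": (1, 1, 2, 2), "Editor": (0, 0, 4, 4),
}

# Column-wise summation, as in the last row of Table 5.
summation = tuple(sum(col) for col in zip(*table5.values()))
print(summation)  # → (11, 11, 63, 51)
```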


website (Applying and Evaluating Concern-Sensitive Design Heuristics, 2010).

False crosscutting warnings: We identified in this study at least two examples of problems in this category: (A) false scattering and tangling warnings and (B) false coupling or cohesion warnings. Problem A occurs when concern metrics erroneously warn the developer of a possible crosscutting concern. However, a subsequent careful analysis shows that the concern is well-localised. Similarly, Problem B leads developers to an unnecessary search for design flaws, but, in this situation, the false warning is caused by traditional metrics, such as coupling and cohesion ones. Fig. 2 (Section 3) presents an example of this problem category in an instance of the Factory Method pattern (Gamma et al., 1995). OO abstractions provide adequate means of modularising Factory Method (e.g., the main purpose of classes which implement this pattern is to realise it). However, the values of the concern metrics show some level of scattering and tangling in the classes realising this pattern (tables in Fig. 2). In this case, the false warning was a result of the Observer-specific concerns crosscutting classes of the Factory Method pattern, i.e., it is not a problem of this latter pattern implementation at all. Our case studies indicate that shortcomings of this category are ameliorated with concern-sensitive heuristic support. For instance, Problem A discussed above (Factory Method) does not produce a false warning when the heuristic rules are applied (Table 4).

Hiding concern-sensitive flaws: Sometimes, design flaws are hidden in measurements just because (C) metrics are not able to reveal an existing modularity problem. We illustrate this limitation of metrics in the light of a partial class diagram presented in Fig. 9. This figure highlights elements that implement the Singleton and Facade design patterns. It also shows some concern metrics for these patterns. Although Singleton has a lower average metric value than Facade, the former presents a crosscutting nature and the latter does not. Therefore, in this example, concern metrics do not warn the developer about the crosscutting phenomena relative to Singleton (Cacho et al., 2006; Garcia et al., 2006). The application of concern heuristics overcomes this measurement shortcoming by correctly classifying the Singleton pattern as black sheep – a specialised category of crosscutting concerns.

Lack of mapping to flaw-causing concerns: Two examples of problems were identified in this study: (D) measurement does not show where (which design elements) the problem is, and (E) measurement does not relate design flaws to concerns causing them. One example of Problem D is the Collaboration concern of the Portalware system, which is highly scattered and tangled. Table 5 shows the components of Portalware and four metrics, namely Number of Attributes (NOA), Number of Concern Attributes (NOCA), Number of Operations (NOO), and Number of Concern Operations (NOCO). Concern metrics in Table 5 – NOCA and NOCO – indicate that 15 Portalware components implement Collaboration. Nevertheless, in a more detailed analysis we found out that only six components have tangling among Collaboration and other system concerns, that is, the top six components listed in Table 5. Hence, focusing on Collaboration, the developer needs to inspect (and perhaps improve) the design of only six components (and not 15). In order to solve this problem, the concern-sensitive heuristics classified Collaboration as octopus (Table 4). Furthermore, Rule 10 identified that the octopus' tentacles only touch the six components which are indeed crosscut by Collaboration. The remaining 9 components well encapsulate Collaboration.

Controversial outcomes from concern metrics: A problem of this category occurs if (F) the results of different metrics do not converge to the same outcome, making the designer's interpretation difficult. We have identified some occurrences of this problem in our study, like the Adaptation concern (Portalware). Applying the concern metrics to Adaptation, the CDC value is 3 (indicating low scattering) while other concern metrics present high values (e.g., NOCA = 10 and NOCO = 22) (Applying and Evaluating Concern-Sensitive Design Heuristics, 2010). Hence, the concern metrics have contradictory values in the sense that it is hard to conclude whether Adaptation is a crosscutting concern. The concern-sensitive heuristics addressed this category of shortcomings in a number of cases. For instance, although Adaptation has contradictory results for concern metrics, the heuristics have successfully identified it as an octopus (Table 4), meaning that it is also a crosscutting concern.

Fig. 9. Concern metrics for Facade and Singleton.


Table 6
Results of the heuristics for concern diffusion in the OO and AO systems.

Studies | Concerns | Best solution | OO classification | AO classification | OO result | AO result
(1) Patterns library (Hannemann and Kiczales, 2002):
  Director role | OO | well-loc. | well-loc. | hit | hit
  Handler role | AO | well-loc. | well-loc. | FAIL | hit
  Creator role | OO | crosscutting | well-loc. | FAIL | hit
  Observer role | AO | crosscutting | crosscutting | hit | FAIL
  Subject role | AO | crosscutting | crosscutting | hit | FAIL
  Colleague role | AO | crosscutting | crosscutting | hit | FAIL
  Mediator role | AO | well-loc. | crosscutting | FAIL | FAIL
(2) OpenOrb Middleware (Cacho et al., 2006):
  Fact. Method | OO | well-loc. | isolated | hit | hit
  Observer | AO | crosscutting | isolated | hit | hit
  Facade | OO | isolated | isolated | hit | hit
  Singleton | AO | crosscutting | isolated | hit | hit
(3) Measurement tool (Cacho et al., 2006; Figueiredo et al., 2006):
  Prototype | AO | crosscutting | isolated | hit | hit
  State | AO | well-loc. | isolated | FAIL | hit
  Interpreter | AO | crosscutting | isolated | hit | hit
  Proxy | AO | crosscutting | isolated | hit | hit
(4) CVS plugin (Filho et al., 2006):
  Exc. Handling | AO | crosscutting | crosscutting | hit | hit (a)
(5) Health Watcher, R9 (Greenwood et al., 2007):
  Business | OO | well-loc. | well-loc. | hit | hit
  Concurrency | AO | crosscutting | isolated | hit | hit
  Distribution | AO | crosscutting | well-loc. | hit | hit
  Persistence | AO | crosscutting | well-loc. | hit | hit
(6) Portalware (Garcia et al., 2004):
  Adaptation | AO | crosscutting | well-loc. | hit | hit
  Autonomy | AO | crosscutting | crosscutting | hit | FAIL
  Collaboration | AO | crosscutting | isolated | hit | hit
(7) MobileMedia, R6 (Figueiredo et al., 2008):
  Controller | AO | crosscutting | crosscutting | hit | hit (b)
  Sorting | AO | crosscutting | isolated | hit | hit
  Exc. Handling | AO | crosscutting | crosscutting | hit | hit (a)
  Favourites | AO | crosscutting | isolated | hit | hit
  Persistence | AO | crosscutting | crosscutting | hit | hit (b)
  Labelling | AO | crosscutting | crosscutting | hit | hit (b)

(a) Exception Handling was only partially aspectised in the CVS plugin and MobileMedia.
(b) Controller, Persistence, and Labelling were not aspectised in MobileMedia.

7.2. Accuracy of the concern-sensitive heuristics

This section evaluates the heuristics' accuracy by comparing their outcomes with the best means of modularisation (the OO or AO decomposition) based on specialists and literature claims (Cacho et al., 2006; Filho et al., 2006; Garcia et al., 2004, 2006; Greenwood et al., 2007; Hannemann and Kiczales, 2002). Specialists are researchers that participated in the development, maintenance, or assessment of the target systems. Their opinions were acquired by asking them to fill in a questionnaire (available at Applying and Evaluating Concern-Sensitive Design Heuristics, 2010). In this step of our evaluation, in addition to the original designs, we also discuss the results for the aspectual ones. The aspectual implementations aim at modularising one or more crosscutting concerns. Table 6 presents the results of this analysis for the concern diffusion heuristics (Section 4.1). We focus on the heuristics for concern diffusion because most of the specialists (e.g., developers who answered our questionnaire) are not familiar with the crosscutting patterns. Therefore, it would be hard to verify from this perspective the correctness of the heuristic rules for crosscutting pattern detection. The first two columns of this table are the same presented in Table 4. We show these columns just to facilitate the analysis of the results. The 3rd column presents the specialists' opinion or the best known solution (OO or AO). The results of the heuristic rules are given in the 4th and 5th columns, labelled Concern Classification. Detailed data for the heuristics application in the AO systems are available on our website (Applying and Evaluating Concern-Sensitive Design Heuristics, 2010).

Of course, since the implementations of the concerns are different (object-oriented vs. aspect-oriented versions), the heuristic rules may give different results. For instance, the Creator role (Design Patterns Library) is classified as crosscutting in the object-oriented design and as well-localised in the aspect-oriented one. Finally, the last two columns indicate whether the heuristic classification matches the best proposed solution ('hit') or not ('FAIL'). For this analysis, we quantify how many times the heuristics match previous knowledge (hits) and how many times they do not match (false positives or false negatives). A false positive is a case where the concern heuristics answered 'yes' while it should have been 'no', i.e., where they erroneously reported a warning. On the other hand, a false negative is a case where the rules answered 'no' while it should have been 'yes'; thus, where they failed to identify a design flaw.

Table 7 provides an overview of the hits, false positives (FP) and false negatives (FN) of the rules for the 58 concern instances involved in this experiment (29 in OO and 29 in AO designs). The rows of this table are organised in three parts: the object-oriented instances (OO), the aspect-oriented instances (AO), and the general data of both paradigms (OO + AO). Each row describes the absolute numbers and their percentage in relation to the total number of concerns. According to the data in Table 7, the heuristics failed in less than 20% of the cases (6 false positives and 3 false negatives). Focusing on the OO designs, we found 1 case of false positive and 3 cases of false negative. The false positive occurs

Table 7
Statistics for concern diffusion heuristics.

Studies | Hits (%) | FP (%) | FN (%) | Total (%)
OO | 25 (86.2) | 1 (3.5) | 3 (10.3) | 29 (100)
AO | 24 (82.8) | 5 (17.2) | 0 (0.0) | 29 (100)
OO + AO | 49 (84.5) | 6 (10.3) | 3 (5.2) | 58 (100)
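The percentages in Table 7 follow directly from the absolute counts; a minimal sketch (note that 1/29 is 3.45%, so the 3.5% shown for the single OO false positive reflects rounding, and the OO and AO hit rows sum to 49, consistent with the 84.5% shown for OO + AO):

```python
def diffusion_stats(hits, fp, fn):
    """Build one row of Table 7 from absolute hit / false-positive /
    false-negative counts, with percentages over the row total."""
    total = hits + fp + fn
    pct = lambda n: round(100.0 * n / total, 1)
    return {"hits": (hits, pct(hits)), "FP": (fp, pct(fp)),
            "FN": (fn, pct(fn)), "total": total}

oo = diffusion_stats(25, 1, 3)               # object-oriented designs
ao = diffusion_stats(24, 5, 0)               # aspect-oriented designs
both = diffusion_stats(25 + 24, 1 + 5, 3 + 0)  # OO + AO combined

print(oo["hits"], ao["FP"], both["hits"])  # → (25, 86.2) (5, 17.2) (49, 84.5)
```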

with the Creator role of the Factory Method pattern (Table 6). In this pattern, the GUIComponentCreator class which plays the Creator role also has GUI-related elements, such as the showFrame() method (Hannemann and Kiczales, 2002). Our conclusion is that, although the Factory Method pattern is not a crosscutting concern (Garcia et al., 2006; Hannemann and Kiczales, 2002), this particular instance presents a problem of separation of concerns. Therefore, the concern-sensitive heuristics were able to detect a problem which has not been reported by the specialists.

Simple program slices: The heuristics also presented 3 instances of false negatives: (i) the Chain of Responsibility (CoR) pattern, (ii) the Mediator role of the Mediator pattern, and (iii) the State pattern. The CoR pattern was not detected as a crosscutting concern because the pattern instance is too simple (due to its library nature (Hannemann and Kiczales, 2002)) to expose the pattern's crosscutting nature (Cacho et al., 2006; Garcia et al., 2006). In fact, classes playing the Handler role have only three members: the successor attribute, the handleClick() method, and one constructor. Since the first two realise the CoR pattern, the heuristics classify this concern as well-localised (i.e., the main purpose of all components). A similar situation occurs with the Mediator design pattern (Table 4).

Selection and granularity of concerns: Moreover, the State design pattern is a false negative in the third case study due to the assessment strategy of using patterns, instead of roles, as concerns. Although State has two roles (Context and State), it was considered as a single concern in the third application. Additionally, the state transitions (part of the State role) occur in classes playing the Context role. In other words, one role crosscuts only the other role, and the concern heuristics were unable to expose the pattern as a crosscutting concern. Hence, our conclusion for OO designs is that finer-grained concerns, such as roles of design patterns, enable higher precision of the heuristics.

Table 7 also presents five instances of false positives in the heuristic assessment of aspectual systems. Similarly to State discussed above, recurring false positives occur in AO designs when a pattern defines more than one role. For instance, four (out of five) false positives match this criterion: Subject and Observer roles (the Observer design pattern) and Mediator and Colleague roles (the Mediator design pattern). Although each of these patterns is successfully modularised using abstract protocol aspects and concrete aspects, their roles share these components. Then, the Observer and Mediator roles were classified as crosscutting in the aspect protocol while their counterparts (Subject and Colleague) were classified as crosscutting in the concrete aspects. Therefore, differently from OO designs, our analysis suggests selecting the whole pattern (instead of its roles) for AO designs. In other words, the selection of the assessment granularity has to match the granularity of the aspects. So, since design patterns were aspectised, they should represent the granularity of the assessed concerns.

Aspect precedence: The Autonomy concern is also misclassified as crosscutting in the AO design of Portalware. This situation occurs due to a precedence definition between the Adaptation and Autonomy aspects. The Adaptation aspect has a declare precedence clause with a reference to Autonomy. The concern heuristics point out Autonomy code inside the Adaptation aspect and, therefore, indicate that the former aspect crosscuts the latter. Although this problem really exists, there is no way of improving the separation of these two concerns in the Portalware system due to AspectJ constraints (Greenwood et al., 2007).

7.3. Correlating crosscutting patterns and design stability

This section analyses the correlation between the number of crosscutting patterns and changes in a component. In this analysis, we selected two systems, namely MobileMedia and Health Watcher, presented in Section 6.1. This part of the evaluation comprises three steps. First, we identified the most unstable components by analysing real changes throughout the MobileMedia and Health Watcher software evolutions. We extracted these changes from 7 successive releases of MobileMedia and 10 releases of Health Watcher. Second, we heuristically detected crosscutting patterns by analysing concerns in the MobileMedia and Health Watcher designs (Section 6.2). Finally, we contrasted the number of crosscutting pattern instances found in each component with the list of unstable components (first step).

Table 8 illustrates our results for MobileMedia by showing 5 of the most unstable components (top components) and 5 of the most stable components (bottom components) for the 6th object-oriented release. This analysis focuses on the object-oriented design because crosscutting patterns revealed to be more common in this type of software decomposition technique (Figueiredo et al., 2009). These tables also show the number of changes (last column) and the number of crosscutting patterns (intermediary columns) per component of MobileMedia. Considering that a system evolves towards a more stable structure, we focused on Release 2 to represent a later (more stable) design. Table 9 presents similar information about crosscutting patterns and design stability for the object-oriented design of Health Watcher. We focus our discussion here on these representative MobileMedia and Health Watcher releases, but additional data are

Table 8
Unstable MobileMedia components and crosscutting patterns (Release 6).

Components | BS | Oct | GC | DC | BC | Total of patterns | Total of changes
BaseController | 0 | 4 | 1 | 0 | 0 | 5 | 5
MediaAccessor | 0 | 3 | 1 | 0 | 0 | 4 | 5
MediaController | 2 | 4 | 1 | 0 | 0 | 7 | 5
MediaUtil | 2 | 4 | 1 | 0 | 0 | 7 | 5
PhotoViewScreen | 0 | 3 | 0 | 0 | 0 | 3 | 4
(Complete list is at Applying and Evaluating Concern-Sensitive Design Heuristics (2010))
SmsReceiverThread | 0 | 2 | 0 | 0 | 0 | 2 | 1
SmsSenderThread | 0 | 2 | 0 | 0 | 0 | 2 | 1
UnavailablePhotoAlbumException | 0 | 2 | 0 | 0 | 0 | 2 | 1
BaseThread | 0 | 0 | 0 | 0 | 0 | 0 | 0
SplashScreen | 0 | 0 | 0 | 0 | 0 | 0 | 0
Average | 0.3 | 2.9 | 0.7 | 0.0 | 0.0 | 3.8 | 1.8
Correlation | 41.2% | 40.5% | 27.6% | 0.0% | 0.0% | 48.5% |

Table 9
Unstable Health Watcher components and crosscutting patterns (R09).

Components | BS | Oct | GC | DC | BC | Total of patterns | Total of changes
HealthWatcherFacade | 0 | 3 | 1 | 0 | 0 | 4 | 6
HWServlet | 0 | 1 | 1 | 0 | 0 | 2 | 6
UpdateHealthUnitSearch | 0 | 3 | 1 | 0 | 0 | 4 | 6
UpdateComplaintData | 0 | 2 | 1 | 0 | 0 | 3 | 5
GetDataForSearchByDiseaseType | 0 | 2 | 1 | 0 | 0 | 3 | 4
(Complete list is at Applying and Evaluating Concern-Sensitive Design Heuristics (2010))
Subject | 0 | 0 | 0 | 0 | 0 | 0 | 0
SymptomRecord | 0 | 3 | 1 | 0 | 0 | 4 | 0
ThreadLogging | 0 | 0 | 0 | 0 | 0 | 0 | 0
TransactionException | 0 | 3 | 0 | 0 | 0 | 3 | 0
UpdateEntryException | 0 | 1 | 0 | 0 | 0 | 1 | 0
Average | 0 | 1.5 | 0.5 | 0 | 0 | 2.0 | 1.1
Correlation | 0 | 31.1% | 47.9% | 0 | 0 | 46.9% |
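The Spearman correlations reported in the last rows of Tables 8 and 9 can be reproduced from the per-component counts. The sketch below (plain Python, with average ranks for ties) runs on the ten MobileMedia components visible in Table 8; the published 48.5% figure is computed over the complete component list (see the supplementary website), so this subset yields a much higher value:

```python
def rank(values):
    """Average ranks (1-based), with tied values sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# The ten MobileMedia components listed in Table 8:
patterns = [5, 4, 7, 7, 3, 2, 2, 2, 0, 0]   # total of crosscutting patterns
changes  = [5, 5, 5, 5, 4, 1, 1, 1, 0, 0]   # total of changes
rho = spearman(patterns, changes)
print(round(rho, 2))  # → 0.97
```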

presented at (Applying and Evaluating Concern-Sensitive Design Heuristics, 2010).

Correlating crosscutting patterns and changes: From the data of Tables 8 and 9, it is easy to recognise that classes with higher occurrences of changes are also the locus of more instances of crosscutting patterns. For instance, classes of MobileMedia (Table 8) changing in 5 out of 7 releases are affected by at least 4 instances of crosscutting patterns. On the other hand, usually no more than 2 instances of crosscutting patterns could be found in the most stable classes of both systems, with just two exceptions.

In our analysis, we also computed the Spearman's correlation (Myers and Well, 2003) between crosscutting patterns and design changes, as presented in the last rows of Tables 8 and 9. Additionally, we used the following criteria to interpret the strength of correlation (Myers and Well, 2003): less than 10% means trivial, 10–30% means minor, 30–50% means moderate, 50–70% means large, 70–90% means very large, and more than 90% means almost perfect. This indicator shows a moderate correlation between crosscutting patterns and design changes in both analysed systems. Based on these preliminary results, we observed that the proposed heuristic rules for crosscutting pattern detection can be used as intuitive indicators of design instability.

Analysis of specific crosscutting patterns: We also investigated the correlation of specific crosscutting patterns and design stability. In order to investigate how each crosscutting pattern relates to design stability, we quantified per component (i) the number of changes and (ii) the number of individual crosscutting patterns. This analysis is based on the results of the Spearman's correlation presented in the last row of Tables 8 and 9. As can be observed in these tables, we could not find instances of Data Concern and Behavioural Concern in the two target systems (MobileMedia and Health Watcher). Therefore, our analysis focuses on the other three crosscutting patterns: Black Sheep, Octopus, and God Concern.

Interestingly, the data in Tables 8 and 9 indicate that, when instances of the analysed crosscutting patterns can be found in a system, they usually present moderate correlation with module changes (i.e., between 30% and 50% correlation). The data for Octopus in Tables 8 and 9, for instance, presented a correlation of 40% and 31% in MobileMedia and Health Watcher, respectively. Minor correlation was only found for God Concern in the MobileMedia analysis. However, the correlation of God Concern and component changes was moderate in Health Watcher. Although preliminary, our results indicate that, even when used in isolation, a heuristic rule for one specific crosscutting pattern may be a valuable indicator of design instability.

7.4. Specific design flaws detection

This section evaluates whether our concern heuristics are useful to detect specific design flaws. In particular, we applied rules R14 to R17 (Fig. 6) to identify instances of four bad smells – Shotgun Surgery (Fowler, 1999), Feature Envy (Fowler, 1999), Divergent Change (Fowler, 1999), and God Class (Riel, 1996) – in the seven target systems (Section 6.1). We also independently applied conventional heuristic rules proposed by Marinescu (Lanza and Marinescu, 2006; Marinescu, 2002) for detecting the same bad smells. The application of each rule pointed out design fragments suspected of having one of the aforementioned bad smells. We then compared the results from the two suites of rules (concern-sensitive and conventional) with a reference list which indicates the actual bad smells in these systems.

The bad smells reference list: We relied on two specialists of each system to build a reference list for each analysed bad smell, resulting in four reference lists for each system. Specialists were instructed to individually use their own strategy and knowledge about the systems to detect "smelly" classes. Not surprisingly, the two sets containing potential bad smell instances of a system were not exactly the same, although in some cases they have many classes in common. In order to reach an agreement, we undertook manual investigation in all suspect design fragments. The manual investigation allowed us to verify whether the suspect design fragments were indeed affected by the design flaw.

After this inspection, the bad smells indicated by heuristic rules were classified into two categories: (i) 'Hits': those heuristically suspect fragments that were confirmed to be affected by the bad smell, and (ii) 'False Positives': those heuristically suspect fragments that revealed not to be affected by the bad smell. Table 10 summarises the results of applying both concern-sensitive heuristics and conventional ones. This table shows the total number of hits and false positives (FP) for each bad smell and the percentage (in parentheses) regarding the total number of suspect fragments. Note that the results in Table 10 are given from different viewpoints. In concern-sensitive rules, the values for Shotgun Surgery and Feature Envy (R11 and R12) represent the number of concerns (since R11 and R12 classify concerns). On the other hand, God Class, Divergent Change, and the conventional rules present the results in terms of the number of suspected classes. Moreover, the conventional heuristic rules (Lanza and Marinescu, 2006; Marinescu, 2002) do not support the detection of Divergent Change.

Table 10
Statistics for the heuristics related to bad smells.

Bad smell | Concern-sensitive rules: Hits (%) / FP (%) | Conventional rules: Hits (%) / FP (%)
Shotgun Surgery | 10 (76.9%) / 3 (23.1%) | 9 (47.4%) / 10 (52.6%)
Feature Envy | 6 (85.7%) / 1 (14.3%) | 2 (25.0%) / 6 (75.0%)
Divergent Change | 4 (57.1%) / 3 (42.9%) | Not supported
God Class | 9 (81.8%) / 2 (18.2%) | 2 (25.0%) / 6 (75.0%)

The data of Table 10 show that concern-sensitive heuristics presented superior accuracy to conventional rules for detecting all studied bad smells. The former presented no more than 25% of false positives, whereas the latter fail in more than 50% of the cases. We could also verify that this advantage in favour of our rules was mainly due to the fact that they are sensitive to the design concerns. For instance, many false positives of the conventional rule for Shotgun Surgery occurred because its metrics do not distinguish coupling between classes of the same concern and classes with different concerns.

Based on these values, we observed that the types of code smells that benefit most from the use of the concern-sensitive heuristic rules were Feature Envy and God Class. An example is the BaseController class in the MobileMedia system, which was identified by the concern-sensitive heuristics for both the God Class and Feature Envy bad smells. This class was not indicated for any bad smell by conventional heuristics. The main reason for the concern-sensitive heuristics to detect these bad smell instances is that this class is an outlier both in terms of its size and in terms of the number of concerns it realises. For instance, BaseController is a large class and realises 5 different concerns. Therefore, it was considered a God Class instance in the reference list. Additionally, some of its methods, such as showImageList and handleCommand, were considered in the wrong place and, therefore, they should be moved to a different class. This situation indicates a Feature Envy instance.
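The Shotgun Surgery discussion above can be made concrete with a toy model. The sketch below is purely illustrative (the class names, coupling edges, and client threshold are hypothetical, and the paper's actual rules R11 and Marinescu's detection strategies are defined elsewhere): a purely coupling-based check flags a provider with many clients, while a concern-sensitive check first discards same-concern couplings and so avoids the false warning:

```python
# Hypothetical model: each class belongs to one concern, and couplings
# are (client, provider) pairs meaning "client depends on provider".
concern_of = {
    "Logger": "Logging", "LogFormat": "Logging", "LogSink": "Logging",
    "Order": "Business", "Invoice": "Business",
}
couplings = [
    ("LogFormat", "Logger"), ("LogSink", "Logger"),   # same-concern couplings
    ("Order", "Logger"), ("Invoice", "Logger"),       # cross-concern couplings
]

THRESHOLD = 3  # hypothetical "many clients" cut-off

def conventional_suspects():
    """Flag providers with many clients, regardless of concerns."""
    clients = {}
    for c, p in couplings:
        clients.setdefault(p, set()).add(c)
    return {p for p, cs in clients.items() if len(cs) >= THRESHOLD}

def concern_sensitive_suspects():
    """Same check, but same-concern couplings are not counted."""
    clients = {}
    for c, p in couplings:
        if concern_of[c] != concern_of[p]:
            clients.setdefault(p, set()).add(c)
    return {p for p, cs in clients.items() if len(cs) >= THRESHOLD}

print(sorted(conventional_suspects()), sorted(concern_sensitive_suspects()))
# → ['Logger'] []  (the conventional check raises a false warning on Logger)
```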

8. Study constraints

The goal of this paper was to propose and to investigate concern-sensitive heuristics. In particular, we analysed whether and how these heuristics can address some issues of metrics and conventional heuristics. Although we have not assessed the reliability and feasibility of concern identification, previous work (Figueiredo et al., 2011) has shown that concerns can be manually identified with about 80% of precision. For instance, identifying the Exception Handling concern in the source code is almost 100% automated because it is clearly marked in the systems by language constructs, such as try-catch blocks. On the other hand, concerns like Persistence, Distribution and Concurrency are more application dependent, leading to a more complex identification process. Fortunately, recently proposed approaches and tools facilitate concern mining (Figueiredo et al., 2008; Robillard and Murphy, 2007). We also relied on specialists and literature claims. However, these can themselves be considered a threat to the validity of our study.

We evaluated the applicability of the approach by defining thirteen concern-sensitive heuristic rules that are classified in three categories. Based on their application to seven heterogeneous systems, we have shown that concern heuristics are an accurate means of flaw detection and can be usable in practice. However, we do not claim that our data are statistically relevant due to the reduced number of studied applications. We have also not used rigorous statistical methods in all dimensions of the empirical evaluation. Nonetheless, we believe the selected applications are representative of a subset of software systems due to their varied characteristics and to the heterogeneity of concerns involved in this study (Section 6.1).
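The claim above that the Exception Handling concern is almost mechanically identifiable can be illustrated with a small scanner. This sketch is our own, not the ConcernMorph implementation: it simply flags source lines containing exception-handling keywords as belonging to that concern.

```java
import java.util.*;
import java.util.regex.*;

// Minimal sketch: mark source lines containing try/catch/finally/throw(s)
// as realising the Exception Handling concern. Real mining tools are more
// sophisticated, but these constructs make the concern easy to locate.
public class ExceptionConcernScanner {

    static final Pattern EH = Pattern.compile("\\b(try|catch|finally|throw|throws)\\b");

    // Returns the 1-based line numbers that contain an EH construct.
    static List<Integer> exceptionHandlingLines(List<String> sourceLines) {
        List<Integer> hits = new ArrayList<>();
        for (int i = 0; i < sourceLines.size(); i++) {
            if (EH.matcher(sourceLines.get(i)).find()) {
                hits.add(i + 1);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<String> code = List.of(
            "void load() {",
            "    try {",
            "        read();",
            "    } catch (IOException e) {",
            "        log(e);",
            "    }",
            "}");
        System.out.println(exceptionHandlingLines(code));
    }
}
```

Concerns such as Persistence or Concurrency offer no comparable syntactic markers, which is precisely why their identification remains more complex.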

The definition of proper threshold values intrinsically characterises any assessment approach (Lanza and Marinescu, 2006; Marinescu, 2002, 2004). Our strategy, based on our empirical evidence, was to use: (i) 50% as the default for the concern diffusion heuristics (Section 4.1), (ii) a meaningful ratio (Lanza and Marinescu, 2006) that represents the definition of black sheep and octopus (Ducasse et al., 2006) (Section 4.2), and (iii) thresholds similar to those used by Marinescu (Lanza and Marinescu, 2006; Marinescu, 2002) in the heuristics for bad smells (Section 4.3). Of course, those values might need to be varied depending on application particularities and assessment goals.
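A minimal sketch of how the 50% default for the concern diffusion heuristics could be applied follows; the method name and inputs are our illustration, not the paper's exact formulation. A concern is flagged when it touches more than the threshold fraction of the system's components, and the threshold is a parameter precisely so it can be tuned per application.

```java
// Hedged sketch of a diffusion check with a configurable threshold
// (0.50 mirrors the 50% default discussed above; names are hypothetical).
public class ConcernDiffusionSketch {

    static boolean isExcessivelyDiffused(int componentsTouched,
                                         int totalComponents,
                                         double threshold) {
        return (double) componentsTouched / totalComponents > threshold;
    }

    public static void main(String[] args) {
        // A concern touching 6 of 10 components exceeds the 50% default.
        System.out.println(isExcessivelyDiffused(6, 10, 0.50));
        // One touching 4 of 10 does not.
        System.out.println(isExcessivelyDiffused(4, 10, 0.50));
    }
}
```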

9. Concluding remarks

This paper (i) presented a suite of concern-sensitive heuristic rules, and (ii) investigated the hypothesis that these heuristics offer enhancements over typical metrics-based assessment approaches. The heuristics suite can be extended and, by no means, do we claim that the suite of rules proposed in this paper is complete. For instance, Section 4.2 presents heuristics for only five out of a catalogue of 13 crosscutting patterns (Figueiredo et al., 2009). Furthermore, the heuristic suite for concern-aware design flaws (Section 4.3) focuses on bad smells and could be extended to deal with additional sorts of modularity flaws.

Our investigation indicated promising results in favour of concern-sensitive heuristics. However, we cannot generalise our results since the heuristic rules were applied to a limited set of systems and concerns. In our investigation, the concern-sensitive heuristics performed well for determining concerns that should be aspectised. Our preliminary evaluation pointed out that the heuristics have around 80% accuracy (Section 7.2). Moreover, the heuristics provided enhancements both in purely OO and in aspectual design assessment. Second, our investigation also pointed out that the presence of some crosscutting patterns might be correlated with design instability (Section 7.3). It indicated that the classes that suffered a high number of changes during the systems' evolution were also the locus of a high number of crosscutting patterns detected by our heuristic rules. Finally, our evaluation provided evidence that the concern-sensitive rules were in general more accurate than conventional rules for detecting four well-known bad smells (Section 7.4).


We also identified, however, some problems in the accuracy of the heuristics (Section 7.2) in our study when: (i) they were applied to small design slices (the patterns library (Hannemann and Kiczales, 2002)) and (ii) concern overlapping was present. Similar problems remained when the heuristics were applied to the corresponding aspectual designs. For instance, some concerns were misclassified in scenarios involving an explicit precedence declaration between two or more aspects. However, even in the presence of these intricate concern interactions (all systems but the patterns library), the heuristics presented acceptable rates (Table 7).

This exploratory study is a stepping stone towards more intentional assessment of system designs based on their driving concerns. It is intended to lay the groundwork for further empirical work. For instance, since the results of this study are limited to a number of decisions we made (Section 8), it is necessary to undertake additional studies in order to assess the usefulness of concern-sensitive heuristics in different contexts. Additionally, the proposed tool, ConcernMorph (Section 5), could be extended to incorporate more sophisticated means for mining and mapping concerns to design elements. Its current implementation analyses only Java code and, so, additional modelling and programming languages could be incorporated into ConcernMorph in order to allow users to analyse multi-language systems.

Acknowledgments

This work has received funding from the following agencies and projects: Eduardo – FAPEMIG Universal grant APQ-02932-10; Alessandro – PUC-Rio distinguished young researcher grant, FAPERJ grant E-26/102.211/2009, CNPq grant 305526/2009-0; Projects: National Institute of Science and Technology for Software Engineering (INES), and CNPq grants 479344/2010-8, 480374/2009-0, 483882/2009-7, 483699/2009-8, and 573964/2008-4.

References

Applying and Evaluating Concern-Sensitive Design Heuristics, 2010. http://www.dcc.ufmg.br/∼figueiredo/concern/heuristics (accessed 09.05.11).

Briand, L.C., Daly, J.W., Wüst, J.K., 1999. A unified framework for coupling measurement in object-oriented systems. IEEE Transactions on Software Engineering (TSE) 25, 91–121.

Cacho, N., et al., 2006. Composing design patterns: a scalability study of aspect-oriented programming. In: Proceedings of the 5th International Conference on Aspect-Oriented Software Development (AOSD), Bonn, Germany, pp. 109–121.

Ceccato, M., Tonella, P., 2004. Measuring the effects of software aspectization. In: Proceedings of the 1st Workshop on Aspect Reverse Engineering, Paper 1, The Netherlands.

Chidamber, S., Kemerer, C., 1994. A metrics suite for object oriented design. IEEE Transactions on Software Engineering, 476–493.

Ducasse, S., Girba, T., Kuhn, A., 2006. Distribution map. In: Proceedings of the International Conference on Software Maintenance (ICSM), Philadelphia, USA, pp. 203–212.

Figueiredo, E., Garcia, A., Maia, M., Ferreira, G., Nunes, C., Whittle, J., 2011. On the impact of crosscutting concern projection on code measurement. In: Proceedings of the 10th International Conference on Aspect-Oriented Software Development (AOSD), Porto de Galinhas, Brazil, pp. 81–92.

Figueiredo, E., 2009. Concern-Oriented Heuristic Assessment of Design Stability. Ph.D. Dissertation. Computing Department, Lancaster University, U.K., p. 255.

Figueiredo, E., et al., 2009. Crosscutting patterns and design stability: an exploratory analysis. In: Proceedings of the International Conference on Program Comprehension (ICPC), Vancouver, Canada, pp. 138–147.

Figueiredo, E., et al., 2008. Evolving software product lines with aspects: an empirical study on design stability. In: Proceedings of the 30th International Conference on Software Engineering (ICSE), Leipzig, Germany, pp. 261–270.

Figueiredo, E., Garcia, A., Lucena, C., 2006. AJATO: an AspectJ assessment tool. In: Proceedings of the European Conference on Object-Oriented Programming (ECOOP), demo D9, Nantes, France.

Filho, F., et al., 2006. Exceptions and aspects: the devil is in the details. In: Proceedings of the International Symposium on Foundations of Software Engineering (FSE), Portland, USA, pp. 152–162.

Fowler, M., 1999. Refactoring: Improving the Design of Existing Code. Addison-Wesley, Reading, MA, USA.

Gamma, E., et al., 1995. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, USA.


Garcia, A., et al., 2006. Modularizing design patterns with aspects: a quantitative study. Transactions on Aspect-Oriented Software Development 1, 36–74.

Garcia, A., et al., 2004. Separation of concerns in multi-agent systems: an empirical study. In: Software Engineering for Multi-Agent Systems II. LNCS 2940, pp. 49–72.

Greenwood, P., et al., 2007. On the impact of aspectual decompositions on design stability: an empirical study. In: Proceedings of the European Conference on Object-Oriented Programming (ECOOP), Berlin, Germany, pp. 176–200.

Hannemann, J., Kiczales, G., 2002. Design pattern implementation in Java and AspectJ. In: Proceedings of the International Conference on Object-Oriented Programming, Systems, Languages and Applications (OOPSLA), Seattle, USA, pp. 161–173.

Kiczales, G., Hilsdale, E., Hugunin, J., Kersten, M., Palm, J., Griswold, W., 2001. An overview of AspectJ. In: Proceedings of the European Conference on Object-Oriented Programming (ECOOP), Springer, Berlin, Germany, pp. 327–354.

Kiczales, G., et al., 1997. Aspect-oriented programming. In: Proceedings of the European Conference on Object-Oriented Programming (ECOOP), LNCS 1241, Springer, pp. 220–242.

Lanza, M., Marinescu, R., 2006. Object-Oriented Metrics in Practice. Springer, USA.

Marinescu, R., 2004. Detection strategies: metrics-based rules for detecting design flaws. In: Proceedings of the International Conference on Software Maintenance (ICSM), Chicago, USA, pp. 350–359.

Marinescu, R., 2002. Measurement and Quality in Object-Oriented Design. PhD Thesis. Faculty of Automatics and Computer Science, Politehnica University of Timisoara.

Mezini, M., Ostermann, K., 2003. Conquering aspects with Caesar. In: Proceedings of the 2nd International Conference on Aspect-Oriented Software Development (AOSD), Boston, MA, pp. 90–99.

Monteiro, M., Fernandes, J., 2005. Towards a catalog of aspect-oriented refactorings. In: Proceedings of the International Conference on Aspect-Oriented Software Development (AOSD), Chicago, USA, pp. 111–122.

Myers, J., Well, A., 2003. Research Design and Statistical Analysis, 2nd edition. Lawrence Erlbaum.

Riel, A., 1996. Object-Oriented Design Heuristics. Addison-Wesley, Reading, MA, USA.

Robillard, M., Murphy, G., 2007. Representing concerns in source code. ACM Transactions on Software Engineering and Methodology (TOSEM) 16, 1.

Robillard, M., Weigand-Warr, F., 2005. ConcernMapper: simple view-based separation of scattered concerns. In: Proceedings of the Eclipse Technology Exchange at OOPSLA.

Sahraoui, H., Godin, R., Miceli, T., 2000. Can metrics help to bridge the gap between the improvement of OO design quality and its automation? In: Proceedings of the International Conference on Software Maintenance (ICSM), IEEE Computer Society, San Jose, USA, pp. 154–162.

Sant'Anna, C., et al., 2003. On the reuse and maintenance of aspect-oriented software: an assessment framework. In: Proceedings of the Brazilian Symposium on Software Engineering (SBES), Manaus, Brazil, pp. 19–34.

Seaman, C., 1999. Qualitative methods in empirical studies of software engineering. IEEE Transactions on Software Engineering (TSE) 25 (4), 557–572.


Wohlin, C., Runeson, P., Host, M., Ohlsson, M.C., Regnell, B., Wesslen, A., 2000. Experimental Software Engineering – An Introduction. Kluwer Academic Publishers, ISBN: 0-7923-8682-5.

Zhao, J., 2002. Towards a Metrics Suite for Aspect-Oriented Software. Technical Report SE-136-25, Information Processing Society of Japan (IPSJ).

Eduardo Figueiredo is currently an assistant professor at the Federal University of Minas Gerais in Brazil. He holds a PhD degree in Computer Science awarded in 2009 by Lancaster University (U.K.). His research interests include software metrics and heuristic analysis, empirical software engineering, software product lines, and aspect-oriented software development. Eduardo has consistently been publishing research papers in premier software engineering conferences, such as ICSE, ECOOP, and FSE. He has also co-organised two editions of the International Workshop on Assessment of Contemporary Modularization Techniques (ACoM) in 2007 and 2008 and two editions of the Workshop on Empirical Evaluation of Software Composition Techniques (ESCOT) in 2010 and 2011. http://www.dcc.ufmg.br/∼figueiredo.

Claudio Sant'Anna is an assistant professor at the Federal University of Bahia (UFBA), Brazil. He received his PhD in computer science from the Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Brazil, in April 2008, in cooperation with Lancaster University, UK. His research interests include software metrics, empirical evaluation of contemporary modularization techniques, aspect-oriented software development, software design, and software architecture. http://wiki.dcc.ufba.br/LES/ClaudioSantAnna.

Alessandro Garcia has been an assistant professor in the Informatics Department at PUC-Rio, Brazil, since January 2009. He was a Lecturer in the Computing Department at Lancaster University, United Kingdom, from 2005 to 2009. He was previously a software architect at the IBM Almaden Research Center and a visiting researcher at the University of Waterloo, Canada. His main research interests are in the areas of exception handling, advanced modular programming, software architecture, software product lines, and empirical software engineering. He has been serving as a committee member of premier international conferences on software engineering, such as ICSE, AOSD, SPLC, OOPSLA/SPLASH, and AAMAS. He received many awards and distinctions, including the Best Dissertation Award (Brazilian Computer Society, 2000), Best Researcher Award (Lancaster University, 2006), Distinguished Young Scholar (PUC-Rio, 2009), Research Productivity Grant (CNPq, 2009), and Young Scientist Fellowship (FAPERJ). He has been involved in several projects, funded by EC and Brazilian research agencies, and his research contributions have been exploited by a number of industry partners, such as Siemens, SAP, IBM, and Motorola. http://www-di.inf.puc-rio.br/∼afgarcia/.

Carlos Lucena received a BSc degree from the Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Brazil, in 1965, an MMath degree in computer science from the University of Waterloo, Canada, in 1969, and a PhD degree in computer science from the University of California at Los Angeles in 1974. He has been a full professor in the Departamento de Informatica at PUC-Rio since 1982. His current research interests include software design and formal methods in software engineering. He is a member of the editorial board of the International Journal on Formal Aspects of Computing. http://www-di.inf.puc-rio.br/∼lucena/.