Software Engineering
Introduction
2SOE: introduction
course objectives
• to understand the difference between traditional and agile approaches to system development
• to understand the primary software engineering tools and techniques and how and when to apply them
• to develop the capacity to use these understandings in your own practice
• an overview course - not only learning each technique, but learning how to apply it in the exercises
course design
course structure:
• development approaches: agile (Ivan Aaen) and traditional → miniproject 1: DA
• tools, techniques, practices → miniproject 2: TTP
• SPI → miniproject 3: evaluation
the problem
• development project success rates in the US: 29%
• serious problems: 53%
• complete failure: 18%
• by budget: <$750,000: success = 55%; >$10,000,000: success = 0%
• England (public sector): 84% partial or total failure
• estimated overall: 20-30% of projects are total failures (abandoned)
• "failure of large and complex information system developments is largely unavoidable"
Source: Dangerous Enthusiasms, Gauld and Goldfinch
the problem elaborated
• the requirements problem: the software does not match the needs of users
• the analysis problem: the software contains a model of the external world which cannot be recognized or adapted to by its users
• the design problem: the software design inadequately solves the problem, creating issues such as maintainability, adaptivity, portability, security
• the quality problem: the system is delivered with bugs, service and usability problems that make it difficult to use
• the project management problem: the project becomes delayed and/or over budget; in extreme cases so much so that it is aborted
• the change problem: changes in problem or solution, or the project's environment (such as an economic crisis, or market change) which affect the project
• the complexity problem: the interaction of any combination of the above
one answer:
• the application of engineering principles to software development
• "the discipline, art and profession of acquiring and applying technical, scientific and mathematical knowledge to design and implement materials, structures, machines, devices, systems, and processes that safely realize a desired objective or inventions" Wikipedia
• "the creative application of scientific principles to design or develop structures, machines, apparatus, or manufacturing processes, or works utilizing them singly or in combination; or to construct or operate the same with full cognizance of their design; or to forecast their behavior under specific operating conditions; all as respects an intended function, economics of operation and safety to life and property" American Engineers Council
• "the application of scientific and mathematical principles to practical ends such as the design, manufacture, and operation of efficient and economical structures, machines, processes, and systems"
software engineering
“(1) The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software. (2) The study of approaches as in (1).”
- the IEEE Computer Society
- SWEBOK (the software engineering body of knowledge)
- (not clear that SE is a (natural) science)
SE - related disciplines
SWEBOK (2004):
SE in this course
• a development approach
  • process model
  • associated tools, techniques, practices
  • a set of (unspoken) assumptions about the nature of software development and its contexts
  • e.g. traditional waterfall
• a set of complementary SE tools, techniques and practices designed to support the underlying development approach
  • such as project management, configuration management, estimation
traditional development approach (process)
a.k.a.
• SDLC (Systems Development Life Cycle)
• waterfall
• linear sequential
• big design up front
Requirements
Design
Implementation
Test
typical SDLC activities
• understand what the customer or user wants (requirements)
• understand the context or work process that the computer system will support (analysis)
• write a description of the system to be built (specification)
• make an upfront paper design for the program (design)
• program the system (coding)
• debug the resulting program (test)
• install or implement the system
• support the system in use and redevelop as necessary (operation and maintenance)
NATO-seminar in Garmisch 1968
waterfall model
Boehm’s Seven Principles (1976)
• manage using a sequential life cycle plan
• perform continuous validation
• maintain disciplined product control
• use enhanced top-down structured programming
• maintain clear accountability
• use better and fewer people
• maintain commitment to improve process
SWEBOK = traditional development approach
SWEBOK + associated tools and techniques
Rayleigh Curve
early waterfall problems
NATO-seminar in Garmisch 1968
A rational design process (Parnas et al, 1986)
• ‘ideally, we would like to derive our programs from a statement of requirements in the same sense that theorems are derived from axioms in a published proof’
• impossible because of:
  1. imperfect requirements
  2. learning during design work
  3. human cognitive limits
  4. external change
  5. human error
  6. preconceived design ideas
  7. economic considerations (e.g. reuse of software)
• solution: fake it - continuous requirements documentation
Poppendieck 2000: it's time to stop faking it
• accept that the rational (traditional) model cannot be achieved
• focus on iterative and incremental development
  • requirements and architecture (40%) first
  • then construction and test
traditional approach: many alternatives
alternative | examples | perceived flaw in traditional software process
participatory development | ETHICS, Scandinavian school | responds to lack of serious user involvement
context-aware methods | Contextual Design | responds to heavy focus on computer system design
rapid development | RAD | responds to poor speed of delivery
test driven development | TDD | responds to lack of rigor in delivering bug free code
agile methods | XP, SCRUM | responds to process-rigid, analysis-heavy and programmer unfriendly development style
open source | LINUX, REDHAT projects | responds to hierarchical and commercially oriented development style
business-focused | Business Process Re-engineering | responds to inability to focus on business process innovation and automation of existing business process
systems theory-focused | Soft-Systems Methodology, User Centred Design | responds to heavy focus on rational analysis and hard systems tradition
formal methods | Z, UPPAAL | responds to perceived lack of mathematical or logical rigour in traditional process
the Aalborg SE tradition
• understand the meta-principles upon which traditional software development is based (not learn how to do it again)
• learn agile methods and one alternative set of meta-principles which respond to a particular set of concerns with the traditional development approach
• learn software engineering tools, techniques and practices and how to apply them in both situations
development approaches
agile (Ivan Aaen)
traditional
tools, techniques, practices
Software Engineering
development approaches: process models
waterfall model
• linear sequential
2SOE: process models
V-model
prototyping
• throwaway
• evolutionary
[Figure: prototyping cycle - communication → quick plan → modeling (quick design) → construction of prototype → deployment, delivery & feedback]
iterative
incremental
[Figure: the incremental model - increments #1 to #n each pass through communication, planning, modeling (analysis, design), construction (code, test) and deployment (delivery, feedback), delivering the 1st, 2nd, ... nth increment over project calendar time]
spiral model (Boehm, 1988)
unified process
SCRUM
XP
linear v. iterative
OOAD - Mathiassen et al
• problem domain analysis: classes, structure, behavior
• application domain analysis: usage, functions, interfaces
• architectural design: criteria, components, processes
• component design: model component, function component, connecting components
[Figure: process model comparison - lifecycle, XP, incremental and prototyping models shown side by side]
Software Engineering
traditional: requirements
requirements
analysis
(programming)
design
test
early project activities
• objective: start the project and find out what is to be built
• can include:
  • feasibility study
  • cost benefit analysis
  • risk analysis
  • project initiation
  • system concept
  • early planning
  • team setup
  • contract negotiation
• and requirements analysis/engineering
2SOE: requirements
the problem addressed:
• the requirements problem: the software does not match the needs of users
  • software features are missing or incomplete
  • (costly) features are provided which are unnecessary
  • as a result the user struggles to complete their work task or achieve their objectives, even though the software functions according to its specification and is bug free
a requirement is:
• a specification for what should be implemented - a description of:
  • how the system should behave
  • application domain information
  • constraints on the system's operation
  • specification of a system property or attributes
• a system capability needed by the user to solve a problem or achieve an objective, and/or a system capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification or other formally imposed document
• acquired by dialogue with users
• classical division between:
  • functional
  • non-functional
3.1 System Feature 1
<Don't really say "System Feature 1." State the feature name in just a few words.>
3.1.1 Description and Priority
<Provide a short description of the feature and indicate whether it is of High, Medium, or Low priority. You could also include specific priority component ratings, such as benefit, penalty, cost, and risk (each rated on a relative scale from a low of 1 to a high of 9).>
3.1.2 Stimulus/Response Sequences
<List the sequences of user actions and system responses that stimulate the behavior defined for this feature. These will correspond to the dialog elements associated with use cases.>
3.1.3 Functional Requirements
<Itemize the detailed functional requirements associated with this feature. These are the software capabilities that must be present in order for the user to carry out the services provided by the feature, or to execute the use case. Include how the product should respond to anticipated error conditions or invalid inputs. Requirements should be concise, complete, unambiguous, verifiable, and necessary. Use "TBD" as a placeholder to indicate when necessary information is not yet available.>
<Each requirement should be uniquely identified with a sequence number or a meaningful tag of some kind.>
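The template's last rule - every requirement uniquely identified and checkable - can be sketched as data. A minimal illustration, assuming nothing from a specific standard; the ids, fields and example texts below are all made up:

```python
# Hedged sketch: requirements as uniquely tagged records, with a simple
# automated completeness check. All ids and field names are illustrative.
requirements = [
    {"id": "REQ-PRINT-001",
     "text": "The system shall print a receipt within 2 seconds of payment.",
     "type": "functional", "priority": "high", "status": "approved"},
    {"id": "REQ-PRINT-002",
     "text": "Receipt layout TBD.",
     "type": "functional", "priority": "low", "status": "TBD"},
]

# completeness check: every requirement carries a unique id
ids = [r["id"] for r in requirements]
assert len(ids) == len(set(ids)), "duplicate requirement ids"
print(f"{len(requirements)} requirements, ids unique")
```

Keeping requirements in a structured form like this is what makes the later traceability and change-control activities on these slides mechanisable.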
requirements analysis is
• a series of analysis techniques to address shortfalls in what users/stakeholders are able to express as their needs and wishes for the system, including:
  • many modelling techniques
  • evolutionary or throwaway prototyping
• and to improve developers' understanding of their users and their users' situations
requirements engineering is:
• the attempt to add scientific precision to users' incomplete accounts of their needs and wishes and developers' flawed attempts to understand them
• "Requirements engineering is the branch of software engineering concerned with the real-world goals for, functions of, and constraints on software systems. It is also concerned with the relationship of these factors to precise specifications of software behaviour, and to their evolution over time and across software families."
the requirements specification: the end result
• the baseline for all future project activities
  • future contract negotiations
  • project planning, estimation, scheduling, cost planning, risk management
  • analysis and design
  • acceptance testing
  • tradeoffs
  • change control
• and the beginning of most of its problems
  • churn - change over the lifetime of the project due to poor initial analysis or natural situational evolution
requirements: problems
• from the user: incomplete, contested, badly explained, ambiguous, misunderstood, without technical understanding, socially contested
• by the developer: badly understood, poorly interpreted through lack of domain knowledge, poorly documented, under negotiated
requirements: managing for change
requirements: iterative ‘good enough’ strategy
Requirements management tools
the requirements problem expressed visually
[Figure: the developer domain and the use domain, linked by dialogue producing an agreed, mutually understood and relatively stable account of what to build]
difficult or inappropriate situations
• users are
  • very many and/or
  • very different
  • difficult to communicate with (children)
• the use domain is
  • unusually expert (eye surgery, investment management)
  • poorly defined (start-up consultancy)
• the software is not primarily determined by user needs
  • consider: embedded software, missile control system, computer game, ERP system
classical requirements analysis - supplements and alternatives
• domain engineering (overlaps with system analysis) - obtaining a more precise understanding of the use domain through a variety of domain modelling techniques e.g.:
  • object modelling
  • business process modelling
  • ontology construction
  • and very many others
• formal specification language (e.g. Z)
• prototyping, paper prototyping
• iterative process
• use cases and user stories
SessionVars
  mode: MODE
  operator: OPERATOR
  patient: PATIENT
  field: FIELD
  names: set of PATIENT
  fields: FIELD → PRESCRIPTION
  counters: FIELD → ACCUMULATION
  ----------
  operator ≠ no_operator ⇒ operator ∈ operators
  mode = experiment ⇒ operator ∈ physicists
  names = if mode = therapy then patients else studies
supplements and alternatives
• ethnography
• participatory development (e.g. ETHICS, User-Centred Design, Contextual Design, Joint Application Design)
• low-tech models (e.g. rich picture)
• user workshops, focus groups, virtual communities
• on-site customer, product owner
the requirements problem
• traditional solution: application of greater engineering rigour to requirements gathering by means of
  • structured data gathering with users
  • better planning
  • comprehensive, structured documentation
  • additional modelling techniques
  • management of requirements throughout development
a research roadmap: Nuseibeh and Easterbrook
• better modelling and analysis of problem domains, as opposed to the behaviour of software.
• development of richer models for capturing and analysing non-functional requirements.
• bridging the gap between requirements elicitation approaches based on contextual enquiry and more formal specification and analysis techniques.
• better understanding of the impact of software architectural choices on the prioritisation and evolution of requirements.
• reuse of requirements models to facilitate the development of system families and the selection of COTS (commercial off-the-shelf).
• multi-disciplinary training for requirements practitioners.
Software Engineering
traditional: analysis
requirements
analysis
(programming)
design
test
the analysis (representation) problem
• the software contains a model of the external world which cannot be recognized or adapted to by its users
analysis
• many hundreds of systems analysis and design methods, a.k.a.:
  • requirements analysis
  • requirements modelling
  • systems analysis (as in systems analysis and design)
  • domain analysis
  • structured analysis
  • object-oriented analysis
  • problem and application domain analysis (OOA+D)
3SOE: analysis
#include <functional>

/* class for the compose_f_gx adapter */
template <class OP1, class OP2>
class compose_f_gx_t
 : public std::unary_function<typename OP2::argument_type,
                              typename OP1::result_type>
{
  private:
    OP1 op1;    // process: op1(op2(x))
    OP2 op2;
  public:
    // constructor
    compose_f_gx_t(const OP1& o1, const OP2& o2)
     : op1(o1), op2(o2) {
    }

    // function call
    typename OP1::result_type
    operator()(const typename OP2::argument_type& x) const {
        return op1(op2(x));
    }
};

/* convenience function for the compose_f_gx adapter */
template <class OP1, class OP2>
inline compose_f_gx_t<OP1,OP2>
compose_f_gx(const OP1& o1, const OP2& o2) {
    return compose_f_gx_t<OP1,OP2>(o1, o2);
}
domain models
code models
use domain
analysis models
• abstractions
• contain:
  • a set of terms, concepts and relationships, often with a theoretical background
  • a standardised representation form or modelling language (e.g. UML)
  • a (usually hidden) set of assumptions about the nature of reality, how it's understood and what's important
• model forms:
  • textual (system definition, event list)
  • pictorial (rich picture)
  • diagrammatic (entity model, object model, dataflow diagram)
  • algorithmic (pseudo code, Z)
an interpretation problem
[Figure: analysis models sit between users (informal, user-near) and developers (formalised, program-near) in the use domain]
classical systems analysis
• a classical systems analysis is a description of relevant parts of a use domain, not a plan for a software implementation
• ensures that a software design is based on a sound understanding of the context that the software will later be used in
• this understanding is often extremely difficult for software engineers to acquire and share
• analysis provides a
  • common communication language for developers
  • process structure (what to do, when)
  • consistency and completeness checking
  • stepwise refinement
  • programming-related modelling forms which can be the basis for design
use domain
what is modelled:
• classical systems analysis - three types of analysis:
  1. data structure (the structure of information) e.g. ERM
  2. process (transformations or operations on data) e.g. dataflow diagram
  3. sequence or dynamics (the behaviour of the system over time) e.g. state transition diagram
• object orientation conflates the first two
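The third analysis form - system dynamics - can be sketched directly as a state-transition table. A minimal illustration; the ATM-style states and events below are invented for the example, not taken from the course material:

```python
# Hedged sketch: a state-transition diagram captured as a table mapping
# (current state, event) -> next state. States and events are illustrative.
transitions = {
    ("idle", "insert_card"): "awaiting_pin",
    ("awaiting_pin", "pin_ok"): "menu",
    ("awaiting_pin", "pin_bad"): "idle",
    ("menu", "withdraw"): "dispensing",
    ("dispensing", "cash_taken"): "idle",
}

def run(events, state="idle"):
    """Replay an event sequence through the transition table."""
    for e in events:
        state = transitions[(state, e)]  # KeyError = behaviour not modelled
    return state

print(run(["insert_card", "pin_ok", "withdraw", "cash_taken"]))  # idle
```

A table like this is checkable for completeness (every state/event pair considered), which is exactly the consistency checking the slides attribute to analysis models.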
[Figure: the analysis model - entity-relationship diagram, data flow diagram and state-transition diagram around a data dictionary, with process specifications (PSPEC), control specifications (CSPEC) and data object descriptions, feeding architectural and data design]
a bewildering array of other analysis forms - topics of the 29th Conference on Conceptual Modelling:
Information Modeling Concepts, including Ontologies; Ontological and Semantic Correctness in Conceptual Modeling; Logical Foundations of Conceptual Modeling; Cognitive Foundations of Conceptual Modeling; Conceptual Modeling and Web Information Systems; Business Process Modeling; Conceptual Modeling and Enterprise Architecture; The Semantic Web; Semi-structured Data and XML; Integration of Conceptual Models and Database Schemas; Information Retrieval, Filtering, Classification, Summarization, and Visualization; Methodologies and Tools for Conceptual Design; Evaluation and Comparisons of Conceptual Models and Modeling Methods; Requirements Engineering; Reuse, Patterns, and Object-Oriented Design; Reverse Engineering and Conceptual Modeling; Quality and Metrics of Conceptual Models; Empirical Studies of Conceptual Modeling; Conceptual Change and Schema Evolution; Maintenance of Conceptual Models; Management of Integrity Constraints; Active Concepts in Conceptual Modeling; Spatial, Temporal, and Multimedia Aspects in Conceptual Models; Metadata, its Interpretation and Usage; Conceptual Models and Knowledge Management Systems; Data warehousing, data mining, and business intelligence; and Other Advanced and Cross-Disciplinary Applications of Conceptual Models
case tool support
• diagram support and debugging, code generation, document generation, tailored development method support, consistency and completeness checking, reverse engineering, import, model driven architecture support, IDE integration, integration with estimation and scheduling tools
[Figure: CASE tool architecture - diagrammer and code generator sharing a central repository]
classical systems analysis: four critiques 1
XP
• classical systems analysis is unnecessary and time consuming
• it promotes the role of the analyst over the programmer
• the programmer can obtain the necessary domain understanding from the customer and write it directly into the program
classical systems analysis: four critiques 2
Soft Systems Methodology
• classical systems analysis does not capture what is really important in a user domain (e.g. the underlying work system – only some minor things which contribute to software design)
• it encourages a (false) impression that there is one correct view of a user domain that the analyst can determine and later use as the basis for design
classical systems analysis: four critiques 3
Contextual Design
• classical systems analysis is the property of the expert systems analyst
• it does not promote real dialogue with the stakeholders and users or involve them in any future work
classical systems analysis: four critiques 4
Business Process Re-engineering
• classical systems analysis focuses on automating an existing (manual) system
• it provides no incentive for changing or radically improving the underlying work process
analysis: summary
problem: accurate representation of user domain
objective: understand the use domain and represent it in a software-friendly notation
tools and techniques: various forms of semi-formal modelling
underlying theory: systems theory +
purpose: user domain understanding - develop accurate and communicable domain models which can later be represented as software
known weaknesses: not always very close either to programmer or to user
scope limitation: situational - access to a suitable use domain is required
principal danger: goal displacement
SOE
traditional: test
requirements
analysis
(programming)
design
test
the quality problem
• the system is delivered with bugs, service and usability problems that make it difficult to use
• software released too early - users run untested code, which crashes or delivers system-generated error messages
• requested functionality is missing, the software runs slowly, interfaces contain major usability errors
• calculations are wrong, data is lost or corrupted
2SOE: test
generates 28 test cases (= paths through the code)
traditional engineering response
why test?
• problems with requirements, analysis and design revealed late
• programmer error in executing the design

• validation: to demonstrate to the developer and the system customer that the software meets its requirements
• verification: to establish that the program runs as intended without defects

• unit test
• integration test
• regression test
• GUI test
• smoke test
• performance/load test
• interface test
• system test
• acceptance/operational test
• alpha/beta test

• a test strategy
• a test plan
• a test suite with test cases
• test execution
• debugging
• a test report
a testing-oriented development process: V-model
+ effective project management
a planned process
design test cases → prepare test data → run program with test data → compare results to test cases
(producing: test cases, test data, test results, test reports)
policies and guidelines
• exhaustive test is impossible on non-trivial systems
• test policies define the approach to be used in selecting system tests e.g.:
  • all functions accessed through menus should be tested;
  • combinations of functions accessed through the same menu should be tested;
  • where user input is required, all functions must be tested with correct and incorrect input.
• define procedures for tests and test cases, e.g.:
  • choose inputs that force the system to generate all error messages;
  • design inputs that cause buffers to overflow;
  • repeat the same input or input series several times;
  • force invalid outputs to be generated;
  • force computation results to be too large or too small.
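One of those procedures - "all functions must be tested with correct and incorrect input" - can be made concrete as executable test cases. A minimal sketch; the parse_age function and its 0-130 range are invented for the example:

```python
# Hedged sketch: policy-driven test cases with correct, incorrect and
# boundary inputs. The function under test is illustrative.
def parse_age(text):
    value = int(text)                       # raises ValueError on non-numeric input
    if not 0 <= value <= 130:
        raise ValueError("age out of range")
    return value

# correct inputs, including both boundary values
assert parse_age("0") == 0
assert parse_age("130") == 130

# incorrect inputs: each must produce the error message path
for bad in ["-1", "131", "abc"]:
    try:
        parse_age(bad)
        raise AssertionError(f"expected ValueError for {bad!r}")
    except ValueError:
        pass

print("policy-driven tests passed")
```

The point of the policy is that the input set is chosen systematically (boundaries, invalid classes) rather than ad hoc.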
an experienced team
• a professional testing team, with their own tools and structured procedures
independent tester: must learn about the system, but will attempt to break it, and is driven by quality
developer: understands the system, but will test "gently", and is driven by "delivery"
testing reveals: errors, requirements conformance, performance - an indication of quality
unit test in a designed environment
[Figure: a driver exercises the module under test, with stubs standing in for its collaborators; test cases cover the interface, local data structures, boundary conditions, independent paths and error handling paths]
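The driver/stub arrangement can be sketched in a few lines: the test function plays the driver, and a stub replaces a collaborator that is not yet integrated. All names and the discount logic are invented for the illustration:

```python
# Hedged sketch of unit testing with a driver and a stub.
def price_with_discount(order_total, discount_service):
    """Module under test: applies a discount fetched from a collaborator."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")  # error handling path
    rate = discount_service.rate_for(order_total)
    return round(order_total * (1 - rate), 2)

class StubDiscountService:
    """Stub: returns canned rates instead of calling the real module."""
    def rate_for(self, total):
        return 0.10 if total >= 100 else 0.0

def test_price_with_discount():   # the driver
    stub = StubDiscountService()
    assert price_with_discount(100.0, stub) == 90.0   # boundary condition
    assert price_with_discount(99.99, stub) == 99.99  # independent path
    try:
        price_with_discount(-1, stub)                 # error handling path
        assert False, "expected ValueError"
    except ValueError:
        pass

test_price_with_discount()
print("unit tests passed")
```

The stub lets the module be tested in isolation before integration, which is exactly what the top-down integration slide that follows relies on.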
structured testing: top down integration
• the top module is tested with stubs
• stubs are replaced one at a time, "depth or breadth first"
• as new modules are integrated, some subset of tests is re-run
white-box test
... our goal is to ensure that all statements and conditions have been executed at least once ...
black-box test
... derive test cases from the requirements: events and inputs mapped to expected outputs, without reference to internal structure ...
mathematical and engineering techniques, e.g.: basis path test
[Figure: flow graph with numbered nodes 1-8]
first, we compute the cyclomatic complexity:
  number of simple decisions + 1, or
  number of regions, or
  number of edges - number of nodes + 2
in this case, V(G) = 4

next, we derive the independent paths:
since V(G) = 4, there are up to four paths
path 1: 1,2,3,6,7,8
path 2: 1,2,3,5,7,8
path 3: 1,2,4,7,8
path 4: 1,2,4,7,2,4, …7,8

finally, we derive test cases to exercise these paths.
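The edges - nodes + 2 formula is mechanical enough to compute directly. A small sketch; the edge list below is reconstructed from the four paths listed above, so the exact edges are an assumption:

```python
# Hedged sketch: cyclomatic complexity V(G) = E - N + 2 for a single
# connected flow graph, with the slide's graph reconstructed from its paths.
def cyclomatic_complexity(edges):
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

flow_graph = [
    (1, 2), (2, 3), (2, 4),      # decision at node 2
    (3, 5), (3, 6),              # decision at node 3
    (4, 7), (5, 7), (6, 7),
    (7, 8), (7, 2),              # loop back to node 2: decision at node 7
]
print(cyclomatic_complexity(flow_graph))  # 4, so up to four independent paths
```

Ten edges and eight nodes give 10 - 8 + 2 = 4, agreeing with the "three decisions + 1" count.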
object class test
• complete test coverage of a class involves
  • testing all operations associated with an object;
  • setting and interrogating all object attributes;
  • exercising the object in all possible states.
• inheritance makes it more difficult to design object class tests as the data to be tested is not localised.
• define test cases for reportWeather, calibrate, test, startup and shutdown.
• using a state model, identify sequences of state transitions to be tested and the event sequences to cause these transitions
• for example: waiting > calibrating > test > transmitting > waiting
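That state sequence can be exercised directly. A hedged sketch: only the operation names (startup, calibrate, test, reportWeather, shutdown) and the state sequence come from the slide; the class body and intermediate state names are assumptions for the example:

```python
# Hedged sketch: state-based testing of an object class.
class WeatherStation:
    def __init__(self):
        self.state = "shutdown"

    def startup(self):
        assert self.state == "shutdown"
        self.state = "waiting"

    def calibrate(self):
        assert self.state == "waiting"
        self.state = "calibrating"

    def test(self):
        assert self.state == "calibrating"
        self.state = "testing"

    def reportWeather(self):
        assert self.state == "testing"
        self.state = "transmitting"
        self.state = "waiting"       # returns to waiting after transmitting

    def shutdown(self):
        assert self.state == "waiting"
        self.state = "shutdown"

# test one event sequence causing the slide's transitions:
# waiting > calibrating > test > transmitting > waiting
ws = WeatherStation()
ws.startup()
ws.calibrate()
ws.test()
ws.reportWeather()
assert ws.state == "waiting"
ws.shutdown()
print("state sequence test passed")
```

Each test case is an event sequence; the in-method assertions catch any transition taken from the wrong state.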
test automation
• test is an expensive process phase - test workbenches provide a range of tools to reduce the time required and total test costs.
• bug tracker
• unit test: xUnit
• record and play
• acceptance test: Fit, FitNesse, EasyAccept
• …
HP QuickTest Professional - HP
IBM Rational Functional Tester - IBM Rational
Rational Robot - IBM Rational
Selenium - open source tool
Silk Test - Micro Focus
Test Complete - AutomatedQA
TestPartner - Micro Focus
Watir - open source tool

the Silk family:
SilkCentral - test management
SilkTest - automated functional and regression testing
SilkPerformer - automated load and performance testing
SilkMonitor - 24x7 monitoring and reporting of Web, application and database servers
SilkPilot - unit testing of CORBA objects
SilkObserver - end-to-end transaction management and monitoring for CORBA applications
SilkMeter - access control and usage metering
SilkRealizer - scenario testing and system monitoring
SilkRadar - automated defect tracking
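The xUnit family mentioned above can be illustrated with Python's unittest, one of its members. A minimal sketch; the conversion function under test is invented for the example:

```python
# Hedged sketch: xUnit-style test automation with Python's unittest.
import unittest

def celsius_to_fahrenheit(c):        # illustrative function under test
    return c * 9 / 5 + 32

class TestConversion(unittest.TestCase):
    def test_freezing_point(self):
        self.assertEqual(celsius_to_fahrenheit(0), 32)

    def test_boiling_point(self):
        self.assertEqual(celsius_to_fahrenheit(100), 212)

# run the suite programmatically (avoids unittest.main's argv handling)
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestConversion)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all passed:", result.wasSuccessful())
```

The workbench value is that the whole suite reruns at zero marginal cost, which is what makes the regression testing listed earlier affordable.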
systematic debugging: symptoms & causes
• symptom and cause may be geographically separated
• symptom may disappear when another problem is fixed
• cause may be due to a combination of non-errors
• cause may be due to a system or compiler error
• cause may be due to assumptions that everyone believes
traditional test ideals: summary
• a testing-oriented development process
• a planned process
• policies and guidelines
• an experienced team
• a designed environment
• white/black box testing
• test automation
• systematic de-bugging
traditional test: alternatives and supplements
• test driven development (agile)
• usability testing
• walkthrough/code review
• user satisfaction testing
• business performance evaluation
• model-driven testing (UPPAAL)
Software Engineering
traditional: design
requirements
analysis
(programming)
design
test
• the design problem: the software design inadequately solves the problem, creating issues such as maintainability, adaptivity, portability, security
2SOE: design
Operating System SLOC (Million)
Windows NT 3.1 4-5
Windows NT 3.5 7-8
Windows NT 4.0 11-12
Windows 2000 more than 29
Windows XP 40
Windows Server 2003 50
Debian 2.2 55-59
Debian 3.0 104
Debian 3.1 215
Debian 4.0 283
why design?
• program structure
• completeness and consistency
• complexity management
• performance
• communication between programmers
• planning and organization of development work
• maintenance
• change readiness
traditional software design
[Figure: requirements and analysis feed the design process, which draws on design principles, patterns and architectural models to produce a design specification]
some design tasks
• design strategies - priorities and tradeoffs
• architecture design
• sub-system/module/component design
• detailed component design
• data structure (database)
• user interface design
• processing design
• algorithm design
• logical/physical design
system characteristics:
• performance
• security
• safety
• availability
• maintainability
design strategy
• trade-off of different desirable program characteristics in relation to design context and known principles, criteria, heuristics
quality criteria:
• usable
• secure
• efficient
• correct
• reliable
• …

modular design principles:
• information hiding
• cohesion (high)
• coupling (low)

modular design heuristics:
• evaluate the first design iteration to reduce coupling and improve cohesion
• strive for fan-in depth
• …
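The three modular design principles can be shown in a few lines of code. A hedged sketch; the Inventory class and all its names are invented for the illustration:

```python
# Hedged sketch of information hiding, high cohesion and low coupling.
class Inventory:
    """High cohesion: one responsibility - tracking stock levels."""
    def __init__(self):
        self._stock = {}          # hidden representation; could become a database
                                  # without affecting any caller

    def add(self, item, qty):
        self._stock[item] = self._stock.get(item, 0) + qty

    def available(self, item):
        return self._stock.get(item, 0)

def can_fulfil(inventory, item, qty):
    # low coupling: depends only on the public interface, so Inventory's
    # internals can change without touching this function
    return inventory.available(item) >= qty

inv = Inventory()
inv.add("widget", 5)
print(can_fulfil(inv, "widget", 3))   # True
print(can_fulfil(inv, "widget", 9))   # False
```

The point of the principles: change inside one module (the hidden `_stock` dict) never ripples into its clients.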
the traditional ideal: structured design - planned, top down, sequential, output dependence
[Figure: design activities and products - from the requirements specification, architectural design (system architecture) → abstract specification (software specification) → interface design (interface specification) → component design (component specification) → data structure design (data structure specification) → algorithm design (algorithm specification)]
the traditional ideal – design derived from analysis
[Figure: THE ANALYSIS MODEL (entity-relationship diagram, data flow diagram, state-transition diagram, data dictionary, process specification (PSPEC), control specification (CSPEC), data object descriptions) maps to THE DESIGN MODEL (data design, architectural design, interface design, procedural design)]
in practice
[Figure repeated: the analysis model mapping to the design model]
• stepwise refinement of the existing analysis model so that it can be programmed
• add many missing elements: plug-in components, interface, navigation, network communication, database communication etc.
• partition into separate modules, components
• structure the program according to performance requirements
• describe at several levels of abstraction for different stakeholders
10SOE: design
system architecture
11SOE: design
traditional ideal - deriving architectures from analysis models
transform mapping
transaction mapping
control hierarchy example
traditional ideal: rationality
12SOE: design
architecture in relation to design strategy
• performance: localise critical operations and minimise communications – use large rather than fine-grain components
• security: use a layered architecture with critical assets in the inner layers
• safety: localise safety-critical features in a small number of sub-systems
• availability: include redundant components and mechanisms for fault tolerance
• maintainability: use fine-grain, replaceable components
architecture trade-offs
• using large-grain components improves performance but reduces maintainability
• introducing redundant data improves availability but makes security more difficult
• localising safety-related features usually means more communication, so degraded performance
design considerations
• is there a generic application architecture that can be used?
• how will the system be distributed?
• what architectural styles are appropriate?
• what approach will be used to structure the system?
• how will the system be decomposed into modules?
• what control strategy should be used?
• how will the architectural design be evaluated?
• how should the architecture be documented?
traditional ideal: generic design structures imposed externally
• OOAD – generic architecture
• matches problem/application area division
13SOE: design
[component diagram: generic OOAD architecture – an Interface component (System interface, User interface), a Function component, a Model component, and a Technical platform component (UIS, DBS, NS)]
14SOE: design
architectural styles
repository
client server
layered
function oriented pipelining (pipes and filters)
traditional ideal – top down decomposition: detailed component design
15SOE: design
<variable> = <expression>

if <condition>
    do stuff;
else
    do other stuff;

while <condition>
    do stuff;

for <variable> from <first value> to <last value> by <step>
    do stuff with variable;

function <function name>(<arguments>)
    do stuff with arguments;
    return something;

<function name>(<arguments>)  // function call
pseudocode
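Each pseudocode construct above maps one-to-one onto a real language. A minimal sketch in Java (the `sumTo`/`classify` names and logic are invented for illustration):

```java
class PseudocodeDemo {
    // function <name>(<arguments>) ... return something;
    static int sumTo(int lastValue) {
        int total = 0;                        // <variable> = <expression>
        for (int i = 1; i <= lastValue; i++)  // for i from 1 to lastValue by 1
            total += i;                       // do stuff with variable
        return total;
    }

    static String classify(int n) {
        if (n % 2 == 0)                       // if <condition> do stuff
            return "even";
        else                                  // else do other stuff
            return "odd";
    }
}
```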
data dictionary
the design document
16SOE: design
alternatives and complementary techniques
• evolutionary design with refactoring (agile)
• low-tech design (contextual design)
• design pattern movement
• metaphors
• design standards
• guidebooks
17SOE: design
the design problem: traditional engineering solutions
• structured design, planned, sequential, output dependence
• top down design, decomposition
• design derived (quasi-algorithmically) from earlier analysis
• rational argumentation of design trade-offs based on known design principles
• generic design structures imposed from outside
• precise and detailed documentation that can later be used by a programmer with no prior knowledge of analysis
18SOE: design
SOE
Unified Process
Rational Unified Process
2SOE: Unified Process
UP: iterative
not (necessarily) agile
3SOE: Unified Process
• Unified Software Development Process – widely used industry-standard software engineering process
• commonly referred to as the "Unified Process" or UP
• generic process for the UML
• free – described in "The Unified Software Development Process", ISBN 0201571692
• UP is:
  • use case (requirements) driven
  • risk driven
  • architecture centric
  • iterative and incremental
• UP is a generic software engineering process – must be customised (instantiated) for your project
  • in-house standards, document templates, tools, databases, lifecycle modifications, …
• Rational Unified Process (RUP) is the commercial instantiation of UP
  • marketed and owned by Rational Corporation
  • also has to be instantiated for your project
Unified Process at a glance
SOE: Unified Process
[diagram: process workflows (Business Modeling, Requirements, Analysis & Design, Implementation, Test, Deployment) and supporting workflows (Configuration Mgmt, Management, Environment) plotted against the phases (Inception, Elaboration, Construction, Transition), with preliminary iteration(s) and iterations #1 … #m+1 within the phases]
iterations, phases, workflows, milestones
5SOE: Unified Process
[diagram: the phases (inception, elaboration, construction, transition) are divided into iterations (iter 1 … iter 6); each phase ends at a milestone – life-cycle objectives, life-cycle architecture, initial operational capability, product release; the 5 core workflows run within every iteration]
artefacts (work products), activities, workers (roles)
some prominent work products:
• vision: summary of objectives, features, business case
• software architecture document: short learning aid to understand the system
• test plan: summary of goals and methods of testing
• iteration plan: detailed plan for the next iteration
• change request: uniform way to track all requests for work, e.g. defects
6SOE: Unified Process
instantiation
7SOE: Unified Process
[diagram: instantiation examples – the workflows (business modelling, requirements, analysis and design, implementation, test, deployment, configuration management, management, environment) are tailored differently for green-field, maintenance, hot-fix and agile projects]
problems
• inaccurate understanding of end-user needs
• inability to deal with changing requirements
• modules don't integrate
• it is difficult to maintain or extend the software
• late discovery of flaws
• poor quality and performance of the software
• no coordinated team effort
• build-and-release issues

causes
• insufficient requirements specification and their ad hoc management
• ambiguous and imprecise communication
• brittle architecture
• overwhelming complexity
• undetected inconsistencies in requirements, design, and implementation
• poor and insufficient testing
• subjective assessment of project status
• failure to attack risk
• uncontrolled change propagation
• insufficient automation

UP practices
• develop software iteratively
• manage requirements
• use component-based architectures
• visually model software
• continuously verify software quality
• control changes to software
SOE: Unified Process
develop iteratively, manage requirements: case-driven development
in inception
• use case model outlined
• use cases briefly described
• use cases ranked
• elaboration iterations are planned and organized on the basis of the ranked use cases

in elaboration
• use cases are iteratively specified and realized

[diagram: elaboration iterations 1–3 – use case a progresses from sketch to full version; use cases b, f, g, … are taken up in later iterations]
SOE: Unified Process
develop iteratively: architecture-centric development
in inception
• initial candidate architecture based on requirements

in elaboration
• software architecture gradually defined
• software architecture finally baselined

architectural views considered: distribution, access control and security, storing persistent data, analysis of architectural factors, control flow

[diagram: elaboration iterations 1–3 progressively define the architecture]
SOE: Unified Process
4+1 view model of architecture
• logical view: an abstraction of the design model that identifies major design packages, subsystems and classes
• implementation view: an organization of static software modules (source code, data files, components, executables, and others …)
• process view: a description of the concurrent aspects of the system at runtime – tasks, threads, or processes as well as their interactions
• deployment view: various executables and other runtime components are mapped to the underlying platforms or computing nodes
• use-case view: key use cases and scenarios
11SOE: Unified Process
use component-based architectures
• component-based development
• resilient software architecture
• enables reuse of components from many available sources
• systems composed from existing parts, off-the-shelf third-party parts, (few) new parts that address the specific domain and integrate the other parts together
• iterative approach involves the evolution of the system architecture
• each iteration produces an executable architecture that can be measured, tested, and evaluated against the system requirements
SOE: Unified Process
visually model software (uml standard)
models:
• use-case diagrams
• class diagrams
• object diagrams
• sequence diagrams
• collaboration diagrams
• statechart diagrams
• activity diagrams
• component diagrams
• deployment diagrams
SOE: Unified Process
continuously verify software quality
• software problems are exponentially more expensive to find and repair after deployment than beforehand
• verifying system functionality involves creating a test for each key scenario that represents some aspect of required behavior
• since the system is developed iteratively, every iteration includes testing = continuous assessment of product quality
[graph: the cost of finding and repairing a defect rises steeply over time]
SOE: Unified Process
control changes to software
• the ability to manage change - making certain that each change is acceptable, and being able to track changes - is essential in an environment in which change is inevitable.
• maintaining traceability among elements of each release is essential for assessing and actively managing the impact of change.
• in the absence of disciplined control of changes, the development process degenerates rapidly into chaos.
SOE: Unified Process
16SOE: Unified Process
best practices summarized
• time-boxed iterations: avoid attempting large, up-front requirements
• strive for cohesive architecture and reuse existing components
• on large projects: requirements & core architecture developed by a small co-located team; then early team members divide into sub-project leaders
• continuously verify quality: test early, often, and realistically by integrating all software each iteration
• visual modeling: prior to programming, do at least some visual modeling to explore creative design ideas
• manage requirements: find, organize, and track requirements iteratively through skillful means with tool support
• manage change: disciplined configuration management and version control, change request protocol, base-lined releases at the end of each iteration
17SOE: Unified Process
RUP – overview

RUP – key elements: phases, iterations, workflows, activities, roles, artifacts
RUP – philosophy: iterations/increments, use case driven, architecture centered, visual (UML), configurable process, risk driven

[diagram: RUP lifecycle grid – process and supporting workflows (Business Modelling, Requirements, Analysis & Design, Implementation, Test, Deployment, Configuration Mgmt., Project Management, Environment) against the phases (Inception, Elaboration, Construction, Transition) with preliminary iterations and iterations I-1 … I-9; artifacts include the risk list, software architecture document, use case model, analysis model, design model, implementation model, deployment model, test model and iteration plan; roles include project manager, architect and user]
18SOE: Unified Process
bottom-up design: patterns
the pattern movement
2SOE: refactoring and patterns
pattern
• a pattern addresses a recurring problem that arises in specific situations
• patterns document existing, well-proven design experience
• patterns identify and specify abstractions that are above the level of single classes and instances
• patterns provide a common vocabulary and understanding for design principles
• patterns are a means of documenting software architectures
• patterns support the construction of software with defined properties
• patterns help you build complex and heterogeneous software architectures
• patterns help you manage software complexity
standard documentation form
• pattern name and classification: a descriptive and unique name that helps in identifying and referring to the pattern
• intent: a description of the goal behind the pattern and the reason for using it
• also known as: other names for the pattern
• motivation (forces): a scenario consisting of a problem and a context in which this pattern can be used
• applicability: situations in which this pattern is usable; the context for the pattern
• structure: a graphical representation of the pattern – class diagrams and interaction diagrams may be used for this purpose
• participants: a listing of the classes and objects used in the pattern and their roles in the design
• collaboration: a description of how classes and objects used in the pattern interact with each other
• consequences: a description of the results, side effects, and trade-offs caused by using the pattern
• implementation: a description of an implementation of the pattern; the solution part of the pattern
• sample code: an illustration of how the pattern can be used in a programming language
• known uses: examples of real usages of the pattern
• related patterns: other patterns that have some relationship with the pattern; discussion of the differences between the pattern and similar patterns
3SOE: refactoring and patterns
4SOE: refactoring and patterns
types of patterns
• analysis patterns
• design patterns/GRASP
• software architecture patterns
• organizational and process patterns
5SOE: refactoring and patterns
analysis patterns
• patterns that reflect the generic conceptual structure of business processes rather than actual software implementations
• simple, specialized notation (very similar to entity-relationship diagram notation)
[diagram: an analysis problem is matched to an analysis pattern]
'Analysis Patterns: Reusable Object Models', Martin Fowler
organisational and process patterns
• research into social and behavioural patterns in software firms which lead to successful outcomes
6SOE: refactoring and patterns
Coplien’s top ten patterns
• unity of purpose
• engage customers
• domain expertise in roles
• architect controls product
• distribute work evenly
• function owner and component owner
• mercenary analyst
• architect also implements
• firewalls
• developer controls process
design anti-patterns
• big ball of mud: a system with no recognizable structure
• database-as-IPC: using a database as the message queue for routine inter-process communication where a much more lightweight mechanism would be suitable
• gas factory: an unnecessarily complex design
• gold plating: continuing to work on a task or project well past the point at which extra effort is adding value
• inner-platform effect: a system so customizable as to become a poor replica of the software development platform
• input kludge: failing to specify and implement handling of possibly invalid input
• interface bloat: making an interface so powerful that it is extremely difficult to implement
• magic pushbutton: coding implementation logic directly within interface code, without using abstraction
• race hazard: failing to see the consequence of different orders of events
• stovepipe system: a barely maintainable assemblage of ill-related components
7SOE: refactoring and patterns
8SOE: refactoring and patterns
design patterns
• design patterns provide abstract, reusable "micro-architectures" that can be applied ("instantiated") to resolve specific design issues (forces) in previously-used, high-quality ways
• GoF (Gang of Four) categories:
  • creational – manage instantiation – can be further divided into class-creation patterns (use inheritance effectively) and object-creation patterns (use delegation)
  • structural – concern class and object composition – use inheritance to compose interfaces and define ways to compose objects to obtain new functionality
  • behavioural – concerned with communication between objects
GAMMA, E., HELM, R., JOHNSON, R. & VLISSIDES, J. (1995) Design Patterns, Boston, Addison-Wesley.
GoF creational patterns:
• abstract factory: groups object factories that have a common theme
• builder: constructs complex objects by separating construction and representation
• factory method: creates objects without specifying the exact class to create
• prototype: creates objects by cloning an existing object
• singleton: restricts object creation for a class to only one instance
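The singleton restriction above fits in a few lines. A minimal sketch (the `Configuration` class name is invented; this is the lazy-initialisation variant with a private constructor):

```java
// singleton: only one Configuration instance can ever exist
class Configuration {
    private static Configuration instance;   // the single instance

    private Configuration() { }              // private: no outside construction

    static synchronized Configuration getInstance() {
        if (instance == null)                // create lazily on first use
            instance = new Configuration();
        return instance;
    }
}
```

Every call to `Configuration.getInstance()` returns the same object, which is exactly the "only one instance" restriction the catalogue entry describes.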
9SOE: refactoring and patterns
GoF structural patterns:
• adapter: allows classes with incompatible interfaces to work together by wrapping its own interface around that of an already existing class
• bridge: decouples an abstraction from its implementation so that the two can vary independently
• composite: composes zero-or-more similar objects so that they can be manipulated as one object
• decorator: dynamically adds/overrides behaviour in an existing method of an object
• façade: provides a simplified interface to a large body of code
• flyweight: reduces the cost of creating and manipulating a large number of similar objects
• proxy: provides a placeholder for another object to control access, reduce cost, and reduce complexity
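As one worked instance of the structural patterns, here is a sketch of the adapter: an existing class with the "wrong" interface is wrapped behind the interface clients expect. All names (`Logger`, `LegacyLogger`, `LoggerAdapter`) are invented for the example:

```java
// the interface clients expect
interface Logger {
    String log(String message);
}

// existing class with an incompatible interface - cannot be changed
class LegacyLogger {
    String writeLine(String prefix, String text) {
        return prefix + ": " + text;
    }
}

// adapter: wraps its own interface around the existing class
class LoggerAdapter implements Logger {
    private final LegacyLogger legacy = new LegacyLogger();

    public String log(String message) {      // translate the call + arguments
        return legacy.writeLine("LOG", message);
    }
}
```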
10SOE: refactoring and patterns
GoF behavioral patterns: concerned with communication between objects
• chain of responsibility: delegates commands to a chain of processing objects
• command: creates objects which encapsulate actions and parameters
• interpreter: implements a specialized language
• iterator: accesses the elements of an object sequentially without exposing its underlying representation
• mediator: allows loose coupling between classes by being the only class that has detailed knowledge of their methods
• memento: provides the ability to restore an object to its previous state (undo)
• observer: a publish/subscribe pattern which allows a number of observer objects to see an event
• state: allows an object to alter its behavior when its internal state changes
• strategy: allows one of a family of algorithms to be selected on-the-fly at runtime
• template method: defines the skeleton of an algorithm as an abstract class, allowing its subclasses to provide concrete behavior
• visitor: separates an algorithm from an object structure by moving the hierarchy of methods into one object
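The strategy entry above ("one of a family of algorithms selected on-the-fly at runtime") can be sketched as follows. The pricing names are invented for illustration, loosely echoing the rental theme used later in these slides:

```java
// the family of algorithms, behind one interface
interface PricingStrategy {
    double charge(int daysRented);
}

class RegularPricing implements PricingStrategy {
    public double charge(int days) { return 2.0 * days; }
}

class DiscountPricing implements PricingStrategy {
    public double charge(int days) { return 1.5 * days; }
}

// the context selects (and can swap) the algorithm at runtime
class Rental {
    private PricingStrategy strategy;
    Rental(PricingStrategy s) { strategy = s; }
    void setStrategy(PricingStrategy s) { strategy = s; }  // on-the-fly swap
    double charge(int days) { return strategy.charge(days); }
}
```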
11SOE: refactoring and patterns
pattern use example
12SOE: refactoring and patterns
state pattern

[class diagram: Movie (charge()) holds a Price; Price (charge()) has subclasses ChildrensPrice, RegularPrice and NewReleasePrice; Movie.charge() returns priceCode.charge()]

singleton pattern
GRASP (General Responsibility Assignment Software Patterns)
• information expert: allocating responsibilities (methods, computed fields etc.) by determining which class has the most relevant data variables
• creator: determines which class should govern creation of new instances of classes in non-trivial situations. Given two classes (A,B), class B should be responsible for the creation of A if class B contains or compositely aggregates, records, closely uses or contains the initializing information for class A (see also factory)
• controller: assigns the responsibility of dealing with system events to a non-UI class that represents a use case scenario(s) - the first object beyond the UI layer that receives and coordinates a system operation. The controller should delegate to other objects the work that needs to be done
• low coupling: determines low dependency between classes, low impact in a class of changes in other classes and high reuse potential
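The "information expert" principle above can be made concrete: since the order holds its line items, responsibility for the total belongs to the order, and each line item is the expert on its own subtotal. `Order` and `LineItem` are illustrative names, not from the slides:

```java
import java.util.ArrayList;
import java.util.List;

class LineItem {
    private final double price;
    private final int quantity;
    LineItem(double price, int quantity) { this.price = price; this.quantity = quantity; }
    double subtotal() { return price * quantity; }  // expert on its own data
}

class Order {
    private final List<LineItem> items = new ArrayList<>();
    void add(LineItem item) { items.add(item); }
    double total() {                                // expert on the collection it holds
        double sum = 0;
        for (LineItem i : items) sum += i.subtotal();
        return sum;
    }
}
```

Note how this also yields low coupling: no other class needs access to the item list or the price fields to compute a total.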
13SOE: refactoring and patterns
GRASP 2
• high cohesion: the responsibilities of a given element are strongly related and highly focused
• polymorphism: responsibility for defining the variation of behaviors based on type is assigned to the types for which this variation happens
• pure fabrication: a class that does not represent a concept in the problem domain but is added to achieve low coupling, high cohesion, and the reuse potential thereof
• indirection: supports low coupling between two elements by assigning the responsibility of mediation between them to an intermediate object e.g. controller in MVC.
• protected variations: protects elements from variations on other elements by wrapping the focus of instability with an interface and using polymorphism to create various implementations of this interface.
14SOE: refactoring and patterns
architectural patterns
15SOE: refactoring and patterns
BUSCHMAN, F., MEUNIER, R., ROHNERT, H., SOMMERLAD, P. & STAL, M. (1996) Pattern-oriented Software Architecture, Chichester, Wiley.
“.....an architectural pattern expresses a fundamental structural organisation schema for software systems. It provides a set of predefined subsystems, specifies their responsibilities, and includes rules and guidelines for organizing the relationships between them”
• from mud to structure: layers, pipes and filters, blackboard
• distributed systems: broker
• interactive systems: model-view-controller, presentation-abstraction-control
• adaptable systems: microkernel, reflection
architectural patterns: from mud to structure

layers
• problem: large system with high- and low-level functions requiring decomposition
• structure: layer j provides services used by layer j+1 and delegates subtasks to layer j-1
• example: the OSI model; a typical IS layering – presentation, application logic, domain layer, database
• known uses: TCP protocol, information systems
16SOE: refactoring and patterns
pipes and filters
• problem: process or transform a data stream
• structure:
  • filter: collects, transforms and outputs data supplied by a pipe
  • pipe: transfers and buffers data, synchronizes with neighbours
  • data source: delivers data to a pipe
  • data sink: consumes output
• known uses: UNIX program compilation and documentation creation
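The filter/pipe structure above can be sketched with function composition: each filter transforms the stream and hands its output to the next. The `Filters` class and the trim/uppercase filters are invented for the sketch:

```java
import java.util.function.Function;

class Filters {
    // two independent filters over a stream of text
    static final Function<String, String> trim = String::trim;
    static final Function<String, String> upper = String::toUpperCase;

    // the "pipe": the output of one filter feeds the next
    static String run(String source) {
        return trim.andThen(upper).apply(source);
    }
}
```

This mirrors a UNIX shell pipeline such as `cat file | tr -d ' ' | tr a-z A-Z`, the known use named above.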
17SOE: refactoring and patterns
architectural patterns: from mud to structure

blackboard
• problem: no feasible deterministic solution for transforming data into high-level structures (diagrams, tables, language phrases)
• structure: a collection of independent programs that work co-operatively on common data
  • blackboard: central data store
  • knowledge source: evaluates its own applicability, computes a result, updates the blackboard
• known uses: speech and image recognition, vision, surveillance
architectural patterns: distributed systems
18SOE: refactoring and patterns
broker
• problem: manage distributed and possibly heterogeneous systems with independent operating components
• structure:
  • client: implements user functionality
  • server: implements services
  • broker: locates, registers and communicates with servers, interoperates with other brokers through bridges
  • client-side proxy: mediates between client and broker
  • server-side proxy: mediates between server and broker
  • bridge: mediates between the local broker and the bridge of a remote broker
• known uses: CORBA, WWW
architectural patterns: interactive systems
19SOE: refactoring and patterns
model-view-controller
• problem: interactive systems with a flexible and change-prone user interface
• structure:
  • model: provides central data and function logic
  • view: displays information to the user
  • controller: accepts inputs and makes service requests to the model and display requests to the view
• known uses: Smalltalk systems
architectural patterns: interactive systems
20SOE: refactoring and patterns
presentation-abstraction-control
• problem: interactive systems as a set of cooperating agents
• structure: a tree hierarchy of PAC agents, with one top-level agent; each agent has:
  • presentation: visible behaviour of the agent
  • abstraction: maintains data and provides core functionality
  • control: connects presentation and abstraction and communicates with other agents
• known uses: network traffic control
architectural patterns: adaptable systems
21SOE: refactoring and patterns
microkernel
• problem: application domains with broad spectrums of standards and programming technologies, and continuous hardware and software evolution; the software should be portable, extensible, adaptable
• structure:
  • microkernel: provides core services, manages resources and communication
  • internal server: implements additional services
  • external server: provides programming interfaces for clients
  • client: represents an application
  • adapter: hides system dependencies, invokes methods of external servers on behalf of clients
• known uses: Windows NT
architectural patterns: adaptable systems
22SOE: refactoring and patterns
reflection
• problem: systems exposed to changing technology and requirements that must support their own modification
• structure:
  • base level: implements the application logic using information from the meta level
  • meta level: encapsulates system internals that may change and provides an interface to facilitate modifications to the meta level
  • metaobject protocol: interface for specifying and performing changes to the meta level
• known uses: OLE 2.0
23SOE: refactoring and patterns
potential benefits of patterns
• provides a common vocabulary and understanding of design elements for software designers
• increases productivity in the design process due to design reuse
• promotes consistency and high quality of system designs and architectures due to the application of tested design expertise and solutions embodied by patterns
• allows all levels of designers, from novice to expert, to gain these productivity, quality and consistency benefits
24SOE: refactoring and patterns
concerns
• benefits are dependent upon architects, analysts and designers understanding the patterns to be used – the common "design vocabulary"
• such training can be costly, and in many cases is proprietary and cannot be obtained externally
• specific funding and effort must be directed toward maintenance and evolution of patterns as reusable assets, or they tend to devolve into project/application-specific artifacts with dramatically reduced reusability
• promotes design culture at the expense of analysis culture – less focus on responding adequately and accurately to specific user domains
25SOE: refactoring and patterns
patterns and refactoring work together: bottom-up design – refactoring through pattern development

design style, abstraction level
26SOE: refactoring and patterns

[diagram: design style (design as model vs design as code) against abstraction level (architecture vs detailed design) – architectural patterns sit at the model/architecture corner, design patterns and GRASP between, and refactoring at the code/detailed-design corner, the agile direction]
traditional and agile design styles compared
27SOE: refactoring and patterns
design assumptions   traditional                            agile
style                top down                               bottom up
starts with          modelling                              programming
process              grand design up front                  evolutionary design
responsible          architect, designers                   programmers
based upon           user domain analysis models            generic design patterns
outcome              design document                        program
weakness             separation of design and programming   absence of early overview, unspecific user domain understandings
SOE
bottom-up design - refactoring and patterns
2SOE: refactoring and patterns
[diagram: design style (design as model vs design as code) against abstraction level (architecture vs detailed design) – model examples: screen sketch, object model, ERM; code examples: screen code, object code, database tables; architecture-level examples: components, modules, architecture; detailed-design examples: algorithm design, data validation, exception handling]
design style, abstraction level
3SOE: refactoring and patterns
[diagram: traditional design occupies the model/architecture quadrant; agile design the code/detailed-design quadrant]
top down design - MDA (Model Driven Architecture)
4SOE: refactoring and patterns
[diagram: an executable UML model is compiled, via archetypes, to platforms such as J2EE and .NET]
bottom-up design – refactoring
'Refactoring: Improving the Design of Existing Code', Martin Fowler, Addison-Wesley
bottom up design: refactoring
• refactoring (noun): a change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior
• refactor (verb): to restructure software by applying a series of refactorings
• refactoring is not:
  • debugging
  • mending business logic
  • adding new functionality
6SOE: refactoring and patterns
Martin Fowler (and Kent Beck, John Brant, William Opdyke, Don Roberts), Refactoring - Improving the Design of Existing Code, Addison Wesley, 1999
7SOE: refactoring and patterns
refactoring example: duplicated code

before (the listener set-up is repeated in every case):

case 0:
    activePiece = RightHook.getRightHook();
    ml = new MoveListener(activePiece);
    gameBoard.addKeyListener(ml);
    break;
case 1:
    activePiece = LeftHook.getLeftHook();
    ml = new MoveListener(activePiece);
    gameBoard.addKeyListener(ml);
    break;
case 2:
    activePiece = RightRise.getRightRise();
    ml = new MoveListener(activePiece);
    gameBoard.addKeyListener(ml);
    break;
case 3:
    activePiece = LeftRise.getLeftRise();
    ml = new MoveListener(activePiece);
    gameBoard.addKeyListener(ml);
    break;
// more cases ...

after (the duplicated lines are moved out of the switch):

case 0: activePiece = RightHook.getRightHook(); break;
case 1: activePiece = LeftHook.getLeftHook(); break;
case 2: activePiece = RightRise.getRightRise(); break;
case 3: activePiece = LeftRise.getLeftRise(); break;
case 4: activePiece = Hill.getHill(); break;
case 5: activePiece = StraightPiece.getStraightPiece(); break;
case 6: activePiece = Square.getSquare(); break;
}
ml = new MoveListener(activePiece);
gameBoard.addKeyListener(ml);
8SOE: refactoring and patterns
why refactor?
• refactoring improves design: without refactoring, even a well designed program will decay ('go sour') as programmers make changes
• refactoring makes software easier to understand: understandable code is easier to update and maintain
• refactoring helps find bugs: refactoring involves clarification and re-shaping through better understanding
• refactoring speeds up programming: good design ensures comprehensibility and fewer changes as new functionality is added
9SOE: refactoring and patterns
bad smells
• duplicate code: identical or very similar code exists in more than one location
• large method: a method, function, or procedure that has grown too large
• large class: a class that has grown too large (god object)
• feature envy: a class that uses methods of another class excessively
• inappropriate intimacy: a class that has dependencies on implementation details of another class
• data clumps: a set of variables that seem to "hang out" together – e.g. often passed as parameters, changed/accessed at the same time
• primitive obsession: all subparts of an object are instances of primitive types (int, string, bool, double, etc.)
• refused bequest: a class that overrides a method of a base class in such a way that the contract of the base class is not honored by the derived class
• lazy class: a class that does too little
• duplicated method: a method, function, or procedure that is very similar to another
• contrived complexity: forced usage of overly complicated design patterns where a simpler design would suffice
• …
refactoring (n): a standard way of improving poor code
10SOE: refactoring and patterns
• Add Parameter
• Change Bidirectional Association to Unidirectional
• Change Reference to Value
• Change Unidirectional Association to Bidirectional
• Change Value to Reference
• Collapse Hierarchy
• Consolidate Conditional Expression
• Consolidate Duplicate Conditional Fragments
• Convert Dynamic to Static Construction by Gerard M. Davison
• Convert Static to Dynamic Construction by Gerard M. Davison
• Decompose Conditional
• Duplicate Observed Data
• Eliminate Inter-Entity Bean Communication (Link Only)
• Encapsulate Collection
• Encapsulate Downcast
• Encapsulate Field
• Extract Class
• Extract Interface
• Extract Method
• Extract Package by Gerard M. Davison
• Extract Subclass
• Extract Superclass
• Form Template Method
• Hide Delegate
• Hide Method
• Hide presentation tier-specific details from the business tier (Link Only)
• Inline Class
• Inline Method
• Inline Temp
• Introduce A Controller (Link Only)
• Introduce Assertion
• Introduce Business Delegate (Link Only)
• Introduce Explaining Variable
• Introduce Foreign Method
• Introduce Local Extension
• Introduce Null Object
• Introduce Parameter Object
• Introduce Synchronizer Token (Link Only)
• Localize Disparate Logic (Link Only)
• Merge Session Beans (Link Only)
• Move Business Logic to Session (Link Only)
• Move Class by Gerard M. Davison
• Move Field
• Move Method
• Parameterize Method
• Preserve Whole Object
• Pull Up Constructor Body
• Pull Up Field
• Pull Up Method
• Push Down Field
• Push Down Method
• Reduce Scope of Variable by Mats Henricson
• Refactor Architecture by Tiers (Link Only)
• Remove Assignments to Parameters
• Remove Control Flag
• Remove Double Negative by Ashley Frieze and Martin Fowler
• Remove Middle Man
• Remove Parameter
• Remove Setting Method
• Rename Method
• Replace Array with Object
• Replace Assignment with Initialization by Mats Henricson
• Replace Conditional with Polymorphism
• Replace Conditional with Visitor by Ivan Mitrovic
• Replace Constructor with Factory Method
• Replace Data Value with Object
• Replace Delegation with Inheritance
• Replace Error Code with Exception
• Replace Exception with Test
• Replace Inheritance with Delegation
• Replace Iteration with Recursion by Dave Whipp
• Replace Magic Number with Symbolic Constant
• Replace Method with Method Object
• Replace Nested Conditional with Guard Clauses
• Replace Parameter with Explicit Methods
• Replace Parameter with Method
• Replace Record with Data Class
• Replace Recursion with Iteration by Ivan Mitrovic
• Replace Static Variable with Parameter by Marian Vittek
• Replace Subclass with Fields
• Replace Temp with Query
• Replace Type Code with Class
• Replace Type Code with State/Strategy
• Replace Type Code with Subclasses
• Reverse Conditional by Bill Murphy and Martin Fowler
• Self Encapsulate Field
• Separate Data Access Code (Link Only)
• Separate Query from Modifier
• Split Loop by Martin Fowler
• Split Temporary Variable
• Substitute Algorithm
• Use a Connection Pool (Link Only)
• Wrap entities with session (Link Only)
bad smell
very simple example
11SOE: refactoring and patterns
bad smell: data clump
refactoring: extract class
title class: titleText, titleX, titleY, titleColour
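The clump above can be removed with the extract class refactoring. A minimal Java sketch, assuming invented Title and Chart classes; the four field names come from the slide:

```java
// Before: titleText, titleX, titleY, titleColour travel together through
// every class that draws a title (a data clump). After Extract Class they
// live in one place. "Title" and "Chart" are illustrative names.
class Title {
    private final String text;
    private final int x;
    private final int y;
    private final String colour;

    Title(String text, int x, int y, String colour) {
        this.text = text;
        this.x = x;
        this.y = y;
        this.colour = colour;
    }

    String describe() {
        return text + " at (" + x + "," + y + ") in " + colour;
    }
}

class Chart {
    private final Title title;               // one field instead of four

    Chart(Title title) { this.title = title; }

    String header() { return title.describe(); }
}
```

Every caller now passes one Title object instead of four loose values, so a later change (say, adding a font) touches only the extracted class.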
class Customer extends DomainObject {
    public Customer(String name) {
        _name = name;
    }
    public String statement() {
        double totalAmount = 0;
        int frequentRenterPoints = 0;
        Enumeration rentals = _rentals.elements();
        String result = "Rental Record for " + name() + "\n";
        while (rentals.hasMoreElements()) {
            double thisAmount = 0;
            Rental each = (Rental) rentals.nextElement();
            //determine amounts for each line
            switch (each.tape().movie().priceCode()) {
                case Movie.REGULAR:
                    thisAmount += 2;
                    if (each.daysRented() > 2)
                        thisAmount += (each.daysRented() - 2) * 1.5;
                    break;
                case Movie.NEW_RELEASE:
                    thisAmount += each.daysRented() * 3;
                    break;
                case Movie.CHILDRENS:
                    thisAmount += 1.5;
                    if (each.daysRented() > 3)
                        thisAmount += (each.daysRented() - 3) * 1.5;
                    break;
            }
            totalAmount += thisAmount;
            // add frequent renter points
            frequentRenterPoints++;
            // add bonus for a two day new release rental
            if ((each.tape().movie().priceCode() == Movie.NEW_RELEASE) && each.daysRented() > 1)
                frequentRenterPoints++;
            //show figures for this rental
            result += "\t" + each.tape().movie().name() + "\t" + String.valueOf(thisAmount) + "\n";
        }
        //add footer lines
        result += "Amount owed is " + String.valueOf(totalAmount) + "\n";
        result += "You earned " + String.valueOf(frequentRenterPoints) + " frequent renter points";
        return result;
    }
    public void addRental(Rental arg) {
        _rentals.addElement(arg);
    }
    public static Customer get(String name) {
        return (Customer) Registrar.get("Customers", name);
    }
    public void persist() {
        Registrar.add("Customers", this);
    }
    private Vector _rentals = new Vector();
}
Fowler’s video shop example
12SOE: refactoring and patterns
[class diagram: Movie 1—1 Price; Movie.charge() returns priceCode.charge(); Price subclasses ChildrensPrice, RegularPrice and NewReleasePrice each override charge()]
refactor: replace condition with polymorphism
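The diagram can be sketched in code. The charging rules below are taken from the switch statement in Fowler's statement() listing; moving them into Price subclasses is the replace conditional with polymorphism refactoring. The simplified Movie constructor is for illustration only:

```java
// Each Price subclass owns its own charging rule, replacing the
// switch on priceCode in Customer.statement().
abstract class Price {
    abstract double charge(int daysRented);
}

class RegularPrice extends Price {
    double charge(int daysRented) {
        double amount = 2;
        if (daysRented > 2) amount += (daysRented - 2) * 1.5;
        return amount;
    }
}

class NewReleasePrice extends Price {
    double charge(int daysRented) { return daysRented * 3; }
}

class ChildrensPrice extends Price {
    double charge(int daysRented) {
        double amount = 1.5;
        if (daysRented > 3) amount += (daysRented - 3) * 1.5;
        return amount;
    }
}

class Movie {
    private final Price priceCode;
    Movie(Price priceCode) { this.priceCode = priceCode; }
    double charge(int daysRented) { return priceCode.charge(daysRented); } // delegate
}
```

A new price category now means adding one subclass, not editing every switch that mentions price codes.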
common refactorings
• push down method - ‘behaviour on a super class is relevant only for some of its subclasses’ - the method is moved to those subclasses
• extract subclass - ‘a class has features that are used only in some instances’ - a subclass is created for that subset of features
• encapsulate field - the declaration of a field is changed from public to private
• hide method - ‘a method is not used by any other class’ - the method should be made private
• pull up field - ‘two subclasses have the same field’ - the field in question should be moved to the super class
• extract super class - ‘two classes with similar features’ - in this case, create a super class and move the common features to the super class
13SOE: refactoring and patterns
common refactorings
• push down field - ‘a field is used only by some subclasses’ - the field is moved to those subclasses
• pull up method - ‘methods with identical results in subclasses’ - the methods should be moved to the super class
• move method - ‘a method is, or will be, using or used by more features of another class than the class on which it is defined’
• move field - ‘a field is, or will be, used by another class more than the class on which it is defined’
• rename method - a method is renamed to make its purpose more obvious
• rename field - a field is renamed to make its purpose more obvious
14SOE: refactoring and patterns
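As an illustration of pull up method from the list above, a minimal sketch with invented Employee/Engineer/Salesperson classes: a method that had identical bodies in both subclasses now lives once in the superclass.

```java
// After Pull Up Method: label() was duplicated in Engineer and
// Salesperson; it has been moved up to Employee. Names are illustrative.
abstract class Employee {
    protected final String name;
    Employee(String name) { this.name = name; }
    String label() { return "employee: " + name; }  // pulled up
}

class Engineer extends Employee {
    Engineer(String name) { super(name); }
}

class Salesperson extends Employee {
    Salesperson(String name) { super(name); }
}
```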
refactoring principles
• make changes small and methodical
• follow design patterns
• have your test suites ready
• refactoring introduces new bugs - sometimes in unrelated parts of the programme
• run test suites each time you refactor
• clean before you add new functionality
15SOE: refactoring and patterns
two dubious refactoring practices
refactoring the architecture
refactorings: ‘extract package’, ‘refactor architecture by tiers’
• large-scale decisions about the organisation of the program taken too late in the construction because of insufficient overview
• liable to cause chaos unless extremely well thought out
the refactoring sprint(s)
• some weeks in the life of the project, paid for by the customer, where the only purpose is to rectify relatively serious design problems caused by lack of real understanding of the user domain and too early commitment to programming
16SOE: refactoring and patterns
traditional view of refactoring
• refactoring reflects poor analysis and design work - do it right first time
• refactoring is expensive - all prior documentation must be altered
• refactoring introduces new errors in unpredictable places
• refactoring threatens architectural integrity
• customers expect new functionality and will not pay for rework
• refactoring does not extend value - if it ain’t broke don’t fix it
17SOE: refactoring and patterns
Software Engineering
participative design:contextual design – Beyer + Holtzblatt
overview
• a system design method designed and used by software consultants
• focused on users’ work
• empirically based
• participative (Scandinavian tradition)
• simple analysis and design techniques
• designing user environments
• simple prototyping strategy
• problem analysis tools
• solution design strategies
• small learning curve
analysis tools
• design/re-design of systems: work, information, communication, computer
problem analysis: work oriented
five analysis models:• (information) flow• sequence• (information) artifacts• culture• physical workspace
flow model
actors- task, responsibility
information flow
artifact
sequence
intent:trigger:
task, stepsequence, order
breakdown
information artifact model
•information, data
•distinct parts
•structure
•annotations, informal use
•presentation
•usage
•breakdown
culture
influencer
influence direction
extent and nature of influence
roles, norms, values
physical surroundings
contextual design
• vision• storyboard• work redesign• user environment design• paper prototype
vision
• a diagrammatic representation of the new way of working and communicating when the new systems and services are implemented
• can include; users, other actors, information flows, screens, databases and other things as required
storyboard
• a sequential depiction of the use of the new system and services, divided into its principal stages
• can include; users, other actors, information flows, screens, databases and other things as required
work design (modified use case)
the major actors (user, citizen, role)
their interaction with the major functions in the system
the work process behind the interactions
[diagram: roles interacting with system functions, each interaction backed by a work process]
user environment design
• a graphical representation of the major focus areas of the system, usually divided by function
• functions: what the user can do in the system
• overview: information displayed on screen
• links: connections to other places in the system
• objects: principal work ”objects” which may later be coded
paper prototype
• drawing of a screen
• shows divisions of screen, menu items, data entry possibilities, buttons, scroll bars, links, pictures, graphics, computer visualizations (or anything else that a user can see in a system)
Software Engineering
agile or traditional?
a development approach
• process model
• associated tools, techniques, practices
• characteristics
• a set of (unspoken) assumptions about the nature of software development and its contexts
traditional - process model: linear, iterative; tools, techniques, practices: analysis and design techniques, CASE tool, etc
agile - process model: iterative; tools, techniques, practices: stand-up meeting, pair programming, time-boxing, etc
2SOE: traditional v. agile
a project situation
• development project circumstances, factors, conditions
• size (FP, LOC, project length, no. of developers)
• complexity• environmental dynamism• technology platform• software type• developer organization history• team members’ work style• customer organization history• user domain complexity• user profile and accessibility• contracting and delivery needs• maintenance expectations• ..........
3SOE: traditional v. agile
traditional | agile
development approach
development approach characteristics:ceremony and cycles (Larman)
4soe: agile or traditional
[chart: two axes - cycles, from waterfall strict (no iteration) to many short iterations, and ceremony, from formal steps with many documents to few steps with few documents; XP, SCRUM and UP are plotted on these axes]
project characteristics: uncertainty
5SOE: traditional v. agile
project characteristics: complexity
6SOE: traditional v. agile
project situation: complexity, uncertainty
7SOE: traditional v. agile
[chart: complexity (low to high) against uncertainty (low to high)]
project situations: complexity and uncertainty
8
SOE: traditional v. agile
From: Little
Analysis → Design → Implementation → Evaluation
traditional development (waterfall)
• single pass, sequential
• progresses through requirements analysis, design, coding, testing, integration
• document-based criteria between stages
incremental development
• all requirements and preliminary architectural design determined up front
• separate increments addressing subsets of requirements
prototyping
• quick initial working model
• cycle: partial requirements gathering with (some) stakeholders - quick design - prototyping - evaluation
• full-scale model and functional form - but only on part of the system
• I’ll Know It When I See It (IKIWISI)
agile development
• iterative
• adaptive process
• light-weight
• intensive communication between developers and customer
• programming focus
• XP, SCRUM …
evolutionary development
• iterative
• expanding increments of operational product
• evolution determined by operational experience
• development begun on the most visible aspect
spiral model
• risk-driven variation of evolutionary development
• can function as a superset process model
• risk-based transition criteria between stages
commonly used development approaches
9SOE: traditional v. agile
[diagram (Boehm and Turner): development approaches placed on one dimension from undisciplined/adaptive to disciplined/predictive - code and fix at one extreme, waterfall at the other, with prototyping, evolutionary development, the spiral model, the Unified Process and the agile methods in between; agile lies toward the adaptive end, traditional (plan-driven) toward the predictive end]
SOE: traditional v. agile
development approaches in one dimension (Boehm and Turner)
agile v. traditional (plan-driven) differences
• application: project goals, size, environment; velocity
• management: customer relations, planning and control, communications
• technical: how the software is developed
• personnel: the type and competency of developers and stakeholders
11SOE: traditional v. agile
From: Boehm & Turner
12
agile vs. traditional characteristics
characteristics | agile | traditional (plan-driven)

application
primary goals | rapid value; responding to change | predictability, stability, high assurance
size | smaller teams and projects | larger teams and projects
environment | turbulent; high change; project-focused | stable; low-change; project/organization focused

management
customer relations | dedicated on-site customers, where feasible; focused on prioritized increments | as-needed customer interactions; focused on contract provisions; increasingly evolutionary
planning/control | internalized plans; qualitative control | documented plans, quantitative control
communications | tacit interpersonal knowledge | explicit documented knowledge
SOE: traditional v. agile
13
characteristics | agile | traditional (plan-driven)

technical
requirements | prioritized informal stories and test cases; undergoing unforeseeable change | formalized project, capability, interface, quality, foreseeable evolution requirements
development | simple design; short increments; refactoring assumed inexpensive | architect for parallel development; longer increments; refactoring assumed expensive
test | executable test cases define requirements | documented test plans and procedures

personnel
customers | dedicated, collocated crack* performers | crack* performers, not always collocated
developers | at least 30% full-time Cockburn level 2 and 3 experts; no level 1b or -1 personnel** | 50% Cockburn level 3s early; 10% throughout; 30% level 1b’s workable; no level -1s**
culture | comfort and empowerment via many degrees of freedom (thriving on chaos) | comfort and empowerment via framework of policies and procedures (thriving on order)

* collaborative, representative, authorized, committed, knowledgeable
** these numbers will particularly vary with the complexity of the application
SOE: traditional v. agile
sweet spots
14
SOE: traditional v. agile
From: Boehm
Boehm: critical factors
15SOE: traditional v. agile
Factor | Agility discriminators | Plan-driven discriminators

Size | well matched to small products and teams; reliance on tacit knowledge limits scalability | methods evolved to handle large products and teams; hard to tailor down to small projects
Criticality | untested on safety-critical products; potential difficulties with simple design and lack of documentation | methods evolved to handle highly critical products; hard to tailor down efficiently to low-criticality products
Dynamism | simple design and continuous refactoring are excellent for highly dynamic environments, but present a source of potentially expensive rework for highly stable environments | detailed plans and “big design up front” excellent for highly stable environments, but a source of expensive rework for highly dynamic environments
Personnel | require continuous presence of a critical mass of scarce Cockburn Level 2 or 3 experts; risky to use non-agile Level 1B people | need a critical mass of scarce Cockburn Level 2 and 3 experts during project definition, but can work with fewer later in the project (unless the environment is highly dynamic); can usually accommodate some Level 1B people
Culture | thrive in a culture where people feel comfortable and empowered by having many degrees of freedom (thriving on chaos) | thrive in a culture where people feel comfortable and empowered by having their roles defined by clear policies and procedures (thriving on order)
polar chart of factors
16SOE: traditional v. agile
development approach assumptions
17SOE: traditional v. agile
traditional | assumption | agile
make (internal) | locus of effort | buy (external)
linear | process model | cyclical (iterative)
rational analytical | approach | experimental
specification | expression form | prototype
all at once | delivery | incremental
expert-driven | user involvement | user-driven
process | developer focus | product
delivery | developer-customer relationship | co-operation
predictive | uncertainty management | adaptive
all lifecycle phases | completeness | chosen lifecycle phases
experienced | developer maturity | naive
NERUR, S., MAHAPATRA, R. K. & MANGALARAJ, G. (2005) Challenges of migrating to agile methodologies. Communications of the ACM, 48, 72-78.
18SOE: traditional v. agile
broader assumptions
| traditional | agile
the world (user domain) is … | stable and governed by rules (natural science) | changing and governed by relationships between people (social science)
development is the … | application of an externally imposed algorithm (the design method) | stepwise refinement of a local design solution
a complex problem is solved by … | rational analysis | informed trial and error/experiment
knowledge exchange is … | formal and documented | informal, by direct communication
…
19SOE: traditional v. agile
the normative ideals v. software development in practice
• normative ideal - designers of the development approach direct engineers’ actions
• in practice - engineers make informed decisions about their practice
• in theory - traditional and agile methods are incompatible
• in practice - much overlap
• now it’s up to you
20SOE: traditional v. agile
SOE
project management and planning
estimating and scheduling
scope
estimate
managerisk
schedule
control
the economics of software
the prime objective of a software firm = make software
2SOE: project management
project cost = fixed costs + developer person days
the prime objective of a software firm = make money
project income = actual revenue – actual project cost
expected revenue = estimated project cost + desired profit margin
current project value = code value to date – all other costs
actual project cost = estimated project cost + all overspend
conclusion: in order to make money the project must be correctly estimated and scheduled, and managed without significant overspend
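The identities above can be worked through with invented figures - say a project estimated at 100 developer days at a day rate of 5,000, with 50,000 in fixed costs and a 25% desired profit margin:

```java
// Illustrative numbers only: all rates, costs and margins here are
// made up to exercise the slide's identities.
class ProjectEconomics {
    static double estimatedCost(double fixedCosts, int personDays, double dayRate) {
        return fixedCosts + personDays * dayRate;     // project cost
    }
    static double expectedRevenue(double estimatedCost, double margin) {
        return estimatedCost * (1 + margin);          // cost + profit margin
    }
    static double income(double revenue, double actualCost) {
        return revenue - actualCost;                  // project income
    }
}
```

With these figures the estimated cost is 550,000, the expected revenue 687,500, and a 10% overspend (actual cost 605,000) still leaves an income of 82,500 - but the margin shrinks quickly, which is the slide's point about overspend.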
reminder: the project management problem
• nearly two thirds of projects significantly overrun their cost estimates (Lederer and Prasad 1992)
• the average project exceeds its schedule by 100% (Standish 2001)
• good project management is a question of business survival
• many software firms have as their primary goal: deliver on time
3SOE: project management
the project start
• what will we get?• what will it cost?• when will it be finished?
4SOE: project management
• the prime PM objectives:
1. make money
2. keep the customer happy
3. deliver on time
4. build good quality software
5. keep the team happy
[diagram: the project triangle - scope (features, functionality), resources (cost, budget), schedule (time) - with the customer in the middle]
5SOE: project management
the purpose of planning
• reduce uncertainty
• ensure efficient resource use
• monitor progress
• establish confidence and trust
• allocate roles and tasks
• support future decision making
• ensure the project earns money
• improve future planning
• generate and communicate overview of project

the planning horizon
[diagram: uncertainty grows with time; the plan and its execution rely on prediction within the planning horizon and adaptation beyond it]
6SOE: project management
traditional - predictive planning
agile - adaptive planning
7SOE: project management
the tasks
• scope - understand the problem and the work that must be done
• estimate - how much effort? how much time?
• manage risk - what can go wrong? how can we avoid it? what can we do about it?
• schedule - how do we allocate resources along the timeline? what are the milestones?
• control - how do we control quality? how do we control change? how do we manage developments in schedule and estimate and unforeseen risks?
scope
estimate
managerisk
schedule
control
predictive planning (traditional)
8SOE: project management
scope
estimate
managerisk
schedule
control
9SOE: project management
scope
estimate
managerisk
schedule
control
software scoping
• functionality decomposition
• data and processing demands
• interfaces and reports
• performance, security and reliability constraints
• understand the customers’ needs
• understand the business context
• understand the project boundaries
• understand the customer’s motivation
• understand the likely paths for change
• feasibility report
• 2-part contracting
• preliminary requirements analysis
• the more information, the more accurate the estimation
• scoping and contract documentation
estimation
• the intelligent anticipation of the amount of work that needs to be performed and the resources needed to perform it
• software size (LOC, function or object points)
• development: effort (person days, hours, months), cost, time
10SOE: project management
scope
estimate
managerisk
schedule
control
• for example:
• how big is the software?
• how much effort is needed for this much software?
• what will that much effort cost and how long will it take?
software size metrics
• Lines Of Code
• Function Points
n....... user inputs
n....... user outputs
n....... user inquiries
n....... files
n....... external interfaces
• adjust for weighting and ‘complexity adjustment values’, empirically derived constants
• Feature Points (where algorithmic complexity is high)
• Object Points
11SOE: project management
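A sketch of the function point computation, assuming the commonly cited average-complexity weights (inputs ×4, outputs ×5, inquiries ×4, files ×10, interfaces ×7) and the usual adjustment FP = count total × (0.65 + 0.01 × sum of the 14 complexity adjustment values); the example counts in the test are invented:

```java
// Unadjusted count: each counted element times an empirically derived
// weight (average-complexity weights shown here). The final factor
// applies the complexity adjustment values (0 to 5 each, 14 in all).
class FunctionPoints {
    static double compute(int inputs, int outputs, int inquiries,
                          int files, int interfaces, int adjustmentSum) {
        double countTotal = inputs * 4 + outputs * 5 + inquiries * 4
                          + files * 10 + interfaces * 7;
        return countTotal * (0.65 + 0.01 * adjustmentSum);
    }
}
```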
scope
estimate
managerisk
schedule
control
function point to lines of code:
ada 95: 49
assembly: 320
c: 128
c++: 55
fortran 95: 71
java: 53
pascal: 91
visual basic 5.0: 29
………………
Jones: ”Applied Software Measurement ..” Mcgraw-Hill 1996
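Using the table, a function point estimate converts directly to a size estimate, e.g. about 53 LOC per function point for Java:

```java
// Convert a function-point estimate to a LOC estimate with a
// language factor from the table (e.g. 53 for Java).
class SizeEstimate {
    static int loc(double functionPoints, int locPerFp) {
        return (int) Math.round(functionPoints * locPerFp);
    }
}
```

So an estimate of 350 FP implemented in Java suggests roughly 18,550 LOC.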
12SOE: scheduling and estimating
baseline metrics:
• local historical - collected from a series of projects
• researcher produced - collected from analysis of many projects
• constants
• adjustment factors
scope
estimate
managerisk
schedule
control
estimation strategies
• process-based estimation
• use case based estimation
• algorithmic (empirical) cost modelling
• expert judgement
• estimation by analogy
• Parkinson’s law
• pricing to win
• error margins can be large: use two methods
13SOE: project management
scope
estimate
managerisk
schedule
control
process-based decomposition and estimation
14SOE: project management
scope
estimate
managerisk
schedule
control
activities: customer communication (CC), planning, risk analysis, engineering (analysis, design), release (code, test)

function | anal. | design | code | test | totals
a | 0.75 | 2.50 | 0.40 | 5.00 | 8.65
b | 0.50 | 4.00 | 0.60 | 2.00 | 7.10
c | 0.50 | 4.00 | 1.00 | 3.00 | 8.50
d | 0.50 | 3.00 | 1.00 | 1.50 | 6.00
e | 0.25 | 2.00 | 0.75 | 1.50 | 4.50
f | 0.50 | 2.00 | 0.50 | 2.00 | 5.00
totals | 3.00 | 17.50 | 4.25 | 15.00 | 34.80 (with CC 0.25, planning 0.25, risk analysis 0.25)
% effort | 7% | 45% | 12% | 40% | (CC, planning, risk analysis: 1% each)
use case based estimation
• highly structured use cases divided into scenarios
15SOE: project management
scope
estimate
managerisk
schedule
control
LOC = N × LOCavg + [(Sa/Sh − 1) + (Pa/Ph − 1)] × LOCadjust

N = actual number of use-cases
LOCavg = historical average LOC per use-case for this type of system
Sa = actual scenarios per use-case
Sh = average scenarios per use-case for this type of system
Pa = actual pages per use-case
Ph = average pages per use-case for this type of system
LOCadjust = represents an adjustment based on n percent of LOC, where n is defined locally and represents the difference between this project and “average” projects
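The formula transcribes directly into code; every input is one of the baseline values defined above:

```java
// Direct transcription of the use-case estimation formula:
// LOC = N * LOCavg + ((Sa/Sh - 1) + (Pa/Ph - 1)) * LOCadjust
// The correction term inflates (or deflates) the estimate when this
// project's use-cases are longer (or shorter) than the baseline.
class UseCaseEstimate {
    static double loc(int n, double locAvg,
                      double sa, double sh, double pa, double ph,
                      double locAdjust) {
        return n * locAvg + ((sa / sh - 1) + (pa / ph - 1)) * locAdjust;
    }
}
```

For example (invented baseline): 10 use-cases at 560 LOC each, with 12 actual vs. 10 average scenarios and 5 actual vs. 4 average pages, and LOCadjust = 300, gives 5,735 LOC.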
algorithmic (empirical) modelling
• empirically derived constants from research-oriented historical baseline data
• example structure: E = A + B × (ev)^C
• E = effort
• ev = estimation value (e.g. FP)
• A, B, C are empirically derived constants
• various research-based models in the literature with considerable outcome variation
16SOE: project management
scope
estimate
managerisk
schedule
control
17SOE: scheduling and estimating
COCOMO 2Boehm et al: ”Software Cost Estimation with Cocomo II” Prentice Hall 2000
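A sketch of the model shape E = A + B × (ev)^C; the constant values used below are placeholders, not taken from any published model - real models such as COCOMO II calibrate A, B and C against historical project data:

```java
// Generic algorithmic cost model: effort grows (usually slightly
// super-linearly, C > 1) with the estimation value ev, e.g. FP or KLOC.
// A, B, C here are illustrative, not calibrated constants.
class AlgorithmicModel {
    static double effort(double ev, double a, double b, double c) {
        return a + b * Math.pow(ev, c);
    }
}
```

Because the models in the literature use different calibrations, the same ev can yield widely different efforts - the slide's point about outcome variation.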
expert estimation
• variations of the Delphi technique
• combine and improve the estimates of several experienced estimators
18SOE: project management
scope
estimate
managerisk
schedule
control
estimation by analogy
19SOE: project management
scope
estimate
managerisk
schedule
control
[diagram: the current project is compared with several similar past projects on size, project team, programming technologies, …, noting similarities and differences with each]
20SOE: scheduling and estimating
estimation & politics
• competitive bidding produces (unrealistically) low estimates
• estimation is not an exact science
• a professional will not conform to wishful thinking
• managers are entitled to an early estimate
• your challenge is to make them understand uncertainty
scope
estimate
managerisk
schedule
control
scheduling
21SOE: project management
scope
estimate
managerisk
schedule
control
• decide who does what and when
• a good schedule:
describes an effective process - succeed
describes an efficient process - fast and cheap
describes a realistic process - estimate
respects stakeholders’ interests - motivate
enables partition of labour - coordinate
enables measurement of progress - control
can be communicated - simplicity
22SOE: scheduling and estimating
top-down + bottom up
• situation
• strategy
• process model
• phases & deliverables
• resource allocation
• resources
• task decomposition
scope
estimate
managerisk
schedule
control
23SOE: scheduling and estimating
phases, deliverables, milestones
scope
estimate
managerisk
schedule
control
hierarchical task decomposition
requirements: r1, r2, r3 (r3.1, r3.2)
analysis: a1, a2
design: d1
24SOE: project management
scope
estimate
managerisk
schedule
control
Task definition: Task I.1 Concept Scoping
I.1.1 Identify need, benefits and potential customers;
I.1.2 Define desired output/control and input events that drive the application;
Begin Task I.1.2
I.1.2.1 FTR: review written description of need
I.1.2.2 Derive a list of customer-visible outputs/inputs
case of: mechanics
mechanics = quality function deployment
meet with customer to isolate major concept requirements;
interview end-users;
observe current approach to problem, current process;
review past requests and complaints;
25SOE: scheduling and estimating
dependencies
identify tasks and dependencies, estimate
e.g. from a design (work breakdown structure)
• design A
• code & test A
• design B
• code & test B
• integrate & test S
[task network: design A → code & test A; design B → code & test B; both feed into integrate & test S, where system S consists of components A and B]
scope
estimate
managerisk
schedule
control
activity duration (days) dependencies
T1 8
T2 15
T3 15 T1 (M1)
T4 10
T5 10 T2, T4 (M2)
T6 5 T1, T2 (M3)
T7 20 T1 (M1)
T8 25 T4 (M5)
T9 15 T3, T6 (M4)
T10 15 T5, T7 (M7)
T11 7 T9 (M6)
T12 10 T11 (M8)
tasks, durations, dependencies
26SOE: project management
scope
estimate
managerisk
schedule
control
scope
estimate
managerisk
schedule
control
task network with milestones, durations
27SOE: project management
Gantt chart
28SOE: project management
scope
estimate
managerisk
schedule
control
task assignment
29SOE: project management
scope
estimate
managerisk
schedule
control
30SOE: scheduling and estimating
program evaluation and review technique (PERT)
tasks, calendar days, dependencies, milestones
scope
estimate
managerisk
schedule
control
31SOE: scheduling and estimating
critical path method (CPM)
• the critical path is the longest path through a network (defines the minimum project completion time)
• slack is the amount of time an activity can be delayed without delaying the project
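A forward pass over the task table above (T1-T12 as indices 0-11) computes each task's earliest finish time; the largest value is the minimum project completion time. For this table it is 55 days, along the critical path T1 → T3 → T9 → T11 → T12:

```java
// Forward pass (CPM): tasks must be listed so that every dependency
// has a lower index than the task that depends on it, as T1..T12 do.
class CriticalPath {
    static int projectDuration(int[] duration, int[][] deps) {
        int[] finish = new int[duration.length];
        for (int i = 0; i < duration.length; i++) {
            int start = 0;                        // earliest start = latest
            for (int d : deps[i])                 // finish among dependencies
                start = Math.max(start, finish[d]);
            finish[i] = start + duration[i];      // earliest finish
        }
        int max = 0;
        for (int f : finish) max = Math.max(max, f);
        return max;                               // minimum completion time
    }
}
```

Activities off the critical path (e.g. T8, finishing at day 35) carry slack; a backward pass over the same network would quantify it per task.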
controlling projects
• check (monitor):
• resource use (developer hours)
• schedule (overrun)
• progress (deliverables)
• act:
• enforce or modify plan
32SOE: project management
scope
estimate
managerisk
schedule
control
the project manager’s dilemma
• decrease scope (risk contract breach)
• alter schedule (risk late delivery and contract breach)
• increase resources (risk making a loss or having to go to the customer for more money)
33SOE: project management
[diagram: the project triangle - scope (features, functionality), resources (cost, budget), schedule (time)]
scope
estimate
managerisk
schedule
control
34SOE: scheduling and estimating
project management tool support
35SOE: project management
adaptive planning (agile)
scope
estimate
managerisk
schedule
control
36SOE: project management
an adaptive plan (Larman)
scope
estimate
managerisk
schedule
control
scope
estimate
managerisk
schedule
control
37SOE: project management
two contract phase plan
scope
• planning poker (a simplified version of the Delphi technique)
• story points (a simplified application of the FP idea)
• velocity: story points per iteration
• estimation improves as velocity becomes known
• ideal days
• shared estimation generates commitment
38SOE: project management
estimate
• prioritize user stories for:
• customer functionality
• business value
• risk
• must-haves, exciter-delighters
• feature benefit v. cost of absence
• story decomposition
• release planning (stories, points, velocity)
• iteration planning (iteration length, activities)
• buffering
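Release planning with velocity reduces to simple arithmetic - a sketch with invented figures (a 120-point backlog, velocity 25 points per iteration, two-week iterations):

```java
// Release planning sketch: once velocity is known, the iteration
// count and calendar length of a release fall out directly.
// All figures in the test are illustrative.
class ReleasePlan {
    static int iterationsNeeded(int totalStoryPoints, int velocity) {
        return (totalStoryPoints + velocity - 1) / velocity;  // ceiling division
    }
    static int calendarWeeks(int iterations, int weeksPerIteration) {
        return iterations * weeksPerIteration;
    }
}
```

As velocity becomes known from completed iterations, re-running this calculation against the remaining backlog is exactly what a burndown chart visualizes.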
39SOE: project management
schedule
user story
• story self-selection generates commitment
• burndown chart
• velocity monitoring
• time-boxing
40SOE: project management
control
primary control locus
41SOE: project management
[diagram: two project triangles - scope (features, functionality), resources (cost, budget), schedule (time); traditional fixes scope through the requirements specification, agile fixes the schedule through time-boxing]
comparison
42SOE: project management
planning assumptions | traditional | agile
style | predictive | adaptive
team | managed | self-organizing
team manager | leader | facilitator
expression | detailed plans | instrumental sketches
driving force | professional responsibility | commitment
management focus | complexity | uncertainty
fixed point | scope | schedule
techniques | analytical, quantitative | simple, pragmatic
43SOE: project management
“Planning is everything. Plans are nothing”
“No plan survives contact with the enemy”
- Field Marshal Helmuth Graf von Moltke

“Prediction is very difficult - especially about the future”
- Niels Bohr

“You improvise, you adapt, you overcome”
- Clint Eastwood

“To be uncertain is to be uncomfortable, but to be certain is ridiculous”
- Chinese proverb
project managers’ competence priorities at WM Data
44SOE: project management
core competences technical competences
process management
team management
business management
customer management
uncertainty management
• quickly build a functioning team
• compensate for poor communication
• manage team stress
• handle crises and rebellion

• compensate for contracting
• record agreements
• understand strategic goals
• estimate realistically

• build a relationship
• develop understanding and trust
• handle differences professionally
• educate the customer

• take tough decisions with little information and without being sure of the consequences
• work to reduce uncertainty, for example through risk management
• focus on the uncertainties and act as the leader in these areas
SOE
managing change in system development projects: configuration management
2
understanding the problem of change
• change is one of the most fundamental characteristics in any software development process (Leon 2000) – it is intrinsic and must be accepted as a fact of life (Lehman 1980)
• changing software is very easy, but if it is done at will, results in chaos (Leon 2000)
• effective projects control changes, whereas ineffective projects allow changes to control them (McConnell 1998)
• the level and formality of control should vary according to project conditions (Whitgift 1991)
3SOE: configuration management
sources of change
• requirements change through misunderstanding, better customer understanding or situational change (= requirements churn)
• analysis clarification with improving developer understanding
• design improvements, evolutionary design changes, code improvement, refactoring
• bug fixes during test
• improvements and new functionality after release
• version, variant and implementation change
• component and code reuse
4SOE: configuration management
change sources: system evolution
5SOE: configuration management
[diagram: an initial system evolves along several lines - system variants for different platforms (mobile, PC/Windows XP, server, desktop Linux, SUN, …); bug and issue fixes (1.0, 1.1, 1.3); version evolution with new functionality (2.0, 2.1, 2.3), preceded by alpha and beta releases]
SAP to 2006
6SOE: configuration management
change sources: system evolution
7SOE: configuration management
[diagram: a standard system such as SAP r6.20 has many customer implementations (implementation 1 … implementation n); the next release, SAP r7.0, has its own set of implementations]
example: a simple (but late) change request
8SOE: configuration management
change request
the customer requests a new field on a screen
requirements specification
design specification
test plan
system documentation
user manual
model code
database
reports
test scripts and cases
presentation code
XML
component and system interfaces
a simple (but late) change request
9SOE: configuration management
change documentation → recode → rewrite tests → unit test → debug → system build → system test
change request
the customer requests a new field on a screen
change problems
• change inherent in the life of the system
• many developers working concurrently on many (hundreds or thousands of) documents and files
• probability of introduction of further errors and communication problems
• system must be built and tested, preferably early
• many possible combinations of version, implementation and variant releases
10SOE: configuration management
software configuration management (SCM)
11SOE: configuration management
“the purpose of Software Configuration Management is to establish and maintain the integrity of the products of the software project throughout the project's software life cycle. Software Configuration Management involves identifying configuration items for the software project, controlling these configuration items and changes to them, and recording and reporting status and change activity for these configuration items.” SEI 2000a
some simple CM scenarios
• developer A wants to see the latest version of foo.c and its change history since last week
• B needs to revert foo-design.doc to its version of two days ago
• B makes a release of the project and needs to know what items to include and in which version
• A lives in New Delhi, India and B lives in Boston, US - they want to work on HelloW.java together
• in the latest release, a serious bug is found and manager C wants to track what changes caused the bug, who made those changes and when
• an innocent-looking change 2 days before release causes major test problems - the whole design is rolled back to its state before the change
• C wants to get reports about current project progress to decide if she needs to hire more programmers and delay the alpha release
13SOE: configuration management
a set of practices:
- change control
- version control
- release management

supported by tools:
- configuration database
- source code repository
- build tools
SCM terminology
• configuration item – any development output for which change control is considered necessary
• baseline – a collection of configuration items which is reviewed and approved, and thus under change control – often a project milestone
• revision – a change to a baseline
• configuration – a particular assembly of configuration items (such as all the source code files for release 3.0)
• version – a configuration adding repairs or new functionality
• variant – a configuration with similar functionality on a different platform
• release – a version or variant distributed to users
CM activities: identification, control, status accounting, audit
• identification
  • identifying the items to be managed, establishing naming conventions (e.g. PCL-tools/edit/forms/display/AST-interface/code), storage and access
• control
  • change evaluation
  • change coordination
  • change approval
  • change implementation
• status accounting
  • tracking the status of configuration items
• audit
  • verifying that a configuration conforms to its specification
CM practice example
16SOE: configuration management
17SOE: configuration management
change request form
Change Request Form
Project: Proteus/PCL-Tools          Number: 23/02
Change requester: I. Sommerville    Date: 1/12/02
Requested change: When a component is selected from the structure, display the name of the file where it is stored.

Change analyser: G. Dean            Analysis date: 10/12/02
Components affected: Display-Icon.Select, Display-Icon.Display
Associated components: FileTable
Change assessment: Relatively simple to implement as a file name table is available. Requires the design and implementation of a display field. No changes to associated components are required.
Change priority: Low
Change implementation: Estimated effort: 0.5 days

Date to CCB: 15/12/02               CCB decision date: 1/2/03
CCB decision: Accept change. Change to be implemented in Release 2.1.
Change implementor:                 Date of change:
Date submitted to QA:               QA decision:
Date submitted to CM:
Comments:
change control board (CCB) review
change implementation: code, documentation
baseline update
verification, audit
managing collaborative working: version control, revision control
18SOE: configuration management
• solutions:
  • pessimistic: file locking
  • optimistic: version merging
version control: terminology
• checkout – creates a local working copy of a file
• change (diff, delta) – a modification to a file under version control
• commit – write or merge changes to the repository
• merge – two or more sets of changes applied to a repository file
• delta compression – retains only the differences between successive versions of files
• trunk – mainline development stream
• branch – an alternative development stream
• tag/label – a point-in-time snapshot of a group of files
19SOE: configuration management
20SOE: configuration management
version management tools
• version and release identification
  • the system assigns identifiers automatically when a new version is submitted
• storage management
  • the system stores the differences between versions rather than full copies of each version
• change history recording
  • records the reasons for version creation
• independent development
  • parallel working on different versions
• project support
  • can manage groups of files associated with a project rather than just single files
trunk and branch, forward and reverse integration
21SOE: configuration management
example version control tool: Subversion
22SOE: configuration management
system building
(diagram: the version management system supplies source code component versions; the system builder, driven by a build script, invokes compilers and a linker to produce object code components and an executable system)

• compiling and linking software components into an executable system
• different systems are built from different combinations of components
• invariably supported by automated tools driven by ‘build scripts’
system building tools
• building a large system is computationally expensive and may take several hours
• hundreds of files may be involved
• system building tools may provide:
  • a dependency specification language and interpreter
  • tool selection and instantiation support
  • distributed compilation
  • derived object management
Example: SCons
(dependency graph: scan.c + defs.h → scan.o; syn.c → syn.o; sem.c → sem.o; cgen.c → cgen.o; the object files are linked into the executable comp)
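A dependency graph like the one above can be expressed very compactly in SCons. The sketch below is a minimal SConstruct file (the file names are taken from the example; the target name `comp` is illustrative); SCons derives the `.c` → `.o` dependencies itself and rescans headers such as defs.h automatically.

```python
# SConstruct (sketch): build the executable 'comp' from four C sources.
# SCons infers object-file dependencies and header dependencies (defs.h) itself.
Program('comp', ['scan.c', 'syn.c', 'sem.c', 'cgen.c'])
```

Running `scons` in the project directory compiles only the sources whose dependencies have changed, which is the derived-object management the slide refers to.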
25SOE: configuration management
release management
• a release is not just a set of executable programs; it may also include:
  • configuration files defining how the release is configured for a particular installation
  • data files needed for system operation
  • an installation program or shell script to install the system on target hardware
  • electronic and paper documentation
  • packaging and associated publicity
• release creation involves collecting all files and documentation required to create a system release
  • configuration descriptions for different hardware and installation scripts
  • the release must be documented to record exactly what files were used to create it – this allows it to be re-created if necessary
many varieties of integrated tool support
26SOE: configuration management
Rational ClearCase Rational BuildForge
27SOE: configuration management
• configuration management is the management of system change to software products
• a formal document naming scheme should be established and documents should be managed in a database
• the configuration database should record information about changes and change requests
• a consistent scheme of version identification should be established using version numbers, attributes or change sets
• system releases include executable code, data, configuration files and documentation
• system building involves assembling components into a system
• CASE tools are available to support all CM activities
• CASE tools may be stand-alone tools or integrated systems which combine support for version management, system building and change management
traditional configuration management
Agile CM
28SOE: configuration management
(diagram: code and test scripts flow from the developer's sandbox into version control, through automated daily build and continuous integration, to the release environment)
traditional and agile configuration management
29SOE: refactoring and patterns
             traditional                                agile
focus        documents and code                         code
activities   CM practice, change management,            version control, automated build
             version control, automated build
responsible  CM team, CM board                          programmers
process      formal, managed                            informal, integrated with practice environment
outcome      CM audit (documented product control)      next release
importance   indispensable in medium and large projects indispensable
Software Engineering
managing SE practice across the software organisation:
- software metrics
- Capability Maturity Model Integration (CMMI)
- Software Process Improvement (SPI)
the problem – the SE answer
• development project success rates in the US: 29%
• by budget:
  • <$750,000: success = 55%
  • >$10,000,000: success = 0%
• England (public sector): 84% partial or total failure
• estimated overall: 20-30% of projects are total failures (abandoned)
• ”failure of large and complex information system developments is largely unavoidable”
• the requirements problem
• the analysis problem
• the design problem
• the quality problem
• the project management problem
• the change problem
• the complexity problem
2SOE: metrics and SPI
software engineering
traditional agile
SE improvement styles
3SOE: metrics and SPI
qualitative: description and experience-based
quantitative: metrics-based
internal: own experience
external: professional norms, research
improving software engineering practice
metrics: establishing a knowledge baseline
SPI: improving SE practice
CMMI: orienting SE practice towards professional norms
• how do you improve your practice in software engineering? not as an individual, not as a team, but as a company?
4SOE: metrics and SPI
Software Engineering
Software Metrics
definitions
• measure – a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of a product or process
  • e.g., number of errors
• metric – a quantitative measure of the degree to which a system, component or process possesses a given attribute
  • e.g., number of errors found per person-hour expended
• indicator – a group of metrics pointing towards a desirable end
  • e.g., product quality
• a defined and commonly understood language:
  • defect
  • error
  • failure
  • fault (bug)
motivation for metrics
• estimate the cost & schedule of future projects
• evaluate the productivity impacts of new tools and techniques
• establish productivity trends over time
• improve software quality
• forecast future staffing needs
• anticipate and reduce future maintenance needs
example metrics
• bug and defect rates
  • measured by individual, module, project
• defect removal efficiency
(diagram: defect removal efficiency tracked across analysis, design, code, test and maintenance, at project, product and process level)
size-oriented metrics
• measures:
  • LOC – Lines Of Code
  • KLOC – 1000 Lines Of Code
  • SLOC – Source Lines of Code (ignoring blank lines)
  • FP – function points
  • OP – object points
• typical metrics:
  • Errors/KLOC, Defects/KLOC, Cost/LOC, Documentation Pages/KLOC
program complexity metrics
• example – cyclomatic complexity: measures the number of independent paths through a program
• V(G) = E − N + 2
  • E is the number of flow graph edges
  • N is the number of nodes
• V(G) is also the number of enclosed regions of the planar flow graph
• a quantitative measure of testing difficulty and an indication of ultimate reliability
• experimental data suggests V(G) should be no more than 10; testing is very difficult above this value
(diagram: a flow graph with seven numbered nodes illustrating sequence, selection and repetition)
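The V(G) = E − N + 2 formula can be applied directly to an edge list. A minimal sketch; the seven-node graph below is a hypothetical stand-in for the one in the figure (one if/else and one loop):

```python
# Edges of a small control flow graph (illustrative, not the exact slide graph):
# node 2 branches (if/else) to 3 and 4, merging at 5; node 6 loops back to 2.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 6), (6, 2), (6, 7)]
nodes = {n for e in edges for n in e}

# McCabe: V(G) = E - N + 2
v_g = len(edges) - len(nodes) + 2
print(v_g)  # 3: one decision plus one loop gives three independent paths
```

A V(G) of 3 means three test cases are needed to cover every independent path at least once.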
design metrics
• structural complexity
• data complexity
• system complexity
• coupling
• example: structural complexity S(i) of a module i:
  • S(i) = f_out(i)²
  • fan-out is the number of modules immediately subordinate to module i (directly invoked)
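Given a call graph, S(i) = f_out(i)² is a one-liner per module. A minimal sketch with a hypothetical call graph (the module names are invented for illustration):

```python
# Hypothetical call graph: each module maps to the modules it directly invokes.
calls = {
    "ui": ["auth", "orders", "report"],
    "orders": ["db", "audit"],
    "db": [],
}

# Structural complexity: S(i) = fan_out(i) squared.
s = {module: len(callees) ** 2 for module, callees in calls.items()}
print(s)  # {'ui': 9, 'orders': 4, 'db': 0}
```

Squaring the fan-out penalizes modules that orchestrate many others, which is exactly where design changes tend to ripple.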
object-oriented metrics
• weighted methods per class
• depth of inheritance tree
• number of children
• coupling between classes
• response for a class
• lack of cohesion in methods (LCOM)
• class size
• number of operations overridden
• method inheritance factor
• coupling factor
• polymorphism factor
software quality and metrics
(diagram: product quality factors. Operation: reliability, efficiency, usability. Revision: maintainability, testability. Transition: portability, reusability. All assessed through metrics; tool example: JMetric)
a measurement data baseline – so what?
15SOE: metrics and SPI
goal
• reduce the time to implement change requests
question
• how long do change requests take, and what factors delay implementation?
metric
• t_queue, W_eval, t_eval, W_change, t_change, E_change, D_change
Motorola’s metric programme
• goal 1: improve project planning
• goal 2: increase defect containment
• goal 3: increase software reliability
• goal 4: decrease software defect density
• goal 5: improve customer service
• goal 6: reduce the cost of nonconformance
• goal 7: increase software productivity
• adherence to schedule
• delivered defects and delivered defects per size
• total effectiveness throughout the process
• accuracy of estimates
• number of open customer problems
• time that problems remain open
• cost of nonconformance
• software reliability
16SOE: metrics and SPI
• goal 1: improve project planning
  • question 1.1: what was the accuracy of estimating the actual value of the project schedule?
  • metric 1.1: schedule estimation accuracy (SEA)
independent metrics consultancy
17SOE: metrics and SPI
(diagram: benchmarking model covering all projects, applications and environments.
A. Development: A1 study, A2 analysis, A3 design, A4 implementation & test, A5 installation, A6 user documentation, A7 project management/quality assurance etc.
B. Production: B1 application processing & quality, B2 user support, B3 repair, B4 upgrade, B5 technical enhancement.
C. Development services: C1 development tools & technologies, C2 standards & methods, C3 quality assurance, C4 training.
D. Management & administration: D1 management & administration.
E. Customer services: E1 contracting & consulting.
Y. User satisfaction.
The model feeds benchmarking, analysis and reporting for users)
18SOE: metrics and SPI
metrics: establishing a knowledge baseline
CMMI: orienting SE practice towards professional norms
SPI: improving SE practice
formal baseline of knowledge of software engineering practice in place
professional norms
• professional norms:
  • embody professional knowledge
  • facilitate exchange of experience through standard nomenclature
  • increase predictability in processes and results
  • facilitate evaluation and certification of professionals
  • are maintained by international or national authorities or by professional organisations
19SOE: metrics and SPI 19
CMMI: a norm for software processes
• Capability Maturity Model Integration
• initiated by the US Dept. of Defense, late 1980s
• Watts Humphrey, Software Engineering Institute, Carnegie Mellon University
• purpose 1: qualification – identify reliable software suppliers
• purpose 2: a road map to increased professionalism in software organisations
• key document (573 pages): http://www.sei.cmu.edu/publications/documents/06.reports/06tr008.html
• other norms: ISO quality standards, project management standards (IPMA), …
20SOE: metrics and SPI 20
process maturity levels
21SOE: metrics and SPI
22SOE: metrics and SPI
23SOE: metrics and SPI
example: requirements management
24SOE: metrics and SPI 24
Purpose: The purpose of Requirements Management (REQM) is to manage the requirements of the project’s products and product components and to identify inconsistencies between those requirements and the project’s plans and work products.

Specific Goal and Practice Summary
SG 1 Manage Requirements
SP 1.1 Obtain an Understanding of Requirements
SP 1.2 Obtain Commitment to Requirements
SP 1.3 Manage Requirements Changes
SP 1.4 Maintain Bidirectional Traceability of Requirements
SP 1.5 Identify Inconsistencies Between Project Work and Requirements

Typical Work Products
1. Requirements status
2. Requirements database
3. Requirements decision database

Subpractices
1. Document all requirements and requirements changes that are given to or generated by the project.
2. Maintain the requirements change history with the rationale for the changes. Maintaining the change history helps track requirements volatility.
3. Evaluate the impact of requirement changes from the standpoint of relevant stakeholders.
4. Make the requirements and change data available to the project.
statistical process control: the level five software organisation
25SOE: metrics and SPI
• software processes documented and institutionalized
• software processes quantitatively measured, benchmarked and statistically evaluated
• software processes continually improved and re-evaluated
26SOE: metrics and SPI
assumption: process orientation
What is important about a software organisation is the way the work is organised (a process). Good (mature) software organisations have defined and repeatable processes which govern the way that they work (their capability) and lead to success.

assumption: hierarchical management – planning, monitoring, control
Management has the responsibility for process standardization, learning and improvement. Software developers should execute the organization’s processes according to the standardized models and descriptions.

assumption: externally imposed generic process models
Software processes leading to effective software development are well understood and share generic features which can be externally documented. These generic process models are suitable for implementation in all software organisations.

assumption: documentation, standardization and institutionalization
Good software organizations not only have standardized and documented processes, but those processes are institutionalized; that is, carried out throughout the organization. Software projects are therefore conducted in a similar fashion according to pre-defined process models.

assumption: organizational progression to maturity
Software organisation improvement is understood as the movement from immaturity (undefined processes) to maturity (standardized and institutionalized processes) through a series of management-led change initiatives.

assumption: objective measurement, external verification and certification
The extent of process standardization and institutionalization can be measured, and the measurements used to achieve 1) better standardization, 2) enforcement of processes, 3) process learning leading to process improvement. Measurement represents objective knowledge about software processes, and maturity can be externally certified.

assumption: goal-directed change through rational analysis and learning
Organizational learning is achieved by the rational analysis and optimization of processes, which sets the goals for organizational development and change.

assumption: management-sponsored improvement initiative
Software process improvement initiatives follow the principles outlined above: defined and documented externally imposed process, management-led, focused on maturity, objectively measured, analysis-oriented. Thus the CMMI model is an extensive, externally imposed process plan.
27SOE: metrics and SPI
Capability Immaturity Model
28
SOE: metrics and SPI
0: Negligent. Lip service only to implementing software engineering processes. CMM level 0 organizations generally fail to produce any product, or do so by abandoning regular procedures in favor of crash programs.

-1: Obstructive. Processes, however inappropriate and ineffective, are implemented with rigor and tend to obstruct work; adherence to process is the only measure of success. Any actual creation of viable product is incidental, and quality of the product is not assessed.

-2: Contemptuous. While processes exist, they are routinely ignored by engineering staff, and those charged with overseeing the processes are regarded with hostility. Measurements are fudged to make the organization look good.

-3: Undermining. Not content with faking their own performance, undermining departments routinely work to downplay and sabotage the efforts of rival teams, especially those successfully implementing processes common to CMM level 2 and higher. This is worst where company policy causes departments to compete for scarce resources, which are allocated to the loudest advocates.
29SOE: metrics and SPI
metrics: establishing a knowledge baseline
CMMI: orienting SE practice towards professional norms
SPI: improving SE practice
formal baseline of knowledge of software engineering practice in place
software organisation has institutionalized SE processes that match professional norms
software engineering
Software Process Improvement (SPI)
SPI: Software Process Improvement
31SOE: metrics and SPI 31
(cycle: assessment and gap analysis → education and training → selection and justification → installation and migration)

• generic name for initiatives to improve software development and management in a software company
• the movement towards good software engineering practice, both as experienced and as industry good practice (norm)
IDEAL
32SOE: metrics and SPI 32
traditional SPI:
- the CMMI project

(IDEAL cycle: Initiate CMMI project → Diagnose current maturity level → Establish CMMI team and process areas → Act to improve process control → Learn by measuring results)

33SOE: metrics and SPI
34SOE: metrics and SPI
metrics: establishing a knowledge baseline
CMMI: orienting SE practice towards professional norms
SPI: improving SE practice
formal baseline of knowledge of software engineering practice in place
software organisation has institutionalized SE processes that match professional norms
software organisation has organized improvement initiatives
the Scandinavian approach – a critique of traditional CMMI
1. focus on problems – not on a specific solution (e.g. CMMI level 2)
2. emphasize knowledge creation – not knowledge deployment
3. encourage participation – not expert solution
4. integrate leadership – do not rely on staff work
5. plan for continuous improvement – a portfolio, not a project
(Mathiassen et al., 2002)
35SOE: metrics and SPI 35
                         Industry model (CMMI) | Knowledge model | Network model (Agile)
underlying metaphor:     Factory | Learning organization | Community
focus/orientation:       Process | Knowledge | Software challenge
management style:        Hierarchical management (planning, monitoring, control) | Facilitation and team learning | Self-organisation in networks
guiding principle:       Generic process models | Experiential learning | Technical mastery
organizational form:     Machine bureaucracy | Professionalised knowledge work | Virtually enabled community
motivation for improvement: Market pressure | Individual self-motivation through professionalism | Self-realisation in the technological meritocracy
improvement focus:       Internal efficiency, software quality | Customer satisfaction, market accommodation | Software solution
improvement strategy:    Goal-directed change through rational analysis of process knowledge, process documentation, standardization and institutionalization | Continuous learning, knowledge sharing, individual and collective competence development | Code sharing, peer feedback, development and sharing of programming skills and techniques
improvement objective:   Process maturity, organizational discipline | Adaptation to market | Technology leadership
improvement assessment method: Objective measurement, external verification and certification of process adherence | Responsiveness to market, improved sales, profitability | Code quality, intellectual property rights
improvement champions:   Top managers | Project managers | Programmers
SPI manifesto (agile)
39SOE: metrics and SPI
40SOE: metrics and SPI
qualitative: description and experience-based
quantitative: metrics-based
internal: own experience
external: professional norms, research
metrics programmes
traditional SPI (CMMI)
Scandinavian SPI
Metrics for the Object Oriented
Chidamber & Kemerer ’94 TSE 20(6)
Metrics specifically designed to address object oriented software
Class oriented metrics
Direct measures
Weighted Methods per Class

WMC = Σ_{i=1..n} c_i

where c_i is the complexity (e.g., volume, cyclomatic complexity, etc.) of each of the n methods of the class.

Viewpoints (of Chidamber and Kemerer):
- the number of methods and the complexity of methods is an indicator of how much time and effort is required to develop and maintain the object
- the larger the number of methods in an object, the greater the potential impact on the children
- objects with a large number of methods are likely to be more application specific, limiting possible reuse
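With per-method complexities in hand, WMC is just their sum. A minimal sketch; the complexity values are hypothetical (in practice they would come from a tool measuring, e.g., cyclomatic complexity per method):

```python
# Hypothetical cyclomatic complexities of the five methods of one class.
method_complexities = [1, 3, 2, 5, 1]

# WMC = sum of c_i over the n methods of the class.
wmc = sum(method_complexities)
print(wmc)  # 12
```

With every c_i set to 1, WMC degenerates to a plain method count, which is the simplest common variant of the metric.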
Depth of Inheritance Tree
DIT is the maximum length from a node to the root (base class).

Viewpoints:
- lower-level subclasses inherit a number of methods, making behavior harder to predict
- deeper trees indicate greater design complexity

Number of Children
NOC is the number of subclasses immediately subordinate to a class.

Viewpoints:
- as NOC grows, reuse increases, but the abstraction may be diluted
- depth is generally better than breadth in a class hierarchy, since it promotes reuse of methods through inheritance
- classes higher up in the hierarchy should have more subclasses than those lower down
- NOC gives an idea of the potential influence a class has on the design: classes with a large number of children may require more testing
Coupling between Classes
CBO is the number of collaborations between two classes (fan-out of a class C):
• the number of other classes that are referenced in the class C (a reference to another class, A, is a reference to a method or a data member of class A)

Viewpoints:
- as collaboration increases, reuse decreases
- high fan-outs represent class coupling to other classes/objects and are thus undesirable
- high fan-ins represent good object designs and a high level of reuse
- it is not possible to maintain high fan-in and low fan-out across the entire system
Response for a Class
RFC is the number of methods that could be called in response to a message to a class (local + remote).

Viewpoints: as RFC increases
- testing effort increases
- the greater the complexity of the object, the harder it is to understand
Lack of Cohesion in Methods
LCOM – poorly described in Pressman
Class C_k with n methods M_1, …, M_n.
I_j is the set of instance variables used by method M_j; there are n such sets I_1, …, I_n.

• P = {(I_i, I_j) | I_i ∩ I_j = ∅}
• Q = {(I_i, I_j) | I_i ∩ I_j ≠ ∅}

If all n sets I_i are empty, then P = ∅.

LCOM = |P| − |Q| if |P| > |Q|; LCOM = 0 otherwise.
Example LCOM
Take class C with methods M1, M2, M3:
I1 = {a, b, c, d, e}
I2 = {a, b, e}
I3 = {x, y, z}
P = {(I1, I3), (I2, I3)}
Q = {(I1, I2)}
Thus LCOM = |P| − |Q| = 2 − 1 = 1.
Explanation
LCOM is the number of empty intersections minus the number of non-empty intersections.
This is a notion of the degree of similarity of methods: if two methods use common instance variables, they are similar.
Note that an LCOM of zero does not mean maximal cohesion; it also arises whenever |P| = |Q| or |P| < |Q|.
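The P/Q definition translates directly into a few lines of Python. This sketch reuses the instance-variable sets from the worked example above:

```python
from itertools import combinations

# Instance-variable sets used by methods M1, M2, M3 (the slide's example).
I = [{"a", "b", "c", "d", "e"}, {"a", "b", "e"}, {"x", "y", "z"}]

# P: method pairs sharing no instance variables; Q: pairs sharing at least one.
pairs = list(combinations(range(len(I)), 2))
P = [(i, j) for i, j in pairs if not (I[i] & I[j])]
Q = [(i, j) for i, j in pairs if I[i] & I[j]]

# LCOM = |P| - |Q| if |P| > |Q|, else 0.
lcom = len(P) - len(Q) if len(P) > len(Q) else 0
print(lcom)  # |P| = 2, |Q| = 1, so LCOM = 1
```

The set-intersection operator `&` does the similarity test: a non-empty intersection means the two methods touch common state.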
Some other cohesion metrics
Class Size
CS:
• total number of operations (inherited, private, public)
• number of attributes (inherited, private, public)

A large CS may indicate too much responsibility for a class.
Number of Operations Overridden (NOO)

A large NOO indicates possible problems with the design, e.g. poor abstraction in the inheritance hierarchy.
Number of Operations Added
NOA
The number of operations added by a subclass.
As operations are added, the subclass moves farther away from its superclass; as depth increases, NOA should decrease.
Method Inheritance Factor
MIF = Σ_{i=1..n} M_i(C_i) / Σ_{i=1..n} M_a(C_i)

where, for class C_i:
• M_i(C_i) is the number of methods inherited and not overridden in C_i
• M_d(C_i) is the number of methods declared in C_i
• M_a(C_i) = M_d(C_i) + M_i(C_i) is the number of methods that can be invoked with C_i (all that can be invoked = new or overridden + inherited)

MIF lies in [0, 1]: MIF near 1 means little specialization; MIF near 0 means large change.
Coupling Factor
CF = Σ_i Σ_j is_client(C_i, C_j) / (TC² − TC)

where:
• is_client(x, y) = 1 if a relationship exists between client class x and server class y, 0 otherwise
• TC is the total number of classes, so (TC² − TC) is the total number of relationships possible

CF lies in [0, 1], with 1 meaning high coupling.
Polymorphism Factor
PF = Σ_i M_o(C_i) / Σ_i [M_n(C_i) × DC(C_i)]

where:
• M_n(C_i) is the number of new methods in C_i
• M_o(C_i) is the number of overriding methods in C_i
• DC(C_i) is the number of descendent classes of base class C_i

PF is the number of methods that redefine inherited methods, divided by the maximum number of possible distinct polymorphic situations.
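The polymorphism factor compares actual overrides to the maximum possible. A minimal sketch; the class names and counts are hypothetical:

```python
# Hypothetical per-class counts for a small hierarchy.
overriding = {"Base": 0, "Sub1": 2, "Sub2": 1}   # M_o(C): overriding methods
new_methods = {"Base": 4, "Sub1": 1, "Sub2": 2}  # M_n(C): new methods
descendants = {"Base": 2, "Sub1": 0, "Sub2": 0}  # DC(C): descendant classes

# PF = sum(M_o) / sum(M_n * DC): overrides vs. possible polymorphic situations.
numerator = sum(overriding.values())
denominator = sum(new_methods[c] * descendants[c] for c in new_methods)
pf = numerator / denominator
print(pf)  # 3 / 8 = 0.375
```

Only Base has descendants here, so the denominator counts the 4 × 2 places where Base's new methods could be overridden; three of them actually are.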
Software Engineering
course summary
2SOE: course summary
course objectives
• to understand the difference between traditional and agile approaches to system development
• to understand the primary software engineering tools and techniques and how and when to apply them
• to develop the capacity to use these understandings in your own practice
• overview course – not learn this technique and learn how to apply it in the exercises
3SOE: course summary
course design
development approaches
agile
traditional
miniproject 1: DA
tools, techniques, practices
miniproject 2:TTP
SPI
miniproject 3:evaluation
the problem – the SE answer
• development project success rates in the US: 29%
• by budget• <$750,000: success = 55%• >$10,000,000 success = 0%
• England (public sector) 84% partial or total failure
• estimated overall: 20-30% projects are total failures (abandoned)
• "failure of large and complex information system developments is largely unavoidable"
• the requirements problem
• the analysis problem
• the design problem
• the quality problem
• the project management problem
• the change problem
• the complexity problem
software engineering
traditional agile
one answer:
• apply engineering principles to software development
“(1) The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software. (2) The study of approaches as in (1).” (IEEE Std 610.12)
no – two answers
software engineering
traditional agile
problem | traditional | agile
the requirements problem | requirements management, requirements engineering, change control | on-site customer, user stories, iteration, test-driven development
the analysis problem | formal analysis techniques | dialogue with customer
the design problem | top-down formal design | evolutionary design
the quality problem | formal test strategies, quality programmes, SPI + CMMI, metrics | test-driven design, continuous integration
the project management problem | estimation, scheduling, predictive planning, project control, change management, risk management | iteration, planning poker, velocity
the change problem | configuration management, risk management | iteration, version control, continuous integration
the complexity problem | anticipate and plan | experiment and adapt
literature
(traditional viewpoint)
(agile viewpoint)
deepening articles – thematically organised
literature
compulsory reading:
Craig Larman: Agile and Iterative Development: A Manager's Guide, Addison-Wesley, chapters 1-9 (buy this one)
Boehm, B. & Turner, R. (2003) Observations on balancing discipline and agility. Agile Development Conference (ADC '03), Salt Lake City, Utah
Gray, M. M. (1999) Applicability of metrology to information technology. Journal of Research - National Institute of Standards and Technology, 104(6), 567-578.
Heemstra, F. (1992) Software cost estimation. Information and Software Technology, 34(10), 627-639.
Nerur, S., Mahapatra, R. K. & Mangalaraj, G. (2005) Challenges of migrating to agile methodologies. Communications of the ACM, 48(5), 72-78.
Nuseibeh, B. & Easterbrook, S. (2000) Requirements engineering: a roadmap. In: ICSE '00: Proceedings of the Conference on The Future of Software Engineering, 35-46. ACM.
Parnas, D. & Clements, P. (1985) A rational design process: How and why to fake it. Formal Methods and Software Development, 80-100.
Poppendieck, M. B. & Poppendieck, T. D. (2010) A Rational Design Process - It's Time to Stop Faking It.
Schuh, P. (2008) Agile configuration management for large organizations. In: The Rational Edge, IBM.
Talby, D., Hazzan, O., Dubinsky, Y. & Keren, A. (2006) Agile software testing in a large-scale project. IEEE Software, 30-37.
Tiwana, A. & Keil, M. (2004) The one-minute risk assessment tool. Communications of the ACM, 47(11), 73-77.
Whittaker, J. A. (2000) What is software testing? And why is it so hard? IEEE Software, 17(1), 70-79.

supplementary reading:
Pressman, R. S. Software Engineering: A Practitioner's Approach, European Adaptation, fifth edition, Parts 1-3. Use this to deepen your understanding of traditional topics where you find it necessary (you can find it online). You can also use another edition (there are several), but be careful to read the corresponding chapters; the chapter numbering can be quite different between editions.
course content: mindmap
course content by ten revision topics
comparison of iterative and agile methods: UP, SCRUM, XP
traditional and agile development approaches: similarities and differences
requirements: traditional and agile
analysis and design: traditional and agile
test: traditional and agile
risk analysis: traditional (and agile)
project management, scheduling and estimation: traditional and agile
top down and bottom up design: refactoring and design patterns
configuration management: traditional and agile
metrics and software process improvement: traditional and agile
comparison of iterative and agile methods: UP, SCRUM, XP
• work products, roles, practices
• UP – iterative, not necessarily agile
• SCRUM – team management
• XP – programmer orientation
[Figure: the Unified Process lifecycle, showing process workflows (Business Modeling, Requirements, Analysis & Design, Implementation, Test, Deployment) and supporting workflows (Configuration Mgmt, Management, Environment) against the Inception, Elaboration, Construction and Transition phases, with iterations within each phase]
traditional and agile development approaches: similarities and differences
• traditional or agile as a development approach
• traditional or agile software engineering
requirements: traditional and agile
requirements specification: an agreed, mutually understood and relatively stable account of what to build
on-site customer, product owner
user stories, product backlog
iteration, sprint
test cases
analysis and design: traditional and agile
• use domain
• requirements specification
• continuous design improvement
• CRC cards
• metaphor
• high-level design
• refactoring
test: traditional and agile
• XP: test-driven development
iteration, sprint
test cases
test automation
continuous integration
acceptance testing
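The test-first cycle behind test-driven development can be sketched with Python's unittest. The discount rule and all names below are invented for the example:

```python
import unittest

# Step 1 (red): write a failing test that pins down the desired behaviour.
class TestDiscount(unittest.TestCase):
    def test_ten_percent_off_orders_over_100(self):
        self.assertEqual(apply_discount(200), 180)

    def test_no_discount_at_or_below_threshold(self):
        self.assertEqual(apply_discount(50), 50)

# Step 2 (green): write the simplest code that makes the tests pass.
def apply_discount(total):
    return total * 0.9 if total > 100 else total

# Step 3 (refactor): improve the code with the tests as a safety net, then repeat.
if __name__ == "__main__":
    unittest.main(exit=False)
```

With test automation in a continuous integration setup, this suite would run on every commit.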
risk analysis: traditional (and agile)
risk analysis
risk prioritisation
risk mitigation
risk monitoring
risk management
risk identification
one minute risk management
change risk response = iteration
project management, scheduling and estimation: traditional and agile
traditional:
• predictive planning
• technique-based estimation
• formal risk analysis
• predictive scheduling
• corrective control
• two-contract phase plan
agile:
• adaptive iteration and release planning
• time-boxing
• burndown chart
• planning poker
• story points, ideal days
• velocity monitoring
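Velocity monitoring and release planning reduce to simple arithmetic; the sprint history and backlog size below are invented for the example:

```python
import math

# Hypothetical sprint history: story points completed per sprint.
completed = [18, 22, 20]

velocity = sum(completed) / len(completed)   # average points per sprint
backlog = 120                                # story points remaining (assumed)
sprints_needed = math.ceil(backlog / velocity)

print(velocity, sprints_needed)  # -> 20.0 6
```

The burndown chart is the same calculation plotted over time: remaining points against sprints, with the slope given by velocity.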
top down and bottom up design: refactoring and design patterns
traditional (design as model):
• architecture
• detailed design
• architectural patterns
• GRASP
agile (design as code):
• refactoring
• design patterns
configuration management: traditional and agile
• identification
• control
• status accounting
• audit
metrics and software process improvement: traditional and agile
perspectives
• traditional SE – a standard response to difficult problems
  • discipline and rational analysis
  • quantitative and scientific
  • rules, routines and procedures
  • documentation and bureaucracy
  • hierarchy and control
• with standard shortcomings
  • limits creativity and developer initiative
  • distributes responsibility
  • expensive and goal-displacing
  • standard one-size-fits-all solutions which are hard to adapt to changing conditions
  • creates many products with no direct value
  • complexity analysis limited by human understanding
• agile – a serious response to the shortcomings of traditional SE
  • minimal bureaucracy and hierarchy
  • focus on software production and quality
  • improvement through experimentation replaces rational analysis
  • developers regain control and responsibility
• taken to extremes – a flight from things that developers find unpleasant
  • discipline, standardization, documentation
  • authority
  • serious investigation of the customer domain
  • non-coding tasks
  • delivery deadlines
  • contractual responsibilities
• exploit freedom and flexibility in traditional SE and scale to task
• find the discipline in agility and use traditional tools where appropriate