An Extensible and Adaptable Management System
Henrik Bartholdt Sønder
Kongens Lyngby 2010 IMM.B.Eng.2010
Technical University of Denmark Informatics and Mathematical Modelling Building 321, DK-2800 Kongens Lyngby, Denmark Phone +45 45253351, Fax +45 45882673 [email protected]
www.imm.dtu.dk
ABSTRACT
Management systems play a central role in almost every company these days. Some of them
have evolved past being customer or accounting management systems alone and are now
multi-purpose systems responsible for managing almost everything you can think of:
customers, documents, projects, orders, accounting, inventory etc. Most companies are
forced into changing their management systems at some point though, especially those that
have used the same systems for more than 10 or 15 years; these old and often monolithic
systems are getting increasingly hard to maintain and adapt to the requirements of today’s
management systems.
The goal of this project is to develop a system architecture for an extensible and adaptable
management system. The purpose of this system is to be able to assist in the management of
everything a company would want to manage while still keeping the application as lightweight
as possible. This is achieved by developing a highly modular and configurable application able
to provide the user with management capabilities through several modules, with separate
modules for each specific area such as customer relations or accounting. The main application
of this system provides the base infrastructure to support and manage multiple extensions or
modules; a selection of modules then provides the user with necessary management
capabilities based on his or her responsibilities in the company.
The system architecture is designed to be highly maintainable and able to adapt to future
architectural changes with a minimum amount of effort. To achieve an adaptable architecture
the system is designed following the SOLID design principles. These principles are analyzed and
discussed in detail covering the effect they have on both single classes and the application as a
whole. How to properly incorporate these principles in the system on both an overall
architecture level and single-class level is also discussed, and some of the key players in
achieving this are dependency injection and a high level of code reusability. Microsoft’s new
Managed Extensibility Framework is used to support extensibility in the system.
TABLE OF CONTENTS
1 Introduction .......................................................................................................................... 7
2 Inception ............................................................................................................................... 9
2.1 Functional Requirements .......................................................................................... 10
2.2 Non-Functional Requirements .................................................................................. 12
3 Analysis ............................................................................................................................... 16
3.1 Requirements Analysis .............................................................................................. 16
3.1.1 Non-Functional Requirements .............................................................................. 17
3.1.2 Extensibility ........................................................................................................... 18
3.1.3 Adaptability ........................................................................................................... 19
3.1.4 Scalability .............................................................................................................. 19
3.1.5 Testability .............................................................................................................. 21
3.1.6 Conclusion ............................................................................................................. 22
3.2 Initial Design Considerations ..................................................................................... 23
3.2.1 Choosing a Client Platform ................................................................................... 23
3.2.2 Choosing an ORM Toolset ..................................................................................... 26
3.2.3 Design Patterns and Principles ............................................................................. 28
3.2.4 Conclusion ............................................................................................................. 30
4 Designing the System Architecture ..................................................................................... 32
4.1 Initial System Architecture ........................................................................................ 33
4.2 The Development Process ......................................................................................... 34
4.3 Dependency Injection ............................................................................................... 36
4.3.1 Implementing Dependency Injection .................................................................... 37
4.3.2 DI Containers, DI Frameworks and MEF ............................................................... 41
4.4 Silverlight and MVVM................................................................................................ 43
4.5 Modelling towards Configurability ............................................................................ 45
4.6 Web Services ............................................................................................................. 49
4.7 Conclusion ................................................................................................................. 51
5 Final Design & Implementation .......................................................................................... 52
5.1 Scope ......................................................................................................................... 53
5.2 System Dependencies ............................................................................................... 55
5.3 Communication between Modules ........................................................................... 56
5.4 Extending Modules with Modules............................................................................. 58
5.5 The Main Application and the Region Manager ........................................................ 60
5.6 Menu Module ............................................................................................................ 61
5.7 Conclusion ................................................................................................................. 62
6 Test & Maintenance ........................................................................................................... 63
6.1 Code Contracts .......................................................................................................... 64
6.2 Arrange, Act, Assert................................................................................................... 65
7 Conclusion .......................................................................................................................... 69
8 List of Figures ...................................................................................................................... 70
1 INTRODUCTION
This Bachelor of Engineering thesis will cover the development process of an adaptable and
extensible management system. The development process will be covered from requirements
and analysis through design, implementation and test, and will also discuss the following
phase of maintenance and further development of the system, as the ease with which this
system can be further developed is a key priority in this project.
The purpose of this system is to be able to assist in the management of everything a company
would want to manage by providing separate modules for each of these areas such as
customer relations, accounting or inventory management. In the development of this system
the focus lies primarily on the design of the core application and infrastructure needed to
support these modules. The system will initially only have the functions required to manage
customer relations, essentially making it a simple Customer Relations Management (CRM)
system in the scope of this project.
The system was requested by Planit A/S as a replacement for their current system for
managing customers and orders, which they have been using and developing themselves
during the last 15 years. This old system has been getting increasingly difficult to maintain and
develop over time and recent developments in the company have pushed the requirements of
this system towards online availability; a requirement they doubted it would be worthwhile
adapting their old system to.
In their search for other commercial alternatives they soon realized that almost any product
they found would fall in one of two categories: it would either be cheap and lack some
necessary functions, or be expensive and have way more functions than they would ever need,
and even then some of the expensive products would still lack a few necessary functions. After
realizing they would most likely have to go with one of the more expensive alternatives, they
decided to postpone getting a new management system and give me a chance to try and
develop a system for them as the project for my Bachelor of Engineering thesis.
Their requirements for the new system were not so much about functional requirements
such as having well thought-out functions to efficiently manage customers and orders, but
focused almost entirely around the architecture of the system to be delivered. Their goals
were clear: They wanted an easily extensible system with a highly adaptable architecture; a
system they could further develop over the years just like their old system.
The system was developed with a satisfying level of maintainability and extensibility. The client is
satisfied with the current system and we feel that the design of the system looks promising in
regards to it being able to function as an extensible and adaptable platform for supporting a
larger management system. Developing a system with this much focus on maintainability and
good design in general has taught me a lot in terms of design principles and
techniques, especially in regards to the SOLID principles and supporting techniques such
as dependency injection. I had been using these principles and techniques in projects prior to
this, but my understanding of their effect on the application and how to implement them
properly has been taken to a whole new level.
Regarding the scope of this project it has been quite difficult to manage which topics to
include and which were not important to the core of this project: the development of an
adaptable and extensible management system. As such I have ended up excluding most details
related to web service implementation and the modeling of database tables, even though they
may seem like the usual, interesting topics in a report of this kind. They are not that
interesting in the scope of this report though, as the client application does not really depend
on these layers; all the client cares about in regards to data storage are web service contracts. I
will discuss some topics of web service usage on a higher level though, and the topic of
database models will also be featured in a demonstration on how to model a database
towards configurability.
Notes on Documentation
The structure of this report is influenced by the fact that it is targeted at two different parties
of interest. On one hand it must present the project to the censors and professors at the
Technical University of Denmark (DTU) who will be evaluating and grading the project based
mostly on the analysis and design considerations I have made, along with the argumentation
that goes behind these decisions. On the other hand it must present the product itself to the
stakeholders of this project which are mostly interested in the design and capabilities of final
product. To best separate these two parties of interest, Chapters 3 and 4, featuring the
analysis and design of the application, will contain most analysis and design considerations, in an effort
to have Chapter 2: Inception and Chapter 5: Final Design & Implementation focus on the final
product as much as possible. Any reasoning behind the design or alternatives to it will be
discussed in Chapters 3 and 4 only.
2 INCEPTION
This chapter covers the inception phase of the project and is essentially a project definition
covering the purpose of the system along with vision and requirements. When the
requirement specification is final it will act as the contract between developer and client,
defining the functionality and capabilities of the final system. The contents of this chapter and
what these requirements mean for the requested system will be analyzed and discussed in the
following Chapter 3.1: Requirements Analysis.
PURPOSE
The purpose of this system is to provide a platform for a module-based management system.
The system will be able to assist in the management of any company activity, provided that a
module with the necessary functionality for this kind of management has been developed. The
application will provide users with a selection of modules on a per-user basis, so users are
presented with their specific selection of management capabilities when they use the system,
and nothing more.
TARGET
Planit A/S is the client requesting this product.
PROJECT MEMBERS
The system was designed and developed by Henrik Sønder.
VISION
The vision of this project is for the client to have a management system capable of supporting
their company management needs for many years to come. Having a modular and extensible
application will allow the client to continuously develop new functionality for their application
in order to meet their own needs; both their current needs and any unforeseen needs they
may have in the future.
The vision extends beyond that though, as the client hopes to be able to profit from making
their platform available as a commercial platform for different kinds of management. It is too
early to say how well the system will be able to handle larger enterprises, but with proper
adaptability and scalability implementations it should be possible. In regards to small scale
businesses the module-based approach should be easily maintainable and easy to make
available online on a per-user basis. The module-based approach, with modules being able to
extend other modules, will also help effectively target the specific needs of small user bases.
2.1 FUNCTIONAL REQUIREMENTS
This list of functional requirements is not final, and the details of the functional requirements
are not that important for the scope of this project. The list will have to be developed further
when the development process shifts focus from the architectural requirements towards the
functional requirements of each module.
In the initial development phase the functional requirements are viewed as guidelines for
what to implement when testing system prototypes, as those prototypes might as well
implement something along the lines of what we will need eventually.
FUNCTIONAL REQUIREMENTS
On application startup.
o The user will be presented with a login form.
o A registered user will have to log in before he/she is able to see or use any
modules that require authentication.
On login / On successful user authentication.
o The registered user will now be able to see and use the modules he/she has
permission for.
o A sales representative will be presented with the contact list.
o An accounting user will be presented with the accounts.
o An administrator will be presented with the administration module front
page.
A sales representative can search for contacts by their first and last names.
o The search should search for matching first names and last names at the
same time.
o The search should only return contacts assigned to the current sales
representative, or contacts with no representative assigned; not contacts
with another sales representative assigned. This requirement is a primitive
implementation and will most likely change in the future to support other
ways of controlling access to contacts.
A sales representative can create new contacts.
o A new contact can be added by clicking a “Create Contact” button, at which
point a blank contact details page is navigated to. The contact information
must then be populated with necessary details before the sales
representative is able to submit this new contact to the database.
o If a similar contact already exists and no one is set to represent the contact,
then the sales representative can choose to represent the existing contact.
o If a similar contact already exists and has another representative assigned,
then the sales representative can choose to request access to the existing
contact.
A sales representative can edit existing contacts.
o The Id of a contact cannot be assigned nor edited; this is handled by the
server exclusively.
o The sales representative can edit contacts he or she is representing.
o The sales representative can edit contacts without a representative.
A sales representative can create activities.
o A new activity can be added by clicking a “Create Activity” button, at which
point a blank activity page is navigated to. The activity information, including
activity type, must be populated with necessary details.
o An activity is always linked to a contact.
o An activity can be marked as Completed, Not Started, Ongoing or Canceled.
A sales representative can create a flow of activities based on templates.
o A new flow of activities can be created based on a template by clicking a
“Create Existing Activity Flow” button, at which point a list of templates will
be presented.
An administrator can manage activity flow templates.
o An administrator can create, edit and delete activity flow templates.
An administrator has elevated rights to configure the application.
o An administrator can add, delete and edit entries in the list of available
contact properties.
An administrator has elevated rights to manage managers.
o An administrator can add and delete users.
o An administrator can edit user info; account id and account password cannot
be changed by the administrator.
A registered user can change his or her account information.
o The id of the user’s account cannot be changed.
o A user can change the account password and edit other contact information.
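The contact search rule above (match first and last names simultaneously, and hide contacts assigned to another representative) can be sketched as a simple filter. This is an illustrative sketch only, written in Java rather than the .NET stack used in the project, and the names (`ContactSearch`, `Contact`, `search`) are hypothetical, not the system's actual API:

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch only: hypothetical names, not the system's actual API.
public class ContactSearch {
    record Contact(String firstName, String lastName, String representativeId) {}

    // Matches the query against first and last names at the same time, and only
    // returns contacts assigned to the current representative or to no one.
    static List<Contact> search(List<Contact> contacts, String query, String currentRepId) {
        String q = query.toLowerCase();
        return contacts.stream()
                .filter(c -> c.firstName().toLowerCase().contains(q)
                          || c.lastName().toLowerCase().contains(q))
                .filter(c -> c.representativeId() == null
                          || c.representativeId().equals(currentRepId))
                .collect(Collectors.toList());
    }
}
```

As the requirement itself notes, this access rule is primitive and would likely be replaced by a more general permission mechanism later.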
2.2 NON-FUNCTIONAL REQUIREMENTS
The non-functional requirements of the system are a very important subject in this project.
The system must be very adaptable to changes in both functional and non-functional
requirements. A high level of maintainability and the ability to easily extend the system are
key priorities if we are to further develop the system, both by adding new features and by
improving existing ones.
Maintainability is probably the most general and broad-reaching of the non-functional
requirements, and most of the other non-functional requirements are related to the
maintainability of the system in some way: A system with a high level of adaptability,
extensibility and testability is easier to adapt, extend and test, and hence easier to maintain in
general.
Instead of defining requirements for maintainability in general, we will define requirements
for different aspects of maintainability and discuss them separately. We will focus on the
following four areas, in which we believe the requirements of maintainability can be most
precisely defined.
Adaptability
Extensibility
Testability
Scalability
For anyone not familiar with the meaning of these terms, the requirement analysis in Chapter
3.1: Requirements Analysis will feature a thorough definition of the terms along with an
analysis of their impact on the system.
ADAPTABILITY
A high level of adaptability is a top priority for this system. We do not only expect future
changes and additions to the functional requirements of the system, but also to other aspects
of the system, such as scalability and security. The system must be as open to change as
possible in order to adapt to these new requirements, whether they are simple feature
extensions or extensive enough that they require some level of system-wide architectural
redevelopment.
These adaptability requirements were the initial preferences developed before prototyping.
They are not final and will most likely be extended to be more precisely defined once we can
be confident about what we can achieve.
Adaptability Requirements – High Level
The domain layer should contain the core application logic, and these business rules
should be separated from any other responsibilities or dependencies.
o The domain layer is the layer we do not want to be changing.
o Most application layers should be exchangeable without any impact on the
business rules and models of the domain layer.
To separate the UI from the domain logic, some kind of presentation layer should be
used between these two.
The database should be exchangeable with a minimal maintenance effort.
o Preferably with changes to web services only.
o Depending on the scope of the change, ORM-generated classes on the server
side, along with repositories, may need to be changed too.
 Web service implementations should be easily exchangeable, as they are
programmed against existing contracts.
Adaptability Requirements – Low Level
A high level of code reusability is essential.
o As this does not really have an exact measure, the level of code reusability
will have to be analyzed at the end of prototype phases at which point we
can evaluate whether it is satisfactory or if there is room for improvement.
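The idea behind exchangeable storage and web services programmed against contracts can be sketched with a small repository abstraction. This is a hedged sketch in Java (the project itself targets .NET), and all names are hypothetical; the point is only that callers depend on the contract, never on the implementation:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical names; callers depend only on this contract, so the storage
// implementation behind it can be exchanged without touching them.
interface ContactRepository {
    Optional<String> findNameById(int id);
}

// One implementation; a web-service-backed or cloud-backed one could replace
// it with no changes to code programmed against ContactRepository.
class InMemoryContactRepository implements ContactRepository {
    private final Map<Integer, String> rows = new HashMap<>(Map.of(1, "Anna Berg"));

    public Optional<String> findNameById(int id) {
        return Optional.ofNullable(rows.get(id));
    }
}
```

Swapping the database then ideally reduces to providing a new implementation of the same contract, which is exactly the maintenance profile the requirements above aim for.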
EXTENSIBILITY
The core system should allow for new extensions to be added to the main application
regularly without a need for recompilation of the existing system.
The architecture of the core system should support communication between
extensions while encouraging or even ensuring a minimal amount of dependencies
between extensions.
The architecture of the core system should encourage code reuse by allowing
extensions to reuse parts of other extensions.
SCALABILITY
While there are no set requirements for scalability initially, the system is expected to develop
requirements for a high level of scalability in the future. Initially, the system must be able to
support fewer than 20 concurrent users.
 Initial requirements do not raise any concerns around any aspect of scalability, except
for race conditions when two or more users edit the same contact.
o The system will have to handle race conditions when updating the
database in a reasonable way that causes a minimal amount of user
frustration.
Scalability for future requirements will not be properly testable before the system is
further developed and has a higher level of functionality and features.
o To ensure scalability we will have to make the system adaptable enough to
allow for changes where performance bottlenecks could occur.
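One common way to handle the race-condition requirement above is optimistic concurrency: each record carries a version number, and an update is rejected if the record has changed since it was read, so the user can be asked to reload and merge rather than silently losing data. The following is only a sketch in Java (the project itself targets .NET) with hypothetical names, using a single in-memory row for illustration:

```java
import java.util.concurrent.atomic.AtomicReference;

// Optimistic-concurrency sketch: hypothetical names, single in-memory row.
public class OptimisticStore {
    record Row(int version, String name) {}

    private final AtomicReference<Row> row = new AtomicReference<>(new Row(0, "Anna"));

    public Row read() {
        return row.get();
    }

    // Returns false when another user updated the row in the meantime; the UI
    // can then reload and let the user merge, minimizing frustration.
    public boolean tryUpdate(int readVersion, String newName) {
        Row current = row.get();
        if (current.version() != readVersion) {
            return false; // stale read: reject the update
        }
        return row.compareAndSet(current, new Row(readVersion + 1, newName));
    }
}
```

A real database would enforce the same check with a version or timestamp column in the UPDATE's WHERE clause.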
TESTABILITY
To achieve a high level of maintainability, a high level of testability will most likely be of great
value on a cost/benefit basis. The system must therefore be developed with testability in
mind.
Testability must be high enough that unit testing a high percentage of the application
is at least beneficial, on a cost/benefit basis.
 It would be preferable if even the more expensive kinds of tests were beneficial, on a
cost/benefit basis.
o Especially integration tests, as the system will be composed of many
interacting modules, requiring integration tests to properly verify the
integration of modules and other parts.
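To illustrate the kind of cheap unit test we expect to be beneficial, here is a minimal Arrange-Act-Assert sketch (the structure discussed later in Chapter 6.2). It is framework-independent Java for illustration only; the real system would use a .NET test framework, and the unit under test here is hypothetical:

```java
// Minimal Arrange-Act-Assert sketch; hypothetical unit under test.
public class ContactFormatterTest {
    // Unit under test (hypothetical helper).
    static String fullName(String first, String last) {
        return first + " " + last;
    }

    static void fullName_joinsFirstAndLastName() {
        // Arrange: set up the inputs.
        String first = "Anna", last = "Berg";
        // Act: exercise the unit under test.
        String result = fullName(first, last);
        // Assert: verify the observable outcome.
        if (!result.equals("Anna Berg")) {
            throw new AssertionError(result);
        }
    }
}
```

Keeping tests this small and structured is what keeps their cost low enough for the cost/benefit requirement above to hold.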
AVAILABILITY
In regard to offline capabilities, we will not be implementing any initially. There will be
demand for them in the future though, so we want to consider the possible solutions we have
for implementing features related to offline capabilities early in the development process.
In regards to geographical availability it is a requirement that the application must be fully
functional at any location, given that the user has an active, working internet connection.
Whether this can be achieved using any computer or requires the installation of a smart client
or other software is not important.
USABILITY
Given the low priority of functional requirements there are no demands for usability in the
scope of this project.
PLATFORM
The initial system must be able to run on Windows platforms, but future requirements are
expected to demand support for multiple platforms.
3 ANALYSIS
This chapter will cover the initial analysis phase of the development process. The first section
covers the requirements analysis, where we analyze what the given requirements mean for the
system: what options we have for designing the system and what we have to consider when
designing it.
The second section covers the initial design considerations made, including which platform to
use for the client and which ORM tool to use for model and database mapping/generation.
This section will also take a closer look at the design principles used, especially the Prism
guidance and the SOLID principles.
3.1 REQUIREMENTS ANALYSIS
While the functional requirements of this application do have some impact on the system
architecture, the non-functional requirements are by far the most influential. The functional
requirements of the system are not very interesting in relation to the architecture of the
system, and at the time of finishing this report I noted that they are not even mentioned
in any of the discussions following this analysis. As such, I have made the quite untraditional
decision of excluding the functional requirements of the system from this analysis entirely,
except for one important decision made in the analysis of these requirements.
ANALYZING FUNCTIONAL REQUIREMENTS
The analysis of the functional requirements led to the conclusion that the central object of the
contact module and its related modules should be a contact and not a customer as we
originally decided. Having the central object defined as a contact gave a lot more flexibility in
regards to the types of people and relations we were able to include. This decision might seem
kind of obvious now that I have mentioned ‘contact’ previously, but let me explain our
situation in the early development stages.
My very first requirements were based on the initial conversations with my clients and
featured a list of customers, in a customer management module, with customer database
objects and customer relations. At this point, why would I even think about defining this entity
as a contact instead? It is clearly a customer object I need. I was lucky in the way that I was still
developing architecture at that point. Even though it took a while before I noticed that the
client had requirements for different types of contacts beyond just customers, I had not
developed that much functionality for the modules. The domain and database models were
easily changed, and the functionality of the modules was easily adapted to these new models;
but imagine if this requirement miscommunication had been discovered later, or if such a shift
in model requirements simply happened because of other unexpected changes.
The situation could easily escalate: Having a customer database object would have made it
acceptable for any developer to include customer-specific information in both the customer
models and customer database tables, and before long we could have spent several days
implementing exactly what the customer asked for: a list of customers and a dozen functions
to support, manage and store information about these customers. And then the customer asks
for the list of contacts which are not actually customers.
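The lesson above can be sketched as a domain model: the central entity is a Contact, and "being a customer" is just one role a contact can hold. This is an illustrative Java sketch with hypothetical names, not the system's actual model:

```java
import java.util.EnumSet;
import java.util.Set;

// Sketch: the entity is a Contact; "customer" is a role, not a class.
// Hypothetical names, not the system's actual model.
public class Contact {
    enum Role { CUSTOMER, SUPPLIER, PARTNER }

    private final String name;
    private final Set<Role> roles = EnumSet.noneOf(Role.class);

    public Contact(String name) {
        this.name = name;
    }

    public void addRole(Role role) {
        roles.add(role);
    }

    public boolean isCustomer() {
        return roles.contains(Role.CUSTOMER);
    }

    public String name() {
        return name;
    }
}
```

With this shape, adding "contacts which are not actually customers" costs nothing, whereas a dedicated Customer class would have forced a model and schema migration.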
3.1.1 NON-FUNCTIONAL REQUIREMENTS
As mentioned in the requirement specification, maintainability is the most general and
broad-reaching of the non-functional requirements, and most of the other non-functional
requirements are related to the maintainability of the system in some way.
I will therefore use the same approach for the requirement analysis and take a closer look at
the same four aspects of maintainability:
Adaptability
Extensibility
Testability
Scalability
Note that the terms adaptability and extensibility are sometimes used interchangeably in
system architecture, but since we are trying to design an extensible system which is also
adaptable we will want to distinguish between them as such:
Extensibility is a measure of how the functionality of the system can be extended through
pre-defined extension points, and relates to the ease with which we can add functionality by adding
modules. Extensibility is about changing functionality by adding, removing or exchanging
extensions, not by changing them internally.
Adaptability is a measure of how easily the system can undergo architectural changes like
having to move your solution from a local database to a cloud-based source of data. It is also a
measure of how easily the system can be changed on an implementation level, by changing
the functionality of existing classes to fit new requirements most effectively.
Having properly defined these terms, we will now take a look at the analysis of requirements.
3.1.2 EXTENSIBILITY
Extensibility is a system design principle where the implementation takes into consideration
the future growth of the application. It is also a measure of the ability to extend a system and
the level of effort required to implement the extension. Highly extensible systems are often
built as frameworks, with a core architecture providing a base application in which several
extensions or modules are executed, each supporting the application with its own specific features.
their specific features. Having an extensible system means the system is designed to include
mechanisms for expanding or enhancing it with new features without having to make any
changes to the existing system, or at least keeping the required changes to a minimum.
To achieve a high level of extensibility, I expect to be able to support most of my
extensibility needs by using an existing extensibility framework. Whether it would be
beneficial to create a new extensibility framework specifically for this application could be
researched, but given the developer's limited experience it will most likely be better to
use an existing one. The base application will then fulfill no
functional requirements on its own, but only serve as a platform responsible for managing
other extensions or modules which then provides the functionality of the application. The
main application will essentially be a shell with only the necessary infrastructure and extension
points needed to effectively support multiple modules; and most of these extensibility needs
will be provided by a commercial extensibility framework or library. Besides providing the
necessary infrastructure to support modules it is important that this infrastructure is designed
in a way that promotes or even enforces maintainable programming; it may be beneficial to
restrict developers in some areas, if that means we can ensure a higher level of maintainability
overall.
Creating this base application shell will allow us to easily change and extend the functionality
of the application, as new features can be added simply by adding new modules, and these
features can be managed on a per-user basis simply by controlling which modules are active for
a given user.
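The shell-and-modules idea can be sketched in a few lines. The following is a minimal, language-neutral illustration in Python (the actual application uses Silverlight/.NET with an extensibility framework such as MEF; all names here, such as `Shell`, `Module` and `activate_for_user`, are invented for this sketch and are not any framework's real API):

```python
from abc import ABC, abstractmethod

class Module(ABC):
    """Contract every extension must fulfil; the shell knows nothing else."""
    name: str

    @abstractmethod
    def initialize(self, shell: "Shell") -> None: ...

class Shell:
    """The base application: only infrastructure for registering and
    activating modules, no business functionality of its own."""
    def __init__(self) -> None:
        self._catalog: dict[str, Module] = {}

    def register(self, module: Module) -> None:
        self._catalog[module.name] = module

    def activate_for_user(self, allowed: set[str]) -> list[str]:
        """Activate only the modules a given user is entitled to."""
        active = []
        for name, module in self._catalog.items():
            if name in allowed:
                module.initialize(self)
                active.append(name)
        return active

class CustomerModule(Module):
    name = "customers"
    def initialize(self, shell: "Shell") -> None:
        pass  # would register views, services etc. with the shell

class AccountingModule(Module):
    name = "accounting"
    def initialize(self, shell: "Shell") -> None:
        pass

shell = Shell()
shell.register(CustomerModule())
shell.register(AccountingModule())
print(shell.activate_for_user({"customers"}))  # only this user's modules load
```

The point of the sketch is that per-user feature management reduces to managing which module names are in the `allowed` set; the shell itself never changes when a new module is added.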
In regards to design principles like the SOLID principles, extensibility or an extensibility
framework does not in itself necessarily promote any of them. You could argue that an
extensible system provides some amount of Separation of Concerns in that it separates
functionality into different modules, but this is a very high level of abstraction compared to
where we really want Separation of Concerns: at the implementation/code level. The
separation is there nonetheless, and will in most cases provide a small benefit towards the
overall maintainability of the application.
3.1.3 ADAPTABILITY
Adaptability covers the ease of changing or replacing existing parts of the system, from minor
changes to system-wide architectural changes such as moving the database layer from a
standard SQL database to a cloud-based solution. While extensibility covers changing or
extending a system within the limits of the interfaces made for extending it, adaptability
focuses on the ease of changing the function of specific classes while keeping the
maintenance resulting from these changes to a minimum. A system with a high level of
adaptability should therefore decrease the maintenance cost of changing the implementation
of existing classes. Being adaptable is all about being able to change the application to fit
our needs, and as such a high level of code reusability is definitely the most important factor
in keeping an application adaptable.
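The database example above can be made concrete. The sketch below (Python, with invented names such as `OrderStore`; the thesis's actual stack is C#/.NET) shows how keeping business logic behind an abstraction lets the storage technology be swapped without touching the rest of the system:

```python
from abc import ABC, abstractmethod

class OrderStore(ABC):
    """Abstraction the rest of the system depends on; swapping storage
    technology only means writing a new implementation of this interface."""
    @abstractmethod
    def save(self, order_id: int, payload: str) -> None: ...
    @abstractmethod
    def load(self, order_id: int) -> str: ...

class SqlOrderStore(OrderStore):
    def __init__(self) -> None:
        self._rows: dict[int, str] = {}   # stands in for a SQL table
    def save(self, order_id: int, payload: str) -> None:
        self._rows[order_id] = payload
    def load(self, order_id: int) -> str:
        return self._rows[order_id]

class CloudOrderStore(OrderStore):
    def __init__(self) -> None:
        self._blobs: dict[str, str] = {}  # stands in for cloud blob storage
    def save(self, order_id: int, payload: str) -> None:
        self._blobs[f"orders/{order_id}"] = payload
    def load(self, order_id: int) -> str:
        return self._blobs[f"orders/{order_id}"]

class OrderService:
    """Business logic is written once, against the abstraction."""
    def __init__(self, store: OrderStore) -> None:
        self._store = store
    def archive(self, order_id: int, payload: str) -> str:
        self._store.save(order_id, payload)
        return self._store.load(order_id)
```

`OrderService(SqlOrderStore())` and `OrderService(CloudOrderStore())` behave identically from the caller's point of view; the architectural change is confined to one new class.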
In regards to design principles, adaptability is definitely well covered by the SOLID principles.
All five principles provide substantial benefits to application adaptability, and they may
even be considered necessary for actually achieving a high level of adaptability - at least in this
kind of application. Chapter 3.2.3: Design Patterns and Principles contains a thorough
discussion of design principles such as the SOLID principles, in which many aspects of
adaptability and how to achieve it are analyzed in detail.
3.1.4 SCALABILITY
Having a scalable system means being able to increase the amount of data in the system
and/or the number of users using it by a large margin and still be able to use the system
without experiencing performance issues. When talking about a requirement for scalability the
scope or amount of scalability needs to be defined for it to actually mean something, and as
such a scalable system is a system that is scalable within certain limitations. To clarify using an
example: a system could be said to be scalable without performance issues up to 100,000 users
with an average of 0.5 gigabytes of data per user.
In conclusion: a system cannot simply be considered scalable, but scalable within limitations.
Our goal is therefore to achieve satisfactory scalability; that is, scalability within acceptable
limitations.
We can easily meet the scalability requirements of the initial solution for our client only, but
defining scalability limits for future requirements of this application is a difficult task at best.
To best address the scalability requirements both now and in the future we will take the same
approach as we did when analyzing maintainability and split these requirements up into
smaller, more specific terms1.
Load scalability: The ability for a distributed system to easily expand and contract its resource pool to accommodate heavier or lighter loads. Alternatively, the ease with which a system or component can be modified, added, or removed, to accommodate changing load.
Geographic scalability: The ability to maintain performance, usefulness, or usability regardless of expansion from concentration in a local area to a more distributed geographic pattern.
Administrative scalability: The ability for an increasing number of organizations to easily share a single distributed system.
Functional scalability: The ability to enhance the system by adding new functionality at minimal effort.
In our current situation the best way to ensure scalability is not necessarily to develop the
perfect and most efficient architecture right away. Things change, and to expect otherwise
would be naive. This is also reflected in the definition of load scalability above: load
scalability requirements are often best met not by optimizing the current system for an
extensive and perhaps unnecessary amount of scalability, but by ensuring that parts of the
system can be changed to adapt to new scalability requirements if the need for scalability
surpasses a certain point.
The same also goes for administrative scalability, in that we might consider implementing a
simple forms authentication to support login capabilities for the initial system. However, to
ensure administrative scalability for future requirements we will have to design the initial
authentication system to be easily exchanged with a more advanced authentication system in
the future; one able to manage a larger user base more effectively.
1 These four terms and their definitions were taken from the Wikipedia article on Scalability: http://en.wikipedia.org/wiki/Scalability
With this in mind and in regards to optimizing a system in general, we should also keep the
wise words of Donald Knuth in mind:
“We should forget about small efficiencies, say about 97% of the time: premature optimization
is the root of all evil.” – Donald Knuth2.
3.1.5 TESTABILITY
Testability is a measure of the effort required to test a system. The testability of a system may
vary depending on what part of the system is being tested; a typical example is that the user
interface often requires a lot more effort to test properly, resulting in a low testability for
that particular part of the system.
Testability is strongly influenced by the structure of the code itself, which in turn is influenced
by the design patterns and implementation techniques used in developing the application. Like
most of the other maintainability-related requirements, testability also benefits greatly from a
proper implementation of the SOLID principles.
Besides tailoring the design of your application towards being testable, testability can also be
highly influenced by the quality of the testing tools and testing environments used. Making
sure to test and verify the quality and quirks of the available testing tools will definitely go a
long way in ensuring testability.
2 Knuth, Donald. Structured Programming with go to Statements. ACM Computing Surveys.
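The link between design and testability can be shown with a small sketch. The example below is in Python with invented names (`ExchangeRateClient`, `InvoiceConverter`); the idea, constructor-injecting a dependency so a test can substitute a fake, is the same technique used later with Dependency Injection in the .NET application:

```python
class ExchangeRateClient:
    """A production implementation would call a remote web service; that
    call is exactly what makes the class expensive to test."""
    def rate(self, currency: str) -> float:
        raise RuntimeError("network access not available in tests")

class FakeExchangeRateClient(ExchangeRateClient):
    """Test double: fixed rates, no network, instant and deterministic."""
    def __init__(self, rates: dict[str, float]) -> None:
        self._rates = rates
    def rate(self, currency: str) -> float:
        return self._rates[currency]

class InvoiceConverter:
    """Receives its dependency through the constructor, so a test can
    inject the fake where production code injects the real client."""
    def __init__(self, client: ExchangeRateClient) -> None:
        self._client = client
    def to_dkk(self, amount: float, currency: str) -> float:
        return round(amount * self._client.rate(currency), 2)

# A unit test needs no server at all:
converter = InvoiceConverter(FakeExchangeRateClient({"EUR": 7.45}))
assert converter.to_dkk(100.0, "EUR") == 745.0
```

Had `InvoiceConverter` constructed its client internally, the test would require a live service; the single design decision of injecting the dependency is what raises testability.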
3.1.6 CONCLUSION
• Code reusability is a top priority for achieving most non-functional requirements.
o We should consider following the SOLID principles.
o We should consider using Dependency Injection.
• Testability improves maintainability and vice versa.
o We should definitely consider coding towards a high level of testability. Not
simply because “Testing is always good, isn’t it?”, but because it improves
maintainability, which is very important in this project.
o We should further research the subjects of testability and maintainability
and figure out how to improve these measures, in an effort to let them
further improve each other.
o We should test and verify the quality and quirks of available testing tools and
ensure that they are sufficient for our needs.
• Extensibility is a top priority.
o We should consider using or developing an extensibility framework.
• Scalability requirements are low initially, but will increase in the future.
o Load scalability will initially depend on the client and database
implementation, and requirements will probably be easy to meet.
o Load scalability in future requirements will depend on adaptability.
o Functional and geographic scalability should be easily obtainable and are not
a concern initially.
o Administrative scalability will depend on the chosen login technology.
• Availability requirements point towards an online client-server solution.
o We need to consider whether to create a web application or a smart client.
o Offline capabilities are not a requirement initially, but adaptability towards
offline capabilities should be considered for future requirements.
• Usability requirements are still not specified.
3.2 INITIAL DESIGN CONSIDERATIONS
This chapter will feature explanations of the first design considerations and decisions we
made. These include decisions on what platform to use for the client and which ORM tool to
use for model and database mapping/generation. This chapter will also take a closer look at
the design principles used, especially the Prism guidance and the SOLID principles.
3.2.1 CHOOSING A CLIENT PLATFORM
When it came to choosing a platform and development language, Microsoft’s Silverlight
became the obvious choice very early in the analysis process, and as such I did not thoroughly
analyze any of the other possible choices. Part of the reason Silverlight was such an easy
pick is that I have a lot of experience in .NET-based programming, especially compared to
other languages. This naturally favors Silverlight over most other choices, and I did not
succeed in finding any alternatives that were better or at least more interesting than
Silverlight, which they would have to be to change my mind. Having said that, I had better
make a good case for choosing Silverlight that easily, so let’s get to it.
WHY WAS SILVERLIGHT CHOSEN?
In short, Silverlight was chosen for a number of different reasons related to:
• Silverlight’s recent jump in maturity.
o .NET 4.0 and Silverlight 4.0 were released in April 2010, with great
improvements to Silverlight in general.
o Microsoft’s vision and commitment to Silverlight.
• The Managed Extensibility Framework (MEF) being embraced by Microsoft and
released as a part of the .NET 4.0 Framework.
• Prism3 4.0 being released with up-to-date guidance targeting both Silverlight 4.0 and
MEF.
• Silverlight having a great solution to the often tough question of whether to deploy a
smart client or a web application.
3 Prism was previously known as “Composite Application Guidance”, but this was changed to Prism (version 4.0) around the release of the 4.0 versions of Silverlight and the .NET Framework. For more information about Prism visit the open source community site CodePlex at http://compositewpf.codeplex.com/.
While the .NET Framework is not entirely new, it is still not as mature as some of the
alternatives, and the same goes for both Microsoft’s Silverlight and the Entity Framework,
which are both quite new.
A couple of months before the initial design phase of this project Microsoft released both
Silverlight 4.0 and the .NET Framework 4.0, which proved to be really solid updates for the
Silverlight platform as a whole. Not only did the Silverlight 4.0 release improve Silverlight
immensely, but the .NET Framework 4.0 release also brought some very welcome updates to
WCF services and the Entity Framework. The details of the added or improved features in these
releases are not that important for this subject, except for the fact that they were a solid boost
to the maturity of Silverlight in general and that Microsoft chose to add the Managed
Extensibility Framework to the .NET Framework 4.0 release. This naturally added to the
value of Silverlight as the platform of choice, especially since the Managed Extensibility
Framework (MEF) would fit very nicely into the modular, extensible architecture we wanted to
create. We could of course still have used MEF even if it had not gotten this kind of support
from Microsoft, or just as well have developed our own extensibility framework, but with
Microsoft having chosen to support this framework we felt pretty confident that we would not
regret letting MEF contribute such a major part of the extensibility in our application.
Besides the timely updates of both Silverlight and the .NET Framework, Microsoft pushed the
choice even further in Silverlight’s favor with an update to one of their Patterns and Practices
offerings: Prism. We will take a closer look at how the design principles of Prism are used in the
design of this application in chapter 3.2.3: Design Patterns and Principles, but in short Prism is
a collection of design patterns, reference implementations and reusable libraries designed to
help you build flexible and easy-to-maintain Windows Presentation Foundation (WPF) and
Silverlight applications. The update of Prism to version 4.0 was important because it provided
up-to-date guidance targeting the new Microsoft .NET 4.0 Framework and Silverlight 4.0, and
also included guidance for using the newly added MEF; guidance which was invaluable in
designing the architecture of the system.
And then comes the question of whether to create a web application or a smart client. In
general, web applications are the simplest and easiest to deploy, as deploying the website
and web services to the server is often all that is required. Deploying a smart client is a more
involved process, where a deployment system must push the new application or update to
every client machine along with any additional system updates required by the new version.
This can be quite a challenge if your company has a variety of clients with different versions of
underlying system software and perhaps even different operating systems, so having a system
to manage this process is clearly a necessity in larger enterprise solutions; it also means you
need experience and knowledge in using that deployment system. The smart client does draw
the longest straw when it comes to graphics, presentation logic and options for interactivity,
though. Most modern smart client platforms support a much more user-friendly and
maintainable way of developing the application compared to that of web application
platforms. It is possible to create highly interactive environments in a web application, but it
requires a lot more experience in that area. Making sure your user interface works perfectly in
all browser environments can also be a difficult and time-consuming task even for experienced
developers, especially compared to the alternative: the smart client.
So, why is Silverlight a great solution to the question of whether to deploy a web application
or a smart client?
A special feature of Silverlight is that it enables us to develop an application with the benefits
of both a web application and a smart client. In general use, Silverlight is essentially a smart
client running inside a browser. It even makes it possible to very effortlessly convert a
standard Silverlight web application into a Silverlight desktop application: an actual smart
client installed on the machine, which also supports elevated user rights and better access to
the file system compared to a web application. This desktop application can still utilize the
benefits of a web application when it comes to easy deployment, and can automatically update
itself when a newer version of the software is available.
Last but definitely not least, a reason for picking Silverlight has also been Microsoft’s
extensive marketing campaign for it. Microsoft has sponsored a lot of informative guides,
interviews, how-to videos and free-to-download sample implementations showing how to do
just about anything using Silverlight and many other related technologies. Many of these
guides feature “Hello world” material for getting started with different technologies. Some of
the guides are more advanced, and some present reference implementations that show how to
apply these technologies while using certain patterns to keep the application extensible and as
maintainable as possible. A few of these examples also feature real-life development projects
in Silverlight, like one featuring the development of a Silverlight application for eBay, which
explained in considerable detail both what was developed and how it was done. All this has
been a great help in learning to get the most out of Silverlight, and seeing Silverlight work for a
company as big and international as eBay also helped raise my impression of Silverlight’s
maturity to a level where I felt comfortable developing this application in Silverlight.
Much of this campaign’s effectiveness is of course also a result of my prior experience in
Silverlight and .NET; I follow the news and channels of these technologies, so I usually know
about the newest tools and releases, sometimes while they are still in their alpha or beta
phases - or rather: I know about them if other people in the community find them interesting.
More importantly though, being part of this community keeps me up to date on interesting
articles and informative guides through blogs and tweets, which means that I not only know
about the newest Silverlight and .NET features, but can also easily find the best guidance
available, and I know which forums or individual experts to ask for help if I come across any
complicated problems.
CONCLUSION
Silverlight, and some of the most important technologies often used alongside it, has matured
a lot with the latest 4.0 releases. I think Silverlight is a great choice, and I have no doubt in its
ability to successfully serve as a platform for this application, especially when it comes to
keeping a satisfying level of maintainability. Not only did I see that it was possible to create
this application in Silverlight; given my prior experience in .NET along with the easily available
guidance targeting Silverlight 4.0, I have all the information I need to be confident that I am
able to develop and implement the application with a high level of modularity, extensibility
and maintainability. I have found no reason not to favor Silverlight.
3.2.2 CHOOSING AN ORM TOOLSET
The general purpose of an Object-Relational Mapping (ORM) tool is to simplify and assist in
the development and maintenance of the data access layer. Specifically, an ORM tool contains
a set of techniques for converting data between the incompatible type systems of
object-oriented programming languages and relational databases, and the tools most often
streamline the application of these techniques to a point where you might even forget this
incompatibility exists. Needless to say, it is very common to use an ORM tool when developing
the interface between a client application and a relational database.
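The plumbing an ORM automates can be illustrated with a toy mapper. The sketch below (Python with SQLite, invented names such as `fetch_all`; real tools like NHibernate or the Entity Framework do this far more robustly) maps rows of a table to instances of a class by matching column names to fields:

```python
import sqlite3
from dataclasses import dataclass, fields

@dataclass
class Customer:
    id: int
    name: str

def fetch_all(conn: sqlite3.Connection, model):
    """Map every row of the model's table to an instance of the model class.
    This hand-written bridge between relational rows and objects is exactly
    what ORM tools generate and optimize for you."""
    cols = [f.name for f in fields(model)]
    table = model.__name__.lower() + "s"       # naive table-name convention
    rows = conn.execute(f"SELECT {', '.join(cols)} FROM {table}").fetchall()
    return [model(**dict(zip(cols, row))) for row in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Acme A/S')")
print(fetch_all(conn, Customer))  # [Customer(id=1, name='Acme A/S')]
```

Everything this sketch does by naming convention, a real ORM does through explicit, configurable mappings, change tracking and query translation.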
There were few candidates for choosing an ORM toolset, with NHibernate and the Entity
Framework (EF) being the most interesting ones and LLBLGen as the closest runner-up.
NHibernate is definitely the more mature of the two. It has been used and developed for over
7 years and has a great community to support and further develop it. Entity Framework 4.0 did
come with a lot of great updates though, and while NHibernate would probably have been just
as good a choice we chose to use the newest version of EF: Entity Framework 4.0. This
decision was based on the following facts:
• The developer has several years of experience with EF, and no experience with
NHibernate. EF was therefore the option of choice to begin with, and NHibernate
failed to provide enough benefits to warrant a change in this decision - or at least the
developer failed to find these benefits.
• We have already decided to use Microsoft technology for other parts of this project,
and Microsoft technologies are well known for their ability to integrate with one
another.
• The developer has experience in using the Entity Framework not just for its ORM
abilities, but for its object-generating capabilities through T4 templates.
To clarify the last item of the list, these object-generating capabilities are used to assist in the
development of all domain models. If we do choose to utilize EF to generate both our server-
side models and our domain models, we may have to use two different models for this in the
future. However, in the first development phases we are able to generate both these models
from the same EF model, which is really helpful in speeding up the initial development and
prototyping phases. From experience we know that the first development phases will have the
domain models in a one-to-one relationship with the tables of the database, or at least with a
view of the database. Having a tool to generate both the domain models and the tables of the
database from the same reference model is great when you want a fast and effective
development process. At some point you will most likely want to make further adjustments to
your database separately, but until then this one-to-one relationship can be really beneficial.
On a side note, using EF to create domain models does not create any unnecessary
dependencies towards EF or Microsoft technologies, as the generated models can be changed
through T4 templates; even the default code generation templates provided do not generate
dependencies towards EF.
3.2.3 DESIGN PATTERNS AND PRINCIPLES
In this chapter we will take a look at the design patterns and principles used, specifically the
guidance of Prism and the SOLID design principles.
Prism is a collection of guidance for designing and building rich, flexible and easy-to-maintain
applications using either Windows Presentation Foundation (WPF), Silverlight or Windows
Phone 7. This guidance assists us in incorporating important architectural design principles
such as loose coupling and Separation of Concerns, which go a long way in developing what is
known as composite applications. The most defining factor of composite applications is that
they are built using loosely coupled components that can evolve independently, yet can be
easily and seamlessly integrated into the application.
The guidance of Prism is a perfect fit for this kind of system, and the section “Intended
Audience” of the Prism community page on Codeplex.com4 also seems to hit our requirements
for this application spot on:
“Prism is intended for software developers building WPF or Silverlight applications that
typically feature multiple screens, rich user interaction and data visualization, and that embody
significant presentation and business logic. These applications typically interact with a number
of back-end systems and services and, using a layered architecture, may be physically deployed
across multiple tiers. It is expected that the application will evolve significantly over its lifetime
in response to new requirements and business opportunities. In short, these applications are
"built to last" and "built for change." Applications that do not demand these characteristics
may not benefit from using Prism.”
When talking about design patterns and principles, it is important to note that Prism is neither.
Prism does not guide application design by providing a design pattern or principles to follow in
the same way the MVVM design pattern or the SOLID principles do. Prism embodies many
of these known principles, especially the SOLID principles and the MVVM pattern, but does not
try to explain the principles or design patterns themselves.
Prism instead attempts to guide developers by supplying a handful of well-implemented
code examples showing how experts have developed software while following these
principles. I think this is a really great type of guidance, especially for novice developers like
myself, as we are able to see exactly how an expert would follow these principles.
Despite believing that I already knew quite a bit about how to program following the SOLID
principles, these example implementations quickly raised my understanding of the SOLID
principles to a much higher level, because I was able to see not just how to possibly implement
them, but how to implement them really well.
4 http://compositewpf.codeplex.com/
Prism did not only teach me how to code properly, though. The Prism libraries provide a
collection of basic classes needed to support this kind of application, and classes such as the
EventAggregator, the RegionManager and the default implementations of the Unity and MEF
bootstrappers have been invaluable in quickly setting up a working environment. Having the
source code and descriptions of all these helper classes easily available also makes it easy to
extend their functionality should we ever need to. This gives us some confidence
maintenance-wise, as we know that even these parts of the application, which we have not
developed ourselves, still have a fair level of adaptability.
In conclusion, Prism provides us with some great guidance and a collection of basic tools for
building composite applications. The purpose of Prism is to teach developers how to
properly create software following important design principles like SOLID, and its unique
approach is to provide example implementations of high quality that allow developers to learn
by example.
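To make the role of a class like the EventAggregator concrete, the following is a minimal publish/subscribe sketch in Python. It is not Prism's actual API (Prism's EventAggregator is a C# class with typed events); the names `subscribe` and `publish` are invented for this illustration:

```python
from collections import defaultdict
from typing import Callable

class EventAggregator:
    """Minimal publish/subscribe hub: modules communicate through named
    events, never through direct references to each other."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, event: str, handler: Callable) -> None:
        self._subscribers[event].append(handler)

    def publish(self, event: str, payload) -> int:
        """Notify all subscribers; returns how many handlers ran."""
        for handler in self._subscribers[event]:
            handler(payload)
        return len(self._subscribers[event])

# The customer module publishes; the accounting module reacts, yet neither
# module holds a reference to the other.
bus = EventAggregator()
received = []
bus.subscribe("CustomerSelected", received.append)
bus.publish("CustomerSelected", {"id": 42})
assert received == [{"id": 42}]
```

This decoupling is what lets modules in a composite application evolve independently: adding a new subscriber never requires changing the publisher.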
SOLID PRINCIPLES
The SOLID principles are a collection of 5 different principles that when applied together will
make it much more likely that a developer will create a system that is easy to maintain and
extend over time.
The SOLID principles are listed below. I collected a few different definitions or explanations of
some of the principles.
• Single Responsibility principle.
o An object should have only a single responsibility.
o There should never be more than one reason for an object to change.
• Open/Closed principle.
o Software entities should be open for extension, but closed for modification.
• Liskov Substitution principle.
o Objects in a program should be replaceable with instances of their subtypes
without altering the correctness of that program.
o Or worded differently: functions that use references to base classes must be
able to use objects of derived classes without knowing it.
• Interface Segregation principle.
o Many client-specific interfaces are better than one general-purpose
interface.
o Clients should not be forced to depend upon interfaces that they do not use.
• Dependency Inversion principle.
o One should depend upon abstractions. One should not depend upon
concretions.
o High-level modules should not depend upon low-level modules. Both should
depend upon abstractions.
o Abstractions should not depend upon details. Details should depend upon
abstractions.
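As a small illustration of the last principle, the sketch below shows Dependency Inversion in Python (the class names `Alerter` and `MessageSink` are invented; in the thesis's C# code the same shape appears wherever a class receives its dependencies as interfaces):

```python
from abc import ABC, abstractmethod

class MessageSink(ABC):
    """The abstraction both levels depend upon."""
    @abstractmethod
    def send(self, text: str) -> None: ...

class ConsoleSink(MessageSink):          # low-level detail
    def __init__(self) -> None:
        self.lines: list[str] = []
    def send(self, text: str) -> None:
        self.lines.append(text)

class Alerter:                           # high-level policy
    """Depends only on MessageSink, never on a concrete sink: the high-level
    module and the detail both point at the abstraction. The single, narrow
    dependency also keeps the class to one responsibility."""
    def __init__(self, sink: MessageSink) -> None:
        self._sink = sink
    def alert(self, problem: str) -> None:
        self._sink.send(f"ALERT: {problem}")

sink = ConsoleSink()
Alerter(sink).alert("low inventory")
assert sink.lines == ["ALERT: low inventory"]
```

Replacing `ConsoleSink` with, say, an e-mail sink requires no change to `Alerter`, which is the Open/Closed principle falling out of the same structure.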
All these principles are very important for the maintainability of this project, and they will be
mentioned several times in the following design discussions. It is difficult to give examples of
how we follow all of these principles throughout the application, but Chapter 4.2: The
Development Process features an example of how to implement Dependency Injection in
which all of these principles are followed very well. Once you are accustomed to these principles
you barely notice that they are actually affecting every class you create. The MVVM design
pattern alone embodies every SOLID principle, and even does so in several steps; for instance,
the Single Responsibility Principle is applied to separate UI logic, presentation logic and
domain logic.
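That separation of responsibilities in MVVM can be sketched very briefly. The following Python sketch (invented names; in Silverlight the view would be XAML bound to the view model via `INotifyPropertyChanged`) shows a model holding only domain data and a view model holding only presentation logic:

```python
class Customer:                       # Model: pure domain data, no UI concerns
    def __init__(self, name: str, balance: float) -> None:
        self.name = name
        self.balance = balance

class CustomerViewModel:
    """Presentation logic only: turns domain state into display-ready values
    and exposes change notification the view can bind to."""
    def __init__(self, model: Customer) -> None:
        self._model = model
        self.listeners: list = []

    @property
    def header(self) -> str:
        return f"{self._model.name} ({self._model.balance:.2f} DKK)"

    def rename(self, name: str) -> None:
        self._model.name = name
        for listener in self.listeners:   # stands in for INotifyPropertyChanged
            listener("header")

vm = CustomerViewModel(Customer("Acme A/S", 1250.0))
assert vm.header == "Acme A/S (1250.00 DKK)"
```

The model can be tested and evolved without any UI, and the view model can be tested without rendering a single control; each layer has exactly one reason to change.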
3.2.4 CONCLUSION
• The application will be developed following the SOLID principles. Vigorously.
• The Prism collection of guidance has provided me with a lot of examples of how to
properly implement composite applications in Silverlight, so I feel pretty confident in
this area even at this early stage of development. The basic helper classes of Prism
have also been tested briefly in several of the provided example implementations,
which quickly gave me a very good idea of how I would probably want to implement
the application, at least at a high level.
• Silverlight was chosen as the development platform.
o The developer has much more experience in Silverlight than any other
platform.
o Silverlight is a great solution to the question of whether to deploy a web
application or a smart client.
o Silverlight-targeted guidance such as Prism made it easy for the developer to
get a very good understanding of the current capabilities of Silverlight,
convincing the developer that Silverlight was a great choice for this
application.
• Entity Framework 4.0 was chosen as the ORM toolset.
o Having already decided upon Silverlight, there were only a few reasonable
alternatives.
o The developer has much more experience in Entity Framework than any
other ORM toolset.
4 DESIGNING THE SYSTEM ARCHITECTURE
This section will cover the design of the client application and explain the choices we made
when designing the system; what we ended up choosing and why, along with comparisons to
any viable alternatives. In the previous chapter we chose a selection of design patterns and
principles to follow, and in this chapter we will take a closer look at these different patterns
and techniques and explain how they are implemented or embodied in our system and how
they impact our application as a whole. Another purpose of this chapter is to serve as the
reasoning behind the design of the system, and I will at times have to include examples of
processes or implementation details to properly explain the benefits gained by using these
technologies or techniques.
The first section of this chapter is devoted to explaining the initial system architecture; the
architecture used during prototype development. The next chapters feature Dependency
Injection and a brief look at the Managed Extensibility Framework, as these subjects have had
a major impact on the design of the application, resulting in some really great benefits
towards code reusability and maintainability in general. Chapter 4.4: Silverlight and MVVM
will discuss the design for interaction between the UI layer and the domain model layer and
explain how we achieve a good level of separation between these two layers. After that comes
a chapter covering application configurability, in which I will present a detailed example of
the development process that took place when I tried to refactor part of the data model
towards being more configurable. The last chapter discusses the choices made in regards to
web services, and specifically why we chose not to use Microsoft RIA Services even though
they were promoted extensively at the time of this development.
4.1 INITIAL SYSTEM ARCHITECTURE
This section will cover the system architecture used in the initial development phases of this
system. The layout of projects and dependencies was changed several times, as several
different prototypes were made with slightly different architectural requirements, but this
model was the basis of our system architecture during prototype development. The system
architecture is shown in Figure 4-1.
[Figure 4-1 (diagram): the Main Application with a Composition Root (using MEF/Unity), Infrastructure, Services, a Main View and a ModuleCatalog; several Modules; a Web Server with service Contracts; and the Entity Model maintained in both a Silverlight and a .NET 4.0 version.]
Figure 4-1: Initial system architecture. Arrows indicate dependencies.
This initial design was influenced by the following facts:
• We wanted to use a Dependency Injection framework, so our main application would
have a composition root.
• Using Silverlight, the domain model has to be maintained in both a .NET 4.0 and a
Silverlight 4.0 version.
• Module dependencies are only allowed for:
o Necessary/helpful infrastructure.
o The Silverlight model.
o Service contracts.
• In the prototype development phase, we did not want to bother with the database
implementation.
The reason I did not include a database or ORM tool in this model was that there was really no
reason to spend time updating and populating the database in these early stages of
development. Instead I used dummy web services, which just constructed a graph of objects in
code instead of retrieving this data from the server. This naturally saved me a lot of time and
allowed me to focus more on the prototype being developed. The continuous changes to
models happening as a result of a very agile development phase were also easily handled in
this environment, as new model properties required no database adjustment or re-population;
the only thing required was to add a static or random value for the field in the web service
code responsible for generating “fake” object graphs.
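To make this concrete, a dummy web service of the kind described might look like the following sketch. The service and model names (IContactService, DummyContactService, the Contact properties) are my own illustrations, not taken from the actual project:

```csharp
using System.Collections.ObjectModel;

// Hypothetical dummy service: instead of querying a database it builds
// a small object graph in code, so a model change only requires editing
// these few lines rather than adjusting and re-populating a database.
public interface IContactService
{
    ObservableCollection<Contact> GetContacts();
}

public class Contact
{
    public string Name { get; set; }
    public string Company { get; set; }
}

public class DummyContactService : IContactService
{
    public ObservableCollection<Contact> GetContacts()
    {
        return new ObservableCollection<Contact>
        {
            new Contact { Name = "John Anderson", Company = "Acme A/S" },
            new Contact { Name = "Jane Smith", Company = "Acme A/S" }
        };
    }
}
```

Adding a new model property then only requires adding one more assignment in the object initializers above.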
4.2 THE DEVELOPMENT PROCESS
This chapter will discuss the process of developing the application in relation to known
development techniques.
The process was definitely not of the waterfall type, and I doubt that model would be any
good for this kind of project. The waterfall model requires the product to be very precisely
defined, which often results in a very thorough requirement specification. This project was the
exact opposite: it had a very loose set of requirements to begin with and a very limited number
of design goals to achieve in the initial development phases. The development process of this
project was decidedly agile. A lot of client interaction was done over several iterations, and
both the system and the requirements were developed further with each development cycle
or iteration.
PROTOTYPE DEVELOPMENT
Looking at the overall development process, the initial development stages featured a lot
more prototyping and a lot less client interaction, except for the first two meetings in which
the base requirements of the project were defined. These initial development phases were
primarily used for research and knowledge gathering, as I definitely did not have the
experience to just jump right in and start developing excellent, maintainable code. On one
hand the new technologies had to be tested and proved sufficient for the project, and several
Side 35 af 71
prototypes were delivered as a result of me testing solutions with Unity, with MEF, and even
with both Unity and MEF at the same time. I always develop on a private Team Foundation
Server (TFS) which I share with a friend of mine, so when the initial infrastructure began to
take shape I started branching my projects in different directions to be able to reuse both the
infrastructure and possibly some of these test implementations more easily. This made it
effortless to jump back and forth between more or less different implementations of the
system while still being able to maintain and develop them all quite easily.
FEATURE DRIVEN DEVELOPMENT
At some point the system matured enough that the focus turned more towards the functional
requirements, and then the development process began including more client interaction. At
this point the process also switched to what most would refer to as Feature or Behavior Driven
Development, as the development process would always revolve around the development of a
particular feature or two. After prioritizing a list of features with the client the development
process would go something like this:
I would pick the first feature on the list and make sure I understood how to properly
implement it. If I was even a little unsure about the exact requirements or purpose of the
feature, I would ask the client to help me clarify it. When I was confident I would implement
the feature as the client actually wanted it, I went on to implement the feature from top to
bottom. After implementing it and briefly testing its functionality I would contact the client to
verify that the feature was as expected, and if the feature was considered done I would most
likely move on to unit testing it right away.
Developing a system feature by feature felt very natural, and the process was especially
effective when it was possible to easily communicate with the client. I did experience this one
day when I was literally able to poke my client each time I had developed a feature and
wanted her to verify it. This ease of communication is of course not necessary for this kind of
development. In an office environment it would probably be beneficial to have longer
meetings and define feature requirements more precisely instead of having to interrupt each
other several times a day to talk about features for less than a minute each time.
SKETCHFLOW
Despite being a fan of this feature or behavior driven development, I did try a couple of other
approaches just to experience them. One was to develop the interface layout using mockup
software like Microsoft's Sketchflow. The initial experience was quite good and it might be
beneficial in some cases, but using it does come with some overhead. Software like this could
be quite beneficial if the mockups created featured several pages with a lot of different
features, but for the few features we decided to try to mock it did not provide much benefit;
the overhead of using it was greater than the benefit of having pretty drawings of those
features. It is remarkably fast to mock or sketch anything in Sketchflow though, so if the
amount of information mocked through Sketchflow is sufficiently large, I definitely think using
it can be beneficial. It is worth noting that Sketchflow also acts as a link not only between
clients and a team of developers and designers, but between the developers and designers
too, as it has several features supporting the development of business logic and UI logic in
parallel.
TEST DRIVEN DEVELOPMENT
I also tried the approach of test driven development (TDD), and that did not go too well. I
believe this was mainly a result of me trying to do TDD while not being that experienced at
testing. It was surely also a result of me not being experienced in testing Silverlight, as testing
Silverlight is a bit of a mess in its current state. More and more testing tools and features are
becoming Silverlight-ready, but testing Silverlight still requires more effort than testing in the
standard .NET framework. A common workaround is to simply link classes between Silverlight
and .NET projects, but even this adds extra effort to application maintenance.
4.3 DEPENDENCY INJECTION
Dependency injection (DI) is a technique used to decouple highly dependent components by
separating behavior from dependency resolution. The technique is by no means new; the term
was coined by Martin Fowler and colleagues, and the underlying technique had been in use
long before it had a name, but it is still as useful as ever. The technique may be quite
commonly known, but I am still going to explain the basics of it in this chapter, simply because
Dependency Injection has such a great impact on the architecture and maintainability of this
system.
Dependency injection in larger applications is most often accompanied by a dependency
injection framework built to properly handle the dependency resolution of the application,
but it can also be applied to small sets of inter-dependent components without a framework
to support it, in which case it is most commonly referred to as "Poor Man's DI". I will discuss
these dependency injection frameworks and containers more thoroughly in Chapter 4.3.2: DI
Containers, DI Frameworks and MEF, where I will also explain how MEF relates to and
distinguishes itself from DI containers.
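As a minimal sketch of what "Poor Man's DI" looks like (reusing the example classes from section 4.3.1), the application entry point simply wires the dependencies by hand:

```csharp
// "Poor Man's DI": no framework involved. The entry point acts as a
// hand-rolled composition root and resolves all dependencies manually.
public static class Program
{
    public static void Main()
    {
        // The dependency is created here, not inside the view model.
        IContactRepository repository = new ContactRepository();
        var viewModel = new ContactListViewModel(repository);
        viewModel.LoadContacts();
    }
}
```

This works well for a handful of classes, but as the dependency graph grows, a dedicated framework takes over exactly this wiring responsibility.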
To best explain what DI is, I will present a simplified example of how DI can be implemented
using Constructor Injection in section 4.3.1: Implementing Dependency Injection. The example
shows how to refactor a class towards using Constructor Injection, explains the effect these
changes have on the class, and shows how this benefits application development and
maintenance.
4.3.1 IMPLEMENTING DEPENDENCY INJECTION
DI can be implemented in several different ways, but the most commonly used is Constructor
Injection, which happens to be the best solution for most situations while also being one of
the simplest to implement properly.
What I am trying to achieve in this example is to properly implement both separation of
concerns and inversion of control, which I will do by separating the behavior of a class from
the responsibility of dependency resolution. The example might seem overly simple, but it
shows exactly what DI is about and makes it easier to explain the subject further.
The ContactListViewModel class in Figure 4-2 depends on the ContactRepository class, which
it uses to retrieve contact information from the database. To get an instance of the
ContactRepository class it simply instantiates one itself, and everything works just fine.
public class ContactListViewModel
{
    private IContactRepository _contactRepository;

    public ContactListViewModel()
    {
        _contactRepository = new ContactRepository();
    }

    public ObservableCollection<ContactViewModel> Contacts { get; set; }

    public void LoadContacts()
    {
        Contacts = _contactRepository.GetContacts();
    }
}
Figure 4-2: The ContactListViewModel class before refactoring towards Dependency Injection.
So why do we not like this implementation?
There are several problems here, all of which boil down to the fact that we have introduced a
direct dependency on the ContactRepository class. For starters, the ContactListViewModel
class is responsible for resolving its own dependencies, which is not always a cardinal sin but
still raises a flag with regard to the Single Responsibility Principle. In addition, it resolves these
dependencies in a very tightly coupled way, giving the class what is referred to as Direct
Control over its dependencies. Direct control might not seem like a bad concept initially; it is
pretty straightforward and the name does sound like something you would want for your
application, but in the world of DI, Direct Control is considered an anti-pattern; it might even
be the anti-pattern. DI is all about letting someone else resolve our dependencies for us, so we
want to get rid of this implementation of direct control; specifically we want to get rid of the
new keyword, or at least move this responsibility to another class.
On the bright side we did code towards an interface, IContactRepository, which is good since it
allows us to exchange the ContactRepository class with another class implementing the same
interface without changing anything in the ContactListViewModel class except the
"new ContactRepository()" expression. This implementation has provided some degree of
maintainability compared to not coding towards an interface, but if we want to change the
IContactRepository instance used, we are still required to change the implementation of the
ContactListViewModel and recompile our code. We can do better than that.
So how can we improve this?
Figure 4-3 below shows the ContactListViewModel class refactored to implement constructor
injection. Note that we have only refactored the constructor of the class, nothing else. The
dependency is now injected through the constructor, effectively leaving the class without any
responsibility for resolving the dependency or any knowledge of the dependency's actual
implementation; all the class is responsible for is its own behavior and, of course, knowing
how to properly consume the dependency through the supplied IContactRepository interface,
e.g. via the GetContacts() method.
public class ContactListViewModel
{
    private IContactRepository _contactRepository;

    public ContactListViewModel(IContactRepository contactRepository)
    {
        _contactRepository = contactRepository;
    }

    public ObservableCollection<ContactViewModel> Contacts { get; set; }

    public void LoadContacts()
    {
        Contacts = _contactRepository.GetContacts();
    }
}
Figure 4-3: The ContactListViewModel class after refactoring towards Dependency Injection.
Now that the class is not responsible for resolving its own dependencies, we are able to
resolve them for it. The code for this class is now much more configurable and reusable, which
provides many different benefits to the application. Code reusability is especially important
when building a testable application, as a configurable class like this is very easy to use in a
test environment. If the ContactListViewModel were to be tested, we would now be able to
instantiate it while supplying a dummy repository implementation through its constructor
instead of the original repository. This would remove the external dependencies, and the test
environment would be much better controlled.
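As an illustration of such a test (using MSTest-style attributes; the stub class and test names are my own invention), the view model can be composed with a stub repository so the test never touches a web service or database:

```csharp
using System.Collections.ObjectModel;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Stub repository with just enough behavior to drive the test.
public class StubContactRepository : IContactRepository
{
    public ObservableCollection<ContactViewModel> GetContacts()
    {
        return new ObservableCollection<ContactViewModel>
        {
            new ContactViewModel()
        };
    }
}

[TestClass]
public class ContactListViewModelTests
{
    [TestMethod]
    public void LoadContacts_FillsContactsFromRepository()
    {
        // No database or web service involved; the stub controls the data.
        var viewModel = new ContactListViewModel(new StubContactRepository());
        viewModel.LoadContacts();
        Assert.AreEqual(1, viewModel.Contacts.Count);
    }
}
```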
As mentioned, this refactoring seems very simple, and it surely is - especially if you forget to
consider the fact that we now have to create other classes to manage the dependencies for
us. However, this seemingly tiny change has a very positive effect on the maintainability of the
system, as this separation of concerns and loose coupling opens up some great opportunities
for effectively maintaining our code. The benefits of dependency injection can be boiled down
to the following list:
- The ability to replace one implementation with another.
- The ability to reuse code in unexpected ways.
- The ability to let teams develop functionality in parallel.
- Easier code maintenance.
- Testability.
As an example of being able to replace one implementation with another, we could easily
imagine replacing the ContactRepository with another class implementing the
IContactRepository interface. This could enable us to support other types of databases, or be
the first step in an effort to support offline capabilities, simply by creating an
OfflineContactRepository class and injecting it instead of the ContactRepository. The important
thing to note here is that we are able to replace the repository, which is essentially the data
source of the ViewModel, without any changes to the ViewModel itself. Note that we would of
course still keep the ContactRepository class; which of the two implementations to use is
simply a question of configuration and could easily be decided at either compile time or run
time.
We also have more options when it comes to simplifying parallel development. If the
ContactListViewModel and the ContactRepository were to be developed in parallel by two
different teams, the ContactListViewModel team would be able to implement their part using
a dummy repository class containing only the implementation necessary to confirm the
functionality of the ContactListViewModel. The real repository implementation would then be
supplied once it was fully developed; again without any changes to the code except for
configuring the Composition Root to use the new implementation.
This level of code reusability also results in a lot of flexibility when testing our application.
Instead of using the ContactRepository when testing the functionality of the
ContactListViewModel class, we can now configure our test setup to use another
implementation of the repository specifically intended for testing. A typical example of how
DI benefits from a high level of code reusability is included in Figure 4-4 below.
Typical developer story: A repository in the client responsible for retrieving data from the database using a web service has to be tested.

Problem: The repository depends on this web service to function properly, and in turn also depends on the database server being online and the client having an active internet connection. These dependencies complicate the testing environment, and you do not want the test to be affected by them, as the outcome of the test could be influenced by these outside factors and result in an error being thrown from a perfectly fine repository.
Having a high level of adaptability makes it easy to exchange this web service implementation with a dummy web service in the test environment. The outcome of the test will then not be dependent on outside influences like the web service, the server, the client database or the current internet connection, and your test environment will be simpler and more focused on testing the actual repository.
Figure 4-4: A typical example of how DI can improve testability.
The final benefit we will mention relates to the ease with which developers are able to
understand and use the code. Having all the dependencies of a class declared in its
constructor makes it difficult for any developer to use the class without being aware of these
dependencies; in many circumstances the developer will not even be able to instantiate the
class without supplying all of them. This improves both the maintainability and testability of
the code, and not following these principles can lead to very frustrating situations. I will spare
you the details of the typical scare stories told by DI supporters, but they most often feature
developers being unable to use classes because several dependencies have not been loaded,
while the class provides no information about which dependencies it actually needs.
4.3.2 DI CONTAINERS, DI FRAMEWORKS AND MEF
This chapter features a brief explanation of DI Containers and DI Frameworks. I will also
compare and relate these to Microsoft's Managed Extensibility Framework (MEF), but I will
not discuss the details of different dependency injection containers, as I consider that outside
the scope of this project. I do not think DI containers are that interesting for the scope of this
project, especially since I myself have had very little influence on the final MEF
implementation; very few lines of code were necessary to make it work the way I wanted, and
the base implementation of MEF suits most of my DI needs so far. I have developed branches
of the application using Unity and MEF both separately and in combination, which required far
more implementation work than the final version; the final version uses MEF only and is kind
of boring - but it works best given our current needs. Dependency Injection is a very
interesting subject though, and I highly recommend reading Mark Seemann's book
"Dependency Injection in .NET"5 if you are interested in DI.
5 This book has not been printed yet and is barely even finished, but it is available through an early
access program at http://www.manning.com/seemann/. It does require you to pay for it, though.
DI CONTAINERS AND DI FRAMEWORKS
DI containers are the objects responsible for resolving the dependencies of other application
objects; objects like the previously mentioned ContactListViewModel used in the example in
Chapter 4.3.1: Implementing Dependency Injection. These containers are often called the
factories, builders or providers of the application, responsible for providing the dependencies
for other application specific classes like the viewmodels, repositories or services. In the case
of DI frameworks there is only a single DI container, often referred to as the Composition Root
of the application. This DI container handles the instantiation of every class in the application,
and knows what instances it needs to supply when a constructor of a class asks for an instance
of a class suitable for an IContactRepository interface or any other interface needed. This
information can be supplied to the Composition Root in several different ways, and a
configuration file is a common choice. The configuration file informs the composition root
that if a class asks for an IContactRepository, it should be given an instance of the
ContactRepository class. The brilliant thing about this is that the configuration file for the test
environment can simply state that an IContactRepository request should instead be satisfied
with an instance of the DummyRepositoryForTestEnvironment class; you then suddenly have a
much simpler test environment just by changing a configuration file.
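As a hedged sketch using Unity (one of the containers tried in this project), the same mapping can also be expressed in code inside the composition root; the CompositionRoot class name is my own illustration:

```csharp
using Microsoft.Practices.Unity;

// Sketch of a composition root: all type mappings are configured here,
// in one place, and nowhere else in the application.
public static class CompositionRoot
{
    public static ContactListViewModel Compose()
    {
        var container = new UnityContainer();

        // Production mapping: IContactRepository -> ContactRepository.
        container.RegisterType<IContactRepository, ContactRepository>();

        // A test setup would instead register the dummy implementation:
        // container.RegisterType<IContactRepository,
        //                        DummyRepositoryForTestEnvironment>();

        // Unity resolves the constructor dependencies of the view model.
        return container.Resolve<ContactListViewModel>();
    }
}
```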
WHAT IS MEF?
MEF is a library built to assist in creating extensible applications and has recently been
included in both Silverlight 4.0 and the .NET 4.0 Framework. MEF enables greater reuse of
application components, and with MEF your .NET applications can shift from being statically
compiled to dynamically composed.
It is important to note that MEF is not a Dependency Injection Container. MEF does resolve
dependencies though, and can as such be used to mimic the functionality of a DI Container to
some degree, but that is not the purpose MEF was built for and its performance as a DI
Container may be subpar compared to actual DI Containers.
One of the reasons MEF is not considered a DI container is that it does not support
configuration-based composition of the Composition Root out of the box, meaning we cannot
build our composition root using configuration files for the current application. Instead we use
[Export] attributes on classes and properties to signal to MEF that an object should be
composed into the Composition Root, which in turn allows [Import] declarations to import
these exported objects elsewhere in the application.
If you have been paying close attention to the DI examples so far, I bet you will quickly realize
that this presents a problem for the testability of the previous examples. If the
ContactRepository class is defined as an [Export] of type IContactRepository using an attribute
on the class itself instead of using a configuration file, then this relationship is defined
statically, as in, directly in the code. Now it is suddenly not that easy to improve our testing
environment by changing this export value to the DummyRepositoryForTestEnvironment we
liked so much. It is still possible though, and still quite easy; it is just not possible to do this
using a configuration file.
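To illustrate the attribute-based approach, here is a sketch based on the example classes from section 4.3.1 (the implementation bodies are elided):

```csharp
using System.ComponentModel.Composition;

// The export relationship is declared directly on the class,
// not in a configuration file.
[Export(typeof(IContactRepository))]
public class ContactRepository : IContactRepository
{
    // ... implementation as before ...
}

public class ContactListViewModel
{
    private readonly IContactRepository _contactRepository;

    // MEF satisfies this constructor import with the matching export
    // found in its catalogs.
    [ImportingConstructor]
    public ContactListViewModel(IContactRepository contactRepository)
    {
        _contactRepository = contactRepository;
    }
}
```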
Do note that many developers actually tend not to use configuration files for their test environments, but instead set up their test environment manually, essentially taking over the responsibility of the DI framework. This is common even when a DI framework is normally used to resolve dependencies in the application, as it keeps each test focused and ensures that it is not affected by outside influences like a configuration file. Whether the manual or the configuration-based approach is more beneficial depends entirely on the needs of your specific testing environment.
4.4 SILVERLIGHT AND MVVM
The Model-View-ViewModel (MVVM) design pattern is very similar to the more well-known
Model-View-Controller or Presentation Model patterns, but MVVM has been tailored to make
better use of specific functions in Microsoft’s XAML6 based UI development platforms:
Windows Presentation Foundation and Silverlight.
MVVM is very commonly used when developing Silverlight applications and the most defining
factor of the MVVM pattern is that it separates the View from the Model by exposing the data
objects of the Model through a new layer: the ViewModel. This new ViewModel layer is now
responsible for exposing all the model data to the View and it is also allowed to alter these
data to fit our needs, such as changing the user name “John Anderson” to “Mr. Anderson” or
presenting a culture aware welcome message such as “Velkommen <name>” if the user is
from Denmark.
6 XAML is a markup language and is in some ways very similar to HTML. It is not used by itself though,
but as the presentation layer of both the WPF and the Silverlight platforms.
[Figure: layered diagram with Views/XAML code and Silverlight-specific viewmodels/controllers in the View layer, ViewModels in the presentation layer, and models & domain logic in the domain layer]
Figure 4-5: Our implementation of the Model-View-ViewModel design pattern.
It is not necessarily a bad decision to make the View responsible for managing UI logic such as
changing the user name “John Anderson” to “Mr. Anderson”, but following the MVVM
principles we want to move UI logic away from the View layer whenever possible, so that the
View is not responsible for such things.
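As a small sketch of what moving such UI logic into a view model looks like (the shape of the Contact domain class is an assumption on my part):

```csharp
// The view binds to DisplayName; the formatting rule lives in the
// view model and can be tested without any XAML or Silverlight UI.
public class ContactViewModel
{
    private readonly Contact _contact; // domain model object (assumed shape)

    public ContactViewModel(Contact contact)
    {
        _contact = contact;
    }

    // Turns "John Anderson" into "Mr. Anderson", as in the example above.
    public string DisplayName
    {
        get { return "Mr. " + _contact.LastName; }
    }
}
```

The view's XAML then contains nothing but a binding to DisplayName, keeping the formatting rule out of the UI layer entirely.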
Having UI logic in the View when developing in Silverlight or WPF usually means that the logic
is handled in the XAML code, in the code-behind, or in both. Implementing the UI logic in this
way might seem correct if you are new to Silverlight, as many Visual Studio features and
shortcuts seem to promote this kind of development; one example is the automatic creation
of event-handler stubs in the code-behind when double-clicking a control in the Visual Studio
designer. Implementing UI logic in this way has its drawbacks though, as it complicates proper
application maintenance and most often results in the UI logic being quite difficult to test
efficiently, or even to test at all.
This is largely due to XAML code being difficult to test, and any logic having connections to
XAML suffers too. However, if we keep our UI logic in the ViewModel we can test it much
more efficiently as we are now able to test the logic in an environment that knows nothing
about Silverlight UI or XAML at all.
To summarize what this pattern achieves:
- The UI layer does not contain any business logic.
  o The UI layer is not that important to test now that all logic is separated away from it - which is nice, since testing the UI is often difficult.
- The domain layer knows nothing about the UI.
  o If the UI layer needs to be changed, we will be able to reuse all business logic of the domain layer.
- The presentation layer knows nothing about the UI.
  o If the UI layer needs to be changed, we will be able to reuse many view models. If a change of UI technology is required, only the Silverlight-specific view models will have to be replaced; the non-Silverlight-specific view models can be reused.
- Presentation logic has its clearly defined place in the view models of the presentation layer.
  o Developers will know to put presentation logic in view models and hence not pollute the domain classes with unnecessary fields or methods.
4.5 MODELLING TOWARDS CONFIGURABILITY
This chapter explains the process of modeling our data models and database towards
application configurability. The main content of this chapter follows an example of how we
developed a small section of our database towards better configurability. The chapter also
covers some aspects of the development process by giving an example of how part of the
application was developed through multiple development cycles or iterations featuring
analysis, design and implementation.
The data model for handling contact relations changed several times during development as a
result of the continued client-developer interaction, which often led to slight changes in
requirements. Requirement changes during the development of the system were fully
expected though, especially to the functional requirements, since they were not prioritized as
highly as the non-functional ones, given the importance of the overall system architecture in
this project.
Figure 4-6 shows the initial data model created for storing contact relations. Note that the
Company_Employee model is not normally shown when designing the model in Visual Studio,
but we want to discuss the actual database model, and these models correspond to the two
database tables the Entity Framework generates to store both the contact information and
the Company<->Employee relations.
Figure 4-6: Initial data model for Contact relations.
The model in Figure 4-6 was created based on the client requesting that users should be able
to group contacts by the companies they work in. This data model was created and used in the
application for the first couple of prototype iterations. It worked well, and the fact that a
contact was related to either a single company or none resulted in quite simple solutions for
displaying this in the user interface.
Later on we realized that these relations were not quite sufficient as we discovered other
types of contact relations that we also wanted to incorporate such as chain stores being
related to a main group for that particular chain. The initial model supports this scenario to
some degree, but we have no way of knowing whether the relations stored are relations
between a company and an employee or between a separate chain store and its main group.
We concluded that the model needed to be able to distinguish between different types of
relations, and at the same time it needed to allow for a many-to-many relationship instead of
the one-to-many relationship of the earlier model. Note that this latter many-to-many change
in requirements is merely a foreign key constraint issue and does not change the structure of
any database tables.
Figure 4-7: Second iteration of the data model for contact relations.
After redesigning the model to fit these new requirements, the result was the model in Figure
4-7. The newly added relation-type field would allow us to create types for Company-
>Employee and Chain Store->Chain Group relations, and any row in the ContactRelation table
would essentially be a two-way connection between two contacts. Just to clarify: a relation of
type Company->Employee would define that the contact with Id equal to ContactId was a
company employing the contact with Id equal to RelatedContactId. However, this model was
never actually implemented, because we had the goal of designing towards configurability in
mind. The immediate problem with this model is that of creating and maintaining the set of
relation types available in the system; a problem for which we really want a configurable
solution. As previously mentioned, we can easily achieve configurability by storing data such
as type definitions in the database instead of defining them directly in the code, which leads
us to the final data model shown in Figure 4-8 below.
Note that storing configuration in configuration files may be fine in smart client based applications, especially offline ones, but it is not that good an option compared to using the database when the solution is online and must be configured on a per-user or per-database basis.
Figure 4-8: Third iteration of the data model for Contact relations.
Figure 4-8 presents the final data model, where we have exchanged the RelationType field
with a RelationDefinitionId field referencing a table storing contact relation definitions: the
aptly named ContactRelationDefinition table. Having the application's configurable contact
relation definitions stored in a database by definition makes this a per-database configuration;
unless of course we relate additional information to these definitions, such as the Id of a
registered user, in which case the configuration can also be per-user based.
Had we used the RelationType field and defined the available types statically, we would have
had a per-application configuration of definitions, requiring us to recompile and deploy at
least some parts of the application to introduce new definitions. The final implementation
allows for a per-database configuration of definitions, which means that the contact relation
types available in the application can now be created or edited at the database level without
requiring any application recompilation, deployment or other maintenance.
The reason a per-database configuration was chosen is of course that the application currently
uses one database per client, so a per-database configuration is essentially a per-client
configuration, which is the way most clients would prefer this kind of configuration. Having
this per-database configuration for data such as these definitions means that a company is
now able to simply add or edit definitions in the database, at which point the updated
definitions will immediately be available to all of their employees using the application.
This per-database configuration also has a very positive effect on the administrative scalability
of the application, as we can easily support multiple companies using our application through
different databases without having to worry about their specific needs for different types of
contact relations. Compared to statically typed relation types the benefit of per-database
configuration is tremendous, as the client will not have to bother us with new
updates to support other types of relations. However, we have to remember that moving
configuration needs such as these to the client does not come freely in terms of client support, as
we now have to support the clients in making these configurations themselves. Providing a
user interface for configuring the application is naturally a great start, but even then there will
still be clients requiring direct help and guidance, especially if this user interface is not highly
user friendly and easy to understand for new clients.
On a side note, this per-database configuration is only great because we have the need to be
flexible in regards to contact relation types on a per-client basis. If our user base primarily had
clients with the same requirements for contact relation types and we could confirm that these
types would probably not need to be changed at all, then a statically typed approach or at
least a less configurable approach might have been more beneficial. There is always the option
of providing a sensible default configuration of course, but even so there is still the risk of having
to support clients with further configuration or even misconfiguration.
4.6 WEB SERVICES
This chapter will discuss the choices we made in relation to the web services of our system. With
regard to authentication of users calling the web services, we have decided to move
authentication out of the scope of this project. A simple forms authentication service was
provided for the initial login requirements, but its implementation is based almost solely on
configuration files and is very similar to the ones provided in the default project templates of
both Silverlight and ASP.NET projects in Visual Studio 2010.
The decision to use Microsoft Windows Communication Foundation (WCF) Services was
almost immediate, but the following decision was a bit more difficult: whether to use RIA
Services on top of the WCF services or not. We will start out by discussing WCF and RIA
Services, and then follow up by discussing service contracts and how we use the repository
pattern.
WHY WCF SERVICES?
This phrase might be getting old by now, but again the decision was partly based on the
developer’s prior experience with Windows Communication Foundation (WCF) Services. This is
not the only reason though, as WCF Services are a collection of very powerful services, with a
lot of features and a lot of options for configurability. We will leave the details of these
features for another day though, as they are outside the scope of this
project; rest assured that these features are indeed powerful.
WHAT IS RIA SERVICES?
Microsoft’s Rich Internet Application (RIA) Services were promoted heavily at the time of the
.NET 4.0 release, as the new and exciting way to easily create N-tier applications in both
ASP.NET and Silverlight. RIA Services helps you coordinate the application logic between the client
application and web services by making the application logic on the server available on the
client. RIA Services automatically discovers the .NET classes of the server which are exposed by
domain services and generates similar objects in the RIA Services enabled Silverlight client. The
process of doing this without RIA Services requires you to duplicate and maintain your
application logic so it is available in both a .NET 4.0 and a Silverlight 4.0 assembly, since it is
not possible to use references between these two frameworks.
WHY NOT RIA SERVICES?
To begin with, we do not want to be dependent on RIA Services to generate all the models
used in our client application, especially because we would like to be able to clearly specify the
location of these models, which the auto-generated classes of RIA Services complicate a
bit.
Another thing we did not like about RIA Services is that it promotes exposing
IQueryable objects in your web services. Instead of the service contract exposing a
GetContacts method returning a list of contacts, RIA Services instead exposes an IQueryable,
which is essentially a LINQ query with direct knowledge of the Entity Model. Exposing this
IQueryable allows developers to write LINQ queries against the entity model directly in the domain
layer, which we think is a big no-no if you want the application to maintain any amount of
scalability. Using IQueryable would probably make it easier to develop new modules for our
application, especially if we allow third party modules to be created. These third party
modules would then have that extra bit of control when querying the database without having
to request new features for the web services, but the downside to this deal is that this extra
client-side control would restrict the control we have over query execution server-side.
If we lose this server-side control we might run into a scenario we really want to avoid,
where a poorly constructed LINQ query in the domain layer results in the database executing a
very inefficient query, like joining 32 different tables before any kind of selection is done. The
way database queries are written has a great impact on their efficiency,
and therefore we do not want domain-specific code to have any influence on our
database queries beyond the parameters we make available in the web services. This way we
now have much better control of how our database is queried. It is possible for us to analyze
the queries we use much more efficiently and optimize the web services by changing the way
the database is queried or even changing the database without changing anything in the
client.
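The contrast between the two contract styles can be sketched as follows. The GetContacts method is mentioned in the text; the interface names and stub types are assumptions added only to make the sketch self-contained.

```csharp
using System.Collections.Generic;
using System.Linq;

// Stub types standing in for the real entity and options classes.
public class Contact { public int Id { get; set; } }
public class SearchSortOptions { public string SortBy { get; set; } }

// Style promoted by RIA Services: the domain layer composes LINQ queries
// directly, so a poorly constructed query reaches the database unchecked.
public interface IContactDomainService
{
    IQueryable<Contact> Contacts { get; }
}

// Style we chose: clients can only vary the parameters we expose, which
// keeps query execution fully under server-side control.
public interface IContactService
{
    List<Contact> GetContacts(SearchSortOptions options);
}
```

With the second style, optimizing a slow query is a purely server-side change; nothing in the client has to be touched.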
We concluded that the benefits of RIA Services lie mostly in their ability to generate client-side
code based on server-side code. If we do want this functionality, we can achieve it using other
code-generation tools and have much better control over the code-generation process.
If we ever change our minds and do want to allow someone to use IQueryable on our server,
this will be very easy to add later in development.
4.7 CONCLUSION
• I will develop the application with a main application acting as a shell for modules.
o This main application will be responsible for providing the necessary
infrastructure to properly run and manage multiple modules concurrently.
o This main application will use Microsoft’s Managed Extensibility Framework
(MEF) to support some of this extensibility.
• I will use dependency injection and implement classes to best support this
throughout the application.
• I will follow the MVVM design pattern in all Silverlight projects.
• Designing the initial system architecture without a database was a great
solution for quickly developing prototypes.
• I became better at implementing towards configurability.
5 FINAL DESIGN & IMPLEMENTATION
This chapter covers the design and implementation of the final product; or rather the product
at the time of this delivery, as further development of the product is already planned. The
reason this chapter presents both design and implementation details side by side is that the
client is not too concerned with the functional requirements as featured in the usual
presentation of a finished product. The client's concerns revolve around the design of the
system and the benefits and features related to application maintenance and further
application development, and this is more effectively presented by showing the design along
with implementation details. The end of this chapter also features a conclusion on what this
design has achieved in regards to the system requirements.
Figure 5-1 shows a high-level model of the overall design of the application. The following
chapters will explain the design of the main application and the modules in detail.
Figure 5-1: High level system architecture model. Arrows indicate dependencies.
5.1 SCOPE
This chapter will briefly explain what I decided not to include in the final design of this
application and why. It has been a little difficult to maintain the scope of designing and
developing a modular and extensible application, because even though I naturally need to use
both web services and databases, the details of their design do not really influence my
priority in this project: the modular and extensible application. User authentication in
particular has been difficult to leave out, as the process and learning experience of
implementing federation-based security has been very exciting so far.
USER AUTHENTICATION AND SECURITY
I am not able to properly present and discuss the implementation of the login functionality of
the system, as we are currently in the process of developing a federation-based authentication
and security system. We are not quite done with this development and are currently in the
process of moving our solution to another host with better SSL capabilities.
The current system does feature a primitive login module to meet our initial login
requirements, but this implementation is really just code taken directly from the Visual Studio
“Silverlight Business Application” template and imported as a module. Very few lines of
configuration in the web service were needed to make this work, and since all this code was simply
taken from a template I do not see a reason to present it.
Figure 5-2: The current login screen.
WEB SERVICES
I would like to keep the focus on the design of the modular and extensible application which
this report is about. Therefore I chose not to include any details about the web services, as the
details of how I have implemented the Create, Read, Update and Delete contracts of the web
services have no noticeable influence on the current application. Had we implemented a proper
layer of authentication and security they might have been interesting.
DATABASE TABLES
Details about the database implementation have been partly covered in 4.5: Modelling towards
Configurability, which featured an in-depth explanation of how to achieve configurability on a
per-database basis. Similar data structures exist for storing contact properties, but besides
that there is nothing interesting to say about the database structure. Ironically, this would also
have been an interesting subject had we developed the federation-based authentication. We
could then start developing the database towards supporting per-user configuration needs,
along with creating views and proper indexes for most efficiently managing user permissions.
But even then the subject would still have been out of scope for this project.
5.2 SYSTEM DEPENDENCIES
In this chapter we will take a closer look at the design of the client application in regards to
dependencies. Figure 5-3 shows a dependency graph of the client application. Notice that no
web services or databases are present, since the client application knows nothing about these.
The only connection the client has to the web services is the web service contracts, so we can
change the database and web service layers as much as we would like without it having any
effect on the client, as long as the web service contracts are still functional of course.
Modules can easily be added or removed, and nothing in the system will ever be dependent
on modules except for other modules.
Figure 5-3: High-level dependency graph of the Client Application.
5.3 COMMUNICATION BETWEEN MODULES
Most of the communication between modules is handled by the EventAggregator class. The
EventAggregator is a very simple class and many implementations of this type of class are widely
available on the internet, with a varied selection of functionality and features for handling
events. The one currently used is provided by Microsoft and has been sufficient for our needs
so far.
Figure 5-4: The Event Aggregator7 allows for independent publishers and subscribers.
The EventAggregator is used for handling events because it separates senders of events from
the receivers of events. Separating senders from receivers goes a long way in making these
parties independent, so it is naturally a very important requirement to do so if you want
modules to be independent of each other but still able to communicate.
Using instances of the CompositePresentationEvent objects shown in Figure 5-4 we can
publish and subscribe to events and use the EventAggregator to take care of everything in
between. There is also a generic version of these event objects, with the added functionality
of being able to carry a payload of a generic type: the CompositePresentationEvent&lt;TPayload&gt;
class. To best explain how the EventAggregator works, let us continue with an example of how
the contact details page of the contact module communicates with the contact activities list of
the contact activities module.
7 Taken from the Microsoft MSDN webpage: Event Aggregator.
http://msdn.microsoft.com/en-us/library/ff921122%28v=pandp.20%29.aspx
The first thing we will need is a class for this event, deriving from the
CompositePresentationEvent class. Any subscribers listening for this event will most likely
want to know the Id of the contact we are currently loading, so we will also want to publish
the Id of the contact. In this case we will use the CompositePresentationEvent&lt;TPayload&gt;
class, and then define an integer as the payload for storing the Id of the contact. Figure 5-5 shows
this new class, which was given the name “ContactDetailsLoadContactEvent” because we
really like to use long and descriptive names.
public class ContactDetailsLoadContactEvent : CompositePresentationEvent<int> { }
Figure 5-5: The event thrown by the contact details page when loading contact information.
We have an event now; we just need to publish it. Figure 5-6 below shows how we publish this
event from the ContactDetailsViewModel: each time we load a contact in the contact details
page we also publish a ContactDetailsLoadContactEvent through the EventAggregator, with the
contact Id included as payload.
public void LoadContactByContactId(int contactId)
{
    _ContactRepository.GetContactById(contactId, GetContactByIdComplete);
    _EventAggregator
        .GetEvent&lt;ContactDetailsLoadContactEvent&gt;()
        .Publish((int)contactId);
}
Figure 5-6: The contact details page publishes an event when it loads contact information.
The event is now published and the contact details page has done its job; at this point the
contact details page has no idea who receives this event, how many receivers there are or
what this information is used for.
By publishing this event we have now created a kind of extension point in the application. This
event does not necessarily need to be consumed by any subscribers, but it could be, and this
allows every other module to react in some way. In the current implementation we have
extended the contact details page with both contact properties and contact activities. These
two modules would not be worth much if they did not load new contact data whenever the
contact details page loads a new contact, and to make sure they do we listen for the event we
just published. To do this we simply subscribe to the event through the EventAggregator and
supply an action to be taken whenever the event fires. Figure 5-7 shows how we subscribe
to this event in the constructor of the ContactActivityListViewModel.
public ContactActivityListViewModel(IEventAggregator eventAggregator)
{
    _EventAggregator = eventAggregator;
    _EventAggregator.GetEvent&lt;ContactDetailsLoadContactEvent&gt;()
        .Subscribe((contactId) =&gt; LoadContactActivitiesByContactId(contactId));
}
Figure 5-7: When the contact activity list is loaded it subscribes to the event.
By subscribing like this we essentially tell the EventAggregator that we want to listen for the
ContactDetailsLoadContactEvent, and when this event fires we want to execute the
LoadContactActivitiesByContactId(contactId) method with the integer payload of the event as
a parameter.
And that is all we need for this basic inter-module communication: A class for each event and
just a few lines of code to publish and subscribe.
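To illustrate what the EventAggregator does for us internally, a heavily simplified sketch is shown below. This is not the Microsoft implementation, which additionally handles matters like thread dispatching and weak references; it is an assumption-laden minimum meant only to show the mechanism.

```csharp
using System;
using System.Collections.Generic;

// Simplified sketch of a publish/subscribe event; not the Prism code.
public class CompositePresentationEvent<TPayload>
{
    private readonly List<Action<TPayload>> _subscribers = new List<Action<TPayload>>();

    public void Subscribe(Action<TPayload> action) { _subscribers.Add(action); }

    public void Publish(TPayload payload)
    {
        // Senders never see the receivers; they only hand the payload here.
        foreach (var subscriber in _subscribers)
            subscriber(payload);
    }
}

public class EventAggregator
{
    private readonly Dictionary<Type, object> _events = new Dictionary<Type, object>();

    // Returns the single shared instance of the requested event type.
    public TEvent GetEvent<TEvent>() where TEvent : new()
    {
        if (!_events.ContainsKey(typeof(TEvent)))
            _events[typeof(TEvent)] = new TEvent();
        return (TEvent)_events[typeof(TEvent)];
    }
}
```

Because GetEvent always returns the same shared event instance for a given type, publisher and subscriber never need a reference to each other, which is exactly the decoupling the modules rely on.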
5.4 EXTENDING MODULES WITH MODULES
In this chapter we will demonstrate how we are able to extend modules by importing and
exporting composable application parts using MEF.
The demonstration involves extending the contact details page with a list of contact activities
from the contact activities module. To do this, the contact details page of the contact module
supplies an interface for this extension called IContactDetailsExtension. For the activity list to
be exported into the Composition Root of MEF, we simply put an export attribute above this
class as shown in Figure 5-8.
[Export(typeof(IContactDetailsExtension))]
public class ContactActivityListViewModel : IContactDetailsExtension
{
}
Figure 5-8: Using MEF, the ContactActivitiesModule exports an ActivityList.
Now we have a contact activity list available as a composable part in the Composition Root.
Any class with a reference to the contact module will now be able to import this contact
activity list by putting an Import attribute on a property of the IContactDetailsExtension type.
If MEF detects that a composable part is available for import in any class requesting this
specific import, the part will be imported into that class and the OnImportsSatisfied method of
that class will be called. This method allows developers to write code reacting to the new
import. In our example we react by adding the newly imported extension to the extension
region, as shown in Figure 5-9. Note that some defensive programming has been removed
from this example for clarity.
[ImportMany(AllowRecomposition = true)]
public List&lt;IContactDetailsExtension&gt; ContactDetailsExtensions { get; set; }

public void OnImportsSatisfied()
{
    AddExtensionsToExtensionRegion();
}

private void AddExtensionsToExtensionRegion()
{
    foreach (IContactDetailsExtension ext in ContactDetailsExtensions)
    {
        _regionManager.AddToRegion(
            RegionNames.ContactDetailsExtensionTabRegion, ext);
    }
}
Figure 5-9: Using MEF, the ContactModule listens for extensions.
This is all that is needed to extend modules with other modules in the current design of the
system. These extensions can be loaded on demand, at run time, at which point the new
functionality, in this case the list of contact activities, will simply appear in the extension tab of
the contact details page as soon as the module is loaded.
5.5 THE MAIN APPLICATION AND THE REGION MANAGER
The main application acts as the “Shell” of the application; the main extension point for
loading modules. The primary responsibility of the main application is to host the main regions
of the Region Manager. It also handles application exceptions on a general level, so that
they are not thrown directly at our user but are instead presented to the user in a popup
dialog.
Figure 5-10 shows the main view with no modules loaded, and it is indeed just an empty shell
in this state.
Figure 5-10: The "Shell" or Main Application, when no modules have been loaded.
What is shown in Figure 5-10 is essentially our RegionManager class, without any views loaded
into its regions. The main application is responsible for adding three main regions to the view:
the top left menu, the top right navigation box, and the main view of the application in which
most modules will be loaded.
5.6 MENU MODULE
The main responsibility of the menu module is of course to display the menu buttons of all the
other modules, as shown in Figure 5-11.
Figure 5-11: The Ribbon Menu.
We have had several implementations of this menu module, but have now settled on exposing
a pretty simple Menu Service which any module can use to add new menu tabs, subgroups or
buttons.
public partial interface IMenuService
{
    void AddMainGroup(IAddMainGroupRequest request);
    void AddSubGroup(IAddSubGroupRequest request);
    void AddMenuItems(List&lt;IAddMenuItemRequest&gt; requests);
}
Figure 5-12: The IMenuService interface.
Figure 5-12 shows the interface for this menu service, and the actual implementation of the
menu service is in the menu module. The module exports the menu service implementation
using MEF. Any module asking for an IMenuService will then be given a reference to the
current menu service of the application which can then be used to create the necessary tabs
and buttons.
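As a hedged illustration of how a module might use this service, consider the sketch below. Only the IMenuService methods come from Figure 5-12; the request types and their members are assumptions, as the real IAdd*Request implementations are not shown in this report, and in the real application the service reference would be satisfied by MEF rather than passed to a constructor.

```csharp
using System.Collections.Generic;

// Assumed request contracts; the real ones are not shown in the report.
public interface IAddMainGroupRequest { string Header { get; } }
public interface IAddSubGroupRequest { string MainGroup { get; } string Header { get; } }
public interface IAddMenuItemRequest { string SubGroup { get; } string Caption { get; } }

public class AddMainGroupRequest : IAddMainGroupRequest
{
    public string Header { get; set; }
}

public class AddMenuItemRequest : IAddMenuItemRequest
{
    public string SubGroup { get; set; }
    public string Caption { get; set; }
}

public partial interface IMenuService
{
    void AddMainGroup(IAddMainGroupRequest request);
    void AddSubGroup(IAddSubGroupRequest request);
    void AddMenuItems(List<IAddMenuItemRequest> requests);
}

// Hypothetical module code registering its own tab and buttons.
public class ContactModuleMenuRegistration
{
    private readonly IMenuService _menuService;

    public ContactModuleMenuRegistration(IMenuService menuService)
    {
        _menuService = menuService;
    }

    public void RegisterMenuItems()
    {
        _menuService.AddMainGroup(new AddMainGroupRequest { Header = "Contacts" });
        _menuService.AddMenuItems(new List<IAddMenuItemRequest>
        {
            new AddMenuItemRequest { SubGroup = "Contacts", Caption = "Open contact" }
        });
    }
}
```

The point of the request objects is that the menu module alone decides how to render the tabs and buttons; the registering module only states what it wants added.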
The current menu features a ribbon control, but we also have a simpler version featuring a
simple tab control and no support for subgroups. This simpler menu was created as we
realized we did not really need a ribbon menu for the small number of buttons we currently
have. This was actually a great experience in regards to testing the adaptability of the system,
as the menu was very easily exchanged. The module needed to implement its own version of
the menu service, which does not support subgroups, but besides that we did not need to
change any code in any other modules. The simpler menu takes up a lot less space vertically,
and works fine with existing modules as it simply ignores any requests for subgroups and just
puts all buttons in one big group for each tab.
5.7 CONCLUSION
The architecture and design of the application is still not completely done, but the results so far
look promising. Besides developing modules to support the functional needs there is still a
lot of work to be done in several aspects of the system.
• The application was developed as a main application shell, with the sole responsibility
of managing and presenting modules.
o The main application uses Microsoft’s Managed Extensibility Framework
(MEF) to support some of this extensibility.
• Following the SOLID principles and the guidance of Prism has resulted in:
o A very satisfying level of overall application maintainability.
o A high level of code reusability, with all the benefits this gives.
• The application uses Dependency Injection wherever applicable.
o Improved level of code reusability.
o Improved level of testability.
• Availability requirements were met.
o The solution is online and available with an internet connection.
• Scalability requirements were met.
o The solution is either sufficiently scalable with the current implementation
or can be adapted to fit requirements with a minimal amount of effort.
• Extensibility requirements were met.
o Modules can easily be managed by configuration.
o Modules can communicate with each other.
o Modules can extend each other with additional features.
• A high level of configurability was developed for the current implementation.
o The extent of this implementation only covers contact relations and
properties though, on a per-database basis. No per-user configuration
has been developed yet.
6 TEST & MAINTENANCE
This chapter will present some examples of how Code Contracts, unit tests and mocking tools
are used to improve the testing environment and the overall maintainability of the application.
I have also decided to include code metrics to give an idea of how well implemented the system
currently is. The basic guidelines for maintainability index scores8 are:
• A green rating is between 20 and 100 and indicates that the code has good
maintainability.
• A yellow rating is between 10 and 19 and indicates that the code is moderately
maintainable.
• A red rating is between 0 and 9 and indicates low maintainability.
Figure 6-1: Code Metrics from a Visual Studio analysis.
I have no experience in reading code metrics, but I find these numbers quite promising. The
amount of code in each module will of course be higher eventually, except for some modules
like the navigation module, which actually does its current job using only 47 lines of code.
Note that the Ribbon module is the first implementation of a menu using a ribbon control,
while the MainMenu module is the new menu featuring a simpler display of buttons. The
product manager having a score of 100 is a result of the module not being implemented at all;
it is an empty project.
8 Code Metric Values - http://msdn.microsoft.com/en-us/library/bb385914.aspx
6.1 CODE CONTRACTS
Code Contracts provide a language-based way of specifying and checking object invariants,
preconditions and postconditions in objects. These contracts are used to provide both runtime
and static checking of objects and methods. Code contracts are a part of the .NET Framework
and therefore do not require specialized components to compile. However, to get the most
out of code contracts, several tools have been developed to provide developers with static
analysis that is able to give warnings at design time when a contract has not been met. An
example of how Code Contracts can be used is shown in the following code snippet. The code
snippet is from the Ribbon module that handles the menu; more specifically, the method shown
in the snippet is used for getting an existing ribbon tab or creating a new one if one cannot be
found.
private RadRibbonTab GetOrCreateTab(string ribbonTabName, string ribbonTabHeader)
{
    Contract.Requires(!string.IsNullOrEmpty(ribbonTabName));
    Contract.Requires(!string.IsNullOrEmpty(ribbonTabHeader));
    Contract.Ensures(Tabs.Count == Contract.OldValue(Tabs.Count) ||
                     Tabs.Count == Contract.OldValue(Tabs.Count) + 1);
    Contract.Ensures(Contract.Result&lt;RadRibbonTab&gt;() != null);

    var tab = Tabs.FirstOrDefault(t =&gt; t.Name == ribbonTabName);
    if (tab == null)
    {
        tab = new RadRibbonTab()
        {
            Name = ribbonTabName,
            Header = ribbonTabHeader,
            Height = 80
        };
        Tabs.Add(tab);
    }
    return tab;
}
The four contracts shown can be split into two categories: the first two define pre-conditions
(Contract.Requires) and the last two define post-conditions (Contract.Ensures). The
pre-conditions function like traditional defensive programming where all arguments are
validated; if an argument does not pass validation an exception is thrown, and the same thing
happens in the Requires method if a given expression is not met. The post-conditions are more
interesting in that they ensure that the state of the object, the result and the arguments are
correct when the method is done executing. While post-conditions are defined at the
beginning of a method they are actually invoked when the method ends, which enables them to
throw exceptions if a post-condition is not met.
Code contracts are in their simplest form a way of defining restrictions on the conditions and
state of an object, but with the right tools they become much more. Some of the advantages
of using code contracts include:
• Design time feedback on contract validation (business rule validation).
• Compared to “old” defensive programming the code syntax is cleaner and easier to
read.
• Documentation generating tools can take advantage of the contracts to create more
precise documentation.
• Automatic testing tools can use the contracts to generate more relevant unit tests.
• Developers can do design-by-contract programming.
• Code contracts help ensure that others use the code base as intended.
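For comparison, the same guarantees can be written as "old" defensive programming. The sketch below is a reduced, hypothetical version of the GetOrCreateTab method using plain exceptions instead of contracts (a stand-in class replaces the Telerik RadRibbonTab control to keep the sketch self-contained); note how the pre-conditions become throw statements, while the post-conditions cannot be stated declaratively at all.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Reduced stand-in for the Telerik RadRibbonTab control.
public class RibbonTab
{
    public string Name { get; set; }
    public string Header { get; set; }
}

public class RibbonMenu
{
    public List<RibbonTab> Tabs = new List<RibbonTab>();

    // The same guard clauses as the contract version, written defensively.
    public RibbonTab GetOrCreateTab(string ribbonTabName, string ribbonTabHeader)
    {
        if (string.IsNullOrEmpty(ribbonTabName))
            throw new ArgumentException("ribbonTabName must be non-empty");
        if (string.IsNullOrEmpty(ribbonTabHeader))
            throw new ArgumentException("ribbonTabHeader must be non-empty");

        var tab = Tabs.FirstOrDefault(t => t.Name == ribbonTabName);
        if (tab == null)
        {
            tab = new RibbonTab { Name = ribbonTabName, Header = ribbonTabHeader };
            Tabs.Add(tab);
        }
        return tab;
    }
}
```

The defensive version also gives no design time feedback; the contract version lets the static checker flag an empty tab name at the call site before the code ever runs.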
6.2 ARRANGE, ACT, ASSERT
Using the SOLID design principles I have achieved what I consider a satisfactory level of
maintainability and testability throughout the application. This chapter will present two basic
examples of how different parts of the system can be faked or mocked to enable a unit testing
environment that is not dependent on a running system or any other unnecessary
dependencies.
The system under test (SUT) is the ContactListViewModel which loads contacts from a web
service, converts the contacts to ContactViewModels and then sets a property that the view
binds to. To load the contacts the SUT uses a repository to get the contacts from a web
service, which is shown in Figure 6-2.
public void LoadContacts()
{
    _contactRepository.GetContacts(ContactSearchViewModel.SearchSortOptions,
        LoadContactsCompleted);
}
Figure 6-2: LoadContact code snippet.
Since the field “_contactRepository” is injected into the SUT and is based on an interface we
can in our test replace the part that connects to web services and simply verify that the
repository is invoked. Figure 6-3 features a test case for the LoadContacts method that
ensures the repository is invoked.
[TestMethod]
public void LoadContactsInvokesRepositoryTest()
{
    // Arrange
    var regionManagerMock = new Mock&lt;IRegionManager&gt;();
    var eventAggregatorMock = new Mock&lt;IEventAggregator&gt;();
    var repositoryMock = new Mock&lt;IContactRepository&gt;();
    repositoryMock.Setup(r =&gt; r.GetContacts(It.IsAny&lt;SearchSortOptions&gt;(),
        It.IsAny&lt;Action&lt;List&lt;Contact&gt;&gt;&gt;()));
    var sut = new ContactListViewModel(eventAggregatorMock.Object,
        repositoryMock.Object, regionManagerMock.Object);

    // Act
    sut.LoadContacts();

    // Assert
    repositoryMock.Verify(r =&gt; r.GetContacts(It.IsAny&lt;SearchSortOptions&gt;(),
        It.IsAny&lt;Action&lt;List&lt;Contact&gt;&gt;&gt;()), Times.Once());
}
Figure 6-3: Testing the LoadContact method.
In the Arrange part of the test we set the state of the SUT. First we create mocks of the
IRegionManager, IEventAggregator and IContactRepository interfaces; this is done so the
ContactListViewModel can be properly instantiated. The mock of the IContactRepository is set
up to allow invocations of the GetContacts method; this ensures that the mock can handle calls to
this method. In the Act part of the test we simply invoke the LoadContacts method on the SUT.
In the Assert part we finally verify that the current state of the SUT is as expected, which in this case
means verifying that the GetContacts method of the repository is invoked exactly once. This example
unit test demonstrates our ability to abstract away from an actual implementation and focus our
test on a specific method, in this case a method that requires a web server with web services.
The next example involves the same SUT, but in this case we want to test the
LoadContactViewModelsCompleted method. This method is invoked when contacts have been
retrieved from the web service and converted into ContactViewModels. The main purpose of
this method is to put the retrieved ContactViewModels into the Contacts property. Since the
view data binds to the Contacts property we need to make sure the current thread is the UI
thread. This is done by checking a static dispatcher that cannot be faked or mocked without
using heavyweight isolation tools like Moles9 and TypeMock Isolator10.
private void LoadContactViewModelsCompleted(
    List&lt;ContactViewModel&gt; contactViewModels)
{
    if (SilverlightDispatcher.Dispatcher.CheckAccess())
    {
        Contacts = new ObservableCollection&lt;ContactViewModel&gt;(contactViewModels);
    }
    else
    {
        SilverlightDispatcher.Dispatcher.BeginInvoke(() =&gt;
            LoadContactViewModelsCompleted(contactViewModels));
    }
}
Figure 6-4: Often used Silverlight code to ensure we use the correct thread.
Instead of faking the dispatcher we add an abstraction and wrap the dispatcher in an implementation of
that abstraction, so it is possible to mock the dispatcher with something that is not dependent
on external systems like the UI and thread management. Figure 6-5 shows the
LoadContactViewModelsCompleted unit test, which mocks an abstraction of the dispatcher.
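The abstraction itself is not listed in this report; a minimal sketch of what it could look like is shown below. Only the IDispatcher and SilverlightDispatcher names appear in the snippets, so the exact members are assumptions, and in production the implementation would wrap the real Silverlight dispatcher.

```csharp
using System;

// Assumed shape of the dispatcher abstraction used in the snippets.
public interface IDispatcher
{
    bool CheckAccess();
    void BeginInvoke(Action action);
}

// Static holder allowing production code and unit tests to swap the
// implementation; in production this would wrap the Silverlight
// Deployment.Current.Dispatcher.
public static class SilverlightDispatcher
{
    public static IDispatcher Dispatcher { get; set; }
}
```

Because the view model only ever talks to IDispatcher, the test can swap in a mock that simply reports that the current thread is the UI thread.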
9 http://research.microsoft.com/en-us/projects/moles/
10 http://www.typemock.com/typemock-isolator-product3
[TestMethod]
public void LoadContactViewModelsCompletedOnUiThreadTest()
{
    // Arrange
    var dispatcherMock = new Mock&lt;IDispatcher&gt;();
    dispatcherMock.Setup(d =&gt; d.CheckAccess()).Returns(true);
    SilverlightDispatcher.Dispatcher = dispatcherMock.Object;
    var contactViewModels = new List&lt;ContactViewModel&gt;
    {
        new ContactViewModel(),
        new ContactViewModel()
    };
    var sut = new ContactListViewModelFake();

    // Act
    sut.LoadContactViewModelsCompleted(contactViewModels);

    // Assert
    Assert.IsNotNull(sut.ContactAccessor);
    Assert.AreEqual(contactViewModels.Count, sut.ContactAccessor.Count);
}
Figure 6-5: Mocking an abstraction of the UI thread dispatcher.
The unit test creates a mock of the IDispatcher, which is used as an abstraction for the
dispatcher, and then sets up the CheckAccess method of the mock to always return true. As seen
in Figure 6-5 the SUT is actually a fake of the ContactListViewModel. This is because the
method we are testing is marked as private, and Silverlight does not allow the creation of the
accessors that are normally used when testing private methods. The fake
ContactListViewModel simply inherits from the SUT and exposes the method we want to test.
The same applies to the _contacts field, which is exposed in the fake as the
ContactAccessor property. As usual, after invoking the method in the Act section we verify the
state of the SUT in the Assert section.
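A sketch of what such a fake might look like is shown below. It assumes that LoadContactViewModelsCompleted and the _contacts field are declared protected rather than strictly private (otherwise the subclass could not reach them); the member names are taken from the text above, but the bodies are illustrative.

```csharp
using System.Collections.Generic;
using System.Collections.ObjectModel;

// Test-only subclass that exposes otherwise inaccessible members of the SUT.
// Assumes the base class declares these members as protected so the fake
// can reach them from test code.
public class ContactListViewModelFake : ContactListViewModel
{
    // Re-expose the method under test with public visibility.
    public new void LoadContactViewModelsCompleted(
        List<ContactViewModel> contactViewModels)
    {
        base.LoadContactViewModelsCompleted(contactViewModels);
    }

    // Expose the internal contact collection so the Assert section
    // can verify the state of the SUT.
    public ObservableCollection<ContactViewModel> ContactAccessor
    {
        get { return _contacts; }
    }
}
```

The fake adds no behavior of its own, so the test still exercises the production code path; it only widens the visibility of existing members for verification.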
7 CONCLUSION
The goal of the project was to design and develop a base structure for a modular and
extensible management system. Most of the requirements of the system have been met at
the time of the thesis delivery, and a short, precise list of what the final design and
implementation achieve can be found in the conclusion of the Final Design and
Implementation, Chapter 5.7: Conclusion.
In the analysis and design phases we decided to develop the application on the Silverlight
platform, using MEF to support extensibility. We also decided to follow the SOLID design
principles and the guidance of Prism to most effectively ensure that the resulting code would
have a certain level of adaptability and maintainability. The effect and proper implementation
of these principles were analyzed, and the resulting maintainability scores of the system look
very promising indeed. The requirements for module support were achieved using simple
implementations for inter-module communication and extensions. Both the client and I
are very satisfied with how simple and loosely coupled we made the process of extending
modules and handling communication between them.
With regard to further development of the system, the client has decided that the current
system architecture looks promising and that they are happy to continue the development of
the system beyond the scope of this project. I look forward to this further development, as I
will also be working on this project for an extended period of time after the delivery of this
thesis.
8 LIST OF FIGURES
Figure 4-1: Initial system architecture. Arrows indicate dependencies. .................................... 33
Figure 4-2: ContactViewModel class before refactoring towards Dependency Injection. .......... 37
Figure 4-3: ContactViewModel class after refactoring towards Dependency Injection. ............. 39
Figure 4-4: A typical example of how DI can improve testability. .............................................. 41
Figure 4-5: Our implementation of the Model-View-ViewModel design pattern. ..................... 44
Figure 4-6: Initial data model for Contact relations. ................................................................... 46
Figure 4-7: Second iteration of the data model for contact relations. ....................................... 47
Figure 4-8: Third iteration of the data model for Contact relations. .......................................... 48
Figure 5-1: High level system architecture model. Arrows indicate dependencies.................... 52
Figure 5-2: The current login screen. .......................................................................................... 53
Figure 5-3: High-level dependency graph of the Client Application. ......................................... 55
Figure 5-4: The Event Aggregator allows for independent publishers and subscribers. ............ 56
Figure 5-5: The event thrown by the contact details page when loading contact information. 57
Figure 5-6: The contact details page publishes an event when it loads contact information. ... 57
Figure 5-7: When the contact activity list is loaded it subscribes to the event. ......................... 58
Figure 5-8: Using MEF, the ContactActivitiesModule exports an ActivityList. ............................ 58
Figure 5-9: Using MEF, the ContactModule listens for extensions. ............................................ 59
Figure 5-10: The "Shell" or Main Application, when no modules have been loaded. ................. 60
Figure 5-11: The Ribbon Menu. .................................................................................................. 61
Figure 5-12: The IMenuService interface. ................................................................................... 61
Figure 6-1: Code Metrics from a Visual Studio analysis. ............................................................. 63
Figure 6-2: LoadContact code snippet. ....................................................................................... 66
Figure 6-3: Testing the LoadContact method. ............................................................................ 66
Figure 6-4: Often used Silverlight code to ensure we use the correct thread. ........................... 67
Figure 6-5: Mocking an abstraction of the UI thread dispatcher. ............................................... 68