
DEPARTMENT OF COMPUTER SCIENCE
SERIES OF PUBLICATIONS C

REPORT C-2011-X

Trust and Mistrust Management in Enterprise Systems

Lea Viljanen

UNIVERSITY OF HELSINKI
FINLAND


UNIVERSITY OF HELSINKI
Faculty: Faculty of Science
Department: Department of Computer Science
Author: Lea Viljanen
Title: Trust and Mistrust Management in Enterprise Systems
Subject: Computer Science
Level: Licentiate thesis
Month and year: January 2011
Number of pages: 121
Keywords: enterprise computing, authorization, trust, intrusion detection
Where deposited: Kumpula Science Library, C–2011–XX

Abstract:

In the near-future world of dynamic open service ecosystems, where enterprises collaborate with various partners to further their business goals, there are new challenges in managing the threats such collaboration brings, especially malicious peers. Traditional security solutions relying on restricting access on an individual partner basis are not feasible in this open, dynamic environment. Instead, the new community security requirements can be better met with new detective controls utilising high-level information.

This thesis proposes a model where trust is utilised as the community member security solution. The solution consists of trust-guarded authorisation augmented by peer behaviour monitoring and continuous feedback for mistrust management. First, an extensive survey of trust management, and an ontology built on it, shows how different types of trust can be utilised in enterprise computer systems and how trust-guarded authorisation is a valid realisation of the theoretical trust formation process. Second, for mistrust management this thesis surveys existing application intrusion detection mechanisms to find suitable mechanisms for peer behaviour monitoring and feedback. Thus this thesis combines two previously well known and separately implemented components in a novel way.

The final contribution of this thesis is the TuBE community member security framework design based on the model above. The design shows i) how the trusting decisions are instrumented when an enterprise is engaging in collaboration, ii) how intrusion detection type monitoring is used at each service participant, iii) how the collaboration can be terminated or restructured based on an observed intrusion or policy violation, and iv) how the observation affects the reputation and future trust decisions for the violating participant.

Computing Reviews Classification:
C.2.3 COMPUTER-COMMUNICATION NETWORKS: Network Operations: Network Monitoring
H.3.5 INFORMATION STORAGE AND RETRIEVAL: Online Information Services: Web-based services
K.6.5 MANAGEMENT OF COMPUTING AND INFORMATION SYSTEMS: Security and Protection: Unauthorized access


Contents

1 Introduction

2 The Collaborative Enterprise Environment
   2.1 Key Concepts
   2.2 Types of Communities
   2.3 Community Life-cycle
   2.4 Example Communities
      2.4.1 Centralised Travel Agent Community
      2.4.2 Distributed Content Delivery Community
   2.5 Technical Environment and Support Infrastructure
   2.6 Security and Threats of the Environment
      2.6.1 Security Definitions
      2.6.2 Threat Analysis Method
      2.6.3 Threat Analysis
      2.6.4 From Threats to Risks
      2.6.5 Countermeasures and Solutions
   2.7 Monitoring in Communities
   2.8 Challenges and Solutions
      2.8.1 Challenges
      2.8.2 Requirements for Solutions

3 On Trust
   3.1 Trust Characteristics
   3.2 Different Types of Trust
   3.3 Trust and Mistrust Types in Enterprise Systems
   3.4 Trust Models
      3.4.1 Trust Model Research
      3.4.2 Trust Model Input Factors
      3.4.3 Trust Model Output Type
      3.4.4 Towards an Ontology of Trust
   3.5 Trust Management
      3.5.1 Background and Challenges
      3.5.2 Enterprise Trust Management
   3.6 Management System Evaluation Criteria

4 Behaviour Monitoring
   4.1 Intrusion Detection Concepts
      4.1.1 Data Collection Model
      4.1.2 Data Analysis Model
      4.1.3 Reaction Models
   4.2 Problems in the Current IDS Approaches
   4.3 Data Collection Methods in Application Level Intrusion Detection
      4.3.1 Host Based Monitoring
      4.3.2 Network Based Monitoring
      4.3.3 Direct Monitoring
      4.3.4 Monitoring the Execution Environment
   4.4 Analysis Methods in Application Level Intrusion Detection
      4.4.1 Misuse Detection
      4.4.2 Anomaly Detection in Execution Monitoring
      4.4.3 Anomaly Detection in Input Semantics
      4.4.4 Specification Based AppIDS
   4.5 Application IDS Reaction Models
   4.6 Intrusion Detection for Trust Management

5 The TuBE Approach to Trust Management
   5.1 The Concept
      5.1.1 The Trust Model and Supporting Infrastructure
      5.1.2 The Trust-guarded Authorisation
      5.1.3 The Monitoring Service and Feedback
   5.2 Technical Environment
   5.3 Design and Architecture
      5.3.1 Trust-guarded Authorisation and the Wrapper
      5.3.2 The Trust Data Broker
      5.3.3 The Monitor
      5.3.4 The Reputation Subsystem
      5.3.5 The Action Subsystem
      5.3.6 The Context Subsystem
      5.3.7 The Data Flow and Application Programming Interfaces
   5.4 Evaluation
      5.4.1 The TuBE Trust Model
      5.4.2 The TuBE Trust Management
      5.4.3 Impact as Enterprise Protection
      5.4.4 TuBE as a Framework
      5.4.5 New Threats

6 Conclusion and Future Directions
   6.1 Technical Research Questions for Implementation Phase
   6.2 Future Research Directions in Trust and Collaborative Communities

Bibliography


Chapter 1

Introduction

Advances in computer networking during recent decades have brought unprecedented opportunities for organisations and individuals to connect with other organisations and people, first locally and then globally. This has created new ways of interacting and conducting business in the global business world.

It is the view taken in this thesis that the emergence of automated business networks, consisting of independent networked organisations more or less automatically soliciting and trading their services, is just around the corner. This form of business automation will have a major impact on the global economy, because the networks or communities will reduce the impact of physical distance in commerce, utilise economies of scale to drive costs down, and enable organisations to be more agile and thus successful in the modern, rapidly changing world. It is assumed that in the future business networks will form automated ecosystems consisting of, for example, buyers, sellers, brokers and subcontractors acting for the benefit of all participants.

For this world view to fulfil its promise, organisations need open, robust, scalable and secure frameworks on top of which these business network ecosystems can be built. Security is of especially keen interest, since there have been several recent security breaches where huge numbers of consumer credit card numbers have been lost to criminal elements. These have caused governments and key business drivers to start setting legal and compliance requirements for networked businesses and services.

From the security point of view, there is a dilemma especially in networked computer systems when interacting with others. On the one hand, the remote party can benefit the community participant by placing orders for goods or services. On the other hand, the remote party might place a fraudulent order or try to break the security of our open system, thus causing the community member harm. So we have two contradictory requirements. If we want to sell our wares or services in business networks, or take part in any collaboration, we have to open our services and make them available to others, thus placing trust in third parties. But at the same time, we must also be vigilant to protect our assets from harm, even from peers we trust.

How can trusting and not trusting be managed at the same time during collaboration? This question is especially important in environments designed for collaboration between enterprises, such as business networks or communities. It is also a challenging goal in current systems, since many collaboration and security technologies have been built on binary principles: if we trust a peer, we always allow access; if we do not trust the peer, we never allow any access. These trust relationships, or the authentication and authorisation rules derived from them, are currently mostly static and manually administered. However, in a dynamic environment, where business network participants such as buyers and sellers can change rapidly, static and manually managed systems cause scalability and maintenance problems. Therefore, statically configured access and authorisation rules are not a suitable solution for business networks and communities.

The key question for this thesis is thus: how can a collaboration framework for enterprise co-operation be protected without sacrificing the dynamism and scalability required in the business world of tomorrow? There are three distinct challenges within this question. The first challenge is one of dynamism: how can the collaboration be protected in a world where open collaboration and dynamic participation over open networks is the future? The second challenge is the question of protecting members from malicious peers: how can we find out if a previously well behaved community member suddenly turns malicious, and what should be done after finding out? The third challenge is the cost of management: how can the protective framework be structured so that daily operations can be automated enough for the solution to be cost-effective to deploy in a scalable manner?

The answer to the key question of this thesis lies between the computer science realms of collaborative networks and information security. It is evident that the traditional security models and techniques, focused on securing transactions between two known parties, need some refinement in the community setting. Therefore this thesis focuses on a new information security area, community security, and its subfield, community member security, both of which require novel solutions. The aim of this thesis is to explore and evaluate the concept of trust as a solution for the key community security question above and the three challenges within.

Trust is already one of the key forces in today's society. A good example is the monetary system, where tangible goods are exchanged for pieces of paper or bits of metal, which do not have corresponding material value. The value is created by our trust in a system where the paper or metal can be exchanged again for goods or services of comparable value elsewhere.

Trust is also a major factor in the daily interaction between people. In every encounter we have with other people, we subconsciously assess the trustworthiness of the other persons and dynamically adjust our behaviour and attitude accordingly. This is especially important in situations where the trust evaluation we make results in collaboration where we are subject to harm if our assessment of the trustworthiness of others is wrong.

A key observation in human behaviour is that trust relationships are not static. We interact with people daily but simultaneously and continuously assess their trustworthiness. If something unexpected happens, we readjust our trust expectations and our behaviour to suit the new situation.

This readjustment policy may also be a good approach for trust in business network scenarios, since enterprise applications must continuously interact with other applications as well. In recent years it has become evident that the current practice of evaluating trust only the first time the parties interact is not sufficient trust management: it is also very important to continuously and dynamically evaluate trust and its related aspects of mistrust, distrust and untrust while collaborating.

How do we then translate dynamic human trust into computing concepts so that it can be applied as a solution to community security needs?

Several key elements are necessary. First of all, community participants need to be identifiable, i.e. authenticated. Second, we need a trust evaluation process to help us decide if a collaborative action is allowed to proceed; hence, in a computer system we must make decisions based on peer behaviour and trust evaluation: trust-guarded authorisation. Third, we need to continuously monitor how the collaboration proceeds to detect any problems or anomalies due to misplaced trust. Finally, we need to constantly readjust our trust valuations to take past peer behaviour into account. Of these, this thesis assumes the first one solved and focuses on the less researched last three elements. These elements need to be bound together in a comprehensive trust management solution for community security.
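To make the interplay of these four elements concrete, the following minimal sketch shows how a trust-guarded authorisation decision, monitoring and feedback could be wired together. It is purely illustrative and not the TuBE design itself; all class names, interfaces, thresholds and adjustment values are invented.

```python
# Illustrative sketch of trust-guarded authorisation with monitoring feedback.
# All names, values and interfaces here are hypothetical.

class TrustStore:
    """Keeps a trust value per authenticated peer, readjusted on feedback."""
    def __init__(self, initial=0.5):
        self.values = {}
        self.initial = initial

    def value(self, peer_id):
        return self.values.get(peer_id, self.initial)

    def adjust(self, peer_id, delta):
        self.values[peer_id] = max(0.0, min(1.0, self.value(peer_id) + delta))

def handle_request(trust, peer_id, action, execute, monitor, threshold=0.6):
    # 1. The peer is assumed to be already authenticated (peer_id).
    # 2. Trust evaluation guards the authorisation decision.
    if trust.value(peer_id) < threshold:
        return "denied"
    # 3. The collaborative action is monitored while it proceeds.
    result, anomalous = monitor(execute, action)
    # 4. Observed behaviour feeds back into the trust valuation.
    trust.adjust(peer_id, -0.2 if anomalous else 0.05)
    return result
```

Note how the feedback step makes the authorisation dynamic: a peer that behaves anomalously loses trust and eventually falls under the threshold, without any manual rule changes.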


The contribution of this thesis is to show how a trust management system well grounded in trust theory can be used as a community security solution, especially for community member security needs. The three focus components discussed above have been bound together into a concept framework called TuBE, which consists of trust-guarded authorisation enhanced by semantic information, introspective monitoring and dynamic trust value readjustment. Moreover, this work identifies several interesting avenues of research that should be explored in future research.

This thesis is organised as follows. The next chapter introduces the collaborative enterprise environment and its threats, in order to identify and clarify the research questions in the focus of this thesis and the solution requirements. Next, in Chapter 3, the concept and theory of trust and its management is explored to make sure that our understanding and subsequent development of the key components of trust-guarded authorisation and continuous trust valuation is in line with previous research. Chapter 4 surveys different application level intrusion detection methods to find suitable candidates for our third component, the behaviour monitoring mechanism. Finally, in Chapter 5 the TuBE concept framework binding these three components together is introduced, and it is shown how TuBE can be used to solve the key research question and the three challenges introduced here. Conclusions and pointers to future research are presented in Chapter 6.


Chapter 2

The Collaborative Enterprise Environment

In the modern global business climate, enterprises need to form alliances to offer services to business clients across the world. This globalisation of business can manifest itself in many ways. One example can be seen in the move of physical production to countries with low labour costs. Another is the rising popularity of various outsourcing and off-shoring services. To successfully do business in this changing world, enterprises need to utilise this trend and start cultivating relationships with other organisations, which may be physically very distant from the enterprise home location. Additionally, if cost cutting is imperative, enterprises must continuously shop the market for the best prices for their needs. All this requires the ability to flexibly and effectively manage a multitude of relationships with suppliers, business partners and other collaborators.

Traditionally, relationships have been managed with written contracts, which usually go through several review rounds with the respective legal departments. Additionally, purchasing and ordering processes have been paper oriented and have required various signatures. However, if there are hundreds of relationships to manage, some of them potentially of very short duration, this traditional process is too cumbersome and more cost-effective solutions are required. From there emerges the need for automated enterprise collaboration, where enterprises dynamically and autonomously form collaborative relationships to further their business goals. The view taken in this work is that in the future enterprises will base their businesses more and more on open service markets, where services are bought, sold and traded flexibly by autonomous agents for the benefit of the participating enterprises.

There are of course many more forms of collaboration in the modern networked world. For example, recent years have seen the emergence of many collaborative social networking or Web 2.0 services, such as Facebook, LinkedIn or Second Life. These services have two features in common. First of all, the users of these services are usually individual people¹. Second, the users take an active part in generating or being content in the service, i.e. there exists a significant amount of user generated content (UGC). While these services are scientifically interesting from several viewpoints (such as sociology, economics and collaborative technology), this work focuses on collaboration and networking between organisations, not private individuals. However, the proliferation of these services based on social networks between people hints at a strengthening collaborative trend in the general atmosphere of the business world. For this trend to become reality in business cases, a mature technological platform is required.

¹There are of course exceptions. For example, some companies and even countries have an official presence in Second Life, and organisations recruit staff on LinkedIn, but generally very few people in these services view themselves as official representatives of some organisation.


Since the field of business networks is relatively new, concepts and vocabulary are still somewhat fluid, but this chapter introduces the collaborative enterprise environment and its challenges and solution requirements.

2.1 Key Concepts

A community is a configuration of objects defined to express some common purpose or objective [ISO96b]. A similar concept is a collaborative network (CN), which is constituted by a variety of entities (e.g., organisations and people) that are largely autonomous, geographically distributed, and heterogeneous in terms of their operating environment, culture, social capital, and goals, but collaborate to achieve common or compatible goals using a computer network [CMA05]. Collaborative networks can manifest themselves in many forms. Especially when the focus is on inter-enterprise collaboration, a community can be called a business network [KRRM08].

These definitions, while having some differences in vocabulary, have two things in common. First, the participants are separate entities (objects, persons, organisations), and second, these entities have some common or at least compatible goals they wish to further. The underlying assumption is that each entity participating in the collaboration benefits from it, and thus investing in the business network participation, either financially or mentally, is worthwhile for all participants.

Since the vocabulary in the discipline of collaborative networking is still developing [CMA05], this work follows the terminology defined in the ISO/IEC Open Distributed Processing reference model when available. Thus community is chosen as the key concept to describe general collaboration in this work. However, since the focus of this work is on enterprise collaboration, business network can equally be used.

The community consists of participants or members, which are computationally represented by network management agents (NMA), agents for short [KMR06]. Community members can be organisations, companies, governments or sometimes even private individuals. To enable their community participation, a piece of agent software (the NMA) must exist to support them in the community operations. The agent should be independent enough to both reactively and proactively act on behalf of the organisation when necessary. For example, in a business community the agent should be allowed to make a binding sales offer if the tender query falls within acceptable limits. These constraints or limits in an NMA, or anywhere else in the community infrastructure, can be expressed in a policy. A policy expresses a set of rules related to a particular purpose. A rule can be expressed as an obligation, a permission or a prohibition [ISO96a]. For example, an organisation may have a policy that it takes part only in communities where the initiator is located in a set of pre-defined trusted countries. Or a community level policy may prefer more trusted participants over unknown newcomers when selecting transaction partners.

Each participant fulfils a particular role in the community. A role is an identifier for a behaviour, which may appear as a parameter in a template for a composite object, and which is associated with one of the component objects of the composite object [ISO96a]. In a community these roles are shorthand labels for certain expected behaviour and are based on the common purpose of the community. For example, in a business transaction community there can be roles of buyers, sellers, brokers and payment verifiers. Associated with each role is a set of tasks, behaviour, that needs to be performed by an enterprise fulfilling that role. The roles and their interactions are defined in the business network model (BNM), which can also define restrictions, such as lists of acceptable peers for each role [KMR06].
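As an illustration only (the ODP concepts above are defined abstractly, not as data structures; all field names below are assumptions), policies, roles and a business network model could be represented roughly as follows:

```python
from dataclasses import dataclass, field
from enum import Enum

class RuleKind(Enum):
    OBLIGATION = "obligation"
    PERMISSION = "permission"
    PROHIBITION = "prohibition"

@dataclass
class Rule:
    kind: RuleKind
    description: str

@dataclass
class Policy:
    # A policy expresses a set of rules related to a particular purpose.
    purpose: str
    rules: list[Rule] = field(default_factory=list)

@dataclass
class Role:
    # A role is a shorthand label for expected behaviour, i.e. a set of tasks.
    name: str
    tasks: list[str]

@dataclass
class BusinessNetworkModel:
    # The BNM defines the roles, their interactions and optional restrictions.
    roles: list[Role]
    interactions: list[tuple[str, str]]      # e.g. ("buyer", "seller")
    acceptable_peers: dict[str, list[str]]   # per-role lists of acceptable peers

nma_policy = Policy(
    purpose="community participation",
    rules=[
        Rule(RuleKind.PERMISSION,
             "make a binding sales offer if the tender query is within limits"),
        Rule(RuleKind.PROHIBITION,
             "join communities whose initiator is outside the trusted countries"),
    ])
```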

The purposes of communities can vary, and therefore the number of roles in communities can also vary. This variability of purpose also means that, to fulfil greater enterprise level objectives, enterprises must be able to participate in different business networks at the same time. Thus, to produce a good service, a travel agent must be able to participate in several roles in several business networks, such as a buyer of holiday flights in one community and a seller of vacation packages in another.

Communities may have an initiator or broker who defines the purpose of the community and sets the rules of the collaboration [CMA01]. This initiation style requires very little negotiation between the participants: they either agree to the initiator's terms or not. Alternatively, communities can be formed ad hoc, where any participant may initiate a community and the collaboration is formed more dynamically. In this ad-hoc style, some negotiation may take place between participants to define the collaboration boundaries and interaction rules. When the community is formed, the participants may agree to a contract, which is an agreement specifying obligations, permissions and prohibitions for the objects involved [ISO96a].

Communities can also have a coordinator, who acts as the regulator of community-related activities and is responsible for the global goal. The coordinator is either a node specialised in coordination and added to the community, or its role can be played by an already existing community member [CMA01].

Community purpose can be described as an activity, a process. There are several challenges in describing the community process. First, the process must be modelled, incorporating correctness of executions with respect to the model and respecting the constraints of the underlying services and their resources. Second, we must be able to interface a process to the underlying functionalities [HS05]. In the case of communities, this interface between a business process and the technical environment would include a suitable model of transactions that takes the distributed nature of the community into account.

Community process modelling can be done from several perspectives. Coordination of the data flow between community participants can be defined in a choreography, which views the community process as a set of message exchanges between participants. The message exchanges are constrained to occur in various sequences and may be required to be grouped into various transactions [HS05]. An orchestration sees the same process as a program, or a partial order of operations, from the viewpoint of one single member and its orchestration engine [HS05].
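The two perspectives can be contrasted with a toy sketch (not any real choreography or orchestration language; the roles and message names are invented): a choreography constrains the globally visible message exchanges, while an orchestration is one member's locally executed order of operations.

```python
# Choreography: global constraints on message exchanges between participants.
# Each exchange (sender, receiver, message) lists the exchange that must
# precede it; None means the exchange may open the conversation.
choreography = {
    ("buyer", "seller", "order"):        None,
    ("seller", "buyer", "confirmation"): ("buyer", "seller", "order"),
    ("buyer", "seller", "payment"):      ("seller", "buyer", "confirmation"),
}

def exchange_allowed(seen, exchange):
    """Check a proposed exchange against the choreography constraints."""
    if exchange not in choreography:
        return False
    predecessor = choreography[exchange]
    return predecessor is None or predecessor in seen

# Orchestration: the same process from the seller's viewpoint, as a program
# executed by the seller's own orchestration engine.
def seller_orchestration(receive, send):
    order = receive("order")
    send("confirmation", order)
    return receive("payment")
```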

How, then, is an electronic community built from these components, and how do the participants interact autonomously? There are at least five different functional layers or levels that are necessary for interoperable electronic communities: the technology, service, community, organisation and business levels [RK06]. This structure resembles a traditional network protocol stack, where each layer uses services from the lower layer to provide services for the layer above. These layers in a community setting are illustrated in Figure 2.1.

Within the lowest technology level, the basic communication interoperability is managed. This level includes the technical protocols and standards required for common network connectivity and data flow, and could be implemented in a standard TCP/IP protocol stack with the modern application level protocols discussed later. On top of that is the service level, which handles the semantics and behaviour of the service or set of software modules. Service level interoperability means the capability of inter-operation between electronic services with well-defined, self-descriptive interfaces. Protocols on this level are interested in ensuring syntactic, semantic and behavioural compatibility of interconnected services. The community level functions manage the business network as a whole and may include protocols for negotiating and managing the business network model. Compatible failure-handling properties of the community are also essential. On top of this community level there is the organisational level, by which the business rules and policies for interaction are defined so that the automated community is compliant with applicable laws and the business strategies defined in the topmost business level. It is assumed that the higher the level, the less automation there is. For example, setting business strategies is a task best done in company board workshops, not by automated means. These strategies can then be defined for automation in the organisation level policies.

Figure 2.1: Community levels.

2.2 Types of Communities

As with the multitude of terminology, there is also a multitude of different types of communities. Communities can be taxonomically classified by several factors: duration, dynamism of topology, participation and coordination [CMA01].

• The community duration can vary. There are communities whose existence is for the duration of a single task or business opportunity. These communities are short lived and are dissolved when the business goal, such as a holiday sale or the construction of an item, is achieved. On the other hand, there are communities that are long-lived and more strategic in their nature. These long term alliances may be in existence during several actual business processes.

• Dynamism of topology differentiates communities by how members leave or join the community according to the phase of the business process. Some communities have an almost fixed set of members, where very little variation is allowed during the lifetime of the community. On the other hand, some allow a more dynamic approach, where members fulfilling a particular role can be easily switched. In other words, communities with a dynamic topology are more open to new members.

• Participation in a community can be exclusive or inclusive. While an enterprise may join several communities for different business purposes, there might be exclusivity clauses in the community contracts that prohibit the members from participating in other communities with a similar purpose.

• Community coordination structure can be star-like or centralised, where a central authority, typically the community initiator or coordinator, dictates and enforces the rules. This pattern is found in cases where the community exists to serve the needs of a single enterprise and the community is created to form a type of extended enterprise.

This classification does not take into account the existence of a formal contract between members. So, additionally, communities can be classified as implicit or explicit. Implicit communities do not have any high level contractual basis. There is no objective standard of behaviour, but each participant evaluates the other participants' compatibilities against its own subjective behaviour standard. Of course, this type of community still requires a certain level of protocol interoperability and the ability of participants to fulfil the tasks for the roles, but these low level issues can be negotiated ad hoc.

An example of the implicit case is Internet electronic mail, where a system contacts another system to send e-mail. The purpose of that community is to facilitate sending electronic mail between the community participants. A system needing a message sent acts in a sending client role and a receiving system acts in an e-mail receiver role. Both have a common application protocol, the Simple Mail Transfer Protocol, SMTP [Pos82]. There is also a third role, a relay, which is assumed by receiving servers that can relay mail from sending clients to other servers if no direct connectivity exists between the sender and the receiver. Traditionally, most e-mail servers in the Internet have also been open relays, to ensure fault-tolerant e-mail delivery. There has not been a need for any explicit contracting about Internet e-mail delivery roles and responsibilities.

However, this openness and the informal relationships have their downside: the risk of abuse. These very informal and implicit contracts between e-mail clients and relays have in recent years been a factor in facilitating the unsolicited bulk e-mail (spam) problem, by allowing abusive client computers to send millions and millions of spam e-mails via different relays. To counter this, quite many organisations deploying e-mail servers have now restricted the relay role so that it can be assumed only by a particular set of clients, with whom they may have a more binding contract, such as an outsourcing or client agreement. In this sense, the Internet e-mail community has been drifting from an implicit community to a partially explicit community.
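Summarising the discussion so far, the classification dimensions can be collected into a small sketch. The enumeration values mirror the taxonomy of [CMA01] plus the implicit/explicit distinction; the code form itself is an assumption, as is the approximate classification of the e-mail community.

```python
from dataclasses import dataclass
from enum import Enum

class Duration(Enum):
    SINGLE_TASK = 1   # dissolved when the business goal is achieved
    LONG_LIVED = 2    # strategic alliance spanning several business processes

class Topology(Enum):
    FIXED = 1         # membership barely changes during the lifetime
    DYNAMIC = 2       # members fulfilling a role can easily be switched

class Participation(Enum):
    EXCLUSIVE = 1     # contract forbids joining similar communities
    INCLUSIVE = 2

class Coordination(Enum):
    CENTRALISED = 1   # a central authority dictates and enforces the rules
    DISTRIBUTED = 2

@dataclass
class CommunityType:
    duration: Duration
    topology: Topology
    participation: Participation
    coordination: Coordination
    explicit_contract: bool   # implicit vs. explicit community

# The Internet e-mail community discussed above, approximately classified:
internet_email = CommunityType(Duration.LONG_LIVED, Topology.DYNAMIC,
                               Participation.INCLUSIVE,
                               Coordination.DISTRIBUTED,
                               explicit_contract=False)
```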

Depending on the type of the community, the technical requirements for the underlying support infrastructure can be very different. For example, a short-term and very dynamic community needs very flexible mechanisms for community population and renegotiation, and in a non-exclusive participation scheme there must be very strict information confidentiality enforcement across different communities of a similar type.

Especially in ad-hoc implicit communities, the current support infrastructure may not be sufficient, particularly considering the misuse possibilities illustrated by the spam case above. There is a clear need for more explicit community definitions which include application behaviour descriptions. Inter-organisational application management is based on explicitly stated, platform independent federation contracts that define the forms of cooperation between the performing components [Kut02]. Thus a community is specified via a contract on the members' collective behaviour, detailing community roles, enterprise object assignments to roles, policies applying to the roles and behaviour descriptions. The term e-contracting covers various aspects of contract forming and use [MJDP02]. Aspects in a general contract management architecture include a contract repository, a contract monitoring facility, a contract notification component and other components such as contract enforcement [MLG+04]. Contract enforcement covers mechanisms for ensuring that the actual behaviour of the parties governed by a contract is compliant with their expected behaviour as stated in the contract [MJDP02].
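A rough sketch of how the contract management components named above could fit together follows; the class, its methods and the contract representation are all invented for illustration and are not taken from [MLG+04].

```python
from dataclasses import dataclass, field

@dataclass
class ContractManager:
    repository: dict = field(default_factory=dict)    # contract repository
    subscribers: list = field(default_factory=list)   # notification targets

    def store(self, contract_id, contract):
        self.repository[contract_id] = contract

    def compliant(self, contract_id, observed_action):
        # Contract monitoring: is the observed behaviour compliant with
        # the expected behaviour stated in the contract?
        return observed_action in self.repository[contract_id]["permitted"]

    def notify(self, event):
        for subscriber in self.subscribers:
            subscriber(event)

    def enforce(self, contract_id, observed_action):
        # Contract enforcement: reject and report non-compliant behaviour.
        if not self.compliant(contract_id, observed_action):
            self.notify(("violation", contract_id, observed_action))
            return False
        return True
```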

2.3 Community Life-cycle

Communities have an operational life-cycle that has four phases: creation, operation, evolution and dissolution [CMA01]. These four phases include at least five management tasks: population, establishment, operations (including monitoring), reacting to change requests, and termination [KM05]. Thus the creation phase has two distinct management tasks, population and establishment.

The population process chooses community participants from a group of peers who are uniquely identifiable. Peers can be either people or organisations, i.e. principals, or autonomous agents acting for those principals². The peers add participation offers into the offer repository, and the population process selects a suitable offer for each role in the general community model or business network model (BNM), which is perhaps suggested by an initiator.

In an inter-enterprise scenario, the business network exists to fulfil a business scenario. This scenario is made public by publishing a business network model. The model captures all external business processes that are relevant to the business scenario. The BNM also gives structure to the electronic contract, which governs the community at run time [KRRM08].

In the population phase, if an entity can pick its business partners flexibly, it can select them to optimise any kind of quality-of-service criteria, including performance, availability, reliability, and trustworthiness [HS05].
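A minimal sketch of such QoS-driven population follows, assuming offers are simple records in the offer repository and the scoring function encodes the chosen quality-of-service criteria; the repository format, the example offers and the scoring weights are all invented.

```python
def populate(bnm_roles, offer_repository, score):
    """Assign to every role in the business network model the best-scoring
    participation offer; returns None if some role cannot be populated."""
    assignment = {}
    for role in bnm_roles:
        candidates = [o for o in offer_repository if o["role"] == role]
        if not candidates:
            return None   # the creation phase cannot be finished
        assignment[role] = max(candidates, key=score)
    return assignment

offers = [
    {"role": "airline", "peer": "AirA",   "price": 120, "reliability": 0.97},
    {"role": "airline", "peer": "AirB",   "price": 100, "reliability": 0.80},
    {"role": "hotel",   "peer": "HotelC", "price": 60,  "reliability": 0.95},
]

# One possible criterion: weigh reliability against price.
best = populate(["airline", "hotel"], offers,
                score=lambda o: o["reliability"] - 0.001 * o["price"])
```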

The participants need to be interoperable, and a negotiation process may occur to refine the suggested community properties in the community establishment task. Finishing the creation phase means that all roles have been populated and that the members both have the technical preparedness to accept interaction and have completed all required contractual issues.

The second phase is the actual operations phase, where the common goal is realised. This phase may need support functions, such as basic secure data exchange mechanisms, information sharing and visibility rights support, orders management, distributed and dynamic planning and scheduling, distributed task management, high levels of task coordination and collaborative engineering support [CMA01].

During the operations phase there must also be the ability to renegotiate or change the contract if requested by one or more parties. This type of change initiates the evolution phase. Evolution might be necessary during the operation of a virtual enterprise (VE) when it is necessary to add or replace a partner, or to change the roles of partners. This need might be due to some exceptional event, such as the (temporary) incapacity of a partner, changes in the business goal, etc. [CMA01].

The final phase in any community is dissolution or termination, either when the parties agree that the common purpose has been achieved, or when the contract has been breached, intentionally or unintentionally, in such a way that there can be no return to normal operations. The third option for community termination is normal contract expiration, if the contract has an end date.
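The phases and the transitions between them can be summarised as a small state machine; the phase and event names follow the discussion above, but the encoding itself is an illustrative assumption.

```python
# Community life-cycle: creation -> operation <-> evolution -> dissolution.
TRANSITIONS = {
    ("creation",  "populated_and_established"): "operation",
    ("operation", "change_request"):            "evolution",
    ("evolution", "contract_renegotiated"):     "operation",
    ("operation", "goal_achieved"):             "dissolution",
    ("operation", "contract_breached"):         "dissolution",
    ("operation", "contract_expired"):          "dissolution",
}

def step(phase, event):
    # Unknown events leave the community in its current phase.
    return TRANSITIONS.get((phase, event), phase)

assert step("creation", "populated_and_established") == "operation"
assert step("operation", "change_request") == "evolution"
assert step("evolution", "contract_renegotiated") == "operation"
```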

Within each of these community life-cycle phases there are interesting research questions. For example, what are the best community population strategies, and what kinds of methods and protocols are best in different kinds of contract negotiations? How can community transactions be modelled, either formally or informally? What kind of expert systems can be built for decision making in a community? How can a community be gracefully terminated while guaranteeing that transaction data which is not needed anymore (such as credit card numbers) is deleted? However, this work focuses on the community operations phase, and especially on the question of how best to secure the participant organisations from the threats arising from collaboration.

²Despite ODP vocabulary, peer is preferred over principal as the actor noun, also because it cannot be used as an adjective and thus we avoid ambiguities. Client refers to a piece of software acting on behalf of the peer, unless the discussion refers to business.

2.4 Example Communities

As discussed above, there are several types of communities. In this section two example communities are introduced, one with a centralised command structure, the other with a more dynamic membership style. The community examples are for illustrative purposes only; the ODP viewpoints used in this thesis are mainly computational and engineering, i.e. what processing components are needed for a community to function, how they should be distributed and what kinds of interfaces and interaction there are. The enterprise view, i.e. the business requirements, the purpose of the community and the enterprise policies, is used only generally, since the focus of this work is to create solutions applicable in various applications and business domains.

2.4.1 Centralised Travel Agent Community

A simple example community with a centralised command structure is shown in Figure 2.2. The purpose of the community is to provide a service, in this case holiday bookings to holidaymakers. The community initiator is the travel agent, who defines the rules in the form of a community contract. All participants must agree to this contract to participate in the community. The holidaymaker contacts the travel agent for a scuba diving holiday booking, and the travel agency contacts its subcontractors, an airline and a hotel in the target country. There may be several airlines and hotels that have declared their interest in joining the community, but the travel agent uses some selective criteria, such as pricing and quality, in populating this community of preferred partners.

Figure 2.2: An example community with centralised coordination.

Additionally, the travel agency does not want to deal with the tour service companies directly, so the community choreography specifies that the hotel is responsible for contacting local service operators, such as companies organising sightseeing or scuba diving tours.

The community works in the following way. First, all interested airlines and hotels, i.e. peers, declare their interest in joining a community by placing descriptions of their offerings into the offer repository. From there the travel agent, i.e. the initiator, selects a suitable set of members for the particular holiday community.

The holiday community can be built in two ways. The first type exists to provide the travel agent with a wide network of preferred partners, which are later queried for more detailed offers, such as availability on certain dates. In the taxonomy of community types, this example falls into the following categories. First of all, the community of travel agent partners is potentially long lived. If the travel agent continues to receive good service from the community hotels and airlines, the collaboration is fruitful and likely to continue. Second, the community is dynamic. New airlines and hotels may join, and old hotels may drop out because of renovations or hotel chain mergers. Additionally, hotels may change tour operator partners. Third, the travel agent community would probably not be exclusive. For hotels it is important to have many sales channels, so any suggested community contract limiting their participation to one travel agent's community only would probably not be received favourably. It is theoretically possible to have one very large travel agent which has some kind of exclusive deal with one hotel, but as this is unlikely, our example community is not exclusive. Fourth, the coordination structure is centralised. The travel agent dictates the community rules to the airlines and hotels. The hotels in turn dictate their rules to the tour operators. Additionally, this type of community is explicit. Transactions involving money should always be governed by a suitable contract everybody agrees to.

It may not be technically necessary to build a separate preferred partner community, but real world alliances and networks have shown that this is the industry way of doing business. Hotels form chains, airlines have alliances, and these groups have made special deals with travel agents and car rental companies.

However, the purpose of a community is usually more detailed. The second type of community has a very specific goal: in the travel agent case, to provide a particular holidaymaker with a successful vacation. The key differences to the preferred partner community type above are in duration and dynamism. This particular delivery community exists for the duration of a particular holiday only, and is disbanded afterwards. Secondly, once the community members have been selected and reservations booked, the members are not likely to change. However, the needs of the holidaymaker may dictate that the community must accept new and previously unknown members, even if members are usually selected from the network of preferred partners. If, for example, the hotel or travel agent has not previously arranged a cave diving tour for a holidaymaker, the cave diving organiser must be accepted even if it is a new and previously unknown member. Alternatively, the travel agent may decline to organise such a tour for the holidaymaker and thus risk losing the customer. Therefore, the community should be open by its nature, so that peers can freely communicate with each other according to the needs of the holidaymaker. In practice this means that in this type of community, members have to interact with previously unknown other members as dictated by the business network model.

It is quite possible that these two approaches are not separate communities at all. The community business network model and the choreography can be built to include both long lived preferred partner operations and short-term holiday delivery transactions. This illustrates the multitude of possibilities business communities can offer organisations. However, this work is mainly focused on the specific service delivery community with an open membership policy, since the challenges are more interesting there.

2.4.2 Distributed Content Delivery Community

Another simple example community is shown in Figure 2.3. The purpose of this community is to provide a distributed content delivery service for digital content, for example movies. The content publisher submits content to be made available in the community, and participating members agree in the contract to host the content or pieces of it. The business benefit in this community is twofold. On the one hand, the publisher can very simply make the content available without huge requirements for its own network capacity. On the other hand, for the member the benefit is a parallelised and thus faster download of the digital content. Additionally, this community could be enhanced by members specialising in selling and distributing different types of licenses that open the access control restrictions specified by the digital rights management (DRM) technology deployed in the content.

Figure 2.3: An example of a more dynamic community.

This content delivery service is better known in the context of peer-to-peer (P2P) networking. P2P networks have the basic feature that they are based on direct communication between equal members and lack a centralised coordination server [DDCdVP+02, ATS04]. This basic P2P concept can be used for different applications, such as member-to-member communication (for example Skype) and resource sharing. Resource sharing examples are content distribution with file sharing protocols such as BitTorrent [Coh08], and distributed processing such as SETI@home using the BOINC distributed computing platform [And04]. However, depending on the purpose and needs of the P2P application, there may be some centralised services, such as authentication. Actually, applications are labelled peer-to-peer not because of their internal operation or architecture, but rather as a result of how they are perceived externally, that is, whether they give the impression of providing direct interaction between computers. Therefore the key features of P2P architectures are the direct sharing of resources and the abilities to treat instability and variable connectivity as the norm and to handle a transient population [ATS04].

Current Internet anonymous or pseudonymous file-sharing P2P networks can be considered ad-hoc communities consisting of private persons and their computers. Everyone in the community can publish content and access published content in the community. Since content may be distributed across all or some members of the community, it can be accessed in parallel, and thus delivery is faster than from a single centralised point. While P2P is currently notorious for enabling illegal sharing of copyrighted material between individuals, P2P networking itself is a neutral technology base which has also been investigated in relation to enterprise applications. For example, some enterprises need to rapidly distribute large content files, such as movies, to a large number of clients or members, and the required network capacity may exceed any single network provider's abilities. One current example is the game publisher Valve, which has a content delivery network, Steam. The Steam network enables consumers to buy games directly from the publisher and play multiplayer games online. The control of this community is centralised from the content point of view, but the Steam network does incorporate some P2P techniques for direct communication and file transfer between members.

In the Steam example the members are individual people, but it is foreseeable that enterprises may have content delivery needs towards other businesses as well. For example, movies are more and more made directly in digital formats. What if they are in the future distributed directly to movie theatres using content delivery networks, instead of on physical media that can be mislaid? Would the ability to publish a movie directly to theatres, bypassing country or continent level distributors, generate new revenue for publishers?

In the standard business mindset of today, the content owner decides exactly to whom the content should be distributed. In this case there may be a populator or some other centralised coordinator deciding who can become a member of the business community. However, this cumbersome pre-selection of members is not necessary if the content itself contains measures to control who can utilise it. There are several digital rights management (DRM) systems available for that purpose. So it is feasible to build a dynamic and distributed content delivery business system with no or very little member pre-selection, and where new members join the community during the community lifetime. Since the content should be available to new members, old members do not necessarily know in advance with whom they exchange the pieces of content.

While the current way of conducting enterprise business on the net is based on written contracts and pre-selection of community members, the view taken in this thesis is that in the future the way of doing business could change to more open and ad-hoc participation, if sufficient enabling technology exists. The move towards openness will probably start from specific business needs, such as content delivery, but slowly extend to other business applications as well when the business benefits become clearer.

The move towards more open community participation brings several benefits. First of all, more members can participate in business networks, even from regions and business environments less familiar to community initiators. This means more equal opportunities for new and less known enterprises. Of course, this has the downside that existing members face stronger competition. Therefore, existing members may not view increasing openness as a benefit. However, from the community purpose point of view, the increased competition may reduce the costs of service production or raise the service quality in the community, which are direct monetary benefits.

These are the key reasons why an open and ad-hoc community structure will be the future. However, for the open model to succeed, there are some problems that must be solved in the technical collaboration environment and support systems. These problems, security and management, will be explored later.


2.5 Technical Environment and Support Infrastructure

A technology often discussed together with electronic communities is service oriented computing. Service-oriented computing (SOC) is a computing paradigm that utilises services as the fundamental elements for developing applications. SOC involves the service layers, functionality and roles described by the Service-Oriented Architecture (SOA) [PG03]. This paradigm enables applications across different domains to collaborate to provide a unified service.

In this paradigm, services are self-describing, open components that support rapid, low-cost composition of distributed applications. Service descriptions are used to advertise the service capabilities, interface, behaviour, and quality. Publication of such information about available services provides the necessary means for the discovery, selection, binding, and composition of services. In particular, the service capability description states the conceptual purpose and expected results of the service (by using terms or concepts defined in an application-specific taxonomy). The service interface description publishes the service signature (its input/output/error parameters and message types). The (expected) behaviour of a service during its execution is described by its service behaviour description (for example, as a work-flow process) [PG03].
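The three parts of a service description could be pictured roughly as follows; this is a sketch only, and the field names and example values are invented rather than taken from [PG03] or any concrete SOA standard.

```python
from dataclasses import dataclass

@dataclass
class ServiceDescription:
    # Capability: conceptual purpose and expected results, expressed in an
    # application-specific taxonomy.
    capability: str
    # Interface: the service signature (input/output/error parameters).
    inputs: dict[str, str]
    outputs: dict[str, str]
    errors: list[str]
    # Behaviour: expected run-time behaviour, e.g. as a workflow of steps.
    behaviour: list[str]

hotel_booking = ServiceDescription(
    capability="travel/booking/hotel-reservation",
    inputs={"dates": "DateRange", "guests": "int"},
    outputs={"confirmation": "BookingRef"},
    errors=["NoAvailability", "PaymentRefused"],
    behaviour=["receive_request", "check_availability",
               "reserve", "send_confirmation"],
)
```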

Services come in two flavours: simple and composite. The unit of reuse with services is functionality that is in place, readily available and deployable as services capable of being managed to achieve the required level of service quality. Composite services involve assembling existing services that access and combine information and functions from possibly multiple service providers [Pap03]. Composite services can be offered by a composite service provider who utilises a community to provide these needed multiple services.

SOC provides the ability for interacting parties to choreograph their behaviours, so that each can apply its local policies autonomously, yet achieve effective and coherent cross-enterprise processes. In addition, it provides support for dynamic selection of partners as well as abstractions through which the state of a business transaction can be captured and flexibly manipulated; in this way, dynamic selection is exploited to yield application level fault tolerance [HS05].

While communities can be built on any paradigm, as the Internet electronic mail case shows, this idea of interacting and co-operating service modules is very useful when building communities whose reason of existence is collaboration. Therefore this work assumes the use of the service-oriented computing paradigm in the service level of the collaboration stack discussed in Section 2.1. In practice this means each community member creates a service, simple or composite, and publishes it to the community, thus creating the service level from the available technology base.

However, the community may need more support infrastructure than the general service-oriented paradigm to function. While enterprises offer their services to the business networks, the community population, establishment and operations phases have tasks that require additional services, such as a service offer repository, a populator, a business network model repository and monitoring components. These or similar common support functions are typically implemented in business-to-business (B2B) middleware systems, which can be called community support systems (for example ADEPT [RD00]) or virtual enterprise support environments (for example CrossFlow [GAHL00] and WISE [LASS00]).

Topologically the middleware components are distributed around the community. Each member usually has a middleware component, the network management agent, enabling community participation. Additionally, there may be several common components, such as the populator, which can be either distributed across several community members, run by a neutral third party, or fully controlled by one member. This division may reflect the community coordination structure.

The community middleware can be built as a framework, which is a reusable design of a program or a part of a program [Deu89]. For the community middleware or parts of it to function as a framework, it should by definition be applicable to many business scenarios and community structures.

Figure 2.4: Components of the Pilarcos environment [KRRM08]

Middleware systems are generally not very flexible from the community coordination point of view. For example, the WISE system has a mandatory central coordinator component, the WISE engine, to which the clients and members connect and which distributes tasks, monitors and controls the workflow in the community. As such it has difficulties adapting to a democratic and distributed community structure such as the content delivery community example. In contrast, the CrossFlow system utilises only an external contracting service (a Service Trader in its own language) for matching service offers and requests.

This thesis uses concepts of the Pilarcos middleware system [KRM07, KRRM08], which is depicted in Figure 2.4. In this system, the middleware consists of a populator component, service offer and type repositories and the business network model repository. Each member offering a service has its own business network management agent (NMA), a contract repository, a monitoring component and an administrative management application. This structure is applicable to many types of communities.

The threats present in the future collaborative environment are discussed next to lead the analysis towards possible solutions.


2.6 Security and Threats of the Environment

2.6.1 Security definitions

Security is generally defined as being free from danger or as the condition of safety [KT93]. The danger is presented through threats towards assets. Assets in a computing setting are typically information, hardware or software. However, they can also be immaterial, like reputation or satisfaction [RK10]. A threat is a potential occurrence that can have an undesirable effect on the system assets or resources [Bis03].

The concepts used in the field of computing security are not very exact. Information security has emerged in recent years as the umbrella concept containing all aspects of security in a computing setting, despite the name referring only to securing information assets. Concepts like network security, where the protected asset is the network, and computer security, where the asset is a computer system, are generally thought of as aspects of information security. In this thesis we follow this umbrella status and define information security to mean protecting both information and information system assets from various threats.

In this thesis we define community security to be an aspect of general information security, where the protected asset is the community. Here the community is a combination asset consisting of all software, hardware and information assets at all community participants necessary for the community to reach its goals. Correspondingly we define community member security, or member security in short, as an information security aspect and a subset of community security, which protects the community member assets from the threats community membership brings.

The four general goals of information security are protecting the confidentiality, integrity, availability and non-repudiation of the assets. The assets can be divided into two categories, community framework assets and community specific assets. Framework assets are software and hardware components (such as the NMA) required for participation and the technical interoperability of the community. Additionally, each community has a purpose, such as holiday sales, which brings some community specific assets to be protected. The community specific assets are in this thesis generally called business data, since the assets travel through the eCommunity as data representations. However, it is recognised that this business data can at some point of the community functionality be converted to physical objects and services.

In community security the characteristics and purpose of the service or community dictate which of the general information security goals are relevant for each asset and what their order of importance is. For example, non-repudiation is crucial for communities having sales transactions, while confidentiality is a key goal for patient records in a healthcare service community. For a generic community, contract integrity is also of importance. In contrast, availability plays the greatest role for a background image rendering system for the movie industry. Problems in rendering one frame cause delays in the rendering of subsequent frames in the pipeline and thus delay the whole movie, incurring huge costs.

To analyse the threats affecting our example holiday delivery community, it is worthwhile first to examine which assets are worth protecting in this community. This community does not deal with tangible goods; the transaction targets are services. However, services have value as well, and the end result of the community is that a holidaymaker will receive something of value in exchange for money given to the community. This money is eventually distributed to the community members according to the contracts made during the community lifetime. Therefore one class of assets worth protecting are the promises embedded in the community contracts. These promises may be backed up with guarantees required by the members for entering into the contract. In the travel agency world, it is very common to require the holidaymaker's credit card details as a guarantee for a booking. Protecting the credit card information in the community is very important, since card details are a popular crime target for identity theft and credit card fraud purposes.

In our distributed content community the confidentiality of the movies or other digital content is protected separately with DRM technology. However, the integrity and availability of the content must be protected within the community as well. Members will not be very satisfied if they cannot use the downloaded content because one of the pieces of the content has been corrupted in the community.

A threat is related to a vulnerability, which is a weakness that makes it possible for a threat to occur [Bis03]. An example of a concrete threat is a hacker3 breaking into a payment system via an unsecured wireless network access point and stealing credit card information. The vulnerability in this case is weak technical security measures on the wireless access point.

Vulnerabilities that enable the hacker to break into the system can be, for example, bugs in the application code, architectural flaws, design or system management oversights or algorithmic problems. In the security community the method to utilise the vulnerability, for example a small program, is called an exploit. An attack vector is the specific route via which the vulnerability can be exploited, for example an e-mail message or a door to access the system console.

For a real security incident to occur and cause loss of one or some of the security goals, several things are thus needed: a threat, a vulnerability and an attack vector to reach that vulnerability with an exploit. If there is no vulnerability, a threat cannot be realised. If all attack vectors for a specific vulnerability are known and closed, the vulnerability cannot be exploited. However, all potential attack vectors cannot always be closed. For example, very few organisations in the modern business environment can be without electronic mail to the rest of the world for long, and the same can be said for network access to collaborative business applications that generate revenue.
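
The reasoning above is a simple conjunction of preconditions, sketched below as a hypothetical predicate; the parameter names are invented for illustration only.

    def incident_possible(threat: bool, vulnerability: bool,
                          open_attack_vector: bool) -> bool:
        # An incident requires all three preconditions at once; closing any
        # one of them (e.g. patching the vulnerability) blocks the exploit.
        return threat and vulnerability and open_attack_vector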

There is an additional concept related to threats, a risk. A risk is a qualitative or quantitative measure of how likely a particular threat is from a subjective point of view. A risk analysis aims to measure the risk of known threats, whereas a threat analysis is interested only in enumerating all the possible threats. In a risk analysis, the cost of the incident caused by the realised threat is also often estimated. Based on the occurrence frequency and financial impact of the threats, the most sensible countermeasures can be selected. For example, if a purchase fraud occurs about once a month with average direct and indirect losses of 10 000 euros, it is cost effective to deploy a fraud prevention solution only if it costs less than 120 000 euros yearly in total.
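
The fraud example reduces to a simple annualised-loss calculation, sketched below; the numbers are taken from the example above and the function name is invented.

    def annualised_loss(incidents_per_year: float, loss_per_incident: float) -> float:
        # Expected yearly loss from a threat occurring at a given frequency.
        return incidents_per_year * loss_per_incident

    ale = annualised_loss(incidents_per_year=12, loss_per_incident=10_000)
    print(ale)  # 120000.0: a countermeasure is cost effective only below this yearly cost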

2.6.2 Threat Analysis Method

Interestingly, from the collaborative enterprise point of view, general threats, risks and possible countermeasures have not been thoroughly analysed in a virtual enterprise scenario [DJPJ04]. Some partial efforts to fill the gaps between single-system and enterprise-level security exist, for example in the form of message context protection [vdHLP05], but no comprehensive threat, risk or requirement analysis has been done previously. Therefore, such an analysis for community security has been conducted to help focus this thesis.

There are all kinds of threats towards adopting an eCommunity collaborative business model. Some of the threats are the high cost of adopting and managing the ecosystem, infrastructure-level threats introduced by the new global support services, and the current lack of legislative support [RK10]. This threat analysis is focused on the threats introduced by the required support services and infrastructure.

3 Here the word hacker is used according to its current popular semantics, i.e. a person trying to get unauthorised access into others' networks or systems. If anyone from the MIT Tech Model Railroad Club is offended, I respectfully salute and apologise.

Figure 2.5: The starting point of the eCommunity threat analysis.

There are several methods to use in a threat analysis, for example checklist-based methods or a more free-form brainstorming method. A checklist-based method is limited to selecting applicable threats from the pre-compiled list and may miss threats not present in the list. Free-form brainstorming, on the other hand, is not limited in this way, but may still miss important threats just because some aspect of the target has been overlooked in the analysis.

Therefore the threat analysis here has been done using a more formal threat tree method [Amo94]. The method divides the target system into parts or views, which can then be further divided as many times as necessary to create a tree structure comprising different cases for consideration. The method produces a conceptually complete target system decomposition if each division includes a part or view called other, which includes everything else not listed in the explicitly named cases.

While the threat tree method is essentially a form of guided brainstorming, this completeness ensures that every possible angle of the system will be considered during the analysis. Of course, the quality and depth of the results depend on several factors: the actual divisions, the capabilities and knowledge of the person(s) doing the analysis and the time available for the analysis.

In this thesis, the target system is an eCommunity and the division into different views has been done twice, as shown in Figure 2.5. The first division is done according to the assets an eCommunity must protect. These assets include first of all the eCommunity framework itself, its functionality, structure and operational data, for example offers, contracts and the business network model. Second, each member itself has assets requiring protection, such as computing resources and private data belonging to that member only. Third, there is the real purpose of the community, the data transferred in the community for business purposes, for example credit card numbers, pricing and quantity information of goods, or personally identifiable information. The rest of the assets are in the "others" section. Assets in this final category can be non-tangible, such as the member or community reputation.

The second division is made according to the community actors, i.e. sources of threats. Four types of actors have been identified here, namely clients, members of the community, community middleware components and, of course, others. An example of the other actors is external hackers. When the community purpose and the community support components are fully known, the actor list can be expanded to specific member roles and named middleware components. This expansion would make the threat analysis more meaningful and complete for software design purposes. However, at this point a general threat analysis suitable for many different community purposes and potential frameworks is more useful.

Thus from these two orthogonal divisions the threat tree yields sixteen cases to be considered. The key question asked in analysing all these cases is: what threats can the actor X cause to the asset Y in the eCommunity? The divisions here reflect the issues in the focus of this thesis. To keep the analysis focused, the "Other" asset subtree is ignored and the analysis concentrates on the three primary asset types, leaving 12 cases. In other research projects the divisions can and should be made differently and the analysis focused differently. More detailed divisions can also be performed if necessary, but that very easily yields a large number of cases to be considered. This exponential growth in the number of cases is the drawback of the threat tree method.
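
The case enumeration is a simple cross product of the two divisions, as the following sketch shows; the category names follow the text above.

    from itertools import product

    # The two orthogonal divisions; the "other(s)" branches make the
    # decomposition conceptually complete.
    ACTORS = ["client", "member", "middleware", "others"]
    ASSETS = ["community framework", "member assets", "business data", "other"]

    cases = list(product(ACTORS, ASSETS))            # 4 x 4 = 16 cases
    focused = [c for c in cases if c[1] != "other"]  # 12 cases analysed in this thesis

    for actor, asset in focused:
        print(f"What threats can the actor '{actor}' cause to the asset '{asset}'?")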

The threats found are usually somewhat similar between similar cases (for example "client towards the community" vs "client towards member assets"), but there are subtle differences warranting separate consideration. Each found threat is classified into one of six broad types according to the STRIDE method [HL03]; a small classification sketch follows the list:

1. Spoofing or masquerading as another entity. Spoofing can occur if the authentication mechanisms are weak or non-existent between the communicating parties (client, web server, application servers etc.), especially if data crosses organisational trust boundaries. While spoofing is a legitimate threat itself, it is usually used to cover other attacks, such as denial of service.

2. Tampering of data. An attacker can potentially manipulate all data residing on the system or travelling in the data stream between client and server. Sensitive data can also be manipulated in a rich client application. Client applications can be reverse engineered and data embedded in them changed.

3. Repudiation, i.e. denying that a transaction occurred or was initiated by someone.

4. Information disclosure. Data can be obtained from the client machine or when the data travels in the network. However, weak server or application security can also lead to information disclosure from the server or application internal data structures.

5. Denial of Service (DoS) prevents a system from performing its duties. Messages with malicious payload can cause server crashes or hangups. It can also be caused by flooding a communicating party with excess data or service requests. DoS is typically a server side issue, but sometimes a DoS attack can be directed at a client as well, especially in a rich client situation, where significant response processing happens at the client too.

6. Elevation of privilege can be thought of from two points of view. First, to gain privilege where no privilege existed before (i.e. unauthorised access). Second, to gain more privileges in a system where some legitimate access already exists.
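
As a minimal sketch, the six categories can be encoded as an enumeration under which each found threat is filed; the class and value names here are invented for illustration.

    from enum import Enum

    class Stride(Enum):
        SPOOFING = "masquerading as another entity"
        TAMPERING = "unauthorised modification of data"
        REPUDIATION = "denying that a transaction occurred"
        INFORMATION_DISCLOSURE = "exposing data to unauthorised parties"
        DENIAL_OF_SERVICE = "preventing a system from performing its duties"
        ELEVATION_OF_PRIVILEGE = "gaining unauthorised or additional privileges"

    # Example classification of one threat found later in the analysis.
    threat = ("client floods the community with service requests",
              Stride.DENIAL_OF_SERVICE)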

2.6.3 Threat Analysis

Since the threat gathering and analysis process is by its nature open-ended, the list of threats below is incomplete. Each practical community application should perform its own threat analysis, which takes into account the particular business requirements and the technical details of the application and framework. For example, the threat tree of the travel agent example could be constructed differently, and thus some different and more concrete threats in that particular environment found. However, this list may act as a starting point for such a more detailed analysis.

Please note that at this point no judgement is passed on the viability and likelihood of the observed threats; these are discussed later. Similarly, existing solutions to mitigate the found threats are discussed later.

1 - A client towards community

A client requests a service from the community, so it has access to at least one community interface.

Spoofing or masquerading as someone else is a possibility in any interaction. In this case the threat is that the client masquerades as someone else and thus creates problems for the community structure. The client can try to masquerade as another client, a community member or a middleware component such as the populator.

Masquerading as another client may cause problems for the community if the business process is not a simple request–response but also requires interaction in between. Out-of-order or semantically unexpected requests and/or responses may cause problems for the community choreography driven by the business network model. Moreover, it is possible that if a client can masquerade as the populator, it can falsely initiate contract renegotiations, repopulations or even community termination.

A client in his/her normal role does not really have many tampering possibilities towards the community structure and/or functionality. The client can only affect the data and requests he/she sends to the community. However, a client by definition has access to the community via a member or initiator and thus can try to hack into the member via a community interface which has a security vulnerability, and thus gain more tampering opportunities.

A client can also deny sending a service request to the community, i.e. repudiate the transaction. This may cause problems to the community, especially in cases where the community charges a fee for its services. Alternatively, the client may claim never having received a particular service or the data requested.

A client can also disclose privileged community information if the community reveals any private information to the client. For example, this information may include the identities of the community members if the service response includes this information. This structural information may also be available if the client is offered some kind of request tracking interface to see how the request is handled in the community.

The client also has some denial of service capabilities towards the community as a whole. It may flood the community with service requests, valid or invalid, so that other clients cannot be served. Alternatively the client can block responses from getting through and thus reserve and eventually exhaust resources in the community.

A standard client can usually only send service requests and receive replies. A client may try to elevate its privileges in the community by hacking into community members or community middleware components, which have more capabilities (or privileges) in the community. By definition the client has access to the community member/initiator it contacts for service, and possibly to others; therefore this is a valid threat.

2 - A member towards community

The threat of a member spoofing or masquerading as another member is valid. Successful spoofing can cause problems for the community choreography when interaction patterns break down or become semantically unexpected. However, the threat of a member masquerading as a middleware component or other community infrastructure actor is more problematic, as it can cause disruptions and security breaches in the community functions. For example, masquerading as a populator is especially problematic in communities and situations where periodic repopulation occurs. In these cases the attacker can basically reorganise the community as he/she wishes.

A community member can also tamper with the community metadata or interaction messages that it has access to. This may include contract data and negotiation protocol messages, but one member can only have a limited effect on the contracting and negotiation process.

More interesting threats are possible with a Sybil attack, where a single malicious entity can create several false identities and use those false identities to collaboratively harm the system [Dou02]. The validity of the attack here is very much dependent on the identity establishment system the community uses. One malicious member could try to establish several different peer identities and influence the membership selection and community setup. In an open community where anyone can join or send offers, this is a valid threat, although the population process ultimately selects the members. A separate threat with a similar effect is the situation where several different members act in collusion to shape the community towards their desired goals.

A community member can also deny ever agreeing to a specific contract or community participation.

Members usually have access to some structural and contractual information of the community. The member has a contract with the community, and the contracting information may include information about other members of the community as well. The member has some choreography knowledge, for example which parties act in which roles. The knowledge is not necessarily exhaustive, but at least it usually knows its up- and downstream partners in the service choreography sequence. This information may be revealed accidentally or deliberately.

If a client can cause havoc with the community interface to the point of denial of service, the community member has even more disruption possibilities. While clients can only access public interfaces, members may utilise other members' private interfaces and some middleware components. By sending massive amounts of valid or invalid messages they can shut them down (if a vulnerability allowing it exists) or exhaust the receivers' resources and thus prevent them from serving others.

A member may try to gain more privileges in the community by hacking into other systems via vulnerable interfaces or systems. Other members do not usually have more privileges than the attacker towards the community, except in cases where the community control is centralised. In that case a member can gain control privileges in the community by attacking the central member.

3 - Middleware towards community

Middleware and community framework components are the most security critical components when the community structure and functionality are in focus. The middleware components taking part in community population and eContracting processes could spoof their identity and masquerade as another component, member, prospective member or client. Components could also tamper with the community setup messages, such as contract offers and population messages.

At minimum such spoofing and tampering means disruption of the normal community communication choreography and will cause at least a temporary slowdown and possibly a denial of service situation while the community tries to gracefully recover from the disruption.

Spoofing can also have more severe consequences, for example in situations where the community middleware components have been distributed to different organisations. If one organisation provides the offer repository and another the populator, the populator could be directed to use a third-party offer repository if the offers at the old one are not to the liking of the organisation running the populator. This in turn may cause the community to provide a different type or degraded quality of service to the client. Changes like this are not visible to the client, or maybe not even to the initiator.

There is also the possibility that the contract and type repositories change the offer and type data stored in their databases after it has been entered there. This could cause the community to be set up under false pretences if the members do not check the final contract diligently.

Some repudiation threats are also possible. The offer/contract repository can repudiate ever receiving a particular offer or contract. This may have negative consequences for the community in cases where the rules of population dictate that, for example, the cheapest offer must always be selected, as is common in public sector tendering. The same end result can also be achieved if the offer/contract is simply lost in transit or dropped due to a software glitch in any middleware component.

The middleware components naturally have access to many community messages and metainformation, especially contracts and the business network model information. Middleware components may reveal some of this information to third parties as part of error messages. It is also possible that, if some middleware components or component instances are shared between several similar communities, information about one community may end up visible to another community's members or administrators. This is an especially relevant threat with shared data storage middleware components, such as the offer and business network model repositories, even if the communication with these components goes via a centralised populator.

Since the middleware is responsible for technically implementing the community initiation, repopulation and termination tasks, components may very easily cause a denial of service situation either by ceasing to function due to a software bug, deliberately refusing to function or respond at all, or attacking other middleware components with excess or malformed traffic.

In a reverse case there is a possibility of denial of service if the middleware components fail to send the correct community termination messages and the members must keep the community interfaces, components and data structures alive in their systems. This denies the members the ability to use these reserved resources for something else, so this could also be classified as a threat towards the members' assets.

4 - Others towards community

If the community is not open, but requires clients to authenticate themselves, other external parties may try to spoof their identity to gain client status within the community. This is especially desirable from the attackers' point of view if the service provided is of material value and the attacker can spoof its identity so that utilising the community does not incur any costs to the attacker. From the community point of view the threat is of using resources to provide a service without benefit or proper compensation, basically a fraud.

External parties may try to conduct a man-in-the-middle attack to gain access to the community internal messages. If the attack is successful and gains both read and write access to data, this threat belongs to the tampering category. If the attack is only partially successful and gains the attacker read-only access to data, it is a case of information disclosure. Tampering is the more dangerous of the two from the community functionality point of view, but information disclosure may also reveal data community members wish to keep community confidential, such as contract terms and safeguards. Similar effects can be achieved if the attacker succeeds in masquerading as a member or a middleware component.

Denial of service is a possible attack as well. Especially distributed denial of service (DDoS) attacks conducted via botnets are a problem in the current Internet, and it does not seem to be a diminishing problem. Problems are greatest in communities offering services to everyone, not just a pre-selected list of approved clients. The DoS attack can happen on the network level (IP/UDP packet or TCP connection flood) or on the application and community message level. The higher in the protocol stack the DoS attack operates, the more resources it takes at the receiver end to analyse and classify the incoming contact as a DoS attack.

A denial of service situation can also be caused by a non-malicious agent, or the threats can in fact be agentless. For example, networking equipment may fail due to human configuration error, because of a software glitch or just because the hardware reached the end of its life. These issues are possible, but from the point of view of this thesis not very interesting, and thus these types of threats are not elaborated further.

5 - A client towards a member

Since a client must have network level access to at least one community member (the initiator or other public interface member), the client can try to exploit a vulnerability in that interface to gain unauthorised access to the member network node. Depending on the type of the vulnerability the effect of this attack can vary. It can give the client access to the member's private data (for example passwords) or it can enable the client to write to the member's system, thus allowing tampering with the member's data. Additionally, the vulnerability can give the attacker means to conduct a denial of service attack towards the member. A vulnerability can also provide the client a way to gain elevated privileges in the member system and thus gain complete control of the system. This in turn has severe consequences, since this maliciously controlled system can then be used as a stepping stone for more advanced attacks towards the member's internal systems or towards the community.

An example vulnerability in the member system allowing these types of threats is a software problem at the interface, where it fails to properly sanitise or filter client-submitted data, so that the data is interpreted and executed as code in the member system.
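
A minimal sketch of the missing countermeasure, assuming a hypothetical order-number field: client input is validated against an expected format before being passed onwards, so it can never reach an interpreter as code.

    import re

    # Whitelist pattern for an invented order-number format (two letters,
    # six digits); anything else is rejected before further processing.
    ORDER_NUMBER = re.compile(r"[A-Z]{2}\d{6}")

    def sanitise_order_number(raw: str) -> str:
        if not ORDER_NUMBER.fullmatch(raw):
            raise ValueError("rejected: input does not match the expected format")
        return raw

    sanitise_order_number("AB123456")            # accepted
    # sanitise_order_number("'; DROP TABLE x")   # raises ValueError instead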

It is interesting to note that the attacking client does not necessarily need direct network access to the victim member. The vulnerability may not be at the first public interface but may actually reside at another member further along the business process interaction chain. The vulnerability could theoretically be exploited if the maliciously crafted input data is passed unfiltered down the business process chain.

6 - A member towards another member

If a member successfully spoofs its identity and can masquerade as another member, it can also receive information private to the spoofed member. This may have severe consequences if the private data is, for example, part of the contracting process and price negotiations.

To tamper with another member’s private data, the attacking member must have access to it.If such private data is sent to the community during for example contracting process it is possibleanother member listens to it using a man-in-the middle attack or masquerading as a communitymiddleware element. Otherwise access can be gained by breaking into the victim member’s sys-tems. The attacker and victim members may have open access to interfaces between themselvesif the community choreography so dictates. The attack can be conducted for example with mali-ciously crafted messages towards a vulnerable interface.

Direct access between the attacking and victim members is not always necessary. A member could also try to hack its way into another member's system by sending maliciously crafted data to victims via the middleware-mediated offer and contracting message exchange. Even if there is no direct communication between the attacker and victim systems, the populator and offer repository may pass the data unfiltered from the attacker to other members of the community. If these members then have vulnerable systems interpreting the received data as executable code, the attacking member may gain privileges at the victim system or create a denial-of-service situation.

Denying that a transaction has taken place between members is a valid concern. Repudiation issues can affect members directly, for example when another member denies ever sending a service request or receiving a response. While it does mean disruption in the community operations as discussed above, it may have grave consequences for the member who acted based on the now repudiated information. Not only does it mean costs in the form of lost processing power in the victim's systems, but it may also mean real financial losses in the form of shipment of physical or intangible goods.

Even in a normal legitimate community transaction between members there may be some private information exchanged. For example, the members may have details of each other's network structure, addresses and security policies. This information may be leaked accidentally, for example in the form of error messages to clients or community members. Additionally, it can be maliciously revealed if a member is so inclined.

A member can also carry out a denial of service attack towards another member, especially in a case where members have a relationship dictated in the community agreement. The attacking member can thus legitimately send requests or responses to the victim. The messages may be multiplied or otherwise flooded to the victim, thus preventing the victim member's systems from carrying out their normal business.

7 - Middleware towards a member

The problems caused by attackers exploiting software vulnerabilities, discussed above with client–member threats, are of course possible when the attacker is a middleware component. However, here the more interesting additional threats are discussed.

Some middleware components, such as the populator and the business network model storage, are in the privileged position that they have knowledge of the community as a whole and they practically control the community structure and metadata. This knowledge may aid them in masquerading as a community member or client. Spoofing may reveal data the victim member does not wish to distribute to the community middleware.

Data tampering is possible in a scenario where the member sends out some private data with the contracting offer and a middleware component having access to the data changes it. A middleware component may also try to attack the member via a software vulnerability in an accessible interface and gain access and data tampering possibilities that way. There will be accessible interfaces between the member and the populator and offer repository components, so this is quite possible.

The only repudiation-type threat found towards a member's assets is the other side of a middleware component denying ever receiving an offer/contract. Above, this threat was discussed from the viewpoint of the community losing by not getting to know all possible offers. However, seen from the perspective of the member whose offer was lost, it means potential loss of revenue.

Disclosing private member data by a middleware component is possible in scenarios where private data is sent to the community middleware. In this case any software bug or design flaw in any middleware component could cause the private data to be displayed to outsiders, for example as part of error messages or log files. Especially authentication tokens, such as passwords, are at risk of disclosure if the member–middleware interaction is using password-based or other weak authentication schemes.


There is also a threat where the middleware component floods the member with enough requests and/or responses so that either the member node resources or the member communication capacity is exhausted. Especially in the case of communication capacity exhaustion, the DoS can also severely affect the normal networked business of the victim member.

8 - Others towards a member

The threats of external parties exploiting vulnerabilities of publicly accessible interfaces at the member's systems apply here also. This is the case of the prototypical external hacker attacking a public networked service. If such an attack is successful, the attacker can tamper with the data or disclose it to third parties or even destroy it, causing a denial of service situation.

Previously, the threat of an outside party spoofing its identity and initiating a service request was discussed in the context of the community as a whole losing resources without compensation. However, if this same spoofing problem is looked at from the viewpoint of a member, the threat is about an attacker committing fraud so that the victim has to bear the cost of the attacker's actions. Of course, the issue of who should bear the costs of a fraudulent transaction is dependent on the legal framework governing the community participation.

9 - A client towards business data

Generally a client is a major source of the business data. The client provides a service request with enough details that the business process can be completed. However, the data originating from the client is not necessarily the only business data in the community. Members can interact with each other using a different set of business data if the process so dictates. For example, in the travel agent community the client provides his/her itinerary wishes and a credit card number. But the hotel booking a sightseeing tour for the tourists uses a somewhat different set of data when interacting with the tour operator, which may contain for example bulk rate discounting codes and other assets requiring protection. Both these cases need to be considered in a threat analysis.

A client has limited possibilities for affecting the business data that travels the network. First of all, it can always spoof the original service request as coming from someone else, or just falsify the business data (for example credit card numbers or order numbers) coming from it.

Generally the client does not have access to the business data moving between the community members apart from the information it gives in the original service request. However, if the business network model and the choreography built on it require the members to interact with the client during the process, the client may have some more opportunities to tamper with the received data. For example, if in our travel agent case the client is asked during the process which combination of airline, hotel and tour he wishes, it is possible that in the confirmation response the client could change some reference and account numbers.

A client can also deny sending a service request with particular contents, i.e. claim it sent different data. This is a subtly different case from denying sending the request at all, since here the client can claim the received service was not correct according to the request. Somewhat related is the threat where the client just denies the received result is correct even if the response matches the request.

Clients leaking business data may be a possible threat in the scenarios where the community members interact with the client also during the process. In those middle-of-the-process message exchanges there may be business data that the client can leak to outsiders. For example, in our travel agent itinerary confirmation case the confirmation request could contain business data such as discount rates, bulk pricing or account numbers.


Here the case of the client accidentally or deliberately leaking its own business data, such as credit card numbers, is considered trivial and not elaborated further.

Normally if the client tampers with the business data, it can only change the quality and nature of the service it receives. However, in some cases tampering with the business data may cause significant activity and reorganisation in the community, which may cause a denial of service type of situation. For example, if the original purchase request was for 5 units of ball bearings or movie frame renderings but it is later changed by the client (or by someone else) to 5 000 000, this may exhaust the resources of either the community or individual members.

10 - A member towards business data

In the previous section some possibilities of a client causing threats to the business data were discussed. However, the community members are the biggest source of threats. This is simply because they as a group have the most opportunity, since the business data travels through them.

A member could affect the business conducted in the community by spoofing the business data or request origin. Spoofing the request or data origin at the middleware could be used, for example, for delivering services to an organisation with whom the service provider does not want to do business.

The member can change the business data when it travels through it. The data presentation format can change several times during the community process interaction if the members' interfaces are different. This transformation is not necessarily a security threat, but if these transformations are not done carefully the semantics of the data can also change, which could then be seen as a real data tampering threat. Also, in our content delivery community a member could change the content pieces it stores and delivers to other members.

In a reverse threat a member does not perform a necessary transformation even if the other member's interface would require it. An example of this type is a failure to do a currency or unit conversion.

A member can also store the business data travelling through it. In some regards this is necessary, since business network interaction failures need to be analysed, which may be impossible in the absence of any logs. However, if the members retain the business data in the logs, the data storage becomes a desirable attack target for external parties. It is also possible that the business data owner has some business rules concerning the storage of said data (for example credit card numbers or other personally identifiable information) and the member's storage and logging policy may violate those rules.

Information leaks of the business data are also possible if the member systems reveal that information as part of, for example, error messages. Deliberate information leaks can occur too, if the member decides to turn malicious.

A member can duplicate or multiply received service requests and thus cause problems to both the client and other members. Business data can also get lost at the member so that the service request is never fulfilled. These could both be classified as denial of service threats.

11 - Middleware towards business data

The middleware components should not, in theory, have access to the business data. The middleware components ensure that the community is populated according to the requirements, that the business process interaction proceeds correctly and that the renegotiations and community termination happen cleanly. None of these functions should require access to the actual business data, just the community metadata.


However, in practice the situation may be more challenging. If there is a middleware component responsible for the community interaction monitoring or transaction logging, the members may also send business data, such as order numbers, as part of the interaction information. These monitoring and logging components can also be in a man-in-the-middle type position regarding the community interaction and thus gain access to the business data. If this is the case, the middleware can cause similar threats to the business data as discussed above as threats caused by a member.

12 - Others towards business data

As clients can try to exploit vulnerabilities in the community components to gain access to the member data, so can other external parties try to gain access to the business data. Usually the business data is of primary interest, especially if it includes any information that can be directly utilised with criminal intent, such as credit card numbers. In this case any vulnerability in any component through which some of the business data flows, or in which it is stored, presents a real threat.

One class of threats not discussed previously may arise if one or several members participate in many similar communities and there is infrastructure sharing between the communities at the member. In this case there may be accidental business information leaks between the communities. Other communities' members or middleware components may also try to actively exploit common members' vulnerabilities in order to get access to the other community's business data, such as pricing information. Deliberate denial of service attacks are also possible if the communities can be seen in any way as competing with each other. Occasional denial of service can also happen if both communities are in a phase requiring a lot of the common member's resources and there is a possibility of resource exhaustion.

2.6.4 From Threats to Risks

A threat analysis gives general guidance on the type and nature of threats needing countermeasures. The next task in assessing the risk would be analysing the impact and frequency of the listed threats in the community. The environment may already have some countermeasures built in, which could then be taken into account in the analysis.

The analysis would reveal which of the risks are most likely and which have the most impact if the risk is realised in the current environment. The full analysis would help in prioritising and focusing the risk mitigation efforts.

However, from the viewpoint of a thesis interested in drafting a community security framework, taking into account the full prioritised order of risks is not possible. To focus this work to match the available resources, it is sensible to pick only a few of the listed threats and analyse just their potential impact and frequency. If that analysis reveals these threats require solutions in the framework, solution candidates can be developed and their suitability further analysed. The remaining threats can be explored in other research projects.

The focus of this thesis is on solutions to the threats stemming from peers or members acting maliciously by spoofing their identity and/or tampering with the data. Threats involving malicious peers have been introduced above in sections 1, 5 and 9, while malicious community members have been discussed in sections 2, 6 and 10. Malicious outsiders have been discussed in sections 4, 8 and 12 above. The fact that threats from malicious peers and members have been found in many categories suggests that this is an inherent issue in enterprise communities and thus requires solutions before the community business models will be more widely adopted.


In estimating the likelihood or frequency of malicious peers we assume an open community, which is the worst case from the trust and security perspectives. Assuming open Internet-facing interfaces, it is a fact supported by current network security experience that there will be malicious parties connecting to them, whether the parties are clients or members trying to spoof as someone else or outsider hackers.

If the malicious parties are clients or members, they already have all the tools they need to interface with the community, so there is no technological hurdle. In contrast, outsider hackers will need to make an effort to develop tools to semantically connect to the particular community instead of just connecting at the network level. This makes the attack likelihood from outside attackers smaller. However, taking into account the trend of attack sophistication rising from the network level towards the application level in the protocol stack [Eur07], it is very probable that all open community interfaces will eventually be exploit targets of malicious outsiders. Therefore the likelihood of this threat will increase over time.

It is also possible that an old and respected member of the community will suddenly turn malicious, either consciously or unconsciously. Unconscious malice is the case of an outsider hacking his or her way into the member's systems and turning the member's infrastructure against the community. This kind of behaviour has been seen in Internet botnets, where simple network-connected personal computers have been turned into remote controlled attack platforms that can be instructed, for example, to send spam or take part in distributed denial-of-service attacks. It is foreseeable that if a community is interesting enough, this type of attack could be directed towards communities as well, although the development cost for the attacker would be quite high and thus the likelihood low. However, the impact of such a successful attack is major: all the business data flowing through the member would be open to tampering and eavesdropping.

There is also the case of conscious malice, i.e. the member changing its way of doing business into malicious or disruptive activity. The likelihood of this depends, among other things, on the likelihood of detection. If a member can silently and undetected act against the community purpose, and it can get more benefit from the malicious activity than from within the community contract constraints, there may be an incentive for disruptive or malicious behaviour. However, this incentive must be weighed against the possible repercussions against the member if and when the malicious activity is discovered.

The likelihood of malice is also dependent on the purpose of the community. An outsider may have very little monetary incentive in attacking a procurement community for ball bearings, but if there is a chance of gaining free airline tickets or hotel stays, as in our travel agent scenario, or especially credit card information, the community will be an attack target. The likelihood of exploitation is also dependent on other security measures, such as how easy it is to convert the community participation into money or other assets. If, for example, the travel agent community is built so that the participant members are given credit card details or hotel vouchers on-line, the interest of malicious parties is bigger than in a system where the transaction produces only a reservation confirmation number bound to a specific passport serial number. Similarly, stealing movies from the content delivery community is not profitable unless the attacker has the technology or licenses to open the DRM protected files.

In our travel agency community example the key business data assets to be protected are the credit card numbers of the holidaymakers used to confirm a reservation. These numbers are prime targets for criminal elements, and thus attacks are very likely.

What then is the impact of various malicious threats? Unfortunately it is very difficult to predict the impact in a generic community. The impact depends on the purpose of the community and also on the community structure and the role of the malicious party. In centrally controlled communities, such as our example where the travel agent is very much in control, the impact to a single hotel participant can be very limited. However, if the malicious party is in the role of a centralised payment processor or license generator, the impact is significant. There are issues in more democratic communities as well. In the content delivery community, one malicious member can cause significant data corruption problems affecting many other members.

Especially communities where goods of value are traded have possibilities of financial loss, which are at least on the order of a single typical transaction value and may be on the order of the sum of all transactions in a day, week or month, depending on if and how fast the malicious participant is discovered. However, the financial loss may be bigger if the malicious entity is in a financially central role, such as the payment processor.

From this risk analysis it can be concluded that the likelihood of malicious insiders and outsiders is somewhere between high and low, depending on the purpose of the community. Additionally, it can be concluded that the impact of malicious actors can vary from minor to major. Therefore, the conclusion for the general community security framework requirements is that the risk of malicious parties in open communities must be managed; otherwise the applicability of said framework is severely limited to low-risk closed communities. It is also possible that member organisations refuse to use the community framework if the community member security features do not offer them sufficient protection from malicious parties.

2.6.5 Countermeasures and Solutions

There are several possible measures to counter the threats found above. These measures need to be built into the system as early as possible [And01]. The countermeasures, or controls, can be divided into two main types: preventative and detective [KT93]. Preventative controls, such as firewalls, prevent some threats from occurring. Detective controls, such as fire alarms, do not prevent the threats from happening, but detect a security problem and alert when it occurs.

There are also three supplementary types: corrective, deterrent and recovery [KT93]. Corrective controls, such as security guards locking all doors for the night, make sure errors are corrected. Deterrent controls, such as zero tolerance towards security policy violations, are used to discourage people from attacking the system or organisation. Recovery controls, such as data restore processes, help the organisation to recover from a security incident.

Quite often solutions of several of these types are necessary when risks are mitigated. What possibilities do we then have to restrict either the impact or the likelihood of malicious parties attacking the community or one single member of the community?

From the community topology point of view, one solution to counter the threats springs instantly to mind. If we can restrict the communication so that only community members can communicate with each other and the middleware components, and each community member can only affect the members with whom it directly communicates according to the BNM choreography, the attack vectors are few and can be individually managed.

For example, the WISE community support system uses a centralised WISE engine, with which each member and client communicates [LASS00]. It is not explicitly said, but can be inferred from the description, that each participant must be pre-authorised to connect to this engine. The previously mentioned CrossFlow system relies on proxy-gateways at each participant to control remote method invocation traffic to and from the organisation [GAHL00]. The security principles and exact function of the said gateway are not explained in more detail.

Similar communications restriction functionality outside the community context exists in the form of generic network-level firewalls and application-level firewalls. Network firewalls usually limit access based on source and target IP addresses and application port numbers. Application-level firewalls, for example web application firewalls (WAF), can analyse the application protocol, such as HTTP in the WAF case, in more detail and filter access based on the protocol messages and message contents.
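To make the distinction concrete, the following minimal Python sketch contrasts the two filtering styles. The rule set, the field names and the forbidden-pattern list are illustrative assumptions, not taken from any of the cited systems.

# Network-level filtering: decisions use only addresses and ports.
ALLOWED_ENDPOINTS = {("192.0.2.10", 443), ("192.0.2.10", 80)}  # (source IP, port)

def network_firewall_allows(src_ip: str, dst_port: int) -> bool:
    return (src_ip, dst_port) in ALLOWED_ENDPOINTS

# Application-level filtering: a WAF also inspects the protocol messages.
FORBIDDEN_PATTERNS = ("<script", "' OR 1=1")  # illustrative payload signatures

def waf_allows(method: str, body: str) -> bool:
    if method not in ("GET", "POST"):  # filter on the protocol message type
        return False
    return not any(p in body for p in FORBIDDEN_PATTERNS)  # and on content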

Analysing the situation further from the point of view of a single member, for example a hotel in our travel agent example, it is clear that the individual member must be able to participate in different communities organised by multiple travel agents. It is quite possible that the travel agents are not previously known to the hotel; agents may have picked up the hotel's offer from the open offer repository. Additionally, the hotel must be able to connect with multiple tour operators, some previously unknown, based on the needs of the holidaymaker. While the threats from the point of view of the community as a whole may be easily manageable by restricting network-level communication, the needs of the individual members make this solution impractical. Similarly, the content delivery community is built on the principle that new members can join at any time and download the content by contacting other members.

From the community participant point of view it is difficult to manage the risk by preventing incidents stemming from community participation. There are several preventative options: to close the attack vector interface, to limit the access to authorised peers, and to ensure the exposed interfaces are free of vulnerabilities that can be exploited.

First of all, the enterprise can not close the attack vector at the network or application level, since that would deny access to the potentially useful community and business. Secondly, static and manually managed access and authorisation rules are not cost-effective protection in the dynamic communities of the future; the management overhead in communities that are in any way dynamic is too large.

Besides, allowing access to authorised peers only does not prevent incidents. One single community member can not prevent other participants from turning malicious, so a threat always exists even with authorised peers. What an enterprise can do in principle is to ensure that all software interfaces and components it is using or exposing are fully free of vulnerabilities. In practice this is impossible for two reasons. First of all, the enterprise may not have any control over the software quality if the application has been bought as a software package from a third-party vendor. Second, even if it had developed the applications and agents in house, a high level of software quality assurance, including source code auditing, is time-consuming and expensive.

Application-level firewalls could be used to delete undesired content from the application protocol, for example HTTP, and thus to reduce threats. However, community participation adds functionality layers above the basic application protocol, and thus application firewalls do not prevent all attacks stemming from the community participation.

Thus, current preventative controls can not be used to manage the member security threats in a cost-effective manner. What about the other types then? Corrective and recovery controls are by definition used when an attack has already been successful. In a fast-paced electronic community the attack may already have done significant damage propagating to other community members before a correction and/or recovery action is instantiated, especially if there are no detective controls and manual action is necessary. So these types of controls are not enough by themselves, but they are useful combined with other controls. Additionally, deterrent controls are not effective if the attacker is an automated non-aware component, for example a wrongly configured NMA. In the case of our key threat, a deliberately malicious party, deterrent controls can be rendered ineffective if the attacker is able to make the attack look like it originated elsewhere, so that the retaliation is directed at a wrong party.

Since the availability of suitable and good community security solutions is a requirement for wider adoption of electronic communities, something else must be found. The conclusion is to focus on researching a community security solution which uses detective controls, possibly augmented with corrective or recovery controls.

One well-known detective control is system monitoring. System monitoring can be part of the ODP security audit function or a general monitoring function. The security audit function is responsible for the monitoring and collection of security-related actions [ISO96b]. The types of security problems monitoring can detect depend on the monitoring system and the data the monitors have access to. This control mechanism is discussed next.

2.7 Monitoring in Communities

As with any interaction, a community should be continuously observed or monitored to ensure successful collaboration and to detect any errors or security incidents during the processes. Monitoring can be used to achieve many different goals, and the selected goal also dictates what kind of monitoring mechanisms must be used.

The basic need for monitoring in any distributed application arises from the general functionality point of view: someone must be capable of observing the status of the distributed system to see where the computation or transaction currently resides, and especially to detect whether any technical problems, such as deadlocks, have occurred. For example, a business process has a certain choreography, where each peer in a role must obey a certain sequence of messages for the process to succeed.

Another set of monitoring goals is related to community security. When threats or risks are not completely known, the designed countermeasures may not have sufficient coverage to protect from all types of problems. In the face of these unknown or little understood threats, which can not be countered in the framework design, the task of security monitoring is even more important.

Given the challenging threat environment of open networking discussed above and the impossibility of closing all potential attack vectors, the two key goals of security monitoring in a community are, first, to protect one peer from other peers, and second, to protect the community as a whole from misbehaving peers. These goals fall into the security realms of community member security and community security, respectively.

This division of goals is not the only variable in monitoring. There are various functional parts in the enterprise collaborative system, and each part can have its own monitoring mechanism to ensure that the part is functioning well or that it is secure. Thus there are several different monitoring tasks in the collaborative community [Haa05]:

1. Community independent monitoring, which is a general monitoring function in an organisation. Monitoring tasks on this level are not knowledgeable of the community functionality but are focused on, for example, TCP/IP networking issues.

2. Application monitoring, which is monitoring the inner state of a single application taking part in the community.

3. Choreography monitoring, which follows the state of connections and determines whether messages sent or received follow the choreography.

4. Task level monitoring, which is responsible for following the task to be performed and deciding whether to change to the next phase or epoch.

5. Role level monitoring. Peers can have several different roles during the business process lifetime, and their status and changes can be monitored.

6. Session monitoring. There can be several sessions within one contract or agreement, and their state and completion need to be monitored.



7. Contract or agreement monitoring, which is responsible for observing the agreements and their fulfilment in a business network.

Contrasting this list with the five-level general community layer structure discussed earlier in Chapter 2.1, several observations can be made. First, the application monitoring mechanism may work on the technology level or the service level, depending on the mechanism's access to the service as a collection of software modules and its understanding of the semantics of the service. Therefore, service monitoring must be added to the list above as a new task.
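As a concrete illustration, the enhanced task stack could be represented along the following lines. The Python sketch below is a hypothetical rendering of the list above with the service level added; the routing rule anticipates the organisation-level feedback discussed next.

from enum import Enum

class MonitoringTask(Enum):
    COMMUNITY_INDEPENDENT = 1  # e.g. TCP/IP networking issues
    APPLICATION = 2            # inner state of a single application
    SERVICE = 3                # service semantics (the task added above)
    CHOREOGRAPHY = 4           # conformance of message sequences
    TASK = 5                   # phase or epoch changes
    ROLE = 6                   # role status and changes
    SESSION = 7                # session state and completion
    CONTRACT = 8               # agreement fulfilment

def route_observation(task: MonitoringTask, event: str) -> str:
    # Contract breaches are raised to the organisation level for policy review.
    if task is MonitoringTask.CONTRACT:
        return f"organisation level, investigate breach severity: {event}"
    return f"handled at {task.name} level: {event}"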

Second, choreography, task, role and session monitoring are all functions of the community level. Agreements or contracts should also be monitored there. However, the community level monitoring function does not have knowledge of organisational policies, so contract breaches during the lifetime of the community should be raised to the organisation level, so that the severity of the breaches can be investigated and it can be decided whether the breach is against the organisation policies. Therefore, there needs to be a monitored feedback mechanism at the organisation level as well.

Thus, following the structure of Figure 2.1 earlier in Chapter 2.1, the enhanced stack of monitoring tasks in collaborative communities is displayed with example monitoring targets in Figure 2.6.

Figure 2.6: Monitoring tasks in collaborative communities

Some of these monitoring tasks can be performed by the participant organisation or peer itself for its internal needs (introspective monitoring, in the realm of community member security), while others require a party that monitors elements for the good of the whole community (inter-enterprise monitoring, in the realm of community security). Even if one party performs both, these tasks should be conceptually separate, since the goals, constraints and methods of monitoring are different.

In introspective monitoring, each peer monitors its internal state against its own policies and the community choreography and contracts. On the technical level the monitoring target may be syntax errors in various network protocols, and on the community level contract violations during the participation in the community. For example, the community contract is usually specified in a high-level language, which specifies community actions, their order and modality, plus value threshold limitations. Contracts do not necessarily specify order syntax and semantics in great detail. Therefore, a peer may fulfil a contractual obligation by sending a purchase order, but its payload may be such that it syntactically or semantically causes security or policy violations on the receiving end. And even if the choreography defined exactly what should be sent and when, a malicious peer can violate the choreography specification. Therefore introspective monitoring is necessary. It should be available at all levels, but especially on the technology and business levels.
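A minimal sketch of such an introspective check is given below, assuming a contract that fixes the action order and a value threshold; the message format, action names and limit are hypothetical.

EXPECTED_ORDER = ["quote", "purchase_order", "confirmation"]  # contract choreography
VALUE_LIMIT = 10000  # contractual value threshold, illustrative

def introspective_check(observed):
    """Compare observed peer messages against the contract expectations."""
    violations = []
    actions = [m["action"] for m in observed]
    if actions != EXPECTED_ORDER[:len(actions)]:
        violations.append("choreography violation: unexpected message order")
    for m in observed:
        if m.get("value", 0) > VALUE_LIMIT:
            violations.append("policy violation: value %d over limit" % m["value"])
    return violations

# A peer may fulfil the choreography syntactically yet breach the value policy:
msgs = [{"action": "quote", "value": 0},
        {"action": "purchase_order", "value": 25000}]
print(introspective_check(msgs))  # ['policy violation: value 25000 over limit']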

Inter-enterprise monitoring is focused on ensuring that participants in every role fulfil their obligations on every level (from the technical to the business level). For example, agreement monitoring can be considered part of the contract enforcement function, which observes the participants' conformance to the contract so that any deviation from the agreed behaviour is noticed. The reasons for deviation need not be malicious, but may be due to, for example, changed policies on the participant's side. This function should be performed by a neutral third party, a separate monitor function [MJDP02]. The monitor member of the community is interested in the fulfilment of the contractual obligations and the work-flow specified in the community choreography, not in the needs of any single party. If the community coordinator is a neutral third party, the coordinator may also perform these neutral monitoring functions.

From the member security point of view, inter-enterprise monitoring is not sufficient to protect the member enterprise. For example, if a peer deviates from the agreed choreography and sends a malicious payload, inter-enterprise monitoring is not usually able to stop the payload4. Another case where inter-enterprise monitoring is not sufficient is with implicit communities. In that case, there is no formal contract agreement and thus there can be no inter-enterprise contract monitoring.

4 This is of course possible if every message in the community goes through a central monitoring service, but this is not always possible or sensible in a modern distributed business network.

The introspective monitoring function is thus essential for all collaborative enterprises for their member security needs. In implicit communities, the introspective monitoring function can not compare the peer behaviour to any common contract, but it can compare the peer behaviour to its subjective behaviour expectations, syntactical or semantical. For example, in the previous electronic mail relay example, the receiving party has an expectation, founded on technical standards, of how the SMTP protocol and the community choreography work, and thus the conformance can be monitored.

The role and positioning of introspective monitoring is visible in the structure of the Pilarcos middleware discussed earlier in Figure 2.4. However, it is interesting to note that there is currently no neutral component responsible for inter-enterprise monitoring. This is one future research direction. However, it is best tackled once it is clear what the role and function of introspective monitoring is and what services (if any) it can provide to the upper levels of the monitoring stack.

2.8 Challenges and Solutions

The aim of this thesis is to develop a solution for community security challenges. These challenges in community security management have been discussed previously in the description of the environment and the threat analysis. At this stage it is useful to collect these challenges and analyse what requirements they bring to the community security solution.

2.8.1 Challenges

The challenges can be seen from two different points of view: the community perspective and the member enterprise perspective.

From the collaborative community perspective the key threats and challenges are:

1. Participant organisations may be or turn malicious towards the community or towards one single participant. It is not possible to know or predict which parties will start acting maliciously. Being malicious towards the community as a whole can cause disruption in the correct functioning of the community and thus deprive all or some participants of its benefits. If, for example, in our travel agent community the travel agent turns malicious towards the whole community by sending fraudulent orders, the hotels and tour operators do not benefit at all, on the contrary. The possibilities of malice and fraud vary based on the characteristics of the community. There may also be several malicious parties in a community, even acting in collusion.

2. The participants may have different goals in their community participation. For example, most businesses have only one goal – to maximise revenues. Other organisations may also have more subtle agendas, such as protecting their reputation and other assets while engaging in collaboration. Organisations with malicious management may also have malicious goals, such as collecting credit card numbers and selling them to identity thieves.

3. The participants are separate entities, potentially in different jurisdictions. It is quite possible that if there is a conflict or a problem in the collaboration, there is very little practical possibility for judicial recourse.

4. There are several different types of enterprise collaboration communities, from large to small and from highly dynamic to very static. This makes it difficult to design solutions that fit all possible scenarios.

However, from the point of view of a single community member, the problems and concerns are more direct. Key concerns are dynamic collaboration with unknown parties, these parties being or turning malicious, the security threats of the open environment, and management overhead.

1. A member will interact dynamically with other, potentially previously unknown organisations, to gain the benefits brought by the community. This means exposing some assets to harm from these unknown parties.

2. These other organisations may be or turn malicious towards the member. This is the same challenge as discussed above from the community point of view, but the focus here is on the question: what can this malicious party do to the individual member? Here the result may be financial or reputation loss for the participant organisation, but otherwise the community can still function and other participants may keep their benefits. For example, if in our travel agent community example the scuba diving operator turns malicious towards the client hotel and sends a confirmation for a requested diving tour with a price it does not intend to honour, the travel agent and the hotel still get the money and commission from the airfares and the hotel reservations. It is also possible that two or more participants are malicious separately or in collusion against one member in the community.


3. The threat environment is getting more difficult every day, especially for organisations utilising Internet-based communication. Particularly open and dynamic systems must face the security threats that open participation and dynamic community relationships bring. The facts that anyone from anywhere in the world could join the community and that yesterday's community member can be today's outsider can cause many problems, for example from the authorisation and fraud prevention point of view. Additionally, in the Internet-based application environment tens of vulnerabilities are found in software components every week. Most of these vulnerabilities are publicised in such a manner that exploit code for utilising them is either directly given or otherwise obviously deducible. This means that any security problem found in a publicly accessible system is immediately exploitable by thousands of malicious hackers. While good design and coding practices can diminish the probability of vulnerabilities, it is very likely that bugs in the application code cause some security problems during the application lifetime.

4. Community participation may require too much manual management in order to be cost-effective. While business community participation is intended to bring financial benefits such as increased sales, the price of the required applications, hardware and management effort may be prohibitive. Especially day-to-day system management and administration manpower is very much a restricted resource in today's enterprises. The cost-benefit analysis can be very simple, for example in our holiday eCommunity network: if a hotel sells one extra room per day by participating in the community, participation and its management should not cost more per day than the revenue from that sale.5

5 Since the economics of collaboration is not in the focus of this work, this example ignores such business management issues as fixed vs variable costs, profit margins and return on investment calculations.

These challenges mean that community collaboration needs specialised community security solutions that are based on more than the preventative controls of perimeter protection and statically managed communication restrictions. The community collaborative framework must include technical means to easily and simply manage and secure open collaboration in a dynamically changing environment where previously well-behaved parties can turn malicious.

2.8.2 Requirements for Solutions

In the face of these challenges, the community security framework needs mechanisms both for securing the community and for protecting an individual member of a community. This thesis focuses on community member security, which is a subset of the general community security problems. The protection needs of the community as a whole are left for future research.

In view of the discussion above, the community member security management solution in the overall community framework needs to fulfil the following criteria:

1. It must work with an open community structure, i.e. on the principle that anyone technically capable could join. This requirement is selected for the maximum applicability and scalability of the framework. What works with open communities also works with closed communities.

2. It must work in a dynamic environment without continuous manual intervention. By dynamic it is here meant that community participants can change during the community lifetime, potentially very rapidly. This requirement is the cost-efficiency criterion to reduce management overhead. Additionally, agility in business decisions can be a financial advantage.



3. It must include provisions to manage situations where a previously well behaved peer turns malicious, for example by using detective controls such as monitoring.

4. It should be minimally service or application specific. This requirement means that the solution should be deployable in a variety of communities and services, from travel agent systems to manufacturing procurement.

5. It should not rely on the business application security controls, since applications themselves can contain security vulnerabilities. Attacking these business application vulnerabilities can circumvent other security controls in the application.

What kind of a system fulfils all these requirements? For the first and second requirements, the open and dynamic structure rules out any system where each community participant must first be approved and entered into some kind of database before collaboration is accepted. This means that authentication and authorisation controls with only pre-existing static relationships can not be used. The new system must support a way for previously unknown entities to start participating in the community while managing the risks this approach brings.

The third requirement means that the risk management approach requires some kind of monitoring to observe peer behaviour and to detect possible maliciousness. Members must monitor the collaboration and weigh the benefits of each community transaction request against any behavioural hints of maliciousness from the transaction originator. Since the goal is to protect a single member as opposed to the whole community, the solution is tentatively placed on the service or technology level in the community protocol stack discussed previously. There are several interesting challenges in upper-level community security as well, but the focus of this thesis is on these lower levels.

The last two requirements, for application generality and independence, mean that we can not assume the protected applications or services at the community member to include any specific protection mechanisms or interfaces. The protective system must work outside the service or application. However, it should be investigated whether we can gain any insight into the application semantics using non-invasive means.

Since current systems have deficiencies in fulfilling these criteria, we must look elsewhere. One route is finding suitable "soft computing" approaches for modelling aspects related to human behaviour in collaborative organisations and for handling the issues of decision making and behaviour management in the context of incomplete and imprecise knowledge [CMA05]. This route fits very well with the requirements above, since human decision making is open, dynamic, somewhat scalable and not tied to a particular application domain. The degree of trust in other people is often used as a factor in making these decisions.

The hypothesis presented in this thesis is that a system combining trust in making authorisation decisions, a monitoring component to detect problems, and a feedback mechanism from the monitoring component to ensure dynamism in the trust evaluation component is a suitable solution for the community member security management problem and these specific requirements.

With that in mind, Chapter 3 explores trust and its transformation to computing systems in more detail, to determine whether it is indeed suitable for our needs. Following that, more "hard computing" methods and approaches to introspective peer behaviour monitoring are discussed in Chapter 4, so that both of these can later be combined into a framework solution.


Chapter 3

On Trust

Trust has been a very awkward concept, since it is silently embedded in many aspects of human behaviour and, as with quite many other behavioural issues, it and its effects are not very well understood. However, since trust is a part of the basic decision-making framework for humans, it is important to try to understand it. It is also important to understand it from a system automation and computing point of view, since computing agents become more and more autonomous, also in enterprise communities. Autonomous operation requires the ability to make decisions in the agent without human intervention, and thus the human decision-making system, including trust, becomes of interest to computer scientists.

There are several interesting questions in and around trust. First of all, the foundation: what is trust, how can it be defined, and what kinds of trust are there. From the utilitarian point of view, there are three separate problem areas around trust. The first is to define the facts that support trust, the second is how to find the appropriate rules to derive consequences of a set of assumptions about trust, and the third is how to use information about trust to make decisions [Dem04]. It is interesting to note that these problems can be investigated separately, and thus different solutions can be applied to each question.

In this chapter the foundations and theory of trust and mistrust are first explored and discussed in an enterprise setting. Secondly, different trust models are explored, and a new vocabulary and an ontology are developed. A shorter version of these beginning chapters has been separately published [Vil05]. Finally, questions related to trust management in an enterprise setting are discussed.

3.1 Trust Characteristics

To fully understand trust, some characteristics of trust need to be explored first. In the real world trust decisions are made by people from their own viewpoint; thus trust is very much subjective. If Bob is considered worth trusting by Alice, Carol does not have to trust Bob. So, to evaluate a trust condition, the role of some subject must be assumed and the trust must be evaluated from that particular perspective. The subject is called the trustor and the target of trust is called the trustee. From this subjectivity the lack of global trust is inferred, i.e. in practice there are no entities everybody trusts. From this basic property, other natural characteristics follow:

• Trust is not symmetric. If "Alice trusts Bob", it does not automatically follow that "Bob trusts Alice".

• Trust is not distributive. If "Alice trusts (Bob and Carol)", it does not follow that "(Alice trusts Bob) and (Alice trusts Carol)".


• Trust is not inherently transitive. If "Alice trusts Bob" and "Bob trusts Carol", it does not automatically follow that "Alice trusts Carol".

Of course, symmetric and transitive trust are achievable in some special cases, but as a binary relation over a group of peers trust is not symmetric or transitive. Mathematically, the relation "A trusts B" over the group of peers is not a function, since the relation may not be total: some peers may not trust anyone. Additionally, some peers may trust several other peers, so the trust relation is not functional.
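These properties can be checked mechanically when a trust relation is represented as a set of (trustor, trustee) pairs. The following sketch merely tests whether symmetry or transitivity happens to hold for a given relation; nothing in the model guarantees either.

trusts = {("Alice", "Bob"), ("Bob", "Carol")}

def is_symmetric(rel):
    return all((b, a) in rel for (a, b) in rel)

def is_transitive(rel):
    return all((a, d) in rel for (a, b) in rel for (c, d) in rel if b == c)

print(is_symmetric(trusts))   # False: Bob does not trust Alice
print(is_transitive(trusts))  # False: Alice does not trust Carol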

There have been many verbal definitions of trust. One of the most widely used comes from Diego Gambetta, a sociologist:

trust (or, symmetrically, distrust) is a particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action, both before he can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects his own action [Gam88].

This has influenced the thinking of Mui et al.:

Trust: a subjective expectation an agent has about another's future behaviour based on the history of their encounters [MMH02].

Another earlier example of a trust definition comes from Audun Jøsang:

Trust in a passionate entity is the belief that it will behave without malicious intent ... Trust in a rational entity is the belief that it will resist malicious manipulation by a passionate entity [Jøs96].

In the field of computing it is not sufficient to just define trust, as above; it is essential for automation that the concept of trust is represented by a trust model, which can be utilised by systems. Here a trust model is defined to be the formal or computational realisation of the used trust definition, verbal or implicit. The word computational is used here loosely, meaning a model that can be utilised by computer applications. Several trust models and their characteristics are analysed in depth later in Chapter 3.4.

3.2 Different Types of Trust

After investigating quite many trust definitions, McKnight and Chervany suggest that no single definition is able to capture the rich semantic field of meanings of trust. Therefore trust should be characterised as a set of inter-related constructs. They define six different types of trust: trusting belief, trusting intention, trusting behaviour, system trust, dispositional trust and situational decision to trust [MC96]. While these six types are not the only trust concepts, they encompass most of the discussion in the trust research.

Trusting belief is the extent to which we believe another party to be trustworthy in a given situation, i.e. able and willing to act in our best interest. The trust definition of Audun Jøsang is focused on this type of trust. In people, this depends on their cognitive beliefs about other persons. Trusting belief is both person and situation dependent and is driven by personal belief formation processes.

Trusting intention is the extent to which a party is willing to depend on another party in a given situation, taking into account the potential risk. Trusting intention is, as its name implies, intentional dependence on someone or something, and it is also very situation specific. The trust definitions of Gambetta and Mui et al. above have elements from this category.


Figure 3.1: Relationships between the trust constructs [MC96]

This intention carries over to trusting behaviour, which is the above-mentioned intention translated into actual behaviour, i.e. actually participating in the action. This is separate from the intention, since intention does not always lead to action. Trusting behaviour is situation dependent, i.e. the degree of trust varies based on the action or situation in question.

System trust is the extent to which we believe the proper impersonal structures are in place for us to anticipate success in a future endeavour. These impersonal structures are twofold: on the one hand constructs like laws, regulations and contracts, and on the other hand situational normality. Laws, regulations and contracts give us a safety net if the personal transaction has problems, and knowledge about situational normality reduces our uncertainty and thus enables us to perhaps take more risks by trusting. Here the word system seems to be semantically closer to the concept of a political or regulatory system than a computer system.

Dispositional trust is a generalised expectation of the trustworthiness of other people or entities. There are people who do not easily trust others, as well as those whose basic outlook on life is to trust others. The reasons for the different attitudes may lie deep in early psychological experiences or even in the genetic make-up of a person. Corporations can also have dispositions to trust, depending on the experiences of the management and also the competitiveness and the competitive styles in the market segment.

Situational decision to trust does not consider the other party when making the trust decision; only the particular situation is considered. This means that we always decide to trust in a given situation, regardless of the trustees involved, because the benefits of trusting are bigger than the potential risks. Situational trust decisions can be made either because we can not know the identities or characteristics of the trustees, or because we choose to ignore that information.

These constructs are not independent; on the contrary, there are influencing relationships between the constructs. For example, beliefs about the trustworthiness of other persons create intentions, which are converted to behaviour. On the other hand, system trust is independent of our belief in the trustworthiness of other persons, but influences our trusting intentions directly. See Figure 3.1 for an illustration. This key figure shows how the constructs on the lower levels of the stack have an effect on the constructs in the upper layers. This figure is later used and modified in Chapter 3.5.2 to illustrate trust management in enterprise systems.

3.3 Trust and Mistrust Types in Enterprise Systems

The system developed by McKnight and Chervany seems quite useful. But for the focus of this work, the first question is whether the general approach developed by McKnight and Chervany is applicable to computing and enterprise systems.

Their approach was based on a cross-disciplinary literature survey. This means that the subject has been looked at from different perspectives, not just from the computer science and automation point of view. Thus their system is a distillate of many research efforts in many different fields. This cross-disciplinary background is important for several reasons.

First of all, by using a system developed in a non-computing environment, the bias of looking at trust only from the sciences and computing perspective is avoided. This has previously been identified as a problem in current computational trust models, which are not sufficiently grounded in the understood social characteristics of trust [MMH02].

Of course, this view introduces another bias, looking at trust from only the non-computing point of view. This latter bias is corrected by investigating how to apply this theory in a computing environment. Additionally, the diversity of the background is important when the aim is to develop a trust model for general enterprise systems, which are not bound to any specific usage scenario. Of course, this strength is also a weakness; sometimes a simpler view of trust, and therefore a simpler implementation, could suffice if the scope of the application domain is narrow enough.

Taking into account that the work of McKnight and Chervany is relatively recent, and especially because of its sufficient grounding in the real-world usage of the word and concept "trust", their work is selected as one starting point for this thesis.

The second question is the more practical issue: how does an impersonal enterprise utilise these human behaviour based constructs in computer systems? Especially interesting is the key question of computer science as applied to trust: which of these trust constructs can and should be automated, and to what extent? Therefore, it is useful at this point to discuss these constructs in relation to computers and systems, especially in the enterprise community environment.

System trust in the impersonal structures, such as laws and contracts, can be seen in enterprise computing from two different perspectives, outside and inside. First, the outside perspective looks at the structures and safety nets outside of the enterprise computing systems. Issues like the existence and applicability of suitable laws, regulations and general contract frameworks between enterprises are factors that the enterprise management uses in the risk analysis.

Secondly, a case can be made for the existence of an inside perspective as well. This perspective looks at the impersonal structures and safety nets from the computing point of view. Do we, the enterprise or the community, have enough technical capabilities and safeguards to make the collaboration successful and to recover if there are problems? These capabilities would include technical security frameworks and mechanisms in place for the collaboration. For example, if the transactions in the collaboration should be confidential, there is very little system trust in the community if there are no technical safeguards, such as encryption, against eavesdropping. This inside aspect of system trust has not been considered in the original work of McKnight and Chervany, since their focus has not been on utilising trust via technical means. However, the realm of technical safeguards fits within the concept of "impersonal structures" described by them in their definition of system trust, although their main point has been to classify into this category everything outside the realm of the trusting person himself/herself.

As an example of the inside aspect of system trust in the Web Services area, the Web Services Trust Language initiative [ABB+05] tries to enhance system trust in the Web Services environment by creating a security model on top of other Web services standards, where communicating peers prove various claims, such as identity, by sending and receiving security tokens. WS-Trust defines mechanisms for issuing, exchanging and validating these tokens. WS-Trust is a system or infrastructure service on top of which other trust services may be built.

Both the outside and inside perspectives jointly create system trust in the collaborative process. However, since the factors include outside issues such as laws, the actual decision whether to start participating in communities in a specific technical environment seems best left to management and not automated. If there is a lack of trust in the laws, contracts or technical capabilities around the collaboration, an enterprise seldom enters into the collaborative process.

However, in addition to the outside and inside impersonal structures discussed above, situational normality needs to be considered as well for system trust. Situational normality or status information can be translated in the computing arena as context: any information that describes our current situation. It is also possible to view the inside perspective of system trust, i.e. our technical abilities for successful co-operation, as context information, although the "situational normality" then covers very different time frames, from years to minutes and seconds. Gathering this context information can be automated, but decisions about the limits of situational normality may need guidance from management.

A similar automation problem can be seen in the situational decision to trust, which does not utilise personal trust relationships but requires information about the attempted action or service. In computing systems this type of trust is utilised in open Internet services, such as e-mail relays or public web sites, which are open to all potential peers, regardless of their identity. When an organisation does not have control over the connecting peers and their characteristics in the situation, the decision to trust in a particular situation becomes a policy issue and thus must be made by the management after careful deliberation of the risks and the rewards.

Enterprises using situational trust are especially vulnerable to malicious parties, so they must have appropriate monitoring mechanisms in place to detect any misuse. Since some measure of situational trust must be used when participating in the current Internet environment, it is not a great surprise that many enterprises have deployed intrusion detection systems to monitor the inevitable misuse of the situational trust.

Dispositional trust, i.e. the trustor's general attitude towards people, is best translated to computing environments as some kind of default or initial value for trust in other parties. An example is a default purchasing limit for new peers, or a default trust score if no other value can be deduced from past experiences or recommendations. This is not a situation dependent construct but a basic attitude or value, and thus it should not be automated but is again best determined in enterprises by management after a comprehensive risk analysis.

Trusting belief reflects how trustworthy we think the peer is in this particular situation. As seen from Figure 3.1, trusting belief in humans is created by a personal belief formation process. In computer systems this process can be simulated by automatically gathering trust decision support facts and creating a trust score based on that information. The variety of these support facts is discussed in Chapter 3.4. In a simple trust use scenario, where the peer is trusted based on their identity information only, the highest score can be given to authenticated peers and the lowest score to unauthenticated users.
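For example, a belief-formation step of this kind could combine a management-set dispositional default with identity and experience information. The following sketch is purely illustrative; the field names, the default value and the scoring rule are assumptions, not part of any model discussed here.

DISPOSITIONAL_DEFAULT = 0.3  # management-chosen initial trust for unknown peers

def trust_score(peer: dict) -> float:
    """A deliberately simple belief formation based on identity and history."""
    if not peer.get("authenticated"):
        return 0.0                      # lowest score: unauthenticated peer
    history = peer.get("past_outcomes", [])
    if not history:
        return DISPOSITIONAL_DEFAULT    # no experience: fall back on disposition
    return sum(history) / len(history)  # mean of past outcome ratings in [0, 1]

print(trust_score({"authenticated": False}))                              # 0.0
print(trust_score({"authenticated": True}))                               # 0.3
print(trust_score({"authenticated": True, "past_outcomes": [0.8, 1.0]}))  # 0.9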


Figure 3.2: Relationships between variants of not trusting.

Since peers seldom have absolute belief in the other peers' trustworthiness, and the past behaviour of a peer is no guarantee of its future behaviour, all aspects of not-trusting must be managed. This concept of not-trusting includes concepts like mistrust, distrust and untrust. As with any developing area, there is some confusion over the definitions of mistrust and distrust. For clarity, this work uses the definition of mistrust as misplaced trust and distrust as a negative expectation regarding another's conduct [MD05]. Mistrust happens when a peer we actually trust misuses our trust by doing something harmful to us. This must be detected and managed by having the introspective monitoring system in place. Distrust means lack of trust, and untrust is the uncertain area of trusting somewhat but not enough to engage in collaboration and co-operation. The relationships between distrust, untrust and trust/mistrust are displayed in Figure 3.2.
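Read operationally, the regions of Figure 3.2 can be seen as thresholds on a trust score, with mistrust recognised only after the fact. A minimal sketch follows, where the threshold values are arbitrary assumptions:

DISTRUST_CEILING = 0.2       # below this: negative expectation of conduct
COOPERATION_THRESHOLD = 0.6  # at or above this: enough trust to engage

def classify(score: float, trust_was_misused: bool = False) -> str:
    if trust_was_misused:
        return "mistrust"    # we trusted, and the trust proved misplaced
    if score < DISTRUST_CEILING:
        return "distrust"
    if score < COOPERATION_THRESHOLD:
        return "untrust"     # some trust, but not enough to co-operate
    return "trust"

print(classify(0.1))                          # distrust
print(classify(0.4))                          # untrust
print(classify(0.9))                          # trust
print(classify(0.9, trust_was_misused=True))  # mistrust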

Sometimes only the peer trust score is used in the decision-making process. In the simple trusting belief example above, authenticated users were decided to be trusted. However, there are two additional trust constructs that can utilise this trusting belief: trusting intention and trusting behaviour.

In the field of enterprise computing, trusting intention dictates how much the enterprise wants to depend on someone in the current situation to achieve something. In traditional single services this can be thought of as behaviour on which some limitations are placed. However, this has more meaning in communities, where this form of trust is the key. In communities trusting intention may dictate how tight partnerships an enterprise is willing to form and what kind of contracts it is willing to negotiate.

Trusting intention declares the enterprise's will in community interactions and how deep dependence relationships it is willing to form. Since this is a decision based on the corporate strategy and vision, it is also best left to be decided by the enterprise management, i.e. people. The strategic decisions can then be transformed into contract templates or bounds for the automated contract negotiation process. They will define the type of contracts or communities the enterprise is entering.

The five trust constructs above are such that in enterprise settings the best source of the trust is a passionate entity, a human being in the management chain of the enterprise. However, trusting behaviour, which draws input from these lower-level constructs via trusting intention, is something that can be automated. In the enterprise field, this type of trust manifests itself as actual participation in a community and fulfilment of the contractual obligations, if there are any.

The conclusion at this point is that the approach of McKnight and Chervany can in theory be applied to computing and enterprise systems. Therefore one of the premises of this work is that once the trusting intention is known, the translation of that intention to trusting behaviour, i.e. to trust-guarded interaction, can be automated. Thus the question of how to achieve trusting behaviour using automation becomes the most interesting follow-up question for computer scientists. It should eventually be evaluated whether this approach works in practice.

These questions have been researched in the past in conjunction with different underlying trust utilising systems. The next chapter surveys different trust models and trust systems in computer science and classifies them based on the information they use to make the translation from intention to behaviour.

3.4 Trust Models

Since there is no universal definition of trust, the developed trust models and the systems relying on those models have been very different in both verbal and formal definitions of trust, and also regarding the vocabulary used in the trust models. This was noted and explored by McKnight and Chervany previously in 1996 [MC96]. However, at this date it is beneficial to revisit this area by summarising recent work, classifying various models according to their emphasised trust feature sets and describing the examined trust models with a common language. This section presents such a language and gives examples of how various research models can be positioned within the descriptive system.

This section focuses on only one trust research question: defining or gathering the facts that support trust. Because different trust models emphasise different trust support facts and ignore or abstract away others, it is assumed that a union of these emphasised facts across all models provides us with a maximal set of trust support facts. This maximal set is then utilised to provide us with an extensible ontology.

3.4.1 Trust Model Research

It is important to note that since trust is a complex concept, all the models are by necessity simplifications of that complexity, and different models streamline the concept differently. The key simplification of the models is that they try to facilitate some kind of decision making, i.e. trusting behaviour in McKnight and Chervany's categorisation, even if the actual trust definition has elements from other trust categories.

Early attempts to formalise trust for computer use have been in the context of authentication. Yahalom et al. developed a trust model to be used in authentication scenarios [YKB93]. One of the earliest attempts to define trust from the general and computational point of view was Stephen Marsh's thesis [Mar94], which drew input from the sociological trust research. After that several interesting trust models and also systems, such as PolicyMaker [BFL96], KeyNote [BFK99] and REFEREE [CFL+97], have emerged. Around that time also Abdul-Rahman et al. [ARH97], Daniel Essin [Ess98] and Audun Jøsang [Jøs96, Jøs98] described their trust concepts and models.

In recent years the focus has been on more comprehensive and concrete systems having wider trust management elements, such as Poblano [CY00], Free Haven [Din00], SULTAN [GS02], TERM [BZ02] and SECURE [CGS+03, CNS03]. Since the applicability of trust has been widened to cover more than authentication and authorisation, these build on the research done earlier, but do not necessarily use the same terminology or basic components. Additionally, their focus is not necessarily on general usage trust management; they have tried to explore trust in various application domains, such as peer-to-peer networking or web applications.


Figure 3.3: Trust model input factors

The thirteen models discussed span a time frame of ten years of trust model research. They include all key trust models and form a comprehensive set of research models. Thus this set is used for the actual feature analysis. All models take some kind of input to calculate some kind of trust score or valuation. Therefore the classification is done using two orthogonal axes: input factors and output type.

3.4.2 Trust Model Input Factors

The key differentiating element between trust models is the list of factors or facts that are required or used for evaluating trust or building the trust score. This list is usually given as an n-tuple. The tuple can contain several elements, some of which can themselves be tuples.

Because of this factor diversity, it is impossible to give a simple taxonomy where each model sits squarely in only one classification box along this axis. Instead, we have defined a classification of factors, and each model can be described by what set of factors it is using or, in our terminology, is aware of. A summary is shown in Figure 3.3.
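In the common descriptive language used here, a trust evaluation request can thus be expressed as a tuple of the input factors a model is aware of. The following sketch names the factor groups introduced in this section; the class and field names are this survey's vocabulary, not any single model's.

from dataclasses import dataclass
from typing import Optional

@dataclass
class TrustQuery:
    trustee_id: str                  # identity: present in all reviewed models
    action: Optional[str] = None     # used by action-aware models only
    risk: Optional[float] = None     # business value: potential loss
    benefit: Optional[float] = None  # business value: potential gain

# An identity- and action-aware model simply ignores the business value fields:
q = TrustQuery(trustee_id="hotel-17", action="confirm_reservation")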

Identity-aware Models

As a basic tenet we have found that all reviewed systems are identity-aware, i.e. they assume to have some identifying information on the target of the trust evaluation, the trustee. This identity awareness does not mean that the models must know the real name or other global information of the communicating peer. In some models it is quite sufficient that the identity is only locally unique and temporally sufficiently stable so that it aids in recognising the same entity in this system over time. Therefore we can find models that use globally unique identities or locally unique identities. The vocabulary for the trustee varies according to the model, so in the context of this chapter trustee, peer and principal are interchangeable.

The Poblano system uses a peerID, which needs to be unique across the universe of peers [CY00]. KeyNote [BFK99] can use any string as a principal identifier; in practice it can also be a string representation of the principal's public key. An example of local identities can be found in the work of Abdul-Rahman and Hailes [ARH97], where the trust relationships are local and there is no such thing as a global map of trust.


Action-aware Models

Most trust models have noticed that there is an action component to trust. That is, the actual trust evaluation and decision depend on what the target of our trust is trying to do in or with our system, or for what purpose we are trusting the target. This is very intuitive, as we may trust one party to relay our messages but not to transfer any money. This is also reflected in the definition by Gambetta in Chapter 3.1 above. On the other hand, the action component is not always necessary, since there is a concept called general trust, which is an unqualified trust towards a principal [Mar94].

If we have a trust model that takes the action into account, the set of actions can be closed or open. Closed means that there is a set of pre-defined actions the model supports and no others can be defined by the users or organisations using the trust model. An open set of actions means that the model offers a way of defining at least some of the actions or does not restrict them to a particular set.

This concept has many names in the actual models. PolicyMaker [BFL96], REFEREE [CFL+97] and KeyNote [BFK99] call this action and Abdul-Rahman et al. [ARH97] have a trust category. Yahalom et al. use a pre-defined set of trust classes for which we trust a certain principal [YKB93]. SULTAN [GS02] uses the name context, but at its core the definition is about actions and action sets. Similarly, the SECURE project uses context in the meaning of action [CNS03]. Essin has the concept of activity [Ess98], although it is not used in the trust evaluation directly, but as a sub-component in determining the capability of the trust subject and the subject reputation.

However, not all models use the action factor explicitly. Some specialised systems, for example Poblano [CY00] and Free Haven [Din00], use the trust valuation in relation to data content received from a network peer. There the actions are implicit in the system definition, but since there is only the one basic action type and the action system is closed, we do not consider the Poblano or Free Haven systems action-aware.

Marsh describes situation, which is a point of time relative to a specific agent [Mar94], i.e. the principal evaluating trust. This definition includes the actions the other principals are attempting, therefore Marsh's model is action-aware.

The TERM system [BZ02] has no action concept as such, but trust calculations are related to roles in role-based access control. There are two roles, an access role and a testifying role, so the system has two predefined actions. Hence it is action-aware, although the action set is closed.

Business Value Awareness

Between people, trust implies potential loss and also potential benefit. In several models there are concepts called risk, benefit or value. These are all associated with a particular action and try to give an impression of how the action can help us or how misplaced trust can hurt us. Risk is the most commonly modelled business value element. However, to understand the full impact of the attempted action, it needs to be balanced against the potential benefits or value of the action. We combine these under the common concept of business value, since all these concepts try to model the potential impact, positive or negative, of the attempted operation.

Marsh uses three separate business value concepts: utility, importance and risk. Utility is a measurable benefit, importance a subjective valuation of the significance of the action. Both are used in evaluating situational trust, i.e. trust in a specific situation or action. Risk is used in determining the co-operation threshold, i.e. whether to actually engage in the action in a particular situation [Mar94].

The model by Essin also uses several business value factors. He uses the concept of valuation as the cost of the resources or assets affected by the action. He also uses stake, which is the degree to which the entity(s) proposing to engage in the activity has a vested interest in the outcome [Ess98]. Stake tries to measure the level of commitment to this action and is thus quite close to the concept of importance. He also uses the concepts of risk and benefit, although his model combines these into a single risk/benefit set.

In the Poblano P2P system [CY00], risk is a statistically computed metric of peer accessibility and performance, both concepts relating to network metrics. Here the considered risk is of the type "risk of not getting the information" instead of any loss of data or money. This is, however, well within the defined use of "risk", so we consider the model business value aware. Poblano also uses importance, the importance of engaging in the activity, in calculating the co-operation threshold.

The SECURE model does not have risk as part of the formal trust model, although the resulting system has an added risk and cost/benefit analysis as part of the trust evaluation. Therefore the model itself is not business-value aware.

Competence-aware models

One type of trust decision factor is information on the competence of the subject with regard to performing a particular action. When human clients are considered, this is an important decision factor, but with automated clients at least some degree of technical competence in following the specification should be assumed. Therefore this factor is not very common in the reviewed models.

Competence is considered by Marsh in calculating the co-operation threshold based on trust [Mar94]. Grandison and Sloman use a verbal definition of trust where competence is a factor, but their computational trust model in the SULTAN system does not include competence as such [GS02].

One interesting variant of competence is lifting the competence evaluation from the technical to the semantic level. The Poblano model has a concept of CodatConfidence, which is a measure of semantic experience, i.e. the system's competence in providing us with relevant information [CY00]. Similarly, the Free Haven system [Din00] has a concept of metatrust, which signals that the data received from a node is indeed valuable information. For example, if the Free Haven system agrees with a recommendation from a third party, the third party's metatrust is incremented.

This concept of the ability to perform a task is also called capability by Essin. He defines capability as measurable expertise that the entity possesses about the activity [Ess98]. In this common vocabulary this idea of expertise is called competence.

Capability-aware Models

Capability has a dual meaning in security and trust research. On one hand, capability is considered synonymous with competence, i.e. evaluating the peer's ability to perform a certain task. On the other hand, a capability has a very specialised meaning in the field of security as a token given to a peer to access a resource. We differentiate between these two concepts and here define capability as a form of an access granting token.

This latter form is also used by Essin. In addition to the capability-as-competence definition, he defines capability also as demonstrable access and authority necessary to act [Ess98]. This is capability in this second sense.

Confidence-aware Models

Since input for the trust calculation can be received from multiple sources, sometimes as external recommendations or reputation, we may also have uncertainty associated with trust or the trust input factors. The concept of confidence reflects this uncertainty, although in the actual models this concept has many names.

The TERM model [BZ02] uses the concept of an opinion, meaning how much the calculating TERM server believes the trust statement. The Poblano system also uses a confidence value, PeerConfidence, in determining whether the trust subject is able to co-operate and is thus trustworthy [CY00].

Confidence can also relate to only one of the trust input factors. Essin uses the term certainty for the degree to which the true identity of the trust subject is known [Ess98]. The Free Haven system [Din00] uses the term confidence rating, which is used as a measure of how fast or slowly external recommendations change our trust value in regard to that particular entity.

Context-aware Models

The trust evaluation may depend on the evaluating system's internal or external status at that particular point of time, i.e. context. If, for example, the organisation firewall is experiencing heavy port scanning activity or the organisation is otherwise deemed to be under an Internet based attack, it may be sensible to lower the trust valuation on all or some principals to limit exposure to potentially malicious activity.

Interestingly enough, this is not a very widely used factor. Essin has it as a sub-factor in determining the action valuation and the subject's stake in the action [Ess98]. In PolicyMaker, the policy is defined in an interpreted programming language and it can thus obtain some context information. The set of local policies is considered the context under which the trust is evaluated.

The SECURE model also defines trust via a policy construct. Theoretically it may be possible to include context information in the policy, but since this option is not explored in their work, SECURE is not classified here as context-aware.

The SULTAN model [GS02] uses the term context to mean the action to be performed (see Chapter 3.4.2), not the system state or circumstances. Therefore SULTAN is not context-aware by this definition.

History Awareness

When people interact, one of the key components in our trust evaluation is the trustee's past behaviour or track record. Some models try to capture this idea as well. It can be modelled using vocabulary like experience, evidence or local reputation.

Reputation is the opinion or view of one about something [SS00]. There can be two types of reputation: subjective reputation is reputation calculated directly from the trustor's direct experiences [SS00], and external reputation is reputation received from third parties. The former type, subjective reputation, is considered in this history awareness category.

Essin uses the word reputation but does not differentiate between subjective and external reputation [Ess98]. The Marsh model also includes past history data by including the trust values in all previous similar situations in the situational trust evaluation [Mar94].

The TERM model [BZ02] includes a concept called direct experience. It is used by the system to evaluate trust opinions, although it is not formally defined. The SECURE model is flexible enough that it can also use history information if required, by making history part of the local trust policy [CNS03].


Third Party Awareness

A trust model can be open or closed. A closed model does not take into account any input from outside the actors involved in the trust evaluation (i.e. the trustor and the trustee). Open trust models accept information from third parties. This information can be in the form of external reputation, recommendations or even delegated decision making.

In addition to the subjective reputation discussed above with regard to history-awareness, we also have external reputation. For example, we may belong to a community which has a common reputation service with a shared ontology and receive reputation information from that external source. This external type of reputation is also considered in this third party awareness category. It is not a widely used feature. In the Essin model there is reputation, either general or bound to a specific action, as a component in the trust evaluation [Ess98]. In his model, reputation can be either subjective or from an external source.

Recommendations are conceptually somewhat difficult. Since trust is not transitive, recommendations should not be taken directly as trust. However, third party recommendations can be an influencing factor when deciding about trust. Recommendations are considered by many models; for example, the models by Yahalom et al. and Abdul-Rahman et al. use them. The SULTAN model and system also use recommendations as a basis for new trust relationships. The Free Haven system uses the word referral instead of recommendation. The TERM model also uses recommendations, although it is said that they are used indirectly because of the transitivity issue.
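To make the transitivity caveat concrete, the sketch below shows one simple way of discounting a third party recommendation instead of adopting it as trust directly. The weighting scheme and the names (discounted_recommendation, discount) are assumptions of this illustration, not part of any of the surveyed models.

```python
def discounted_recommendation(recommender_trust: float,
                              recommended_score: float,
                              discount: float = 0.5) -> float:
    """Weigh a third party recommendation by our trust in the recommender.

    Because trust is not transitive, the recommendation is never adopted
    as-is: it is scaled both by how much we trust the recommender and by a
    global discount factor for second-hand information.
    """
    return discount * recommender_trust * recommended_score

# A highly trusted recommender (0.9) vouching strongly (0.8) still yields
# only a moderate influence (0.36) on our own trust evaluation.
influence = discounted_recommendation(0.9, 0.8)
```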

Delegated decision making is not very common. The SECURE model is third party aware because it can handle delegation, where a principal can refer to another principal's trust information.

This external information can be carried in the form of a credential. A credential is simply a statement, purportedly made by some speaker [CFL+97]. These statements, such as a name or proof of membership, can be used when deciding about trust. A credential is not a semantic information category, but a technical one. A credential can carry identity information, subject properties, reputation or even capabilities. Therefore we do not categorise models as credential-aware; instead the classifying factor is the semantic category, such as identity or a particular property. The TERM model calls these credentials evidence, and these evidence statements can carry many types of information. For example, an X.509 certificate signed by a certificate authority is a credential testifying to a particular name, whose validity must be checked using a third party.

However, in some systems credentials can be executable programs. This is true in the KeyNote and PolicyMaker systems, where these programs are called assertions. In the REFEREE system a credential is also a program that examines the initial statements passed to it and derives additional statements.

3.4.3 Trust Model Output Type

The second differentiating axis for trust models is what the trust model produces.

Discrete Value

In many systems the computational model produces a discrete numerical value in some predefined range. This value is then passed to a separate decision component to be compared with a threshold value to make the actual trust decision.

The SULTAN model [GS02] uses a range between 0 and 100 for trust and -100 to 0 for distrust. Similarly, in the Poblano system [CY00] peer trust values are integers ranging between -1 and 4. However, the final trust value for the information chain is computed by a formula and the action is allowed if the formula result is above a set co-operation threshold. The same value range is used by Abdul-Rahman and Hailes [ARH97]. The Free Haven system [Din00] defines the used trust values as integers.

The SECURE system [CNS03] approaches this from a different angle. It utilises a set of trust values whose elements are not predefined by the model, but can be simple binary results, trust vectors or action-trust pairs.

Logic Value

The key task for a comprehensive trust system is to aid in deciding whether the trust subject is trusted enough to allow the requested operation. Therefore it would be natural for a computational trust model to directly return binary yes or no answers. While this type of output can also be classified as a discrete value set, we separate it because this choice implies that an authorisation or trust decision has already been made. However, this is not very common; usually the trust value is passed to a separate authorisation system for the decision.

One example of binary yes/no answers is in the early work of Yahalom et al. [YKB93]. The PolicyMaker evaluation is a simple fail/pass [BFL96]. In the KeyNote system [BFK99] the assertions also return yes/no answers, but additionally they can return annotations, i.e. restrictions, with the yes-answer. The model by Essin is not explicit in its outcome specification, but notes that trust evaluations are always reduced to yes/no answers [Ess98].

REFEREE has tri-values: true, false and unknown [CFL+97]. The unknown answer is given if trust or policy compliance cannot be determined. While this is the basic return value, the system also gives out additional statements about the reason for the value.

Probabilistic Value

As seen from the definitions above in Chapter 3.1, probability is used in some definitions in describing trust and thus in many real world models as the result of the trust evaluation. This probability value is then passed to some decision engine for the actual trust decision.

Even though probabilities are used in definitions, they are not used in the actual policies that much. Marsh's thesis [Mar94] and the TERM system [BZ02] use probabilities, as does the Jøsang model.

3.4.4 Towards an Ontology of Trust

The trust model analysis above gives us a wide view on how trust has been modelled in previous research. This information makes it possible to create a domain ontology, i.e. a description of the concepts and relationships in the trust domain. Trust ontologies have been made before [GPH03], but not based on a comprehensive analysis. This background analysis makes this new ontology widely usable and compatible with previous research models.

Because different trust models emphasise different features in the concept and abstract away others, it is assumed that a union of these emphasised features across all models provides us with a maximal list of trust input factors. Based on this maximal list we formulate the following ontological structure.

First of all, trust is a relationship between two principals: the subject, the trustor, and the target, the trustee. The trust between trustor and trustee may depend on the action the trustor is attempting. The action may have a set of business value properties attached to it. The trustor may also use context information or history data to help in the trust evaluation. The trust can also depend on the peer competence. Additionally, there can be an element of confidence attached to the trust relationship. There can also be a set of third party opinions in the form of reputation information, recommendations or credentials that influence the trust evaluation.

Figure 3.4: Trust as a UML diagram.

The trust value itself is represented as an integer. All discrete and logical values can naturally be represented as integers, and probability constructs could be transformed to integers within a large range (for example 0-10000) without a major loss of expressiveness.
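As an illustration of this structure, the sketch below renders the ontology as a small data model. The class and attribute names mirror the concepts above but are otherwise assumptions of this sketch, as is the 0-10000 integer scaling of a probabilistic trust value.

```python
from dataclasses import dataclass, field
from typing import Optional

TRUST_SCALE = 10_000  # integer range standing in for probabilities 0.0-1.0

@dataclass
class Action:
    name: str
    # Business value properties attached to the action:
    risk: int = 0
    benefit: int = 0
    importance: int = 0

@dataclass
class TrustRelationship:
    trustor: str                     # the subject principal
    trustee: str                     # the target principal
    action: Optional[Action] = None  # trust may be action-specific
    value: int = 0                   # the trust value as an integer
    confidence: int = 0              # uncertainty attached to the relationship
    competence: int = 0              # the peer's ability regarding the action
    context: Optional[str] = None    # situational information of the trustor
    history: list = field(default_factory=list)  # past experience data
    third_party_opinions: list = field(default_factory=list)

def from_probability(p: float) -> int:
    """Map a probabilistic trust value onto the integer scale."""
    return round(p * TRUST_SCALE)
```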

A UML diagram modelling these relationships in trust is presented in Figure 3.4. The model in the picture is a level 1 meta-model, i.e. it models a particular application domain, trust, which is represented in a UML modelling system [OMG04]. An actual population of this model, such as defining trust relationships between concrete principals, is a level 0 meta-model and can be created utilising this level 1 model.

The conceptualisation can also be described in the OWL Web Ontology Language [W3C04a] to facilitate trust model and trust relationship sharing. The UML model in Figure 3.4 can be translated to an ontology expressed in the OWL language according to ontology definition meta-model guidelines [OMG04].¹

Putting this ontology and the preceding discussion into the context of the wider goal of this thesis, we have now developed a definition for trust based on human perceptions and broken it into the constituent components people use to evaluate trust. This analysis enables us to try to transform this concept into the realm of computing and towards our goal of securing electronic communities.

It is also interesting to note that none of the trust definitions investigated uses all input factors.

¹ The actual OWL description is not included in this thesis but is referred to in the published paper [Vil05].


Daniel Essin's model is the most complete, and he describes an idea of a policy engine built on that model [Ess98]. His work is directed towards implementing policies, identifying conflicts within a single enterprise, and facilitating sociological debate about them. While policies are an integral part of an automated system, they are not enough for securing the dynamic communities.

3.5 Trust Management

In addition to the basic trust research questions discussed above, managing trust is an interesting research question in itself. It is not sufficient to just understand trust; it is necessary to utilise and manage trust relationships in the collaborative enterprise environment. Communities and the technical agents need tools and mechanisms to manage and use the available trust information. As with the concept of trust itself, the view on trust management varies with the used trust definition and model.

3.5.1 Background and Challenges

Trust management as a separate research topic began in 1996 with Blaze et al. formulating trust management as the study of security policies, security credentials and trust relationships, and with the introduction of the PolicyMaker trust management system [BFL96]. The principles and goals of PolicyMaker were:

• A unified approach for all applications.

• Flexibility to support complex trust relationships.

• Locality of control to support subjective trust evaluation.

• Separation of mechanism from policy, which enables different applications to use the same verification infrastructure.

In today's vocabulary PolicyMaker solves trust management by discarding identity certification and subsequent authorisation and replacing them with a system where a peer introduces a set of credentials and the local trust management engine decides whether a peer with these credentials complies with the local security policy. If it does, it is authorised to perform the requested action. The later KeyNote [BFK99] and REFEREE [CFL+97] systems were in this regard conceptually similar to PolicyMaker, differing only in the mechanics of credential evaluation. The four goals above are still desirable requirements for a trust management system today. However, the list is not sufficient in today's collaborative enterprise environment.

These credentials can theoretically be anything, from an airline customer reward system membership to a reputation token from another online community. In a purely credential based system, authentication is not necessary. This means the lack of a persistent peer identity in the system, which in turn makes it impossible to attach any history or reputation information to peers. In other words, there is no knowledge of whether the peer presenting certain credentials has abused the enterprise's trust previously. This lack of persistent identity is a problem in systems which must by necessity be open to remote collaboration. Theoretically this problem can be solved in the context of credential based systems by requiring that one of the credentials presents some kind of persistent and unique identification and that its presence is mandatory. In practice this is equivalent to a system requiring authentication. Therefore some kind of persistent identity is a requirement in enterprise trust management.


Since trust relationships between people are not static, some measure of dynamism, i.e. the effect of trustee behaviour on the trust evaluation, should be present. This requirement was neither identified nor addressed by PolicyMaker and similar systems. The first systems introducing dynamism and monitoring into trust management were TERM and SULTAN.

TERM [BZ02] took into account experience and recommendations and included evidence and evaluation in the model. The SULTAN system [GS02] defined trust management as the activity of collecting, codifying, analysing and evaluating evidence relating to competence, honesty, security or dependability with the purpose of making assessments and decisions regarding trust relationships for Internet applications. It is interesting to note that this management definition includes their trust model and thus mixes the two.

In addition to trust establishment and analysis, SULTAN introduced a monitoring system, which feeds experience and other state information to the trust evaluation. In SULTAN the monitoring service updates state information related to scenarios being evaluated or from actual systems, as well as experience information, such as the number of successful interactions, which can be used to increase trust level ratings directly [GS02, GS03]. However, the main focus of SULTAN has been on the logic of trust relationship analysis and the monitoring function is somewhat underdeveloped and sketch-like.

If trust management systems are viewed from the functional point of view, they consist of a number of functions and subsystems. One way of functionally dividing a trust management system is into three parts: the authorisation decision, certificate storage and retrieval, and trust negotiation [CSW06]. However, this particular division into three is not sufficient, since it does not take into account the dynamic nature of trust. Therefore a feedback mechanism from observed peer behaviour to the trust authorisation decision needs to be made a key component of the management system. This ensures that if a peer abuses our trust, it is immediately reflected in future authorisation decisions. This thesis focuses on binding this new feedback mechanism to the authorisation decision.

Therefore, in the TuBE project the trust lifecycle has been described as the combination of initialisation, usage and update, which takes into account the dynamic nature of trust [RK05].

3.5.2 Enterprise Trust Management

Learning from the definitions discussed above, enterprise trust management is in this thesis defined to include issues relating to the initialisation, usage, monitoring and update of the trust relationships for an enterprise.

This definition has several positive sides. First of all, it does not define trust, and is thus independent of the trust model used. For example, it can be utilised in an organisation using a third party aware trust model that delegates all decisions to third party recommendations, or by an organisation utilising a context and action-aware model. Secondly, the wording "for an enterprise" includes both intra-enterprise and inter-enterprise trust relationship management. This is an important distinction, since there are two levels of trust issues in a modern community environment: the single system level and the community level. Thirdly, the monitoring and update factors take into account the dynamic nature of trust.

Thus, trust management is in this work divided into four functional parts: trust initialisation, trust use, behaviour monitoring and reaction to monitoring observations. While this division is applicable to both the intra-enterprise and inter-enterprise scenarios, it parallels the community lifecycle phases discussed previously in Chapter 2.3. The relationship between these aspects can be seen in Figure 3.5.

This new division into four functional parts includes both earlier divisions into three. The tuple (authorisation, certificate storage and retrieval, trust negotiation) is compatible because these three components are included in the "trust use" part.


Figure 3.5: The new four part view of trust management.

The tuple (initialise, use, update) is also compatible, but in the model of this thesis the update lifecycle phase has been divided into two more specific tasks, monitoring and reaction. The functions in this new four-part view are explored in more detail in the following chapters.

Trust Initialisation

Trust relationship initialisation dictates how trust relationships are formed with new peers. This phase includes finding and choosing the peers and deciding how much to trust them. In a community setting these tasks have direct links to the community lifecycle phase of community population.

Although partner discovery can be done in a multitude of ways, it is here assumed that in a collaborative community new peers are found from the service offer repository, where potential participants declare their capabilities to participate in various roles. The partner selection itself can be trust-guarded by accepting only trusted partners to the community roles. This assumes that there has been some kind of encounter with this partner before, so that an existing trust relationship can be utilised.

If the initialisation phase bootstraps the trust relationship, the key trust type in this phase is dispositional trust. If there are no established trust relationships with the peers wishing to fill a role in the community, the trust relationship must be initialised with them. Dispositional trust, the default value for previously unknown entities, is usually the starting point for the trust valuation. The dispositional trust score should be reasonably low to prevent new peers from abusing our system, but still positive enough to enable some form of collaboration.

Previously the requirement for a persistent identity was discussed. One counterargument for this requirement is that peers could switch identities after abuse and thus render the behaviour monitoring and trust feedback cycle useless. Depending on the nature of the collaboration and the community populating process, this identity switch may not be feasible. But assuming it is, one of the control mechanisms against this risk is the effective use of dispositional trust. If a previously well behaved peer suddenly abuses the service, the abuse is detected, the trust evaluation is changed and the authorisation is cancelled; this cancellation has consequences for the abusing peer. Forming a new identity and starting to use the service as a previously unknown peer should incur an operational penalty in the community. This penalty can take the form of low dispositional trust, meaning in practice a very restricted set of allowed operations in the service for the new peer. Effective types of penalties or operational restrictions could be an interesting research question for the future.
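A minimal sketch of this initialisation logic is given below: unknown peers, including those that have shed an identity, start from a deliberately low dispositional default and so lose any accumulated trust. The registry structure, its names and the default value are assumptions of this sketch.

```python
DISPOSITIONAL_TRUST = 100  # low but positive default on a 0-10000 scale

class TrustRegistry:
    """Local store of trust scores keyed by a persistent peer identity."""

    def __init__(self) -> None:
        self._scores: dict = {}

    def score(self, peer_id: str) -> int:
        # A peer we have never seen before (or one that has switched
        # identities) falls back to the dispositional default, losing
        # any previously accumulated trust.
        return self._scores.get(peer_id, DISPOSITIONAL_TRUST)

    def set_score(self, peer_id: str, value: int) -> None:
        self._scores[peer_id] = value
```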

Another refinement for this initialisation process is the use of third party information, such as reputation or recommendations. For new peers, or old peers in new roles, recommendations or reputation information may be used to help in the trust relationship initialisation. Since trust is not by nature transitive, care should be taken to decide how much third party opinions affect the trust relationships. At one end of the spectrum there is the total delegation of trust initialisation to a third party, and at the other end the total disregard of all third party opinions. For the goals of this work, the existence and effect of reputation systems is acknowledged, but their implementation and actual mechanics are outside the focus of this study.

Trust Use

How is this now existing trust relationship used? At the top of the trust construct hierarchy there was trusting behaviour, actions performed based on the trust evaluation. So, how do the other trust constructs influence this trusting behaviour and how can these constructs be used in enterprise collaborative computing?

There can be very many types of actions in communities, for example signing contracts or seeking to form communities, but the focus of this thesis is on allowing or disallowing participants to use our resources. Since the decision process of giving someone permission to perform an action is called authorisation, the concept of trust-guarded authorisation is formed. This concept is the translation of trusting behaviour to the enterprise setting and means that the authorisation decision utilises trust information.

To get this trusting behaviour or trust-guarded authorisation, the underlying trust construct data must be filtered through the trusting intention, as can be seen from Figure 3.1. The intention decides whether the trustor is willing to trust this role occupant in this community doing this particular action, taking into account the contractual obligations negotiated before.

This trusting intention specifically depends on the trusting belief, our view on the trustworthiness of another party in this situation. What kind of data is available in an enterprise system that can be used to support this? The most widely used input factors for the belief formation process in the field of computing were collected and classified previously in Chapter 3.4. From that list, the peer identity and the attempted action are crucial in a collaborative community. For example, an enterprise may generally trust someone to make purchases from their web shop, but if he or she wants to buy items using a credit card, the enterprise's willingness to depend on the peer in that case may be significantly lower. The formation process gives data that may be used to calculate the trusting belief score for a particular peer in this situation.

System trust is also very important in this phase, both for a community and for the single system. System trust dictates whether we believe that the situation and the surrounding impersonal structures support success in our actions. From the outside perspective we get the legal and contractual framework, but from the computing system's inside perspective the system trust issues are focused on trust in the correct functioning of the security frameworks and similar structures.

In the community we should have system trust in the networking and community contracting processes. At the community level trust is also about the trustor's expectation of a certain behaviour within the community, or at least the lack of malicious behaviour. Therefore it is natural that contracts can encourage trust by making implicit expectations explicit [GMT04], and this also affects system trust. There is also another positive effect of contracts, since they usually specify penalties for non-compliance and contract breaches. These penalties can range from the refusal to deal with the trustee in future transactions to monetary sanctions. Thus e-contracts also offer incentives to good behaviour and could be considered a system trust factor. However, in this work system trust is handled as just context for simplicity, but a study of the effects of a dynamic and automated contracting process on system trust and the whole trust calculation could be interesting.


Figure 3.6: Trust constructs in a trust-guarded authorisation scenario.

It should be noted that the attempted action can be used in the trust evaluation from two separate points of view. Trusting belief uses the peer identity and the attempted action together to form an understanding of the trustworthiness of the peer, while the situational decision to trust considers only the current situation, i.e. the attempted action. It is possible to use either or both of these trust constructs depending on the system. For example, if the system has no access to peer identity information, the situational decision to trust should be used instead. If the peer identity and attempted actions are used in the belief formation process, the situational decision to trust can augment the trusting intention calculation. In an enterprise scenario this may be used to give all peers access to some actions, such as service listings, independent of their general or specific trustworthiness.

A picture, paralleling Figure 3.1 of McKnight and Chervany, is shown in Figure 3.6. Here the general trust goal, trusting behaviour, is defined as a successful action by the peer. The authorisation for this attempted action needs to be trust-guarded, i.e. depend on whether the enterprise trusts the peer or not at this particular moment for this attempted action. The intention to trust, i.e. to authorise the action, is ultimately decided by comparing a trust score to a threshold value. The input values for the trust score comparison are received from trusting belief and dispositional trust. Trusting belief here is a collection of peer specific factors in the used trust model. Dispositional trust in turn is modelled by an initial value. Additionally, since the trusting intention is very situation specific, the comparison is enhanced by the trust input factor of context that describes the enterprise situational normality in system trust.
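The sketch below illustrates this comparison, reusing the TrustRegistry from the previous sketch; the threshold, the context modifier values and the function names are illustrative assumptions, not prescribed by the model.

```python
def authorise(registry: TrustRegistry, peer_id: str, action: str,
              threshold: int, context_modifier: int = 0) -> bool:
    """Trust-guarded authorisation: the intention to trust becomes a
    yes/no decision by comparing a trust score to a threshold.

    The score comes from the trusting belief (stored per-peer experience)
    or, for unknown peers, from dispositional trust; the context modifier
    stands for situational normality in system trust, e.g. it is negative
    while the enterprise suspects it is under attack. A fuller model
    would also key the trusting belief by (peer, action).
    """
    score = registry.score(peer_id) + context_modifier
    return score >= threshold

# During heavy port scanning the context modifier is negative, so an
# unknown peer (dispositional score 100) is no longer authorised:
registry = TrustRegistry()
allowed = authorise(registry, "peer-42", "submit_order",
                    threshold=80, context_modifier=-50)  # False
```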

This is a new general model for trust use in an enterprise computing scenario. It is well grounded in trust theory and independent of the used trust model. This model can be applied to different concrete applications; one such trust-guarded framework, TuBE, is presented later in Chapter 5.

Compared to previous research, this new model makes explicitly clear the differences between various aspects of trust and enables better structure in the trust management system. For example, in the previous SULTAN system the monitoring component provides not only peer experience information but also system context and risk information [GS03]. Since the SULTAN monitoring mechanism has such widely separated tasks, its implementation is not simple. The division of trust management system components according to trust constructs brings two key benefits. First of all, the modular structure enables easier changes in the framework if the implementation and experiments show the structure does not work. Moreover, each component becomes simpler to implement and easier to deploy.

Behaviour Monitoring

When a positive authorisation decision has been made taking into account all trust information, it is prudent to observe the behaviour of the peer in the system or community to detect any misuse, which in turn must cause our trust to be translated to mistrust, i.e. misplaced trust.

As discussed previously in Chapter 2.7, monitoring in communities can be done in various ways and for various reasons. From the community point of view, a neutral third party can use inter-enterprise monitoring to detect any problems in the functioning of the community. However, for the needs of a single community participant, introspective monitoring is also required.

The reasons for this monitoring are twofold. First of all, an enterprise may want to collect experience of peer behaviour to enhance the trust evaluation in the future, both on the community and the single system level. Secondly, observation can give warning if any anomalies or misbehaviour occur, so that damage to our systems can be prevented or limited. Afterwards, the misbehaving peer can be classified as someone to distrust, i.e. not trust at all.

In introspective monitoring participants monitor their internal state and compare it to the community rules and expectations. With the goal of detecting security violations, this type of observation moves conceptually close to the security realm of intrusion detection systems (IDS). Intrusion detection monitors systems, such as applications or networks, to detect and identify potential security breaches. Application level intrusion detection is discussed in more detail in Chapter 4.

Trust Update

It is not enough to just observe or monitor the collaboration. Mechanisms for reacting to the observations in some way are needed, for example by passing the observation information back to the trust evaluation and updating the trust values. There are several types of reactions: updating the local trust information, modifying the system to correspond to the changed trust valuation and propagating the information to third parties. Dynamic trust re-evaluation has not been very common in recent trust management systems [GS02].

The key trust concept here is updating the trusting belief: how much we believe a peer is trustworthy in a given situation. People have an underlying belief formation process that affects the value and update of this trust construct; systems can use past experiences gathered by an observation component to adjust this belief. This is a belief about future behaviour formed by past experiences of a peer in a particular situation.

The update or feedback from the monitoring component to the trust valuation is either positive, i.e. everything went as expected and our trusting belief is increased, or negative, i.e. there were problems in the collaboration that cause us to trust the peer less.²

In the extreme case, updating a trust relationship can also mean severing the existing trust relationship. In our view, a severed relationship is still a trust relationship; however, its meaning is "do not trust". Therefore complete distrust is not handled any differently from other trust relationships in this work, and trust management thus also includes distrust management.

² The "no change" case is ignored as uninteresting.

An update to the trust score of a peer causes the next evaluation in the trust-guarded interaction for that peer to use the updated value. However, that evaluation can be a long time in the future depending on the granularity of the trust guards. If, for example, the trust evaluation is done only at peer authentication time, the current activity causing a negative trust update can progress uninterrupted and thus cause the enterprise harm. Therefore a good trust-guarded system should also have a mechanism for modifying the system via an immediate functional reaction, such as blocking all network communication or rolling back all actions in the session if the trust valuation drops too low. This immediate functional reaction feature is missing from the SULTAN system monitor, since it only adds state information to the trust state database or sends an alert to the administrator GUI tool [GS03].
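Continuing the earlier sketches, the fragment below couples behavioural feedback with an immediate functional reaction when the updated value falls below a cut-off; the adjustment sizes, the cut-off and the session.block() hook are assumed for illustration only.

```python
BLOCK_THRESHOLD = 20

def feedback(registry: TrustRegistry, peer_id: str,
             positive: bool, session=None) -> None:
    """Update the trusting belief from a monitoring observation and react.

    Positive observations increase the score slowly; negative ones drop
    it sharply. If the score sinks below the cut-off, the reaction is
    immediate instead of being deferred to the next trust-guarded
    evaluation.
    """
    score = registry.score(peer_id)
    score = min(score + 10, 10_000) if positive else max(score - 200, 0)
    registry.set_score(peer_id, score)

    if not positive and score < BLOCK_THRESHOLD and session is not None:
        # Immediate functional reaction: e.g. close the network
        # connections and roll back the actions of this session.
        session.block()
```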

The update to the observer's internal trust value storage can also be propagated to the whole community as reputation or a recommendation. It could also be propagated to parties external to the community where the actions take place, but the common semantics of that reputation or recommendation should be agreed beforehand by creating a reputation or recommendation sharing community. Reputation and recommendation mechanisms and their intricacies are outside the scope of this work. Reputation and recommendations are useful especially when trying to populate the community roles, but from the TuBE perspective the key update issue is to get trust updates from the monitoring function for the local trust decisions.

3.6 Management System Evaluation Criteria

Previously, enterprise communities, trust and finally trust management have been introduced and discussed. From the reviewed research it can be seen that many separate systems have been developed in these areas, but very few bind all the components together.

Therefore it is the aim of this work to develop a trust management system which fulfils the requirements below. The first in the list is a conceptual requirement that ties trust management in computing systems to the theoretical foundations of trust type categorisation, the next four are general trust management requirements [BFL96], and the last two add the support for the new fourfold trust management model.

1. Conceptually, it automates the translation of trusting intention to trusting behaviour.

2. It presents a unified approach to trust management across applications.

3. There should be locality of control in the authorisation decisions.

4. It is flexible in supporting complex trust relationships and multiple decision input factors that influence the trusting belief in a particular trustee.

5. The mechanisms for evaluating trust should be separated from the policies, thus enabling reuse of the mechanisms across applications requiring different policies.

6. It should introspectively monitor the peer behaviour to detect ongoing mistrust situations.

7. It must produce behavioural feedback to the trusting belief calculations to prevent further mistrust.


A platform or framework fulfilling these requirements has several benefits over current state of the art authorisation systems. First of all, it is well founded in trust theory. It supports dynamic trust evaluation, thus removing the need to manually configure each and every static trust relationship for each application. Secondly, it is capable of utilising a multitude of trust models and is not strictly bound to any single one. This is a benefit from the flexibility point of view. In a community setting there may be situations where different trust support facts are used; for example, in a community population scenario peer competence may be a much more useful support fact than any business value factor. Thirdly, it is aware of the misuse possibilities such dynamic trust relationships bring and has ways of detecting and managing the mistrust events. Potential mechanisms for obtaining this third benefit are explored next.


Chapter 4

Behaviour Monitoring

Enterprises today rely increasingly on the Internet and the partnering and data exchange opportunities that network connectivity brings. Collaborative communities are a new and emerging trend of this network based business. However, the current Internet and networking environment is full of threats, both external and internal. Physical security around the organisation premises is not enough, since network based automated attacks and network-borne viruses have made organisations and their published services or applications potential attack targets every minute of every day. Solutions to these problems today are various restrictions. First of all, the enterprise can restrict the application use to a set of well known partners. Second, it can restrict the network connectivity to these partner organisations to limit their exposure. However, neither of these solutions removes the general exposure threat, since a simple misjudgement by the partner organisation's personnel may cause a security incident that has ramifications for all other connected partners as well. Besides, these restrictions must be actively managed, and in a large enterprise, such as a grocery shop chain, the list of suppliers and other partners can be huge. Thus restriction based security incurs a heavy management overhead.

Additionally, applications are nowadays so complex that practically every piece of software has some amount of faults, or bugs. Some of the bugs are security related, i.e. their existence creates a security vulnerability that can be exploited to create a real security problem or incident.

When these two facts are combined, it can be concluded that tools and mechanisms helping enterprises manage the threats collaboration brings are needed. It is especially important to create solutions that have a very light management overhead to enable dynamic partnering and collaboration. Therefore it is useful to explore ideas that help enterprises manage the exposure threat not with static restrictions but using dynamically changing trust. This implies the trust management system must detect the actions of potentially malicious people and automated applications. In the language of trust explored earlier in Chapter 3, there must exist mechanisms for managing mistrust, untrust and distrust.

Mistrust management is relevant in cases where there is a set of trusted and thus situationally authorised partners who may nevertheless try to defraud us or exploit our service. Untrust must be managed in cases where there is some, but not enough, trust in a connecting peer. In distrust management we have a negative trust opinion on the peer, or not enough information to form a trust opinion, and must guard against their malicious actions as well. Peers may try to exploit known and unknown software faults, system features or existing trust relationships to gain access or privileges against the published or implicit system security policy. This risk must be managed with an introspective monitoring system. Traditionally in the Internet environment the solution has been an intrusion detection system (IDS). In this chapter we take a brief survey of what kinds of intrusion detection systems there are and how they may help us manage all these variants of trust problems, especially in a collaborative enterprise setting.

4.1 Intrusion Detection Concepts

An intrusion detection system (IDS) tries to detect when a malicious attempt, which may lead to an intrusion, is made against a system. It usually consists of a data collection agent that collects information about the system being observed. The collection agents are often also called sensors. The collected data is then either stored or processed directly by the detector proper. The detector analyses the data and presents conclusions to the responsible person in some usable format. In some cases the detector can also initiate countermeasures to block the detected intrusion.
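In code, this sensor/detector/reaction structure might be composed as in the sketch below; the interfaces and type aliases are assumptions of this sketch rather than any standard IDS API.

```python
from typing import Callable, Iterable, List

Event = dict  # one observation produced by a sensor
Alert = str   # a detector conclusion in a human-usable format

def run_ids(sensors: Iterable[Callable[[], List[Event]]],
            detector: Callable[[List[Event]], List[Alert]],
            react: Callable[[Alert], None]) -> None:
    """One collection-analysis-reaction cycle of a generic IDS."""
    events: List[Event] = []
    for sensor in sensors:           # data collection agents
        events.extend(sensor())
    for alert in detector(events):   # the detector proper
        react(alert)                 # report, or initiate countermeasures
```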

Intrusion detection systems can be classified in many ways into several categories depending on their characteristics. One way of making this division is according to the data collection method and the data analysis mechanism. Furthermore, this thesis adds a new category to the classification, namely the reaction mechanism, since it is extremely relevant to any system trying to manage trust relationships, not just detect intrusion attempts.

4.1.1 Data Collection Model

The following aspects can be identified in the data collection models of various intrusion detection systems:

• Where do the agents or sensors reside? Classically they have been either host based agents or network based agents. Systems with host based agents are called host based intrusion detection systems (HIDS) and systems with network based agents network intrusion detection systems (NIDS).

• Is the collection centralised or distributed, i.e. is the data collected from a single point, such as from a centralised log file (for example syslog), or from various places, such as different network segments?

• Is the monitoring direct or indirect? Direct monitoring means that the IDS can directly observe the target, including its internal state. Indirect monitoring relies on the target's external interfaces (such as log files) for information.

A network based IDS (NIDS) sensor is a listening device in the network, where it typically captures all network traffic and tries to detect protocol packets with malicious contents. It has a set of usually lexical rules detailing the definitions of malicious traffic. Examples of malicious traffic are port scanning, overly large ICMP packets (i.e. the ping-of-death) and known web server CGI exploits. A network based IDS also has directives on what to do when such activity is detected.

A host based IDS (HIDS) works within a computer server and tries to detect misuse or anomalies that occur in the server but are not necessarily network based, usually by examining various log files and system level audit trails. Examples of such problems are brute-force password guessing, changes to key system configuration files and access to protected files.

The data collection can be done centralised or distributed. Centralised collection reads the information from a single point, such as a central log file, the main network switch or the Internet firewall. However, this approach has both scaling and coverage problems; therefore most current systems preferably deploy distributed collection, i.e. there is a sensor in nearly every host and network segment.


The monitoring can be done indirectly or directly. Indirect monitoring is defined as "the observation of the monitored component through a separate mechanism or tool", while direct monitoring can access the component and its internal state directly [Zam01]. An example of indirect monitoring is application log file reading, while direct monitoring is performed within an application.

4.1.2 Data Analysis Model

Whether one deploys a HIDS or a NIDS, with centralised or distributed collection, an analysis system is required to provide a sensible summary of the observed activities. One feature of the analysis model is the location of the analysis, i.e. whether the analysis is done in many places simultaneously or centralised.

But the most differentiating feature is the analysis philosophy: misuse detection, anomaly detection or specification based analysis. This has been one of the basic factors according to which IDSes have usually been classified.

Misuse detection relies on patterns (usually called signatures) of known attacks and on effective handling of large amounts of network traffic or log files from which to detect these patterns. While the signatures are attack or attack-class specific, and thus their applicability to unforeseen situations is limited, some new attack types can be detected if they fall into a similar class as some previous attacks and the signature has been written generically enough.

Anomaly detection tries to differentiate between anomalous and normal behaviour of the system and to alert on all unexpected behaviour, since it may be an attack. The problem of this method lies in how to effectively define the system's normal behaviour, how to compare the current situation to this normalised ideal, and how to minimise the false positive rate (i.e. normal behaviour flagged as intrusions) while keeping the false negative rate (i.e. missed intrusions) close to zero. On the positive side, this method is not tied to any specific attack or attack type and can potentially alert to new and previously unseen attacks. This technique is especially valuable in new situations where attacks and attack types are not known.
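A toy numeric illustration of the anomaly detection idea: model "normal" as the mean and standard deviation of one observed metric and flag values too many deviations away. Real systems use far richer behaviour profiles; the three-sigma rule and the metric here are only assumed for the example.

```python
import statistics

def train_profile(baseline):
    """Summarise the normal behaviour of one numeric metric,
    e.g. requests per minute, as (mean, standard deviation)."""
    return statistics.mean(baseline), statistics.stdev(baseline)

def is_anomalous(value, profile, sigmas=3.0):
    """Flag values far from the learned normal; the sigma limit trades
    false positives (too tight) against false negatives (too loose)."""
    mean, stdev = profile
    return abs(value - mean) > sigmas * stdev

profile = train_profile([12, 15, 11, 14, 13, 12, 16])
assert not is_anomalous(14, profile)   # within normal variation
assert is_anomalous(80, profile)       # a mass-mailing burst stands out
```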

Specification based analysis approaches the problem by more or less formally defining the correct behaviour of the program [KRL97] or network protocol [SGF+02] and reacting to any deviations. The key problem in this approach is the initial generation of such a formal model, which needs to be done for each security critical system.

Each of these philosophies has different techniques suited to the task, such as statistical analysis, syntactical pattern matching or expert systems. The basic detection principles have been very well classified by Axelsson in his survey of intrusion detection systems [Axe00].

4.1.3 Reaction Models

One newer aspect in the IDS classification is the IDS reaction model, i.e. what the system does when it detects a potential problem. The reaction model can be immediate or delayed. Depending on the organisation and its capabilities, sometimes it is quite enough to know a penetration has occurred some time in the past, while in some cases it is imperative to be able to instantly do something about the suspicious activity.

Traditionally IDS components have been interested in just generating and sending alarms immediately so that people can deal with the intrusion. This is a sensible approach in situations where people are available 24x7x365. But in some organisations it can take hours or even days for a system administrator to be able to react to the alarm. Therefore, some organisations would actually be better off with a delayed reaction IDS.


Figure 4.1: The IDS reaction matrix

The delayed reaction has given rise to various expert systems that analyse massive amounts of network traffic and system logs with, for example, data-mining [LS98] or expert system techniques [LSC97]. This enables them to create better overviews and search for larger patterns in the attacks. The analysis can be done for example during the night, and when people show up to work in the morning, they are given a report on the night's activities. Various types of delayed analysis systems have also been utilised in commercial products.

Another aspect is the mechanism of the reaction: active vs. passive. Passive reactions, such as alarms, do not affect the observed system in any way. Active reactions, such as closing the attacking network connection, actively protect the system, thus gaining the name Intrusion Protection Systems or Intrusion Prevention Systems (IPS).

The different variants are illustrated in a simple matrix form in Figure 4.1. The lower left hand quadrant of the matrix, active but delayed measures, is currently somewhat open. Into this quadrant would fit measures that, for example, adjust peer authorisation, reputation or trust values after a security incident or any other deviation from the norm has occurred. This is somewhat different from immediate reactions, since the effect of these measures usually becomes visible some time after the incident has occurred. For example, simple trust score adjustments take effect when the peer contacts the service the next time. Thus evidence based trust management is placed mainly in this quadrant, with potential to reach into the active and immediate quadrant depending on the actual mechanism of the reaction.
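
As a concrete illustration of this quadrant, the following minimal sketch shows a delayed active reaction: a security incident lowers a peer's trust score immediately, but the effect becomes visible only when the peer next contacts the service. All names and parameter values (TrustStore, penalty, threshold) are hypothetical, not part of any system described here.

```python
# Minimal sketch of a delayed active reaction: an observed incident lowers a
# peer's trust score, and the adjustment takes effect on the next contact.
# All class and parameter names are hypothetical.

class TrustStore:
    def __init__(self, default=0.5, threshold=0.3):
        self.scores = {}            # peer id -> trust score in [0, 1]
        self.default = default
        self.threshold = threshold

    def report_incident(self, peer, penalty=0.2):
        """Record the observation now; no connection is touched yet."""
        current = self.scores.get(peer, self.default)
        self.scores[peer] = max(0.0, current - penalty)

    def authorise(self, peer):
        """Called on the peer's next contact; the penalty is now visible."""
        return self.scores.get(peer, self.default) >= self.threshold

store = TrustStore()
store.report_incident("peer-42", penalty=0.3)   # observed policy violation
print(store.authorise("peer-42"))               # False: below trust threshold
```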

4.2 Problems in the Current IDS Approaches

Consider a scenario where a local workstation user sends all outgoing e-mail to a company main mail server, which then re-sends these e-mails to the real destinations. This is a common setup in organisations where the outgoing connections of a simple workstation are restricted.

What happens if the workstation is infected with a virus, worm or trojan that is either propagating by sending out a mass mailing of itself or sending out some other undesired content (i.e. spam) to all addresses in the address book? These e-mail borne viruses have been very common in recent years, and the trojan horse programs installed by the viruses have been used to send spam.

Note that the infection can occur even if an antivirus program is deployed at the workstation. The time between the detection of a new virus and the saturation point of a network enabled virus is diminishing so fast that the antivirus companies have a hard time developing, and especially globally distributing, the recognition signatures and countermeasures in time. The same is true for network IDS systems that use pattern matching, since their signature patterns also need to be distributed.

So, there is an infected machine on the network that starts spewing out massive amounts of e-mail (either with a viral or spam payload). This is not detected by the NIDS, since sending e-mail is by its rules quite acceptable, nor is it detected by the HIDS at the mail server, since it also considers receiving e-mail from a trusted host in the network quite normal. So the mail server happily accepts and starts redistributing the harmful content around the world.

Another scenario could be an extranet server that is offering some application via a defined Web Services interface. It probably has some form of access control and authorisation to determine which systems are allowed to connect and what various contacting peers can do. But what happens if the peer system is penetrated by a malicious hacker? The peer penetration may not be at all visible to the extranet server, since the authentication and the authorisation information is usually stored at the server and possibly even unencrypted on the system disk. Thus the attacker can send syntactically correct but semantically malicious information to the extranet server undetected by access control and authorisation mechanisms, NIDS or HIDS.

Additional detection approaches are required if the damage from this type of situation is to be kept to a minimum. In particular, some level of application awareness is required of the intrusion detection systems.

Application level intrusion detection (AppIDS) detects intrusions by collecting information about the monitored application system, analysing this information by evaluating relations, and taking some action if a relation result is judged to be anomalous [Sie99].

The difference between an application IDS and a host based IDS is that an AppIDS is application specific; it is able to work with one application or application class only. A HIDS usually detects problems within one host system, possibly running several applications; it works primarily on the operating system level, but it can include application specific components as well. Similarly, the difference between AppIDS and NIDS is that a NIDS uses only network based collection methods. While a NIDS can also include application specific rules, it concentrates on the network traffic in general, not on one specific application.

Defined like this, application IDS is not a new idea. Research on various forms of application execution monitoring was done in the 1990s [FHSL96]. The name application intrusion detection was coined later by Sielken [Sie99], and its benefits were enumerated more closely by Zamboni [Zam01]. Application level intrusion detection has several advantages over NIDS and HIDS that make it a desirable enhancement to the IDS palette:

• Depth of defence. This is a basic tenet in the security field: if one system (such as a firewall) develops a problem, there must be other mechanisms in place to detect and protect against intrusions.

• The application awareness in NIDS and HIDS can remain restricted. This means that the network and host based systems do not necessarily need to be enhanced to recognise application level problems; this can be delegated to an application level system.

• It can technically monitor applications directly, i.e. there can be sensors observing the application internal state. However, not all AppIDS methods utilise this possibility.


• The application can see data which a network or even a host IDS sensor cannot. The key example is traffic that is encrypted at the application level (for example using SSH or SSL). This NIDS blindness to encrypted traffic is a concern in some organisations, and terminating the encrypted connection at the perimeter may not be an option for all environments.

• The IDS reaction can be fine-tuned according to the service capabilities. Depending on the intrusion type, the enterprise might not want to break the connection altogether but, for example, restrict the connection or data rate (i.e. tar-pitting). Other application level countermeasures include changing the source authorisation parameters (i.e. what actions are permitted and what data is available).

There are also problems at the semantic level. Most intrusions today exploit known application vulnerabilities, which are in many cases weaknesses in mechanical input handling. Unchecked buffer bounds, misplaced nulls, format string problems etc. are syntactical in nature, i.e. the received value differs from the expected value in length, type or allowed characters. But current systems are not interested in the actual semantic sensibility of the received input, i.e. whether the peer suddenly places an order for 10 000 000 flight tickets. While such input should not cause immediate access to the operating system level (unless there is a buffer overflow with attached exploit code) or a denial of service attack, it may still be a cause of concern because of fraud possibilities. Application level mechanisms can potentially analyse this type of problem.
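
To make the idea concrete, a semantic check of this kind can be as simple as comparing a received order against the peer's order history. The sketch below is purely illustrative; the field names, the tolerance factor and the history format are invented.

```python
# Minimal sketch of a semantic input check: the order is syntactically valid,
# but its magnitude is implausible compared to the peer's history.
# Field names, tolerance and history format are invented.

def is_semantically_plausible(order, history, tolerance=10.0):
    """Flag orders whose quantity far exceeds the peer's historical maximum."""
    if not history:                       # no baseline yet: accept (and log)
        return True
    baseline = max(h["quantity"] for h in history)
    return order["quantity"] <= tolerance * baseline

history = [{"quantity": 2}, {"quantity": 5}, {"quantity": 3}]
order = {"item": "flight ticket", "quantity": 10_000_000}
print(is_semantically_plausible(order, history))   # False: likely fraud
```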

The survey in the remainder of this chapter canvasses the area of application intrusion detection methods to pinpoint possible technical methods and tools for managing mistrust, distrust and untrust at the application level.

4.3 Data Collection Methods in Application Level Intrusion Detection

Before it can be analysed whether an intrusion or misuse attempt has actually happened in an application, there must be access to the application behaviour at a suitable level. This chapter discusses various options for monitoring the application and collecting the application behaviour data for analysis.

However, not all reviewed work in the application IDS research domain was explicit about its data collection methods (for example [JL01]), and some merely developed new analysis methods for old collection mechanisms (for example [WD01, GJM02, WFP99]). The works detailing only new analysis methods are discussed later in Chapter 4.4.

The reviewed research papers generally did not discuss the distributed vs. centralised aspect of their collection method. However, since application level data collection can add information to the overall intrusion detection system, these methods can be seen as extra sensors providing depth of defence in a larger integrated intrusion detection system.

4.3.1 Host Based Monitoring

There are several mechanisms within a host, typically at the operating system level, that can be used to monitor an application. Some of these could be seen as host-based intrusion detection data collection, but since they have insight into the application behaviour, they have been included in this survey.


Process Auditing Mechanisms

The process auditing subsystem of the operating system is of use to application level intrusion detection especially when it comes to extracting system call or other process behaviour data for execution monitoring analysis. Process auditing subsystems do not have insight into the application internal behaviour, but are able to monitor how the application acts towards the operating system.

Early work used auditing subsystems to extract security events from the operating system logs. For example, VAX/VMS had a very advanced auditing facility that has been used for user activity tracing in an intrusion detection setting [TCL90]. In this particular case the audit events analysed were user commands, so this is not really in the application intrusion detection domain.

A more widely used system is the Solaris SunSHIELD Basic Security Module (BSM), which is a part of Sun Microsystems' Solaris operating system. It provides the security features defined as C2 in the Trusted Computer System Evaluation Criteria (TCSEC). The features provided by the BSM are the security auditing subsystem and a device allocation mechanism that provides the required object reuse characteristics for removable or assignable devices [Sun94]. BSM has been used as the source of system call information in several AppIDS systems [KFL94, KRL97, End98, GSS99, GMS00, MG02].

Wespi et al. also used an unnamed audit subsystem to collect audit events [WDD00]. The received audit records were not system calls but operated on a more abstract level, where the basic operation information (such as “file open”) was available but the particular parameters were not.

Other operating system tools have also been used. Some Unix systems contain the user-level program strace, which will run a program given as an argument and list all the system calls it makes. Other such Unix programs also exist, such as truss. The strace program has been used to gather information in several application IDS systems [FHSL96, KH97, Mar00]. Additionally, the data analysed by Lee et al. ([LS98], [LSC97]) had been obtained with strace.
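
As a simple illustration of this style of data collection, the sketch below invokes strace from a monitoring script to record the system calls of a target program into a trace file for later analysis. The target command and the output path are placeholders.

```python
# Minimal sketch of strace-based trace collection: run a target program under
# strace, follow child processes (-f) and write the trace to a file (-o).
# The target command and output file name are placeholders.

import subprocess

def collect_trace(command, outfile="trace.log"):
    """Run `command` under strace; the trace file feeds later analysis."""
    subprocess.run(["strace", "-f", "-o", outfile] + command)

collect_trace(["ls", "-l"])    # placeholder target program
```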

As the existence of these user level programs shows, the Unix kernel offers primitives for system call trace data collection. Some research has used these underlying operating system constructs directly from user level for better performance [SBDB01, FKF+03].

Kernel Mechanisms

The process auditing mechanisms above used operating system services for reporting system call or audit data but worked in operating system user space. However, this reporting function can also reside at the kernel level. For example, Sekar et al. used a kernel level module to intercept system calls to support policy enforcement; the analysis mechanism also resided in kernel space [SBS99].

A kernel-level approach has also been used for behaviour data collection from software execution environments. In the work by Ko et al., the execution environment, i.e. a software wrapper, functions in kernel space [KFBK00].

One application level mechanism utilising kernel level possibilities is the waypoint technique. Waypoints are kernel-supported trustworthy control flow checkpoints in the code, which can actively report program control flow information in real time. Security attributes can be attached to each waypoint or to a sequence of waypoints. Waypoints are set at each application function entrance and exit, and middle waypoints residing inside a function can also be created. The waypoints are generated via static analysis [XDC04].

From the data collection point of view, waypoints have one key benefit over plain system call collection: since more waypoints can be created than there are system calls, they can give the intrusion detection system better granularity than a traditional system call based solution [XDC04].


The key problem with this type of approach is the extensive kernel support required. The waypoint structures and policy enforcement systems must be located in the kernel to be secure from manipulation. While they are thus more secure from the system users, it is also riskier to augment stock kernels with extra modules: any security or performance problem in the added IDS functionality may be reflected in the whole system. This analysis applies to the monolithic kernel structure exhibited by systems like Linux. The situation is different in micro-kernel style systems, such as Windows NT and its derivatives, where only minimal message passing and scheduling systems are present in the kernel proper, and other functionality runs in user space or between the outer user ring and the inner kernel ring. The effects of embedding IDS functionality into a micro-kernel depend on the positioning of the intrusion detection modules in the micro-kernel protection system. This has not been a well researched area.

Kernel-level approaches also have other problems. First of all, they are kernel specific and thus not portable. Secondly, kernel systems can cause performance interference, i.e. generic overhead on operations that have no need of an IDS mechanism. System management is a problem as well, since reconfiguring a kernel-space system may require a system reboot. Finally, the kernel services have a very low level of abstraction, which forces the analysis phase to operate at a similarly low level of abstraction and deduction.

Library and System Call Interposition

One possibility for collecting information about application calls is library or system call interposition, i.e. the process of inserting a modified call between the application and the real target of the call.

The interposition can be done at the library level: for example, in Linux and Solaris environments the LD_PRELOAD environment variable can insert libraries that are searched before the original libraries. This technique can be used to track the library calls an application makes.
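
The idea can be illustrated with a rough language-level analogue: the sketch below wraps a library function so that every call is logged before being delegated to the original implementation. A real interposer would do the same in a C shared library loaded via LD_PRELOAD; the Python form is only a sketch of the principle.

```python
# Rough analogue of library call interposition: wrap a library function so
# each call is observed before being passed to the real implementation.

import builtins

_real_open = builtins.open

def _interposed_open(file, mode="r", *args, **kwargs):
    print(f"[interposer] open({file!r}, {mode!r})")   # data collection hook
    return _real_open(file, mode, *args, **kwargs)    # delegate to real call

builtins.open = _interposed_open

with open("example.txt", "w") as f:    # now logged before executing
    f.write("hello")
```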

The system built by Kuperman and Spafford used this library interposition to detect unsafe Unix programming practices, such as overruns of static buffers in sprintf calls. The penalty of interposition was about 3% for non-interposed calls and about 57% for interposed and logged system calls. Most of that overhead was due to the logging mechanism [KS99].

Call interposition can also be used at the kernel level. Bowen et al. used a loadable Linux kernel module to insert an interposition layer into the kernel, rewriting the kernel system call table to execute first an interceptor and then the call. If any functionality besides interception is required, it can be specified in a module compiled from a behavioural monitoring specification language (BMSL) description [BCS+00].

The kernel-level method described above is able to track the application behaviour towards the kernel only, whereas library interposition is able to collect data on the library call level. Static analysis can achieve similar goals to library interposition, but it cannot give us information on what happens in an application at run-time.

4.3.2 Network Based Monitoring

Some application IDS systems do use network based data collection, i.e. they use the data available from the network for intrusion analysis. This assumes that the application is distributed in such a way that network data is meaningful for analysis. This approach is not discussed in detail, since it falls more in the domain of network based IDS.

Some interesting research exists, though. For example, Raz et al. captured data in a dynamic data feed for semantic anomaly analysis [RKS02]. Also CORBA remote calls have been intercepted so that the remote call execution patterns could be analysed similarly to system call patterns [SMS99]. These approaches show that network traffic interception can be utilised at the application layer as well.

4.3.3 Direct Monitoring

The most straightforward way of collecting data for misuse or anomaly analysis in an application is to write detection and collection hooks into the application code itself. This is direct monitoring, i.e. the method can access the component and its internal state directly.

Diego Zamboni did code internal misuse detection in his research by modifying existing software to detect well known attacks against it. The described solution added a series of detection code agents to the sendmail application code, targeting attacks collected from the Common Vulnerabilities and Exposures (CVE) database [Zam01]. He also added some generic agents, for example against port scanning. Different techniques, such as character matching heuristics and stateful/stateless detection, were used in the agent implementation.

A similar approach was used by Sielken, who first did a risk analysis of potential application fraud points and then wrote additional code to detect anomalies in client behaviour [Sie99].

Elbaum and Munson instrumented an application, a Linux kernel in their case, with code hooks that provide information on how the application modules call each other [EM99]. This module profile information was then used for anomaly detection.

The hard part of the code-internal method in general is the extensive programming and programming related activity it requires. While the actual number of code lines generated is not significant, modifying an existing system means in practice that a significant effort is spent in understanding the code before meaningful modifications can be made [Zam01].

The actual agent or sensor can also be in a separate module, which is linked into an application via a suitable interface [AL01]. This method enhances the flexibility of the agent implementation and diminishes the time needed to understand the code base. However, the interface used limits the agent's view of the application and also places restrictions on how the agent can affect the application. Therefore, the degree of directness in the agent model is also diminished depending on the interface characteristics.

4.3.4 Monitoring the Execution Environment

A very distinctive class of IDS data collection methods includes systems where the application execution happens in whole or in part in a pre-configured container or other safe execution environment. The executed application is monitored by this container, and the container is able to collect behaviour data and potentially also analyse unexpected behaviour. There are several variants of this basic idea.

OS-level Environments

There are various methods of restricting the environment visible to an application for security reasons. One of the most successful is the Unix kernel chroot system call and the associated control program, where a subdirectory of the file system hierarchy is created to resemble the full Unix file system, and the application execution is restricted to this subdirectory and the services and components visible from there. However, chroot cannot be considered a real application IDS, since it just prevents applications from accessing the real OS file system and has not been designed to alert on any undesired activity. There are other such execution restriction systems as well, such as the BSD jail and Linux AppArmor, but since they are somewhat off-focus this thesis does not explore them in more detail.

Since no widely used operating system offers a ready made system focused on application level intrusion detection, there are ways to augment the systems with monitored execution environments or application security wrappers. One such effort is the Generic Software Wrapper system, which is a loadable Unix kernel level module (LKM) that associates wrapper instances with application processes [FBF99]. Generic software wrappers have also been used by Ko et al. in their work [KFBK00].

Wrappers are defined in the Wrapper Definition Language (WDL), which masks the operating system details from the wrappers and also defines the functionality the wrappers have. One key point of this system is that the WDL characterisation of the low level kernel API contains some semantic information on the system calls, for example a tag labelling a call parameter or return value as a file descriptor. This enables both some level of OS independence and simple semantic reasoning in the wrapper action rules.

Wrappers intercept some or all system calls the applications make and take action based on the WDL definition. The wrapper can either deny the system call or augment or transform the call. Augmenting means adding functionality to the call, such as encrypting or performing intrusion detection analysis on the call data. Transforming, on the other hand, is replacing the call with one or more other system calls or other events [FBF99].

Another example of this kind is Systrace, an environment which supports fine grained process confinement, intrusion detection, auditing and privilege elevation [Pro03]. There the system calls are recorded by a kernel-level mechanism, and each system call is checked against the recorded normal policy in a user space policy daemon. The approach also takes the system call arguments into account. Systrace is currently available for Linux, Mac OS X, NetBSD, and OpenBSD.

One positive point of this OS level approach is that it is practically application independent. It is also easy to deploy even for older or third-party applications. However, since it is an OS level tool, it does not necessarily have any deep insight into the actual workings of the application; these systems are therefore doing indirect monitoring. A suitable system may not be available for the preferred operating system either.

Dynamic Execution Environments

Dynamic execution environments (DEE) are systems where compilation, profiling and optimisation are dynamic, such as Sun Java and the Microsoft .NET Common Language Infrastructure [IF02]. Dynamic execution environments are a subgroup of virtual execution environments (VEE), more often called virtual machines.

As an example of a dynamic execution environment, programs written in the Java programming language are run in the Java Virtual Machine (Java VM, JVM). Rather than running directly on the native operating system, the program is compiled to bytecode, which is then interpreted by the JVM on top of the native operating system. The JVM restricts the application by creating a sandbox, where a security policy dictates what the application is allowed to do.

One IDS technique utilising the Java virtual machine is dynamic sandboxing. It consists of two activities: sandbox generation and sandbox execution. In the first, a dynamic sandbox profile is constructed by running the program with an instrumented JVM, and all observed method calls are written to the profile. Later, in the execution phase proper, observed method calls are compared to this profile and anomalies cause an error [IF02].

The difference between the JVM's own sandboxing and this dynamic sandboxing is similar to the difference between a Unix chroot jail and a real IDS. Both the Java sandbox and the Unix chroot can only deny activity outside their limits, but dynamic sandboxing can be classified as a real intrusion detection system, since it is able to detect malicious activity.

There are also systems where only the intrusion detection data collection agent is built into a JVM [SKV03]. To monitor any suspicious activity performed by applications running in the JVM (JikesRVM in this case), the virtual machine was extended with an event logging system. The auditing facility consists of an event driver, an event queue, and an event logger. The event driver adds thread-level execution events to the event queue. The logger processes events contained in the queue and writes them to an external log. This external log can then be processed by a suitable intrusion data analysis system (STAT in this case). In its analysis model this system is more related to host based intrusion detection systems than to other application level systems.

Component Level Wrappers

Component based application development has rapidly become popular in recent years. There are sound reasons for it, especially regarding cutting development costs with code reuse and isolating faults with modularisation.

Application wrapping for intrusion detection can also be done on the application component level instead of the level of the application as a whole. Herrmann and Krumm created an architecture where an adapter is generated for each Java Bean and the bean is wrapped by the adapter. An adapter generator introspects the bean for its interface actions and creates the related adapter based on this examination. This approach means that each component is protected by the adapter, and all component interaction via the component interface goes through the adapter so that the interaction can be observed [HK01]. This is direct monitoring in the sense that the application interaction between the beans can be observed, but on the other hand this method does not have visibility into the inner behaviour of the beans.
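
The adapter idea can be sketched as a generic proxy that intercepts every call through a component's interface and records it before delegating. The sketch below is a minimal, hypothetical Python analogue of the per-bean adapters; the OrderService component and its method are invented.

```python
# Minimal analogue of a component-level wrapper: a proxy intercepts all calls
# through a component's interface and records them for analysis.
# The OrderService component and its method are invented examples.

class MonitoringAdapter:
    def __init__(self, component, log):
        self._component = component
        self._log = log

    def __getattr__(self, name):
        target = getattr(self._component, name)        # introspect interface
        def wrapped(*args, **kwargs):
            self._log.append((name, args, kwargs))     # observe interaction
            return target(*args, **kwargs)             # delegate to the bean
        return wrapped

class OrderService:                                    # the wrapped component
    def place_order(self, item, quantity):
        return f"ordered {quantity} x {item}"

log = []
service = MonitoringAdapter(OrderService(), log)
service.place_order("ticket", 2)
print(log)                       # [('place_order', ('ticket', 2), {})]
```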

4.4 Analysis Methods in Application Level Intrusion Detection

Whether one deploys a HIDS or a NIDS, or detects problems on the application level, an analysis system is required to provide a sensible summary of the observed activities. This analysis can be done according to three distinct philosophies: misuse detection, anomaly detection or specification based analysis.

Since application level IDS is a relatively new concept and several data collection methods were described above, there are also several algorithms and methods for analysing the collected data. The application data analysis methods have been classified into the three main philosophies, with the difference that in this document the survey of anomaly detection techniques has been divided into two sub-chapters: anomaly detection in execution monitoring and anomaly detection in input semantics. This division is purely a device for managing the readability and structure of this document; conceptually the two are in the same anomaly detection domain.

4.4.1 Misuse Detection

Misuse detection has not been a very widely researched topic in the application IDS arena. The methodology map is shown in Figure 4.2.

Figure 4.2: The misuse analysis methodology map

A trivial approach to misuse detection is to make modifications in the code to detect known vulnerabilities [Zam01]. A more refined method is to use known attacks to train a neural network, a multi-level perceptron (MLP), to recognise them and similar attacks from a stream of audit events [End98]. A software wrapper technique can also be used for misuse detection if the attack signatures are encoded into the wrapper data structures [KFBK00].
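
A hedged sketch of the signature idea: known attack call sequences are encoded as patterns, and a trace is flagged if any signature occurs in it as a contiguous subsequence. Both the signatures and the trace below are invented for illustration.

```python
# Minimal sketch of misuse detection over a call trace: flag the trace if any
# known attack signature appears as a contiguous subsequence.
# Signatures and trace contents are invented examples.

SIGNATURES = {
    "setuid-shell": ["setuid", "execve"],
    "tmp-race":     ["stat", "unlink", "symlink", "open"],
}

def match_signatures(trace, signatures=SIGNATURES):
    hits = []
    for name, sig in signatures.items():
        n = len(sig)
        if any(trace[i:i + n] == sig for i in range(len(trace) - n + 1)):
            hits.append(name)
    return hits

trace = ["open", "read", "setuid", "execve", "exit"]
print(match_signatures(trace))   # ['setuid-shell']
```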

Kuperman and Spafford used library interposition to detect unsafe Unix programming practices, such as unbounded data insertion into static buffers, race conditions and unsafe system calls (such as system() and execvp()) [KS99]. Since this technique searches for vulnerabilities facilitated by the used platform and tools instead of the application characteristics, it is application independent.

This lack of research in the misuse domain is not very surprising, since misuse detection uses known vulnerabilities either for searching or for training. This means each newly discovered vulnerability must be encoded into the application intrusion system. If the detection method is source code based, as in the trivial case, this means recompiling and reinstalling the application every time a new type of vulnerability is found. Wrapper based techniques also suffer from this update problem, although to a lesser extent: since the update is done on the wrapper side, the application can remain as it is.

Machine learning could make life with new vulnerabilities much easier, if we could be sure that a new attack would be covered by the learning generated by the old training data. At minimum, an IDS system employing machine learning would need to be tested every time a new vulnerability is observed, and if the old training does not cover the new problem, retraining must be done.

4.4.2 Anomaly Detection in Execution Monitoring

One key form of application execution monitoring is application call analysis. Here, the word call means both system calls and application internal procedure or method calls; also “calls” to software modules are considered. While the techniques of extracting information for the different call types differ, the algorithms for analysing this type of information are similar. Thus these types are not separated here but discussed jointly. The analysis method map for anomaly detection is shown in Figure 4.3. The picture also includes research done in the area of anomaly detection in input semantics, which is discussed in more detail in Chapter 4.4.3.

Figure 4.3: The anomaly analysis methodology map

System calls seem to be at a suitable granularity level for detecting malicious behaviour in applications. This suitability is based on the assumption that the exploit must interact with the operating system to create a real intrusion. While this is true in the case of buffer overflows and shell code exploits, it leaves problems caused by the application data semantics unexplored.

The seminal paper describing this idea came from Forrest et al. in 1996 [FHSL96, FHS97], and it has generated a multitude of different techniques and variants for either obtaining or analysing the system call information.

Their basic idea is to parallel computer defence with human immunology. This analogy needs a definition of system “self”, which in their original idea is obtained from the program system call profile. This idea of self can be used for anomaly detection by collecting traces of system calls from programs in a secure training phase and creating a normal profile of how the process runs. This normal database is then used to monitor the running processes and to flag anomalies when system calls occur in abnormal positions.

This overall method is called stide, sequence time-delay embedding, and it contains four parts: selecting the feature set (system calls in this case), collecting normal data, extracting features from the data, and detecting anomalies by comparing the features of the current situation to the normal data.

There are two general problems in system-call based anomaly detection: mimicry attacks and impossible paths [WD01].

A mimicry attack is a form of attack where the exploit is coded so that it resembles a normal run of the application as much as possible, for example by using only normally occurring system calls. However, not all attacks can be coded this way. This attack method would require extensive knowledge of what is normal for the application and of what kind of application model or detection threshold the target is using [KH97, WD01]. There are also two types of mimicry attacks: local and global. A global mimicry attack combines the legal system calls of multiple functions, while a local mimicry attack uses the legal system calls of only the running function [XDC04].

The impossible path problem manifests in techniques where the model includes all possible control paths, i.e. also those that no valid run-time behaviour uses. An example of this type is the return-into-others impossible path, where a function is called from location A but control is returned to point B [XDC04]. The main concern with impossible paths is that the model can classify such an occurring impossible path as normal and thus miss an attack. Not all system call methods exhibit this problem; typical examples of it are found in the state transition research [WD01, SBDB01].

Sequence Similarity Techniques

The original research by Forrest et al. used simple sequence enumeration in data collection and feature extraction from the system calls. The system call sequences were of fixed size (N-grams), and a technique called look-ahead pairs was used for determining mismatches. The actual measure of anomaly in the original research was the mismatch count as a percentage of the total number of possible mismatches in the look-ahead sequence. When tested with real exploits against the sendmail application, the mismatch counts were between 0.3-5.3%. The described algorithm can be implemented in O(N) time, where N is the system call sequence length [FHSL96].
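
The flavour of the technique can be conveyed with a short sketch: a database of fixed-length N-grams is built from normal traces, and a new trace is scored by the percentage of its N-grams missing from that database. This simplifies the original look-ahead pairs mechanism, and the traces are invented.

```python
# Minimal sketch of N-gram anomaly detection over system call traces: build a
# "self" database of fixed-length windows from normal traces, then score a
# new trace by the fraction of its windows missing from the database.
# This simplifies the original look-ahead pairs; traces are invented.

def ngrams(trace, n):
    return [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]

def train(normal_traces, n=3):
    normal = set()
    for trace in normal_traces:
        normal.update(ngrams(trace, n))
    return normal

def mismatch_rate(trace, normal, n=3):
    windows = ngrams(trace, n)
    misses = sum(1 for w in windows if w not in normal)
    return misses / len(windows)

normal = train([["open", "read", "write", "close"] * 5])
attack = ["open", "read", "setuid", "execve", "write", "close"]
print(f"{mismatch_rate(attack, normal):.0%} of windows are anomalous")
```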

The key finding was that this was a good method for differentiating between processes and for detecting problems where an unusual system call sequence is used in an attack. However, it cannot be used to detect all exploits, for example race conditions or resource stealing.

An observation made by Kosoresow et al. around the same time was that all the observed anomalies exhibited behaviour where the system call mismatches between normal and intrusion traces were executed in bursts, i.e. they were temporally close. This allows for an anomaly detection technique where just the number of mismatched system calls within a certain system call sequence length is used as a detection rule [KH97].

This type of simple numeric measure for an anomaly gives rise to a new form of mimicry attack. In this evasion method the idea would be not to clump the exploit system calls together, but to pad the exploit code with normal sequences of system calls, thus staying under the detection threshold. As with basic mimicry attacks, this would require knowing specific details of the IDS analysis method, in this case the detection threshold.

Forrest, Hofmeyr et al. later refined their work. The key modification was in the definition of an anomaly, i.e. sequence similarity. They noted that since anomalies occur temporally together, they could use the idea of Hamming distance for defining how much a sequence differs from the normal sequences. This method is more complex to compute, however. To detect an intrusion, at least one of the sequences generated by the intrusion must be classified as anomalous. They measure the strength of an anomaly by computing the minimal Hamming distance, and it was assumed that the higher the minimal Hamming distance, the more likely it is that the sequence was actually generated by an intrusion. Therefore the signal of the anomaly is the maximum value of the minimal Hamming distances over all the sequences. To make this measure applicable to variable length sequences, the signal value was normalised over the sequence length. This normalised value was used as the preferred measure of anomaly [HFS98]. So, if the normalised anomaly signal value is 0.5, it means that at some point half of a sequence has differed from the normal data.
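
A minimal sketch of this measure, under the simplifying assumption that all sequences share one fixed length: each observed window gets the minimal Hamming distance to any normal window, and the anomaly signal is the maximum of these minima normalised by the window length. All traces are invented.

```python
# Minimal sketch of the normalised anomaly signal: for each observed window,
# compute the minimal Hamming distance to any normal window; the signal is
# the largest such distance divided by the window length. Traces are invented.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def anomaly_signal(windows, normal_windows, n):
    minima = [min(hamming(w, ref) for ref in normal_windows) for w in windows]
    return max(minima) / n

n = 4
normal = [("open", "read", "write", "close")]
observed = [("open", "read", "write", "close"),
            ("open", "setuid", "execve", "close")]
print(anomaly_signal(observed, normal, n))   # 0.5: half of one window differs
```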

In more detailed tests it was found that beyond a certain length (6), the length of the sequences had very little effect on detection as measured by the normalised signal of the anomaly, so a fixed length of 10 was used. In real exploit tests, the normalised signal of an anomaly was between 0.2-0.7 and the mismatch percentages 1-38%. In summary, they were able to detect all the tested abnormal behaviours, including successful intrusions, failed intrusion attempts, and unusual error conditions [HFS98]. This of course depended on choosing the correct alert threshold for the signal of the anomaly.

The original N-gram method of Forrest et al. has been implemented as a special seq-id wrapper in the software wrapper IDS of Ko et al. [KFBK00]. The wrapper system works in kernel space and is able to intercept and analyse system call sequences. The seq-id wrapper causes a 5-7% penalty to execution performance. It can also be used together with a misuse detection or specification based wrapper, but then the wrapper penalties add up.

The method has also been applied to analysing anomalies in a distributed application utilising CORBA [SMS99]. There the observed sequences were not system calls but CORBA remote method invocation sequences.

Wespi et al. have augmented and modified this technique. First, they analysed audit events, not system calls. Second, their feature extraction was done using variable-length sequences and the Teiresias algorithm, initially developed for discovering rigid patterns in unaligned biological sequences. The more abstract audit events could be used instead of system calls, since the system call arguments, which were not available in the audit trail, were not used for analysis. This variable-length method improved the false positive rate [WDD00]. Variable-length sequences were also used by Marceau, but she constructed a finite state automaton from them, so her method is discussed later [Mar00].

Cabrera et al. turned the feature extraction process of this method upside down by using a string matching classifier to extract data into separate anomaly dictionaries, i.e. collections of system call sequences that are not in the normal data set [CLM01]. These anomaly dictionaries correspond to an idea of self for the anomalies instead of for the monitored system. An incoming system call sequence was considered normal if its anomaly count against all anomaly dictionaries was zero. The anomaly count was defined as the average of the relative anomaly counts in the set, where the relative anomaly count is the number of anomalous sequences found in the trace divided by the total number of sequences in the trace. The key problem of this approach is that every new anomaly is classified either as one of the old ones or as normal. This can be solved by a hybrid approach where the anomaly count is first checked against a threshold value: if the anomaly count is above the threshold, the sequence is probably an attack even if it is not in any anomaly dictionary. It was observed that the string matching classifier performs better for longer sequences, while the anomaly count detector performs better for shorter sequences. It was also shown that a hybrid detector, combining both schemes at their best choice of sequence lengths, leads to a substantial improvement in detection accuracy.

Jones et al. also used sequence enumeration techniques, but the sequences consisted of calls to C-language libraries instead of system calls [JL01]. They utilised three different analysis methods: the original mismatch percentage from Forrest et al., the normalised anomaly signal from Hofmeyr et al. and the locality frames from Kosoresow et al. In two real tests against the Apache web server, the mismatch percentages were 1.5 and 3.9 and the corresponding anomaly signal strengths 0.7 and 0.8. They also observed that the intrusion detection result is stable if the sequence length is >6; their tests were done using a sequence length of 10. They also made successful tests with two buffer overflows, one trojan program and one memory depletion denial of service exploit. Additionally, they created a trojan code attack against mSQL and were able to detect it better with library calls than with system calls (signal strengths 0.8 vs 0.7, maximum locality frame count 20 vs 7). They presume this is because system calls have too little variety in mSQL, which basically performs I/O functions.

In two papers, Ghosh et al. compared five different algorithms for analysing system call traces. First they compared the simple N-gram similarity measure technique from Forrest et al. [FHSL96] with neural network methods (a back-propagation network and an Elman network) [GSS99]. Later they compared two other machine learning methods, string transducers and a state tester, to the Elman network [GMS00]. The results show that though the similarity matching approach worked fairly well, the performance can be significantly improved (particularly in reducing the false positive rate) by using neural networks [GSS99].

The sequence enumeration and similarity techniques have also been used for analysing anomalies in user command line behaviour [LB97]. Here the simple equality matching used for system calls does not work very well, since the user can produce slightly different versions of the same basic commands. Therefore similarity matching techniques were used to score sequences of commands for closeness. The closeness measure was MCA-P, match count with adjacency and polynomial bound. While user input has quite different characteristics from system calls, similarity matching techniques may be an option for systems which take into account not only the system call but also the parameters of the call. This is not quite in the domain of application intrusion detection, but is included here to illustrate other methods used to solve a similar task.

Call Policy Techniques

One potential approach to execution monitoring is not to observe the call sequences, but to create a list of the calls the application makes when run normally, and then in subsequent runs limit the allowed calls to this known good set (i.e. a profile).


Call policies have been explored in an operating system setting by Niels Provos in his Systrace system. There the system calls an application makes are recorded during a training period, thus creating a security policy of the allowed system calls [Pro03]. The procedure is not altogether automated: if the application uses, for example, random filenames, the policy must be edited to allow that particular form of non-determinism.

The system call arguments are also considered here. Before making a policy decision, the system call and its arguments are translated into a system independent human-readable format. The policy language operates on that translation and does not need to be aware of system call specific semantics [Pro03].

This has also been done in dynamic Java sandboxing, which for a given program consists of two activities: sandbox generation and sandbox execution. In the first, a sandbox profile is constructed by running the program with an instrumented JVM. During this training session, profiling information is recorded into the sandbox profile. The sandbox is initially empty and grows during the training run by accumulating records for each unique behaviour, i.e. method call. Because nothing is added that is not observed, each sandbox is customised to a given program and the context in which it is executed. During sandbox execution, behaviour (i.e. a method call) that is not in the profile is considered anomalous [IF02].

The JVM prototype looks only at method invocations and their signatures, although the authors note that memory behaviour, method arguments, patterns of methods, whole program paths or even building blocks other than methods could also be used for anomaly detection [IF02].

This call policy approach assumes that the application run from which the profile was created did not include any attacks or exploits. Additionally, it assumes that any deviations from the profile are potential attacks. To keep the number of false positives low, all variations of the valid application run-time behaviour must be present in the profile. Call policies are a sensible approach in a method or procedure call setting, where the potential number of calls is much greater than the number of system calls, and thus the sequence enumeration technique detailed above may not be viable.
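
The core of the approach fits in a few lines: a training phase accumulates the set of observed calls into a profile, and the execution phase flags any call outside the profile. The sketch below is a hypothetical distillation; the call names are invented.

```python
# Minimal sketch of a call policy: training accumulates every observed call
# into a profile; at execution time, any call outside the profile is flagged.
# Call names are invented examples.

def generate_profile(training_traces):
    profile = set()
    for trace in training_traces:
        profile.update(trace)        # accumulate each unique observed call
    return profile

def check_execution(trace, profile):
    return [call for call in trace if call not in profile]  # anomalous calls

profile = generate_profile([["open", "read", "close"],
                            ["open", "write", "close"]])
print(check_execution(["open", "execve", "close"], profile))   # ['execve']
```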

This method is somewhat similar to the policy-specification based techniques discussed later in Chapter 4.4.4, but with the difference that the list of allowed calls is not specified by an administrator but extracted from a known good application run.

State Automata Techniques

In normal system call anomaly detection, the near real time speed of detection is an issue, so the system call traces need to be reduced for quicker processing. To this end, Kosoresow et al. noted that applications were deterministic in the sense that long sequences of system calls were repeated. This is also intuitive: if the application forks when it gets a connection, this forking sequence is always repeated. Therefore a deterministic finite automaton (DFA) with macros can be used to build patterns of the various system call sequences.

However, creating the exact DFA of a process is a problem. First, because every possible ordering of system calls would have to be included, the DFA would be very large; this is simply because the system call order can depend on the input data. Second, if the trace length is a function of the data the application is processing, every variation in the data length should be included, and if this is not done correctly it could lead to false positive alarms. Creating the macros for the DFA is also time consuming and potentially NP-hard [KH97]. For all these reasons, deploying this approach in real systems would be difficult.

The simplest work constructing a state automaton was done by Marceau. There, variable length N-grams of the system call sequences were used to create a suffix tree, and an algorithm for creating a finite state automaton (FSA) from it was presented. The resulting “self database” was smaller than the sequence database, and thus the detector can be more efficient at run-time. The algorithm was adjusted so that it does not create false positives. However, the resulting system was less sensitive to anomalies than the original N-gram work [Mar00]. The reason for this may be that the automaton was intentionally weakened by similarity compression so that a specific value for N would not be needed, a general upper limit being sufficient.

Various other state automata systems have also been researched. Michael and Ghosh used the N-grams from Forrest et al. to create a simple finite state machine. They compared the original work with the state machine and another technique, a string transducer. The results showed that the original N-gram work performed best when the detection rate was compared with the false positive rate. However, it had a much longer training period than the new methods, i.e. with a limited set of training data the new methods had much better false positive rates. Still, none of the techniques was able to detect more than 94% of the intrusions, due to limitations in the input data [MG02].

Sekar et al. have also explored the use of finite-state machines to aid in system call sequence analysis [SBDB01]. In their system they augmented the system call information with the program counter (PC), i.e. sequence-giving information, which allowed them to form the state machine automatically.

In the actual analysis, if an anomaly in the FSA occurs, it is not immediately flagged as an intrusion; instead, an anomaly counter is increased. If a sufficient number of anomalies is observed during a time frame, an intrusion alert is generated. The anomaly count is decreased as time passes, so isolated anomalies do not generate alerts. The system generated fewer false positives than the N-gram method used by Forrest et al. It is also interesting to note that the space and run-time overhead of the algorithm itself is low. However, the overhead of the user-level system call interception used here was considerable, 100-250%.
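
This alerting logic is essentially a decaying counter, as in the hedged sketch below: each anomaly increments the counter, the counter leaks away with time, and an alert fires only when the threshold is crossed within a short time frame. The decay rate and threshold values are invented.

```python
# Minimal sketch of a decaying anomaly counter: isolated anomalies fade away,
# while a burst of anomalies within a short time frame raises an alert.
# Decay rate and threshold values are invented.

class AnomalyCounter:
    def __init__(self, threshold=2.5, decay_per_second=0.5):
        self.count = 0.0
        self.threshold = threshold
        self.decay = decay_per_second
        self.last_time = 0.0

    def observe_anomaly(self, now):
        elapsed = now - self.last_time
        self.count = max(0.0, self.count - elapsed * self.decay) + 1.0
        self.last_time = now
        return self.count >= self.threshold    # True means raise an alert

c = AnomalyCounter()
print([c.observe_anomaly(t) for t in (0.0, 0.1, 0.2)])  # [False, False, True]
```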

Feng et al. enhanced the FSA model by creating the VtPath model, in which the use of the program counter is augmented with call stack information for anomaly detection [FKF+03]. In addition to the system call information and the program counter, the return addresses are obtained from the call stack into a virtual call stack list, which acts as a history of unreturned function calls. The virtual path between two system calls is the ordered list of non-common return addresses in the call stacks of the two system calls; it is an abstraction of the execution between the two system calls.

All the virtual paths and call return addresses found are stored during the training phase. In the detection phase each system call causes the virtual call stack to be stored. Depending on the contents of the stack, the return address and the virtual path between the current call and the previous call, the system can detect various anomalies, such as buffer overflows. The key benefit of the VtPath technique is that it does not suffer from the impossible path problem. In other comparisons, the false positive rate is similar to the FSA method, once some FSA implementation mistakes made by Sekar et al. were corrected. And since the system call interception was similar to that of the FSA research, the run-time overhead was still considerable.

Warrender et al. compared various techniques for analysing system call traces; one of the methods was a probabilistic automaton, the Hidden Markov Model (HMM) [WFP99]. HMM states are unobservable conditions in the system. The model attaches a probability to each state transition and to each type of output in the states. With this probabilistic modelling they can adjust the model over time and easily represent branches in the system logic. However, the number of states and their probabilities must be given when the HMM is created, and the training process of an HMM is therefore very long, in the order of hours or days. On the positive side, the detection accuracy was very high and the false positive rate very low.

Wagner and Dean and Giffin et al. have also used state machines in modelling the application for intrusion detection [WD01, GJM02]. However, their works form the state machine from a program specification, the source or the binary code, so they are discussed with other specification based intrusion detection in Chapter 4.4.4.

Machine Learning

Several machine learning algorithms have been applied to application level intrusion detection as well.

As discussed previously, Ghosh et al. compared three different algorithms for detecting anomalies in system call sequences: N-gram equality matching, a back-propagation network and an Elman recurrent network, the two latter being neural networks [GSS99]. Later, Elman networks were compared to string transducers and state testers [GMS00].

The results show that though the equality matching approach worked fairly well, the performance can be significantly improved (particularly in reducing the false positive rate) by using Elman networks. Elman networks were able to detect 77.3% of all intrusions with no false positives. Further, the Elman nets were able to detect 100.0% of all intrusions with significantly fewer false positives than either of the other two systems [GSS99]. Later, in user-to-root exploit tests, Elman networks quickly achieved 100% detection of attacks at a false alarm rate of close to 3 per day. With remote-to-local attacks the results were not as good: 30% of attacks detected at a rate of approximately 10 false alarms per day [GMS00].

The problem with Elman networks is the training time, which is in the order of thousands of minutes. The detection performance of string transducers was comparable to Elman networks, but the training phase is orders of magnitude shorter, in the range of tens of minutes [GMS00].

Comparing string transducers and state testers to Forrest et al.'s N-gram matching resulted in N-grams achieving slightly better performance. However, the N-gram method was much slower to learn, which is to say that it required a great deal more training data to achieve false positive rates comparable to string transducers and state testers [MG02].

Various rule-based systems have also been used. The earliest work used a time-based inductive engine (TIM) to infer rules from VAX/VMS user audit records [TCL90]. This was more of a successful feasibility study than an actual method performance evaluation. Later, Lee, Stolfo and Chan used rule-based machine learning to determine anomalies from system call traces [LSC97]. Their method was to apply the RIPPER rule generation system to the traces to generate rules.

The rules generated by RIPPER can classify a trace as “normal” or “abnormal”, but this does not as such tell whether the trace indicates an intrusion. Therefore some post-processing was done to determine whether the abnormal traces really were intrusions. These post-processing heuristics try to weed out occasional prediction errors, which differ from intrusions in the length of the anomaly.

The key finding was that the rules for normal behaviour generated by the RIPPER rule generator were well suited to anomaly detection. Additionally, the best method for anomaly detection is to classify normal behaviour with RIPPER rules, while classification of abnormal behaviour works best for misuse detection [LSC97].

Compared to the work of Forrest et al. [FHSL96], this method is both faster and has lower rule storage requirements. Because only 200-280 rules were generated by RIPPER and only 100-140 rules on average had to be checked for classification, the authors state that this is potentially doable in a real-time environment. No practical tests have been performed, however.

As an example of applying machine learning to other IDS data, Gunetti and Russo used inductive learning in the form of relational decision trees to classify users into different categories based on their Unix command-line usage patterns [GR99]. They used a tool called ReliC and achieved about a 90% user recognition rate, which is comparable to other similar research.

Data Mining

Lee and Stolfo, who previously investigated using machine learning for anomaly detection in system call traces, also extended their work to use data mining techniques [LS98]. The data mining approach relates to the construction of the base classifiers and to identifying meaningful features in the raw data stream. Data mining techniques, such as association rules and frequent episodes algorithms, can be used to help in this feature extraction. An association rule algorithm can help find rules within one audit stream, while frequent episodes reasoning can help discover inter-audit patterns, i.e. patterns between audit data streams. The latter is especially useful when looking at sequence information, such as system call traces.

Their aim was to make a multi-level model, where a set of base classifiers, such as the system call trace learner, would feed a meta learning task. The meta classifier would take input from all types of base classifiers (including network based mechanisms) and a learning algorithm would be applied for the overall anomaly analysis. This would be able to utilise information from many sources, not just the application or the network. However, in their work they have only very preliminary results from a tcpdump network packet trace, and nothing from a system call trace.

Statistical Profiling

Anomaly detection in execution monitoring can also be done by creating a statistical profile of the execution, for example a call histogram, and comparing the executing program to the known good profile.

The histogram method has been used by Endler [End98]. All system calls in a call sequence vector were inserted into a large vector, so that the occurrence of call number 25 in the sequence increments the value in the 25th position of the larger vector. The larger vector thus represents how many times each of all possible events has occurred in the sequence vector. A histogram classifier was used to divide the sequences into normal and abnormal. However, the histogram system size grows exponentially with the larger vector size, so different techniques were used to select the most meaningful events. Hand-picking based on experience with buffer overflow attacks proved to be the most successful. To detect anomalous patterns from the histogram, a value for the anomaly threshold was selected visually from histogram graphs. This could be done since this work was not real-time detection but work done on audit logs.
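The essence of the histogram classifier can be sketched as follows: count the occurrences of each system call number into a fixed-size vector and compare the result to a known good profile with a distance measure. The vector size, the L1 distance and the programmatic threshold are assumptions for illustration; as noted above, Endler selected the threshold visually.

    # Sketch of a histogram profile over system call numbers.
    NUM_CALLS = 256  # assumed size of the system call number space

    def histogram(trace):
        counts = [0] * NUM_CALLS
        for call in trace:
            counts[call] += 1   # call number 25 increments position 25
        return counts

    def l1_distance(h1, h2):
        return sum(abs(a - b) for a, b in zip(h1, h2))

    normal = histogram([25, 3, 3, 7, 25])
    observed = histogram([25, 3, 90, 90, 90])
    print(l1_distance(normal, observed))  # 6; alert if above a chosen threshold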

Additionally, module interaction can be used for execution monitoring. Elbaum and Munson have described a technique where any application written in C can be instrumented with the CLIC toolkit to insert hooks into the application source code modules [EM99]. A kernel-level tool is then able to follow the module interaction and compare the interaction profile statistically to a normal profile obtained beforehand.

4.4.3 Anomaly Detection in Input Semantics

There have been very few attempts to do semantic intrusion detection of any kind. Here the emphasis is on analysing the user supplied data content and its meaning in relation to the application and the attempted action. In this sense the method falls under the class of anomaly detection.

Semantic analysis can be done at several levels. The most trivial one is writing these semantic checks into the code. This is of course dependent on the correct identification of potential hazard spots and illegal values. Therefore substantial technical risk analysis, such as that demonstrated by Sielken [Sie99], must be done when developing the software. This risk analysis must then be translated into actual agents that detect impossible conditions that could signal an attack.

Figure 4.4: The specification based methodology map

Although Sielken suggested building the application IDS system so that it contains both application specific and application independent parts, the criticism expressed in Chapter 4.4.1 with regard to misuse detection in application code is still valid: much design and analysis and some coding is necessary to make this a reality.

Raz et al have investigated semantic anomaly detection in general data feeds, i.e. at the input level, such as stock quote streams [RKS02]. They define an anomaly as an observed behaviour of a data feed that is different from our expectation, where the expectation describes the normal behaviour of the data feed. Since no real semantic data specification was available for their stock quote feed, their method was to use two unsupervised machine learning tools, Daikon and Mean, to infer invariants from the data feed. After the invariant learning, the feed data was compared against the invariants and an anomaly alert was generated if an invariant did not hold for the received feed data. Their method was able to successfully detect most of the data anomalies in the test scenario, with a false positive rate of 0.3. This was usually reduced to under 0.02 (max 0.15) by adding voting heuristics, i.e. cross-checking with another data feed, but that did not help to reduce false negatives.
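A simplified stand-in for this invariant approach is sketched below: learn per-field value ranges from a clean window of the feed, then flag later records that violate them. Tools such as Daikon infer much richer invariant classes; the range invariant and the field names here are illustrative assumptions.

    # Simplified range-invariant learning and checking for a data feed.
    def learn_ranges(records):
        """records: dicts of numeric fields taken from a clean stretch of the feed."""
        ranges = {}
        for rec in records:
            for field, value in rec.items():
                lo, hi = ranges.get(field, (value, value))
                ranges[field] = (min(lo, value), max(hi, value))
        return ranges

    def violations(record, ranges):
        """Fields whose values fall outside the learned invariants."""
        return [f for f, v in record.items()
                if f in ranges and not ranges[f][0] <= v <= ranges[f][1]]

    ranges = learn_ranges([{"price": 10.0, "volume": 100},
                           {"price": 12.5, "volume": 300}])
    print(violations({"price": 9000.0, "volume": 200}, ranges))  # ['price']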

4.4.4 Specification Based AppIDS

There are also efforts to specify the application behaviour more or less formally and to alert on any activity outside the specification. Two main styles have been identified: one specifies separately the policy that the application must follow, the other creates a model of the application based on the source or binary code and then observes whether there are any run-time deviations from that model. The surveyed research and the relationships between the systems are shown in Figure 4.4.

Classic works in specification based detection model the network protocol(s) the application uses. This is more in the domain of network based intrusion detection (NIDS), for example [SGF+02], and such systems are not discussed in more detail here.

Policy Based Systems

There is an approach where a specification of the privileged program execution is created and the execution is then monitored via Sun BSM [KFL94]. A policy specification language was also created for this task. The policy lists each system call the application is allowed to make, including key parameters. The policy language has some semantic information about the application and the kernel API, so it is able to process concepts like users, files and ports.

Most of these application policy definitions are site independent, i.e. a policy for sendmail on one Unix host is very similar to the policy on other Unix hosts, but some variables (such as log file names) are site dependent and must be tailored to fit each system.

Compared to some other system call AppIDS methods, this system does not specify the order of the system calls in the policy file. If an exploit uses allowed system calls in an untypical order, if the policy does not limit the system call parameters enough, or if the exploitable problem lies in an application executed by this application, a penetration may still occur unnoticed, i.e. we get a false negative result.
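This unordered policy style can be sketched as a per-application whitelist of system calls with parameter constraints; note that, exactly as criticised above, call ordering is not checked. The sendmail entries are hypothetical examples, not the published policy.

    # Sketch of an unordered system call policy with parameter constraints.
    POLICY = {
        "sendmail": {
            "open": lambda path, mode: path.startswith("/var/spool/mqueue/"),
            "bind": lambda port: port == 25,
        }
    }

    def allowed(app, syscall, *args):
        check = POLICY.get(app, {}).get(syscall)
        return check is not None and check(*args)

    print(allowed("sendmail", "open", "/etc/shadow", "r"))  # False -> alert
    print(allowed("sendmail", "bind", 25))                  # True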

A refinement of the method above was made a couple of years later [KRL97]. There a formalism for specifying the program specific policy (trace policy) was developed. The policy language, a parallel environment grammar, is able to express parallelism and sequencing. As above, it also has semantic devices to test some input values, such as the directory where a file or directory is to be created. The authors mention that the system was able to detect simultaneous edits of the same file by two parallel processes in their tests, but do not give specifics.

Sekar et al have also modified this method [SBS99, BCS+00]. They defined a language, the Auditing Specification Language (ASL) or Behaviour Monitoring Specification Language (BMSL), for application behaviour policy specification and integrated it with a compiler to produce an extended finite state automaton (EFSA) C++ class, from which an application specific system call detection engine is built. This engine then runs as part of the operating system kernel, intercepts system calls and uses the automaton to check them against the defined policy. In actual tests the system call monitoring system (SMS) caused 1.5% overhead. The overhead increases with the number of system calls in the application: an increase of 2000% in system calls caused the overhead to rise to 5% [BCS+00].

Calvin Ko has continued to work on specification based IDS with his team. Their kernel-level software wrapper system can also be used for specification based IDS, since the wrappers are defined in a wrapper definition language (WDL) which can be used to specify the allowed behaviour of the wrapped application [KFBK00]. This wrapper definition can also access system call parameters and is able to impose semantic restrictions. The performance decrease from the specification based wrapper alone was 5-7% in tests.

All of these systems require some kind of security monitor, analyser or wrapper to detect behaviour outside the specification, as does the work of Herrmann and Krumm [HK01]. They have a formally defined and simulated security state automaton with state dependent security constraints integrated with the container (i.e. wrapper). The security constraints are expressed with a formal specification technique, cTLA, which extends the Temporal Logic of Actions (TLA).

The basic implementation has Java application beans, adapters, observers and a monitor. There is an adapter wrapped around each bean, and the bean interfaces with the environment only through the adapter. Thus all the bean interactions can be observed and, if necessary, the bean can be sealed if anything attack-like occurs. The observers check that the bean complies with the cTLA process specification. If the adapter detects an interface event, it forwards it to the observers by calling the corresponding action methods. If all relevant observers signal the compliance of the event, the adapter really transfers the interface event. If, however, one observer refuses its corresponding action, the adapter seals the bean and reports the violation to the monitor. The monitor interfaces with the system administrator. The observers also link to the trust manager, which determines the intensity of security checks by the wrappers depending on the trust values carried by the scrutinised components [HK01].

The work also extended the Java security manager, since beans can also access system resources via standard streams, thus bypassing the adapter. However, resource access this way is controlled by the Java security manager, and the manager was modified to report standard stream accesses.

Static Analysis

Static analysis is a form of testing that analyses the application source code or binary to detect errors or problems. This technique has also been used for application IDS.

Wagner and Dean used source code to create a model of the system call sequences [WD01]. The state machine is created from the source code, so this technique assumes its availability. Giffin et al, on the other hand, use the system binary to create the state machine [GJM02]. Both of these are in essence forms of static analysis. This work is differentiated here from the state automata discussed in Chapter 4.4.2, since the state machines here are generated from a specification: the source code or the application binary.

The work of Wagner and Dean suggests two non-trivial techniques for creating the state machine: a call-graph method, which creates a non-deterministic finite automaton (NDFA), and an abstract stack method, which creates a non-deterministic pushdown automaton (NDPDA). In this system there are no false alarms, since the state machine will accept all dynamically possible execution traces. On the other hand, the call-graph system will also accept impossible paths. The real problem with these methods is that they are non-trivial to implement and can incur a severe run-time overhead (in the order of minutes) [WD01]. However, the work on static analysis of binaries achieved significantly better efficiency (a run-time increase of 13%). The reason for this was identified to be differences in operating system library functions (Linux vs. Solaris) [GJM02]. This implies that static analysis is not as universal in practice as it is in theory.
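Run-time checking against a statically derived model can be sketched as simulating a non-deterministic automaton over the observed call stream; a trace is flagged as soon as no automaton path explains it. The states and transitions below are a toy stand-in for what the call-graph construction would extract from real source code.

    # Toy NDFA simulation over an observed system call stream.
    NDFA = {  # state -> {observed system call -> set of successor states}
        "f_entry": {"open": {"f_body"}},
        "f_body":  {"read": {"f_body"}, "close": {"f_exit"}},
    }

    def accepts(trace, start="f_entry"):
        states = {start}
        for call in trace:
            states = set().union(*(NDFA.get(s, {}).get(call, set()) for s in states))
            if not states:
                return False  # no path in the model explains this call: alert
        return True

    print(accepts(["open", "read", "close"]))  # True
    print(accepts(["open", "exec"]))           # False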

Similarly, a set of kernel-supported waypoints can be created from the source code by static analysis. These function entrance and exit waypoints are then used for application control-flow monitoring. When the application is run, a push-down automaton of the entrance/exit waypoints is created. An access monitor uses this push-down automaton for control flow monitoring and, more novel, for permission monitoring. During the static analysis of waypoint generation, the set of system calls (i.e. permissions) for each function is also recorded, and the access monitor can ensure that system calls invoked in the context of a function appear in its permission set. From the analysis point of view this system has the strength that it can detect global mimicry attacks and return-into-others impossible path attacks [XDC04]. While Xu et al classify their work as anomaly detection, their method of using static analysis for waypoint generation is more specification based.

However, local mimicry attacks are not detected by the waypoint method proper, although the authors have deployed other means (interface randomisation) to counter this problem. The performance tests were also somewhat problematic. When all system calls were monitored, the execution time of an ordinary Unix program increased 3-5 times, with most of the increase in time spent in kernel mode. By monitoring only dangerous system calls, the overhead was reduced by 16-70%.


4.5 Application IDS Reaction Models

There has been very little research into application IDS reaction systems, apart from the traditional "send an alert" feature. This is surprising, since application level detection mechanisms, which are more or less integrated into the application, could also signal the application to change its behaviour when a problem is observed. Some work exists, though.

One system call level example is the pH system [SF00, Som02], where the mechanism is able to delay all subsequent system calls at kernel level by a configurable time factor if a problem with system call sequences is observed. Additionally, there is a separate configuration item for the execve() call delay threshold, which configures the suspicion threshold at which the execution of other programs via the monitored program should be blocked. This latter option is very useful against buffer overflows, which try to execute a shell. Although the performance hit of the pH system is several microseconds per system call, the overall performance penalty for real applications was between 4-10% in tests.
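The pH reaction can be pictured as an exponential delay keyed to the recent anomaly count, with a separate suspicion threshold for blocking execve(). The constants below are illustrative, not pH's actual defaults.

    # Sketch of a pH-style graduated reaction to anomalous call sequences.
    import time

    DELAY_FACTOR = 0.01             # seconds; configurable scaling factor
    EXECVE_SUSPICION_THRESHOLD = 8  # separate threshold for blocking execve()

    def react(syscall, recent_anomalies):
        if syscall == "execve" and recent_anomalies >= EXECVE_SUSPICION_THRESHOLD:
            raise PermissionError("execve blocked: suspicion threshold exceeded")
        # Delay grows exponentially with the number of recent anomalies.
        time.sleep(DELAY_FACTOR * (2 ** recent_anomalies - 1))

    react("read", recent_anomalies=3)  # delayed by 0.07 s, then proceeds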

Another example is in the context of an auditing subsystem in the Java virtual machine. There is a separate response module that can be used to react to detected attacks in a specific environment. The response module initiates an appropriate response action when an attack or threat coming from a Java application is detected. The module sends a request for a particular response action to a dedicated thread in the JikesRVM [SKV03].

The specification based mechanism by Bowen, Sekar et al [SBS99, BCS+00] also has reactive capabilities. The policy defined in their language (ASL, BMSL) can include directives to delay execution, change the application scheduling priority, change the application environment, execute other monitor applications etc. And since the system call interceptor and the policy interpretation program function at kernel level, they are able to enforce these directives.

4.6 Intrusion Detection for Trust Management

As discussed previously in Chapter 3, this work attempts to automate trust-guarded authorisation, i.e. authorisation decisions based on trust evaluation (see Figure 3.6). What is the need for monitoring and intrusion detection in this scheme? Depending on the result of the evaluation, there are three different trust cases that need to be taken into account:

1. Successful trust-guarded authorisation, where the peer trust score is above the threshold and the action is allowed to proceed. However, we need to be aware of the possibility of fraud, where our trust is abused. From the negative point of view, this is the case of potential mistrust - misplaced trust.

2. Untrust, where the peer trust score is not above our co-operation threshold. Semantically this means that some trust exists, but in this particular case not enough to proceed with the collaboration.

3. Distrust, where the peer trust score semantically means that the enterprise has negative expectations of the peer conduct.

Intrusion detection and other security monitoring mechanisms surveyed in this chapter give us tools to handle these trust management scenarios. The distrust case is the easiest: distrusted peers must be prevented from connecting to the enterprise service. If this is not possible, for example because the application or service needs to be open to all users on the Internet, any suitable intrusion detection or prevention mechanism can be deployed to detect and reject attacks. This is a typical scenario for any internet based business and very much a standard practice in today's business world.

Untrust is middle ground between distrust and mistrust. Depending on the organisation's trust policy, untrusted peers can be handled similarly to distrusted peers, or be given some other form of service that is within the peer trust score limits. In the latter case, the untrusted class of peers is elevated to the trusted class, but for some other, perhaps downgraded, service. The trust management issues are then similar to the trusted class and no special treatment is required.

The main interest of this thesis lies in the first case with trusted and authorised peers, where mistrust must be managed. The trust management system evaluation criteria presented in Chapter 3.6 included requirements for peer behaviour monitoring and feedback to trust threshold calculations.

From the survey in this chapter, several potential general approaches for this task can be preselected. First of all, the requirement for feedback to trust calculations means that the monitoring system needs to be active in its reaction model, i.e. an action is taken if something anomalous is observed. Whether the reaction is immediate or delayed depends on the needs and capabilities of the rest of the trust management system and the application. However, for collaborating community applications, where peers may dynamically interact with the service at any time, immediate reaction is best.

Secondly, it is clear that to separate abnormal, mistrusted behaviour from normal peer behaviour, mistrust monitoring requires some form of service level understanding: what is the expected format and semantic content of the request, and what is the expected answer? Therefore a form of application intrusion detection is needed.

From the analysis method point of view, misuse detection methods are not useful alone here, since detecting known and typical attacks is not enough. Similarly, execution monitoring may not help us detect misuse if the attack proceeds very much like a normal request, as in a mimicry attack. The most useful analysis tools for mistrust monitoring would then seem to be based on specification or input semantics. These can of course be augmented with other analysis methods.

Interestingly enough, these approaches have not been researched very much compared to misuse detection. Research is especially lacking on the semantic front. There were very few ideas that used any kind of semantic information, either in aiding the analysis process or in defining the normal application behaviour. For example, practically all the system call analysis mechanisms surveyed discarded the call parameter information. Exploring and developing methods that use semantic information could yield new and fresh outlooks in the area of application intrusion detection.

As for the sensor data collection method, any method that gives the enterprise access to either the application behaviour for the specification analysis or to the input data semantics is suitable, taking into account the earlier requirement that the system should be as application independent as possible. The TuBE system, designed to implement the trust-guarded authorisation system within these constraints, is introduced in the next chapter.


Chapter 5

The TuBE Approach to Trust Management

In the previous chapters, two key concepts, trust and application behaviour monitoring, have been discussed in detail. How do these concepts relate to our goal of protecting enterprise collaboration? This thesis describes a system that binds these concepts together to provide a unified framework, well grounded in trust theory, that facilitates first trust-guarded authorisation and second the detection of mistrust scenarios. Hence, the TuBE framework (TuBE is shorthand for Trust based on Evidence) flexibly protects the community member in an enterprise collaboration scenario.

In the following discussion, client refers to a piece of software that connects to a service or a single application in the collaborative community. Thus a client is an agent for a peer.

The structure of this chapter is as follows. First the TuBE high level concept and components are introduced, then the technical environment is described. The third section goes into design and architectural details, where the high level concept is bound to the technical environment. Finally, the concept is evaluated against the requirements presented earlier in this work.

5.1 The Concept

To achieve the goals set previously, several separate components are needed. First of all, to utilise trust in an enterprise, an underlying trust model is required. In Chapter 3 it was discovered that trust is built using various pieces of information, so several trust support data sources and a component to combine all that data into information suitable for trusting intention assessment are necessary. Second, the trust-guarded authorisation function utilises this information from the trust model to grant or deny client authorisations. The third key part is the monitoring mechanism in the form of a mistrust detection function and the subsequent information feedback to the trust-guarded authorisation, both via the trust model data sources and as immediate reaction. These components of the TuBE system are depicted in Figure 5.1 and their general functionality is discussed below.

5.1.1 The Trust Model and Supporting Infrastructure

In the TuBE system, trust is defined as the extent to which one party is willing to participate in a given action with a given partner in a given situation, considering the risks and incentives involved. This definition falls under the category of trusting intention in the trust type classification discussed in Chapter 3.2. Since the TuBE system is designed to automatically act on the basis of this intention, it must translate the intention into trusting behaviour. Trusting belief and context are utilised to form the trusting intention. Trusting belief consists of the action and trustee specific information, and potentially some default values, i.e. dispositional trust.

Figure 5.1: The components of the TuBE framework.

In the TuBE trust model, the key element is a relation, an n-tuple, between peers, actions, and certain properties of the environment. The trust-guarded authorisation, which is the trusting behaviour in TuBE, involves finding a balance between the uncertainty and risk present in the system and the potential benefits of the interaction.

The decision support facts in the TuBE trust model form a 7-tuple of trustor, trustee, action, action risk, action importance, trustee reputation and trustor context. Using the vocabulary developed in Chapter 3.4, the TuBE model is identity-aware, action-aware, business value aware, context-aware and history-aware. Considering the goals of supporting a wide range of services and trust-guarded authorisation, the presence of trustor, trustee and action was obviously necessary. Of the business value factor group, risk and importance were chosen for experimentation, although any other combination could have been selected as well. History-awareness is necessary to propagate the experiences from the monitoring function to the trust evaluation. In the TuBE system, the local peer reputation is used as the history component; no third party recommendations are used, for the sake of simplicity. Finally, the presence of context is necessary as a link to the system trust component and situational awareness, because the TuBE system is developed to be used in a highly dynamic and automated environment. A high degree of automation benefits from the flexibility provided by context.

In Chapter 3.5 the theoretical trust use model was shown in Figure 3.1 and its applicability to computing authorisations was developed in Figure 3.6. The actual realisation of the trust formation process in the TuBE model is shown in Figure 5.2.


Figure 5.2: The trust model components in TuBE.

The TuBE model uses a threshold computation as the trusting behaviour, i.e. the trust-guarded authorisation decision. The underlying belief formation process, which manifests itself in some kind of peer trust score, gathers data from various sources for the 7-tuple. The trusting belief is formed by finding the peer related information using the peer identity, and finding the action risk and importance values using the attempted action type. If no reputation for the peer is available, the initial value for dispositional trust is used.

This differs somewhat from the models presented earlier in Figures 3.1 and 3.6. These models had two ways for the action type to affect trusting behaviour: one was through the situational decision to trust, where peer identity was not used, the other through trusting belief, i.e. the peer trust score, when the peer is known. Since TuBE assumes the availability of a persistent peer identity, the action type influences the trust valuation only through the peer trust score, and the direct route is removed as unnecessary. Dispositional trust, i.e. initial values, still has two separate mechanisms. One, as explained above, is the peer related initial value used when no reputation is available. The other is the threshold value for trusting in the threshold comparison calculation. The initial value selected for the threshold is a good way of expressing the dispositional trust attitude of the organisation, related not to specific peers but to co-operation in general.

The focus of this work has been on how to achieve trusting behaviour using the underlying trust concepts. Due to lack of resources, some trust types described in Chapter 3.2 have not been integrated into this work. The trust types the TuBE framework does not currently support are the situational decision to trust and non-context system trust. The situational decision to trust is the trust type that ignores the peer identity and only considers the situation and potential context. This type was not directly considered in the TuBE system, since the aim was to investigate the effect of local peer reputation information and feedback on the trust-guarded authorisation. Hence the focus has been on uniquely identifiable and authenticated peers. Authorisations guarded by situational trust may be an interesting future research direction for people interested in anonymity and privacy in collaborative systems.


The non-context aspects of system trust may also be an interesting research question, however not within the scope of this thesis and the current TuBE framework. Non-context system trust is the belief in the external and impersonal structures, such as laws and contracts, that facilitate the success of the co-operation. In a community setting, this is an interesting question, since much of the co-operation success depends on the community participants. For future researchers I leave the question: how can the system trust in the community structure and operations be modelled and used?

Despite the current trust type focus, the aim has been to create the TuBE framework in a way that does not prevent its extension to accommodate other trust types. Additionally, the framework has been designed so that the trust decision support fact sources are encapsulated, making it relatively simple to change the trust model to consider different input factors.

To support the model described above, technical interfaces to various trust decision support fact sources are required. In the TuBE system these sources are:

• The reputation or history service. This is the subsystem linking the peer behaviour experiences to the trust-guarded authorisation. Observations made by the behaviour monitoring system change the peer history, which in turn affects the trust valuation of future requests. If necessary, this local history mechanism can be linked to a community reputation system, making the trust model 3rd party aware. Although the current focus is to implement only the local history functions, in the following chapters this service is called the reputation service for generality.

• The context service or services. In the TuBE trust model we acknowledge the need for the trusting intention evaluation to be dependent on the system situational normality, i.e. context. The context information made available can cover data ranging from the enterprise financial situation to the IT systems' security status.

• The action business value service. Each service or application has a set of actions, such as "query price" or "buy", that a client can request. Each action has a set of business value data associated with it. In the TuBE trust model, this data is "action risk" and "action importance". These values must be declared and stored when the TuBE system is deployed to protect the service.

These three components also require management interfaces to bootstrap the data and, in the context case, mechanisms to define context sources and their effect on the trusting intention. To limit the scope of this work, management interfaces are not discussed in more detail.

5.1.2 The Trust-guarded Authorisation

Trust-guarded authorisation produces the automated trusting behaviour service. Thus a component analysing the client service request, evaluating the trusting intention for that peer and converting this into an authorisation is required. The result of this analysis is the decision whether the trust in that particular client–request combination in this particular situation is sufficient to let the service request be processed further.

For the authorisation decision, an underlying trusting intention calculation is required. In the TuBE system this is a simple function of the data received from the trust model compared with a threshold value. The three trust data values used for this function are action risk, action importance and trustee reputation.

Additionally, these three values are augmented with context. Theoretically, the situational normality side of system trust should influence just the trusting intention, i.e. the threshold comparison. However, in the TuBE system it was decided to experiment with a more fine-grained context that can directly influence the intention calculation components. Therefore, the TuBE context changes these values directly. Compared to the theoretical framework, the context data is thus given a more fine-grained effect: instead of one adjustment point in the intention calculation, three adjustment points are used. It is intended to investigate the effects of this change in more detail in the functional TuBE system.

Figure 5.3: The conceptual model of calculating trusting intention as a threshold comparison.

These three augmented values are then processed and compared to a threshold value. If the comparison is positive, the trusting intention turns into automated trusting behaviour, i.e. authorisation is granted. The conceptual model of the calculation is shown in Figure 5.3.
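As one hypothetical instantiation of this comparison, the sketch below weighs reputation against risk, treats importance as an incentive, and lets context filters adjust the three inputs directly, as described above. The weights, the linear form and the example values are assumptions, since the actual function is left to policy (see below).

    # Hypothetical trusting intention calculation as a threshold comparison.
    def trusting_intention(reputation, risk, importance, context_filters,
                           threshold, w_rep=1.0, w_risk=1.0, w_imp=0.5):
        # Context filters adjust the three inputs directly (three adjustment points).
        for f in context_filters:
            reputation, risk, importance = f(reputation, risk, importance)
        score = w_rep * reputation - w_risk * risk + w_imp * importance
        return score >= threshold   # True -> authorisation is granted

    # Example context: while under attack, every action is twice as risky.
    under_attack = lambda rep, risk, imp: (rep, 2 * risk, imp)
    print(trusting_intention(reputation=10, risk=3, importance=4,
                             context_filters=[under_attack], threshold=5))  # True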

The calculation function and threshold value should be settable by policy. There may be organisations that want to emphasise the local peer reputation more, and in some cases the attempted action risk and importance may weigh more in the trust evaluation process. Implementing and experimenting with different functions should give interesting data on the overall system behaviour in different usage scenarios.

Since clients need to be uniquely identified, some form of peer authentication is also required. It is assumed that such a system is in place and provides TuBE with unique peer identification, but the TuBE framework does not itself include an authentication framework.

The authorisation function also requires administrative support. This includes mechanisms to write trust decision policies, dispositional trust defaults or initial values, and the threshold value itself.

5.1.3 The Monitoring Service and Feedback

The introspective peer behaviour monitoring is done in TuBE with a mistrust detection system. Mistrust detection aims to find out whether the trust, in the form of an authorised action, was misplaced. Therefore TuBE deploys an application level intrusion detection service, which forms the TuBE introspective monitoring mechanism. It is necessary to monitor the client interaction with the enterprise service to determine whether the peer behaviour, as seen in the requests, is anomalous or not. The tests to experiment with and select the most suitable algorithms for this task are left for the TuBE implementation phase.

If a problem is detected, an immediate reaction is necessary. In TuBE the reaction is twofold. First, a request is made to the authorisation component to stop or limit the interaction immediately. Second, a message changing the peer reputation in the enterprise is sent to the reputation mechanism. Since the reputation score is used by the trust-guarded authorisation, subsequent authorisation attempts are influenced by this changed reputation score and possibly denied.


5.2 Technical Environment

To ground the TuBE concept and the theoretical background in the real world, it is assumed that the technical environment of the enterprise collaboration and business communities is Web Services. Web Services is also the currently preferred platform for Service Oriented Computing.

A Web Service is a software system designed to support interoperable machine-to-machine interaction over a network. It has an interface described in a machine-processable format (specifically WSDL). Other systems interact with the Web Service in a manner prescribed by its description using SOAP protocol messages, typically conveyed using HTTP with an XML serialisation in conjunction with other Web-related standards [BHM+04].

The extensible markup language, XML, used in the Web Services environment has gained a strong position as a general content encoding format. The key ideas of XML are support for a wide variety of applications (i.e. extensibility), human-readability and easy development of XML processing applications [Wor04].

The benefits of using XML in a business environment are the following [MG98]:

• Businesses can describe services in a manner that can be widely understood.

• One set of documents, forms and messages can be exchanged by businesses with different internal business systems.

• Errors in re-keying data are reduced because data can be transformed through gateways.

• Frequent changes in business process can be handled without substantial engineering cost.

• It leverages investment in legacy systems and can be used with the latest Internet technology.

Since XML is a general content encoding format, it can also be used in a networking setting to describe protocol messages. It has rapidly become the most widely used wire format in new messaging systems. The most widespread example of XML messaging is SOAP, the Simple Object Access Protocol, which is a lightweight protocol intended for exchanging structured information in a decentralised, distributed environment [Wor03]. SOAP messages are defined as XML infosets. SOAP enables the exchange of SOAP messages using a variety of underlying protocols. The description of how SOAP messages are transported on top of another protocol is called a binding. A typical SOAP transport protocol is HTTP.

The effect of XML on businesses, in the form of spontaneous commerce, business automation and virtual enterprises, has been understood for some years now [GTM99], but the technology has not been quite mature. Web Services have in recent years emerged as the pervasive glue that will enable applications with varying implementation platforms and internal messaging techniques to interoperate. In this sense the Web Services initiative is an unqualified success. However, the Web Services world is far from finished. While companies currently develop Web Services applications for business purposes, major issues have been left unexplored in the specifications or are still in the development phase.

In the collaborative communities interoperability stack discussed in Chapter 2.1, SOAP, XML, WSDL and other associated standards are in the technical layer, on top of which community services and processes can be built.

In the service-oriented computing environment, basic services can be combined into composite services. Combining the services in a Web Services environment may be done utilising the Business Process Execution Language for Web Services (BPEL4WS) [BIM+03], which provides a language for defining business processes utilising the component services. A choreography defines the sequence and conditions under which multiple cooperating independent agents exchange messages in order to perform a task to achieve a goal state. W3C has defined the Web Services Choreography Interface (WSCI) [W3C02] and the accompanying Web Service Choreography Description Language (WS-CDL) [W3C04b] for defining community choreography. An orchestration defines the sequence and conditions in which one Web service invokes other Web services in order to realise some useful function [BHM+04]. In practical terms these define the externally visible functionality of a community. BPEL4WS is an orchestration language, but it could be extended to cover inter-organisational issues as well.

In addition to being functionally important, monitoring of the choreographies and business processes is also important security-wise. For example, denial-of-service attacks targeted towards one or several participating systems are very problematic in a business process, and they can be observed with suitable monitoring tools. When a problem of this type is observed, the community can possibly reorganise itself around the observed problem.

One of the key problems is that monitoring functions are not specified yet at any of the technology, service, community or business levels. The specifications above do not include any mechanisms for monitoring. Therefore this thesis starts by offering a solution to the service and technology layers.

Some other unfinished or work-in-progress areas are trust and security. Specifications do exist for some simple technical tasks, such as secure messaging, but more advanced security mechanisms, for example on the collaboration side, are unspecified. This lack of a complete security framework causes problems for organisations planning to deploy Web Services applications on a major scale. Therefore it is also beneficial to the Web Services community to describe new and emerging solutions, such as introspective monitoring, to aid in the security and trust framework development.

5.3 Design and Architecture

From the overall components depicted in Figure 5.1, the technical architecture shown in Figure 5.4 was developed. Each of the components is discussed below in more detail.

5.3.1 Trust-guarded Authorisation and the Wrapper

The TuBE framework has been developed for an environment where the participants in a collaborative community receive and offer services from a (set of interconnected) business applications via a Web Service interface. This interaction and its effect on the business application need to be captured to detect security problems at the application level. As discussed in Chapter 4, this event capture can be achieved by listening to one of the interfaces, by hooking into the operating system or application platform (such as the Java VM), or by modifying the application.

Because the application description in the Web Services Definition Language (WSDL) is the only common infrastructure element in Web Services applications, in the TuBE project it was decided to intercept the protocol messages, which are defined in the WSDL description. The interception happens with the help of a wrapper, which at its core is a simple WSDL proxy. Each network interface, or TCP port, of a particular service has its own wrapper. The trust-guarded authorisation is achieved in the wrapper by making decisions on whether the incoming message can be allowed to be processed or not. In addition, the wrapper interacts with the monitoring intrusion detection system (IDS) by providing it a copy of the traffic. When the intrusion detection system observes an anomaly, it may want to stop or restrict a particular interaction, and the wrapper must be able to process those requests.


Figure 5.4: The components and data flow of the TuBE framework.

Since the wrapper must be application-specific, the concrete wrapper is specialised from a general wrapper model included in the TuBE platform by an application-specific WSDL description. Due to this specialisation based on the available WSDL definition, this platform core component remains application platform independent, unlike in some other wrapping systems [HK01]. This approach is also legacy application friendly, since application source code is not needed for analysis or modification.

This interception style has the drawback that the application cannot be observed directly. The view of the application is strictly limited to the communication between the peer and the server, and there are no means for observing the internal state of the application from this level. It would be beneficial to have this type of internal information, so platform specific interception possibilities are left for future expansion.

The wrapper also feeds the message data to the application intrusion detection mechanism. Additionally, it takes feedback from the intrusion detection mechanism when the IDS has observed an anomaly and wants a particular interaction to be stopped or restricted.

The wrapper is responsible for evaluating the trust status of the peer making the request by utilising the trust model presented in Chapter 5.1.1. This evaluation is then translated into an authorisation decision. Since a trust query must block the peer request while the decision is being made, the trust decision is a very performance-critical component. Therefore, it was decided to perform the trust decision in the wrapper instead of a separate decision component. This includes first calculating the peer trust score (trusting belief), then utilising this score in a threshold comparison (trusting intention) with context information (system trust), and finally making the authorisation decision (trusting behaviour).

To make the trust-guarded authorisation decision, the wrapper needs to have all the supporting data, i.e. the trust tuple values, available. As this data is volatile, it is placed in a cache hashed by the peer identification and action. Since frequent peers tend to stay frequent, the cache should purge the least recently used (LRU) data first. This policy can be implemented with a two-space or semi-space algorithm, where client data is moved from Old-space to New-space whenever it is used. If the cache needs to be purged, the Old-space entries are first removed and all of New-space is flipped to Old-space. If the cache does not contain the support data needed for a trust decision, the wrapper must request more data from the trust data broker. The decision may therefore be delayed.
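The two-space purge policy can be sketched directly: entries are promoted from Old-space to New-space when used, and a purge discards Old-space and flips New-space into Old-space, approximating LRU without per-entry bookkeeping. The entry shape below is illustrative.

    # Sketch of the two-space (semi-space) trust data cache described above.
    class TwoSpaceCache:
        def __init__(self):
            self.old, self.new = {}, {}

        def get(self, key):
            if key in self.new:
                return self.new[key]
            if key in self.old:
                self.new[key] = self.old.pop(key)  # promote on use
                return self.new[key]
            return None  # miss: the wrapper must query the trust data broker

        def put(self, key, value):
            self.new[key] = value

        def purge(self):
            self.old, self.new = self.new, {}  # drop Old-space, flip New to Old

    cache = TwoSpaceCache()
    cache.put(("peer-key", "buy"), {"reputation": 7, "risk": 2, "importance": 5})
    cache.purge()                          # entry survives one purge in Old-space
    print(cache.get(("peer-key", "buy")))  # promoted back to New-space on use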

The actual threshold formula for calculating the trusting intention, and also the calculation of the peer trust scores, are isolated into redefinable functions in order to support alternative calculation methods for other trust models. The specific function translating the values in the trust model tuple into a decision is specified as a policy element by the administrator. Experiments or simulations would be useful to determine what kind of functions work best in community settings and whether the service characteristics bring any requirements for the calculation.

5.3.2 The Trust Data Broker

The TuBE trust model requires information from several sources. Action risk and importance values are seldom changing data that come from an administrator-configurable GUI application. Trustee reputation is received from the reputation subsystem. Local context data may be received from multiple context subsystems. Finally, the combined data should again be passed to multiple wrappers, one for each instance of the service.

The broker stores all information for the trust decisions as maps (peer_identification, action, reputation) and (action, risk_formula, importance_formula) in a database. 'Peer identification' is a unique trustee identification, such as a public key. 'Action' is the name of the attempted action in the WSDL description. The risk and importance formulae are discussed later in relation to the action subsystem.

The changes received from the data source components trigger different operations in the broker. Reputation changes as well as action risk and importance changes modify the corresponding values in the database; a reputation change is peer-specific, while risk and importance values are action-specific.

Context changes are formulae that may potentially change every value in the database. Therefore, it is more efficient to keep the change as a context filter instead of modifying every data item in the database. Context filters are applied when data is sent to a wrapper, so that if the data matches the filter, the data is changed according to the filter instructions. Each context source has one context filter, and as there may be several context sources, these filters can be chained in the data broker. For example, a context filter could state that all actions are twice as risky as normal if the system context is "under security attack", and the filter would then double all action risk values before sending them to the wrapper. The concept is illustrated in Figure 5.5.
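Filter chaining in the broker can be sketched as a list of (match, transform) pairs applied in order before data is sent to a wrapper; the record shape and the example context are assumptions.

    # Sketch of chained context filters in the trust data broker.
    def apply_filters(record, filters):
        for matches, transform in filters:
            if matches(record):
                record = transform(record)
        return record

    # "Under security attack" context: every action is twice as risky.
    under_attack = (lambda r: "risk" in r,
                    lambda r: {**r, "risk": 2 * r["risk"]})

    row = {"action": "buy", "risk": 3, "importance": 5}
    print(apply_filters(row, [under_attack]))
    # {'action': 'buy', 'risk': 6, 'importance': 5}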

While wrappers are application specific and each interface requires its own wrapper instance, one trust data broker can serve several wrappers for the same application.

Brokers can also serve several different services within the enterprise which use the same trust valuation. In practice this would mean that if a peer misbehaves with one service, its trust valuation would decrease for other services within the enterprise as well. This would be a good experiment for the actual framework. However, for scalability and fault tolerance reasons there should be several brokers in an enterprise setting.


Figure 5.5: The effect of context filters on the trust calculation.

5.3.3 The Monitor

The intrusion detection system receives intercepted peer message data from the wrapper and must decide whether the peer requests are against the system security policy. As discussed in Chapter 4, there are three basic philosophies of intrusion detection: misuse detection, anomaly detection and specification based detection. The misuse approach relies on patterns (i.e. signatures) of known attacks for security problem detection. The specification based approach is interested in finding out whether the system behaves outside its specification. Anomaly detection compares the current observations to a known good behaviour profile and raises an alarm if a sufficient difference between them is found.

In the TuBE project, anomaly detection was selected as the main behaviour monitoring approach from the palette of intrusion detection choices. The reasons for this are threefold. First of all, there are not very many known problems in the generic Web Services application environment for effective misuse detection. Secondly, a good application level specification is time-consuming to create and monitor [KRL97, WD01]. Thirdly, the aim of this work is to be as application independent as possible, so deep application level specifications do not fit the requirements.

In anomaly detection there are several possibilities for the particular method. As discussed in Chapter 4, this general approach can be applied to monitoring the execution of the application, for example in the system call or remote procedure call context, or to detecting anomalies in data semantics. In the TuBE project especially the semantic anomaly detection is of interest.

The semantics of an action can be divided into two broad areas: control flow semantics and data semantics. Data semantics is straightforward: what is the meaning of the input parameters and the action output? In some cases it is also interesting to know the meaning of the data the action internally handles and stores to produce the output from the input. The control flow semantics is more complicated and is interested in the question of what the action accomplishes. In a local business process an action may have a simple meaning, but a series of actions, especially in an external business process within a community, may have a different combination semantics. For example, in a local business process an action of reading a single record is just obtaining user information, but making a thousand similar reads about different users may indicate some kind of data harvesting operation, which probably should not be allowed for privacy reasons.

The Web Services interface is given in a WSDL description, which does not contain explicit semantic information but provides information about the Web Services operations, such as message components and parameters. However, this description can be augmented with semantic information, such as ontological annotations and action preconditions and effects [SVSM03]. This type of augmentation allows control flow semantics to be embedded into a WSDL description. Additionally, some preconditions or limits should be placed on the data as well. This is currently an uninvestigated area.

The TuBE project focus is on the semantics of data and anomaly detection in the data aspects of the Web Services interaction. The control flow semantics is also interesting, but that area is examined in more detail in the web-Pilarcos project. The TuBE framework also has provisions for changing the analysis algorithm, provided that the algorithm uses the message data for analysis.

The TuBE architecture has been specifically designed so that the application IDS engine is decoupled from the trust decision component. This means that any suitable intrusion detection methodology can be used as long as it can utilise the data from the client–server message exchange, which is captured by the wrapper. For example, specification based network level intrusion detection is quite possible if a full specification is available for the application message interchange. The TuBE framework includes mechanisms to redefine the IDS analysis class. The analysis can also be made by several IDS components, each with a different methodology.

5.3.4 The Reputation Subsystem

The reputation mechanism stores and manages information about peers' past behaviour, aggregated into reputation. Locally, the reputation information (or history in the trust vocabulary) is updated through first-hand experience of how a particular peer has behaved towards the system. On the community level, members can share their experiences and provide second-hand information about the peer as well. This makes it possible to receive reputation information even prior to making initial contact with a peer.

In order to successfully share reputation information, there must be a commonly agreed protocol for the reputation exchange, and a common ontology and semantics to support the interpretation of the shared information. This may prove a challenge if the members of the reputation-exchange community have very different action sets associated with their experiences, possibly forcing the community to share only generalised reputation information instead of action-specific items of experience. This area is left for exploration in future work; the TuBE framework assumes for now that such a reputation mechanism is in place.

If there is a link to a community wide reputation mechanism, a separate interface is required to communicate with it. The interface receives reputation change data from the IDS, transforms the data into a form with commonly accepted syntax and semantics, and sends it to the reputation system for sharing. This same component also publishes the peers' reputation information to the trust data broker. Defining this interface is an administrative task at TuBE deployment time.

There are two choices in sending experience data from the IDS to the reputation system: one is to send only negative reputation reports, the other is to send positive reports as well. The negative-only style would be more efficient, since negative reports, meaning alerts from the intrusion detection subsystem, are probably very rare. On the other hand, if there is no way to ever increase the participants' reputation in the community, the reputation system would eventually become challenged by ever lowering reputation scores. Therefore, it was decided to send positive reports as well in the TuBE framework. However, this is something that needs to be experimented with: how do the changes in the peer reputation affect the use of the service?

If the contacting peer has no reputation or history information in the system, the TuBE system uses an internal dispositional trust default value in building the trusting belief for that peer. This can happen when the TuBE system first becomes operational and the whole reputation database is empty, or when there is a connection with a peer with whom there has been no interaction before. After the initial encounter there will be some behavioural history so that a reputation score for that peer can be determined.
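A minimal sketch of this fallback, assuming a simple in-memory score table; the class and field names and the integer score type are illustrative only:

```java
import java.util.Map;

// Hypothetical reputation lookup with a dispositional trust default:
// peers without any behavioural history fall back to the configured
// default score instead of being rejected outright.
final class ReputationLookup {
    private final Map<String, Integer> scores;   // peer id -> reputation
    private final int dispositionalDefault;      // used when no history exists

    ReputationLookup(Map<String, Integer> scores, int dispositionalDefault) {
        this.scores = scores;
        this.dispositionalDefault = dispositionalDefault;
    }

    int reputationFor(String peerId) {
        return scores.getOrDefault(peerId, dispositionalDefault);
    }
}
```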

5.3.5 The Action Subsystem

Since trusting belief in the TuBE model depends on the attempted action and the risk and importance factors that are attached to the action, there needs to be a subsystem that contains this information. The action information is stored in an application specific database.

The service actions and the associated values are defined by an administrator via a user interface with the help of the service's WSDL description. Additionally, the community action information is described during the community role realisation. The risk and importance values can also be changed during the application run time via the management interface.

The action risk and importance values are integers. The idea is not to define any real semantics for these values, other than the basic numerical order, i.e. a larger number means a larger risk or importance. This enables the administrator to use any kind of tool or mechanism, such as rank ordering or monetary risk analysis, for determining the correct risk and importance values. The effect of different risk analysis methods on the actual behaviour of the system is an interesting research question for the future.

Since the risks are not necessarily the same even for all similar actions, such as “buy”, there needs to be a parameter mechanism for relating the risk to the monetary value or other numerical parameter of the action. The parameter information is available since the action definitions must be done with the help of the WSDL file anyway. This formula or parametrisation must be propagated to wrappers for the trust calculations.

5.3.6 The Context Subsystem

The context subsystem supports the trust model by providing systems trust in the form of context data, which is utilised in the trusting intention calculation. As discussed earlier, context can include any description of the internal or community situation normality that affects the trust evaluation. The context information may come from a wide variety of sources, potentially even several sources. Therefore every context source has an interface component that manipulates the context data into a suitable filter format to be utilised by the trust data broker.

When the context sources are defined by an administrator, the definition includes information on which trust decision factor the context information changes and how the context source data is mapped to a suitable context filter.

As a result, it is possible to layer named filters on top of the data sources, depending on which named contexts are currently in effect. Thus, the data values flowing from each data source towards the wrappers change to reflect the current contexts.


5.3.7 The Data Flow and Application Programming Interfaces

Each of the technical components in the TuBE framework discussed above has an application programming interface (API), which also describes the data flow between them.

The Component Interaction Paradigm

There are several logically separate components in the TuBE framework. These could be implemented as a single service residing in one system. However, the requirements of the individual components make this choice impractical. First of all, the intrusion detection component is highly time and performance critical: the sooner the misuse or anomaly is detected, the better. Additionally, to support multiple trust models, the trust data sources need to be modular and easily added or removed. Therefore loose coupling between the TuBE components is desirable.

The publish/subscribe interaction paradigm provides subscribers with the ability to express their interest in an event or a pattern of events, in order to be notified subsequently of any event, generated by a publisher, that matches their registered interest [EFGK03]. Publish/subscribe systems provide loose coupling in three dimensions: space, time and synchronisation. Space decoupling is achieved since components do not necessarily know each other and they do not need to share references to each other. Time decoupling follows from the fact that the participants do not need to interact at the same time; the notification of a particular event or action may happen after the fact, when a suitable subscriber is connected. Also, the event providing application does not need to block while the event is produced, which achieves synchronisation decoupling.

All these characteristics are of use to the TuBE framework; thus the data flow between components utilises a publish/subscribe paradigm by default.

This paradigm can be realised in the TuBE implementation by several means. There is the Java Message Service (JMS), which uses a topic-based subscription mode, but it can also provide point-to-point queues if the publish/subscribe domain is not desirable [HBS+02]. Alternatively, there is a set of Web Services publish/subscribe standards, which enable Web Services components to utilise this paradigm [OAS06]. The latter may have some benefits since the protected service is a Web Service and the data sent for analysis is XML based. However, the actual selection is left for the implementation.
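Should JMS be chosen, the interaction could look roughly like the sketch below. This is a minimal illustration only: how the connection factory is obtained is deployment specific (typically a JNDI lookup), and the topic name and the analyse method are hypothetical.

```java
import javax.jms.*;

// Minimal JMS sketch of the TuBE interaction paradigm: the wrapper
// publishes an intercepted message to a topic; an IDS component
// subscribes and analyses it asynchronously. Separate sessions are
// used because a JMS session is single-threaded by specification.
public class PubSubSketch {

    void run(ConnectionFactory factory) throws JMSException {
        Connection connection = factory.createConnection();

        // Subscriber side: an IDS engine registers an asynchronous listener.
        Session consumerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = consumerSession.createTopic("tube.wrapper.messages"); // illustrative name
        MessageConsumer consumer = consumerSession.createConsumer(topic);
        consumer.setMessageListener(message -> {
            try {
                analyse(((TextMessage) message).getText());
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
        connection.start(); // begin message delivery

        // Publisher side: the wrapper forwards an intercepted XML message as-is.
        Session producerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = producerSession.createProducer(topic);
        producer.send(producerSession.createTextMessage("<soap:Envelope>...</soap:Envelope>"));
    }

    void analyse(String xml) { /* methodology specific */ }
}
```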

The IDS API

The wrapper intercepts application messages and sends them to the monitoring, i.e. intrusion detection, subsystem. In the publish/subscribe paradigm, the wrapper acts as a publisher and the IDS subsystem(s) as consumers.

The data channel from the wrapper to the IDS can be implemented either as a queue or as a publish/subscribe topic. A queue would be natural when only one IDS consumer listens to the messages the wrapper sends. However, a publish/subscribe topic is preferable since the framework should be flexible enough to enable the enterprise to use several different monitoring or intrusion detection systems that listen to the same traffic flow but analyse the data using different algorithms. Additionally, this approach would enable several wrappers to send their data to the same analysis engines. The best division of topics and the practical interaction of the wrappers, monitoring components and trust valuation in this multi-pronged approach are interesting questions for future research.

For the intrusion detection system the order of messages received and analysed may be significant. Therefore the actual publish/subscribe implementation needs to support ordered message delivery.


The intercepted messages are sent to the IDS “as is”, without modification, so that the IDS has as much original information about the messages as possible. Depending on the actual intrusion detection model, some data loss may be acceptable, but a decision to leave some data unused or unprocessed should be made by the intrusion detection subsystem.

In addition to the lossless transfer, the wrapper can enrich the intercepted messages with context information that is not necessarily present in the intercepted XML messages. Examples of such data are the lower protocol level connection metadata: the client IP address, timestamp, connection duration, encryption details etc. Whether such information actually enhances the IDS analysis capabilities is left for future research.

Thus the message content published to the IDS subscribers is as follows:

• Message: the complete XML message the client sent to the service.

• Application context: encryption details, such as the SSL encryption mechanisms.

• TCP context: the TCP port and the connection duration thus far.

• IP context: the source IP address.

It is assumed that the message or the application context contains the peer identification in some acceptable form for the service (for example as an XML signature or SSL client certificate).
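As a concrete illustration, the published payload could be modelled as a simple value object such as the hypothetical class below. The field names and types are assumptions, and the actual wire format (plain XML, JMS message properties, or similar) is left open by the design.

```java
// Hypothetical value object for the data published to the IDS subscribers.
public final class InterceptedMessage {
    public final String xmlMessage;       // the complete client message, unmodified
    public final String encryptionInfo;   // application context, e.g. SSL cipher details
    public final int tcpPort;             // TCP context
    public final long connectionMillis;   // TCP context: connection duration thus far
    public final String sourceIp;         // IP context

    public InterceptedMessage(String xmlMessage, String encryptionInfo,
                              int tcpPort, long connectionMillis, String sourceIp) {
        this.xmlMessage = xmlMessage;
        this.encryptionInfo = encryptionInfo;
        this.tcpPort = tcpPort;
        this.connectionMillis = connectionMillis;
        this.sourceIp = sourceIp;
    }
}
```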

The Reputation API

Since the focus of the TuBE system at this stage is on authorisation and monitoring, it is assumed that a reputation mechanism is in place. Therefore nothing can be assumed about its nature or interface. This means there must be a component that receives reputation changes from the IDS component and translates them to be semantically and syntactically valid changes for the specific reputation system.

The reputation system subscribes to a reputation topic, where the IDS subsystem publishes messages with the following quadruple:

• Peer identification.

• Transaction identification.

• Attempted action.

• Success opinion.

The peer must be uniquely identified because the reputation is personal, for example using the peer public key, which is potentially available from the XML message signature. Transactions need to be uniquely identified so that several opinions from the same transaction can be managed. The attempted action may influence the reputation system. The success opinion is the intrusion detection system's view on how well the transaction was completed.
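A sketch of this message as a value object; the names are illustrative, and the discrete opinion scale is the one discussed next:

```java
// Hypothetical value object for the quadruple the IDS publishes
// to the reputation topic.
public final class ReputationReport {
    public final String peerId;         // e.g. derived from the peer public key
    public final String transactionId;  // unique, so several opinions can be correlated
    public final String action;         // the attempted action
    public final int successOpinion;    // -1 intrusion, 0 minor oddity, 1 success

    public ReputationReport(String peerId, String transactionId,
                            String action, int successOpinion) {
        this.peerId = peerId;
        this.transactionId = transactionId;
        this.action = action;
        this.successOpinion = successOpinion;
    }
}
```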

The natural way for an intrusion detection system to view things is by alerting on intrusions or other anomalies. It has no semantic tools to score success. Therefore a very simple discrete scale [-1, 0, 1] is sufficient for the success opinion. The semantics of 1 are such that the transaction was a success and no problems were observed. On the other hand, -1 means that there was a clear sign of intrusion. The zero can be used to denote minor oddities that do not warrant a real intrusion alert. The availability of 0 depends on the actual intrusion detection algorithm; for example, in a specification based analysis the transaction is either out of specification or not, and there may not be any middle ground.

This success scoring can then be translated in the reputation mechanism specific component to reputation change messages. For example, a positive success report could change the peer reputation with an increment of 0.00001, while a negative report would instantly set the reputation to 0 or any suitable value signifying bad reputation. This transformation procedure is very reputation system specific and thus it is not discussed any further. It is assumed that a reputation mechanism exists and that a suitable function and an interface component can be created.
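A minimal sketch of such a translation function, assuming a numeric reputation score in the range [0, 1] and using the illustrative values from the example above:

```java
// Hypothetical translation of a success opinion into a new reputation
// score. The [0, 1] score range, the small positive increment and the
// hard reset to 0 are illustrative policy choices, not fixed by TuBE.
static double applyOpinion(double currentReputation, int successOpinion) {
    if (successOpinion < 0) {
        return 0.0;                                          // clear intrusion: bad reputation
    }
    if (successOpinion > 0) {
        return Math.min(1.0, currentReputation + 0.00001);   // slow positive drift
    }
    return currentReputation;                                // minor oddity: no change (assumed)
}
```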

It is important to note that the success opinion is the view of one particular intrusion detection component, and if the component does not recognise the attack, a positive opinion will be given. A success opinion does not necessarily mean that there were no problems in the request. This is countered in the TuBE system by the ability to have several intrusion detection methods all analysing the same data with a different method. Even if a specification based IDS component sends a success report, another IDS component doing semantic analysis can report an anomaly. These different reports on each single transaction should be correlated into one trust value update per transaction identification to prevent result skewing. From the security point of view, anomaly reports should have priority over success reports, so it may be sufficient to discard the positive report if any of the reports indicates a problem.

If the enterprise utilises several different introspective monitoring or intrusion detection systems, the monitor subsystem in TuBE must include a component that receives input from different monitor components, correlates them and then sends the update to the reputation subsystem. The message delivery from the IDS to the reputation system must be reliable, as it is crucial for the operation of the dynamic trust system that all changes to the reputation of the peer are actually received by the reputation system.
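Taking the minimum over the per-component opinions is one simple correlation rule that realises the anomaly-first priority described above; a sketch under that assumption:

```java
import java.util.Collection;

// Hypothetical correlation of several IDS opinions for one transaction:
// anomaly reports take priority over success reports, so the combined
// opinion is simply the minimum of the individual opinions.
static int correlateOpinions(Collection<Integer> opinions) {
    int combined = 1; // optimistic default if every component reports success
    for (int opinion : opinions) {
        combined = Math.min(combined, opinion);
    }
    return combined;
}
```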

The Wrapper API for IDS Reaction

If the intrusion detection system observes an intrusion or another type of anomaly, it needs to signal the wrapper to do something to the connection to prevent or limit any further harm. It should be noted that when the IDS system observes the problem, the command has already been executed in the real server, so the TuBE model is not intrusion prevention but detection, and the key reaction is sending behaviour feedback to the trust system.

As discussed above in connection with reputation, the intrusion detection mechanism cannot always differentiate clients into several different classes where the reactions should be different. The methods have usually been developed only for strongly identifying the class of real intrusions. Therefore, the IDS subsystem may not be able to utilise a wide range of reaction possibilities.

Various reaction possibilities are discussed in Chapter 4.5, and the chosen reactions are influenced by the component that performs the reaction. Currently in the TuBE architecture the component capable of affecting the application operation is the wrapper, thus the reaction possibilities in the TuBE system are currently:

1. Break the client connection, if it is still active.

2. Refuse future connections from this client either permanently or for a designated time.

3. Add a delay for processing each future request.

These reaction possibilities are not mutually exclusive; the system should be able to execute several of these choices if so requested. This set of reactions towards a client can be expressed as a message of the chosen action combination together with the peer identification:


• Peer identification.

• Break: boolean.

• Refuse time.

• Delay time.

In this message, refuse time or delay time indicate how long the peer connections need to be refused or how large a delay factor needs to be applied.

Other reactive possibilities, such as modifying the service application scheduling or resource availability, could be added with suitable additional components.

Using the publish/subscribe paradigm, there can be one reaction message topic to which all wrapper instances subscribe.
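The reaction message could again be modelled as a small value object; the field names and the use of seconds as the time unit are assumptions for illustration:

```java
// Hypothetical reaction message published to the wrapper reaction topic.
// The reactions are combinable: any subset of the fields may be active.
public final class ReactionCommand {
    public final String peerId;           // whom the reactions concern
    public final boolean breakConnection; // break the client connection if still active
    public final long refuseSeconds;      // refuse future connections this long (0 = none)
    public final long delaySeconds;       // delay added to each future request (0 = none)

    public ReactionCommand(String peerId, boolean breakConnection,
                           long refuseSeconds, long delaySeconds) {
        this.peerId = peerId;
        this.breakConnection = breakConnection;
        this.refuseSeconds = refuseSeconds;
        this.delaySeconds = delaySeconds;
    }
}
```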

The Broker APIs

The trust data broker component is responsible for managing the trust data coming from separate data sources. In the current TuBE trust model these sources are reputation, action and context.

This can be achieved by giving each source a dedicated publish/subscribe topic. The trust data broker subscribes to all of these topics and the data sources publish their changes to them.

As has been previously noted, the actual reputation mechanism is not known, so the data received from there must be adapted to suit our needs. Therefore the same component that translates the IDS opinions to reputation specific change messages must also adapt the data from the reputation mechanism for broker use.

The TuBE system requires the reputation to be an integer. The data sent from the reputation subsystem to the trust data broker will be wrapped in a message:

• Peer identification.

• Action.

• Reputation.

It is assumed in TuBE that the action information sent from the reputation system (or the reputation interface) is compatible with the action information created in the Action subsystem. This requires there to be a common ontological base for the reputation interface component to translate outside reputation to a format suitable for TuBE. This is not a trivial requirement, but since reputation mechanisms are not in this work's main focus, this is not explored further but left for future research.

The action database is a key user interface to the TuBE system. The action subsystem gets the application WSDL description, lists the actions and the administrator assigns risk and importance values to the actions. The message to the trust data broker topic consists of the following data:

• Action.

• Risk formula.

• Importance formula.


Action is a string that is in the standalone case copied directly from the WSDL description. Risk and importance formulae are strings that describe how the risk and importance are finally calculated for this action.

For example, the risk of a “buy” action could be described by the string “1*$amount”, where 1 is the risk base value and “amount” is the name of the parameter in the WSDL description that specifies the number of bought items. Alternatively, the formula can be just an integer in string form, which denotes a fixed risk or importance value.
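A minimal sketch of how a wrapper might evaluate formulae of these two shapes; a real deployment would presumably need a proper expression parser, and the method name is an assumption:

```java
import java.util.Map;

// Hypothetical evaluation of a risk/importance formula string of the
// two shapes mentioned in the text: a fixed integer ("5") or a base
// value multiplied by a named message parameter ("1*$amount").
static int evaluateFormula(String formula, Map<String, Integer> messageParams) {
    if (formula.contains("*$")) {
        String[] parts = formula.split("\\*\\$");             // "1*$amount" -> ["1", "amount"]
        int base = Integer.parseInt(parts[0].trim());
        int paramValue = messageParams.getOrDefault(parts[1].trim(), 0);
        return base * paramValue;
    }
    return Integer.parseInt(formula.trim());                  // fixed value
}
```

For a buy request with amount = 40, evaluating “1*$amount” would thus yield a risk of 40.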

These values can be modified by context. For every context source it must be defined how the context data changes the trust vector values. This filtering information, sent to the broker in a message, consists of:

• Context name.

• Factor name.

• Factor parameter.

• Change formula.

Context name is the name for this particular context source. It needs to be unique among the context sources usable by this application. The factor name is one of the trust vector sources from the list of (action risk, action importance, peer reputation). Since all of the factors listed require a target (either a peer identification or a specified action), the factor parameter is a string or regular expression denoting that target. Finally, the change formula is the arithmetic formula denoting how the factor is to be modified.

An example filter could be (finance, “risk”, “buy_*”, “+2”), denoting that the financial situation requires all buy action risks to be incremented by two. When a context is deployed, the context interface needs to be built and the filters configured. Since the configuration needs the list of available actions, the configuration should be done in the management user interface.
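A sketch of applying such a filter, assuming that the wildcard pattern is translated to a regular expression and that only simple additive change formulae are in use:

```java
import java.util.regex.Pattern;

// Hypothetical application of a context filter such as
// (finance, "risk", "buy_*", "+2"): when the named context is in effect,
// the risk of every matching action is adjusted by the change formula.
static int applyFilter(String factorParameter, String changeFormula,
                       String actionName, int currentValue) {
    String regex = factorParameter.replace("*", ".*");       // "buy_*" -> "buy_.*"
    if (Pattern.matches(regex, actionName) && changeFormula.startsWith("+")) {
        return currentValue + Integer.parseInt(changeFormula.substring(1));
    }
    return currentValue;                                     // no match or unsupported formula
}
```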

The API for Trust Valuation Data

To achieve trust-based authorisation, the wrapper needs to calculate the trusting intention and compare it to a threshold level based on a set of data. When the TuBE system is deployed to protect a service, the wrapper needs to prime its data structures with a set of initial support data.

This is a single initialisation time action and can be implemented as a straightforward synchronous API call. The broker responds with initial data in the form of tuples:

• Peer identification.

• Vector of action vectors.

to the wrapper. This initial set can be the set of previously known community members or a random user set just to initiate the wrapper cache.

For the run-time trust valuation changes, the data broker has one topic to which the wrappers can subscribe. The messages for this topic are in the form of:

• Peer identification.

• Action name.

• Risk change.


• Importance change.

• Reputation change.

Here the peer identification and action name fields should support wildcard values, i.e. one message could authorise several changes to the support database.
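The trust calculation function itself is a settable policy in TuBE rather than a fixed formula, but one illustrative choice, weighing reputation against risk scaled by importance, is sketched below; the combination formula and the names are assumptions, not part of the design:

```java
// Hypothetical trusting-intention calculation in the wrapper. The
// combination formula and the threshold are policy choices; this one
// simply weighs reputation against risk scaled by importance.
static boolean authorise(int reputation, int risk, int importance, int threshold) {
    int trustingIntention = reputation - risk * importance;
    return trustingIntention >= threshold;   // permit the request only above the threshold
}
```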

5.4 Evaluation

The hypothesis of this thesis was that the use of trust combined with monitoring mechanisms and a feedback loop from the monitoring system to the trust evaluation would be a suitable approach for enterprise collaboration protection needs. Additionally, the drafted TuBE framework needs to be assessed as a system achieving this.

Thus in this section TuBE is evaluated using four different criteria: trust model robustness, trust management, enterprise protection and framework flexibility. Additionally, new threats and problems brought by the TuBE solution are discussed.

5.4.1 The TuBE Trust Model

Current trust models have been criticised for showing one or more of the following three weaknesses [MMH02]:

1. The relationship of trust and reputation is not clear: Differentiation of trust and reputation is either not made or the mechanism for inference between them is not explicit.

2. Trust and reputation are treated as independent of context or time: Trust and reputation are taken to be the same across multiple contexts, or are treated as uniform across time.

3. The sociological rooting is ignored: Despite the strong sociological foundation for the concepts of trust and reputation, existing computational models for them are often not grounded on understood social characteristics of these quantities.

However, the TuBE model avoids these pitfalls. First of all, it has been developed with the help of the trust type system analysed in Chapter 3.2 and the input factor analysis shown in Chapter 3.4, thus grounding the work in social structures and in a clear inference between, and differentiation of, the various concepts.

While in the TuBE trust model reputation is definitely a separate concept, an underlying factor affecting trust evaluation, this work has not been extended very far into the realm of reputation. This is a clear direction for future research.

Additionally, the factors that influence the trust valuation in TuBE include the situation, i.e. context, and time dependency is taken into account with the behaviour monitoring and feedback to future trust evaluations.

Therefore the TuBE trust model can be considered robust and free from these weaknesses.

5.4.2 The TuBE Trust Management

In Chapter 3.6 a set of requirements for an enterprise class trust and mistrust management system was detailed. How does the TuBE system fulfil them?


1. It automates the translation of trusting intention to trusting behaviour. In the TuBE system this is done by using trust-guarded authorisation to decide whether the peer SOAP request can be permitted to proceed to the background application.

2. It presents a unified approach to trust management across applications. This is achieved in the TuBE system by leveraging Web Services technologies. By using the service specific wrapper as a key component, both native Web Service applications and legacy systems can be supported by the TuBE approach.

3. There should be locality of control in the authorisation decisions. This means that each organisation can decide for itself whom to trust and how. This is achieved in the TuBE system by trust-guarding the authorisation function in the organisation. While this enables local control and enterprise protection, in a wider community setting this is not a simple requirement. If a single member organisation in a community decides it does not trust a particular peer anymore and suddenly refuses to co-operate, the community does not work as intended. This is a problem that needs to be taken into account at the community level.

4. It is flexible in supporting complex trust relationships and multiple decision input factors that influence the trusting belief to a particular trustee. The default TuBE trust model consists of the 7-tuple of trustor, trustee, action, action risk, action importance, trustee reputation, trustor context and thus takes into account multiple factors that influence the trust evaluation. Conceptually it is easy to add a new factor to the model by just providing a new trust source module and trust calculation function.

5. The mechanisms for evaluating trust should be separated from the policies, thus enabling reuse of the mechanisms across applications requiring different policies. In TuBE the trust calculation function and the authorisation threshold value for the wrapper are settable by policy. Also the use and effect of situational context can be set by the management. Thus the enterprise management can deploy the TuBE system in a variety of communities and circumstances.

6. It should introspectively monitor the peer behaviour to detect ongoing mistrust situations. This is achieved in the TuBE system by leveraging an application level intrusion detection system.

7. It must produce behavioural feedback to the trusting belief calculations to prevent further mistrust. In TuBE, the application level intrusion detection system is also linked to a reputation subsystem, so any detected problems lower the peer reputation accordingly. Likewise, positive experiences with a new peer increase the peer reputation.

All in all, the TuBE system is a well designed trust management system taking into account the four aspects of trust management and their requirements.

5.4.3 Impact as Enterprise Protection

Previously it was shown that TuBE is a good trust management solution from the trust model and management points of view. However, the real test criteria come from the collaborative enterprise environment and its protection needs as discussed in Chapter 2.8.2. Is trust management a good solution for the threat of malicious peers in an open dynamic community, and how does the TuBE system in particular respond to the overall requirements of the solution?


1. The solution must work with an open community structure. Because of this requirement, TuBE utilises trust based authorisation, where the trust model includes provisions in the form of a dispositional trust default value for new and unknown peers.

2. The solution must work in a dynamic environment without continuous manual intervention. This is achieved in TuBE with trust-guarded authorisation and settable policies. Trust-guarded authorisation uses trust evaluation as a basis for authorisation instead of an authorisation database which needs to be manually updated. Trust evaluations are dynamic and take into account the peer reputation and enterprise context when making the authorisation decision. The manual input in the TuBE system consists of setting up the trust support fact sources and their policies and inputting the trust calculation functions and threshold values for the wrapper.

3. The solution needs to protect an individual member of a community. This is achieved in the TuBE system by introspective monitoring in the form of an application intrusion detection mechanism.

4. The solution should be minimally application specific. TuBE is not totally application or service independent, but has application specific elements. To increase the accuracy of monitoring, the wrapper is service specific. It is created from the WSDL description of the service the enterprise offers to the community. Additionally, the trust model requires the knowledge of actions and their risk and importance values.

Overall, the TuBE system responds to these challenges very well. The only deviation from the ideal comes in the discussion of application or service independence. As we observed in Chapter 4, there are mechanisms in the intrusion detection area that do not require application dependence. However, that would deny the TuBE monitoring component the capability of detecting behaviour anomalies, i.e. scenarios where the peer acts within syntactically legal limits but with semantically anomalous content. In the TuBE system it was decided to explore monitoring systems capable of detecting these problems as well. Any kind of semantic awareness in the monitoring system requires application dependence. Keeping this requirement in mind, the TuBE system was designed to be as little application specific as possible. The only application specific software component is the wrapper. The other components are not bound to any single service or application other than by configuration. For example, the action names and values need to be configured to the action data source. Similarly, the semantic knowledge about the application needs to be configured to the intrusion detection monitoring component if semantic anomaly detection is used.

5.4.4 TuBE as a Framework

The TuBE framework is a reusable design of a system enabling Web Services to use trust-guarded authorisation. For a community framework to be truly practical, it should be flexible and extensible and allow its use in many different scenarios and deployment configurations.

The TuBE framework can be utilised in different configurations. In the simplest case it is possible to protect one single enterprise service and service instance with the TuBE framework by specialising and deploying a service specific wrapper instance from a general wrapper class.

The publish/subscribe paradigm also enables deploying TuBE in a scenario where there are several instances of the same service, just by deploying several wrapper instances. Additionally, the publish/subscribe mechanism makes it possible to use several different intrusion detection mechanisms to monitor the web service.


It should also be possible to extend the TuBE framework to utilise other trust models as well. This requires first of all the ability to develop new support data components from existing base classes, secondly the possibility to define new APIs in the trust broker and thirdly the possibility of defining the effect of this new data on the trusting intention calculation in the wrapper.

Implementing all these framework flexibility options in the actual TuBE framework is a challenge and provides much fertile ground for practical research questions and answers.

5.4.5 New Threats

While the TuBE system is intended to solve the previously discussed issue of cost-effective management of secure enterprise collaboration, there may be new threats that the newly designed system introduces to the collaborative environment. Every time a system or framework is changed, it is necessary to analyse whether the changed situation contains new threats or problems that the change has caused. Typically new vulnerabilities arise just because something has been changed and its impact has not been fully analysed. What threats does the introduction of TuBE bring to the general threat analysis performed earlier in Chapter 2.6? Here the threats are discussed both from the security point of view, as done previously, and from the viewpoint of deployment showstoppers. What threats are there in the designed TuBE system that would prevent it from being deployed at an organisation?

One key issue is of course the increased complexity and the security threats associated with it. TuBE brings more components that can fail and cause a denial of service situation. This is an issue that needs addressing during TuBE development: the system needs to be both robust enough to survive in the proposed environment and fault-tolerant enough so that one faulty component does not cause a total denial of service. Also the system needs to fail securely, i.e. it should be analysed during development what is the most secure way to handle observed failures of various components.

However, the complexity is present only to the organisation deploying the TuBE framework. Deploying TuBE does not increase the number of interfaces visible outside the enterprise, so the attack surface presented to external parties or community members is not increased by the TuBE deployment. If an external reputation system is attached to the trust support layer, it will add one external interface, but this is not the default.

There are some deployment threats also caused by the additional complexity. The first deployment threat arises from the fact that the authorisation decision is dependent on many underlying trust factors, and it may not be immediately clear to observers why a particular request has been granted or denied. This opaqueness may cause some additional workload to system management, especially in a problem solving situation. Nevertheless, the workload should be much smaller than managing individual access rights one by one in a dynamically changing community. This threat could be at least partially solved by providing good management tools for the TuBE administrators.

Additionally, the complexity of the calculations may cause the system deployment phase to be somewhat longer than in a standard non-trust-guarded application deployment, because the context parameters, action risk and importance values and the trust threshold should be fine-tuned to the required level of security. In this the TuBE system does not really differ from other security solutions such as firewalls and IDS/IPS systems, which also need an initial fine-tuning period.

The TuBE system uses a model where trust is based on previous actions, i.e. behaviour based reputation. Reputation based systems are theoretically vulnerable to a Sybil attack, where one malicious party creates several false identities and uses those identities to influence reputation scores. However, this particular version of the Sybil attack is not a threat in TuBE because of the way the default TuBE system uses reputation information. The reputation scores in TuBE are calculated based on direct experience with the particular identity; they are not by default influenced by third parties with potentially false identities. But if the reputation subsystem in TuBE is connected to an external reputation system, where the reputation score is somehow formed by collective opinion, this may become a real threat if the collective opinion can be influenced by false identities created in a Sybil attack.

A new security threat to the offered service is brought by the reaction possibilities of the monitoring component. If the monitoring system has too many false positive alerts that cause a denial of service situation with a particular client or connecting member, it may mean lost revenue to the member using TuBE. The monitoring/IPS component should be fine-tuned so that the amount of false positives is sufficiently low not to cause major financial impact. This threat is not only a security threat; it will also directly impact organisations' interest in using a TuBE-like framework.

There is also the threat of service performance degradation due to the additional processing in the TuBE framework. When every service request has to wait for the trust-score based authorisation decision, it is possible that the performance of the service interface is not sufficient. It is foreseeable that performance problems in particular would be real showstopper threats to the trust-guard system adoption. Therefore this threat must be taken into account in the TuBE development phase so that the performance loss introduced by TuBE is negligible. The effect of various monitoring algorithms and trust calculation methods on performance would be a good future research question.

The TuBE system may also pose a threat to the overall functionality and performance of the community. It is likely that adopting a TuBE system at some members increases choreography problems stemming from trust-based authorisation denials. The community choreography engine has to manage this type of event anyway, so this is not really a new task, but the event frequency may rise and thus the community throughput may be lower. This is a separate performance issue from the performance of a single member's service and should be separately investigated.

Whether these new threats are sufficiently severe issues to prevent TuBE from being a successful system and framework can be judged when a prototype is built and real test results are available.


Chapter 6

Conclusion and Future Directions

The modern business environment can be characterised by rapid change. New innovations in products, services and also business models are continuously emerging, and enterprises need to adapt to these changes to stay in business. This is driving enterprises to seek agility and adaptability from networked business collaboration and dynamic business decisions. Information technology can be both an enabler of these business networks and also a driver for pushing the change to an even wider corporate audience.

Therefore, concepts like virtual enterprises, business networks and service oriented computing have been both in research and rapid development focus in recent years. Business adaptability requirements have brought increasing needs for managing and securing the dynamic business relationships in the information technology systems. The current solutions relying on manually managed static access and authorisation rules are neither flexible enough nor cost-effective. Thus the infrastructure supporting future business networks must have better methods of managing the collaboration risks and securing the participants. Otherwise the promise of information technology as a business network enabler will not be fulfilled. Insecure business network middleware requiring a lot of manual configuration and adjustment may hinder or even prevent the widespread adoption of business networks.

Thus, the key research question for this thesis was how to secure enterprise applications in an open dynamic business network community without sacrificing dynamism and scalability, especially from the point of view of the community member organisation. The hypothesis presented in this thesis is that we can draw some ideas from the human experience and apply the concept of trust to enhance community security, especially from the community member point of view.

To fulfil this goal, this thesis presents a novel community security framework having three separate elements: an automated trust evaluation process, continuous peer behaviour monitoring, and constant feedback from the monitoring system to trust evaluation for readjustments.

To successfully manage relationships between enterprise agents using trust, the management framework needs to be well grounded in the nature of trust. This thesis shows such a theoretical base, and one of the key contributions is to show how the theoretical concept of trusting behaviour can be automated in a computer system as trust-guarded authorisation. The developed concept is well grounded in trust theory and independent of the chosen trust model. This generality enables trust decision automation, which in turn enables enterprises to dynamically allow and authorise other entities to use their services and resources for mutual benefit.

As a step towards the key goal several existing trust models were analysed. Based on this analysis a partial domain ontology was built. This step accomplished a common vocabulary for describing facts to be considered for trust calculation. This new vocabulary facilitates communication when models use different terminology for similar concepts.


Using trust to authorise actions brings in its wake several risks: misplaced trust and the risk of loss. As trust theory shows, it is not sufficient just to trust; we must also dynamically adjust our trust valuation according to the observed peer behaviour so that misplaced trust, i.e. mistrust, can be detected. Therefore, a new view of trust management was developed, which consists of trust initialisation, trust use, monitoring and reaction to the observations. An enterprise using trust must thus introspectively monitor the collaborations for its own protection. This thesis surveyed various possibilities for the peer behaviour observation method in the field of application level intrusion detection.

These two theoretically separate ideas, trust-guarded authorisation and introspective monitoring in the form of intrusion detection, have in this thesis been brought together as an answer to the key research question. A novel community security framework, TuBE, has been drafted as a solution framework. From a theoretical point of view, TuBE confirms our hypothesis that trust can be applied in securing business network members from some key risks in the collaboration.

The main contributions of this work so far are as follows. First of all, it has been shown that trust theory brings forth trust-guarded authorisation as a suitable method for guarding access to computer systems. This has the benefit of enabling automated and dynamic authorisation decisions, which are required in business networking. Second, it was shown how these trust-guarded systems can manage misplaced trust by continuous introspective monitoring and dynamically adjusting the trust evaluation based on the monitoring results. Third, there are several interesting tools in the application intrusion detection arena that can be used for introspective monitoring also in a community setting. Finally, it was shown how it is possible to combine these ideas into a coherent framework capable of fulfilling community member security requirements.

Thus, the overall conclusion of this thesis is that enterprises taking part in business networks can secure themselves from key business networking threats by using collaborative middleware combining trust-guarded authorisation and misuse monitoring as security features. This gives enterprises methods, and in the future some tools, to secure themselves in the collaboration, thus enabling more widespread business networking. This in turn enables businesses to survive in the current rapidly changing business environment.

6.1 Technical Research Questions for Implementation Phase

The TuBE framework presented in this thesis is at this point a draft design, and its success and performance need to be evaluated with a working implementation. From this design several interesting technical research questions emerge for the implementation phase:

1. General key question: How well does this concept actually work from different viewpoints such as security, dynamic management and service performance? Does the implementation confirm our hypothesis of trust-guarded authorisation and introspective monitoring being suitable for securing enterprise collaboration?

2. Monitoring questions: What are the best algorithms for the introspective monitoring functions in this scenario? Does utilising several different monitors and thus algorithms benefit the monitoring function?

3. Reputation questions: What kind of reputation mechanism would suit this scenario? How would the use of several monitoring mechanisms giving potentially conflicting behaviour scores to the same transaction be handled in a reputation mechanism?

4. Trust model questions: How can the trust broker and the underlying trust data sources be built so that the TuBE framework could easily support changes in the trust model and the underlying data sources? To continue this further, do changes in the trust model change the performance of the framework or the level of protection to the service?

6.2 Future Research Directions in Trust and Collaborative Communities

In addition to the implementation questions above, during this work several new research avenues have been found but not investigated due to lack of resources.

Generally, research on trust can be divided into three groups based on its context. On the lowest, most fundamental level, trust is a part of the infrastructure. Early trust research has been concentrating on this level. On top of it are the service and community levels. As electronic commerce has gained a foothold and open systems become more common, trust forms an important part on the service level. There are still many problems to be solved on this level before research on the highest level, the community, can proceed freely [RK05].

As discussed briefly in Chapter 3.5.2, our test scenario for the TuBE prototype system is based on these lower technical and service levels and trust-guarding the service action authorisation process. However, there are several community level issues that the TuBE concept could influence but currently does not support.

For example, the trusting behaviour discussed in trust theory could in practice be any action in any part of the community operational life cycle. The focus of this work has been in the operational phase and monitoring. However, there are several interesting trust questions elsewhere that may warrant further research. For example, how could an enterprise utilise the trusting intention and trusting beliefs in community population when selecting the participants for the community roles? Or how does the trust valuation affect our behaviour in the contract negotiation or community establishment phase? Or what about our reactions to community structure change requests: should change requests be accepted more readily if we trust the request initiator? Additionally, some thought should be given to various systems trust issues. For example, what aspects in electronic contracting processes would increase or decrease the systems trust?

It may be possible to build these community support services on top of the same TuBE framework that is also trust-guarding the operations phase for the member. This would require that we have two sets of actions: the operation phase actions described for example in the application WSDL file, and the community level actions that are described in the community operational contract or an orchestration specification. Thus the TuBE monitoring function would also need extension to the higher levels in the collaborative community stack. There are quite many monitoring challenges as mentioned in Chapter 2.7.

The actual feasibility and mechanisms of this community lifecycle support need to be described in a separate research paper, but this area is a direct continuation of the current research and should be investigated further.

A second interesting set of future research questions can be found in the general larger areas of community security and monitoring in electronic communities. What aspects of the community interaction and behaviour can be monitored, how, by whom, and how should the results affect the community interaction? The TuBE system introduced here is a solution for a smaller focused problem of community member security, but as observed previously, there are several different monitoring tasks in a community as a whole. Some of these monitoring tasks could be achieved by linking the TuBE instances at every participant together to help in the community monitoring. But this can be researched, as can many of the higher level community issues discussed above, when at least some of the lower community layer functions have been implemented and tested. Thus TuBE is a small stepping stone towards forming a more complete security and protection framework for electronic communities in the future.

The third group of interesting issues can be found in the general threat analysis performed in Chapter 2.6. The work in this thesis concentrated on just one group of threats, malicious members and external parties in a dynamically changing environment; there were many more threats needing good solutions. For example, the impact and management of a Sybil attack in different kinds of business communities is an interesting question. So is the question of how to ensure confidentiality in a community environment where some middleware components are shared between possibly competing communities.


Bibliography

[ABB+05] Anderson, S., Bohren, J., Boubez, T., Chanilau, M. and Della-Libera, G., Specification: Web services trust language (WS-Trust), 2005. URL http://www-128.ibm.com/developerworks/library/specification/ws-trust/.

[AL01] Almgren, M. and Lindqvist, U., Application-integrated data collection for security monitoring. RAID'01: Proceedings of the 4th International Symposium on Recent Advances in Intrusion Detection, volume 2212 of Lecture Notes in Computer Science. Springer-Verlag, 2001, pages 22–36, URL http://springerlink.metapress.com/app/home/content.asp?wasp=5n5d6ynmwg1xwvf1bd9l&referrer=contribution&format=2&page=1&pagecount=0.

[Amo94] Amoroso, E. G., Fundamentals of Computer Security Technology. Prentice-Hall International, Inc., 1994.

[And01] Anderson, R., Security Engineering: A Guide to Building Dependable Distributed Systems. Wiley & Sons Inc, 2001.

[And04] Anderson, D. P., BOINC: A system for public-resource computing and storage. GRID '04: Proceedings of the 5th IEEE/ACM International Workshop on Grid Computing, Washington, DC, USA, 2004, IEEE Computer Society, pages 4–10.

[ARH97] Abdul-Rahman, A. and Hailes, S., A distributed trust model. NSPW'97: Proceedings of the 1997 New Security Paradigms Workshop. ACM, 1997, pages 48–60, URL http://citeseer.ist.psu.edu/abdul-rahman97distributed.html.

[ATS04] Androutsellis-Theotokis, S. and Spinellis, D., A survey of peer-to-peer content distribution technologies. ACM Comput. Surv., 36,4(2004), pages 335–371.

[Axe00] Axelsson, S., Intrusion-detection systems: A survey and taxonomy. Technical report no 99-15, Dept. of Computer Engineering, Chalmers University of Technology, Sweden, 2000.

[BCS+00] Bowen, T., Chee, D., Segal, M., Sekar, R., Shanbhag, T. and Uppuluri, P., Building survivable systems: An integrated approach based on intrusion detection and damage containment. DISCEX'00: Proceedings of the DARPA Information Survivability Conference and Exposition, volume 2. IEEE Computer Society, 2000, pages 84–99.


[BFK99] Blaze, M., Feigenbaum, J. and Keromytis, A. D., KeyNote: Trust management for public-key infrastructures (position paper). Proceedings of the 6th International Workshop on Security Protocols, volume 1550 of Lecture Notes in Computer Science. Springer-Verlag, 1999, pages 59–63, URL http://springerlink.metapress.com/openurl.asp?genre=article&issn=0302-9743&volume=1550&spage=59.

[BFL96] Blaze, M., Feigenbaum, J. and Lacy, J., Decentralized trust management. Proceedings of the 1996 IEEE Symposium on Security and Privacy. IEEE Computer Society, 1996, pages 164–173, URL http://ieeexplore.ieee.org/iel3/3742/10940/00502679.pdf.

[BHM+04] Booth, D., Haas, H., McCabe, F., Newcomer, E., Champion, M., Ferris, C. and Orchard, D., Web services architecture, 2004. URL http://www.w3.org/TR/ws-arch/. W3C Working Group Note 11.

[BIM+03] BEA, IBM, Microsoft, SAP and Siebel, Business process execution language for web services version 1.1, 2003. URL http://www.ibm.com/developerworks/webservices/library/ws-bpel/. Specification.

[Bis03] Bishop, M., Computer Security: Art and Science. Addison-Wesley, 2003.

[BZ02] Bhargava, B. and Zhong, Y., Authorization based on evidence and trust. DaWaK'02: Proceedings of the 4th International Conference of Data Warehousing and Knowledge Discovery, volume 2454 of Lecture Notes in Computer Science. Springer-Verlag, 2002, pages 94–103, URL http://citeseer.ist.psu.edu/bhargava02authorization.html.

[CFL+97] Chu, Y.-H., Feigenbaum, J., LaMacchia, B., Resnick, P. and Strauss, M., Referee: Trust management for web applications. Computer Networks and ISDN Systems, 29,8–13(1997), pages 953–964. URL http://citeseer.ist.psu.edu/58910.html.

[CGS+03] Cahill, V., Gray, E., Seigneur, J.-M., Jensen, C., Chen, Y., Shand, B., Dimmock, N., Twigg, A., Bacon, J., English, C., Wagealla, W., Terzis, S., Nixon, P., Serugendo, G. D. M., Bryce, C., Carbone, M., Krukow, K. and Nielsen, M., Using trust for secure collaboration in uncertain environments. IEEE Pervasive Computing, 2,3(2003), pages 52–61. URL http://ieeexplore.ieee.org/iel5/7756/27556/01228527.pdf.

[CLM01] Cabrera, J. B. D., Lewis, L. and Mehra, R. K., Detection and classification of intrusions and faults using sequences of system calls. SIGMOD Record, 30,4(2001), pages 25–34. URL http://delivery.acm.org/10.1145/610000/604269/p25-cabrera.pdf?key1=604269&key2=4040363011&coll=GUIDE&dl=ACM&CFID=33931896&CFTOKEN=21280247.

[CMA01] Camarinha-Matos, L. M. and Afsarmanesh, H., Virtual enterprise modeling and support infrastructures: Applying multi-agent system approaches. WASSS'01: Selected Tutorials from Multi-Agent Systems and Applications: 9th ECCAI Advanced Course ACAI 2001 and Agent Link's 3rd European Agent Systems Summer School, volume 2086 of Lecture Notes in Computer Science. Springer-Verlag, 2001, pages 335–364.

[CMA05] Camarinha-Matos, L. M. and Afsarmanesh, H., Collaborative networks: A new scientific discipline. Journal of Intelligent Manufacturing, 16,4-5(2005), pages 439–452.

[CNS03] Carbone, M., Nielsen, M. and Sassone, V., A formal model for trust in dy-namic networks. SEFM’03: Proceedings of the 1st International Confer-ence on Software Engineering and Formal Methods. IEEE Computer Society,2003, pages 54–61, URL http://ieeexplore.ieee.org/iel5/8747/27706/01236207.pdf.

[Coh08] Cohen, B., Bittorrent protocol specification, 2008. URL http://bittorrent.org/beps/bep_0003.html. Version 11031.

[CSW06] Chapin, P., Skalka, C. and Wang, X. S., Trust management: Features and founda-tions. Submitted for publication, 2006.

[CY00] Chen, R. and Yeager, W., Poblano a distributed trust model for peer-to-peer net-works. Technical paper, Sun Microsystems, 2000. URL http://www.jxta.org/docs/trust.pdf.

[DDCdVP+02] Damiani, E., De Capitani di Vimercati, S., Paraboschi, S., Samarati, P. and Vi-olante, F., A reputation-based approach for choosing reliable resources in peer-to-peer networks. CCS ’02: Proceedings of the 9th ACM conference on Computerand Communications Security, New York, NY, USA, 2002, ACM, pages 207–216.

[Dem04] Demolombe, R., Reasoning about trust: A formal logical framework.iTrust’04: Proceedings of the 2nd International Conference on Trust Man-agement, volume 2995 of Lecture Notes in Computer Science. Springer-Verlag, 2004, pages 291–303, URL http://www.springerlink.com/media/3N259B5UXH0QTVE2CB9Q/Contributions/Y/A/L/Y/YALYRU2BRPQ4FQ2U.pdf.

[Deu89] Deutsch, L. P., Design reuse and frameworks in the Smalltalk-80 programmingsystem. In Software Reusability, Vol. II, Addison-Wesley, 1989, pages 55–71.

[Din00] Dingledine, R., The Free Haven project: Design and deployment of an anony-mous secure data haven. Master’s thesis, MIT, 2000. URL http://theory.lcs.mit.edu/~~cis/theses/dingledine-masters.ps.

[DJPJ04] Desmet, L., Jacobs, B., Piessens, F. and Joosen, W., Threat modelling for webservices based web applications. CMS’04: Proceedings of the 8th IFIP TC-6TC-11 Conference on Communications and Multimedia Security, volume 175 ofIFIP International Federation for Information Processing, 2004, pages 161–174.

[Dou02] Douceur, J. R., The Sybil attack. IPTPS'02: Revised Papers of the First International Workshop on Peer-to-Peer Systems, volume 2429 of Lecture Notes in Computer Science, Cambridge, MA, USA, 2002, Springer, pages 251–260.

[EFGK03] Eugster, P., Felber, P., Guerraoui, R. and Kermarrec, A., The many faces of publish/subscribe. ACM Computing Surveys, 35,2(2003), pages 114–131.

[EM99] Elbaum, S. and Munson, J. C., Intrusion detection through dynamic software measurement. ID'99: Proceedings of the 1st Workshop on Intrusion Detection and Network Monitoring. USENIX Association, 1999, URL http://www.usenix.org/events/detection99/full_papers/elbaum/elbaum.pdf.

[End98] Endler, D., Intrusion detection: Applying machine learning to Solaris audit data. ACSAC'98: Proceedings of the 14th Annual Computer Security Applications Conference. IEEE Computer Society, 1998, pages 268–279, URL http://ieeexplore.ieee.org/xpl/abs_free.jsp?arNumber=738647.

[Ess98] Essin, D. J., Patterns of trust and policy. NSPW'97: Proceedings of the 1997 New Security Paradigms Workshop. ACM Press, 1998, pages 38–47, URL http://doi.acm.org/10.1145/283699.283738.

[Eur07] European Network and Information Security Agency, Emerging-risks-related Information Collection and Dissemination, 2007.

[FBF99] Fraser, T., Badger, L. and Feldman, M., Hardening COTS software with generic software wrappers. Proceedings of the 1999 IEEE Symposium on Security and Privacy. IEEE Computer Society, 1999, pages 2–16.

[FHS97] Forrest, S., Hofmeyr, S. A. and Somayaji, A., Computer immunology. Communications of the ACM, 40,10(1997), pages 88–96. URL http://portal.acm.org/citation.cfm?id=262793.262811.

[FHSL96] Forrest, S., Hofmeyr, S. A., Somayaji, A. and Longstaff, T. A., A sense of self for Unix processes. Proceedings of the 1996 IEEE Symposium on Security and Privacy. IEEE Computer Society, 1996, pages 120–128.

[FKF+03] Feng, H., Kolesnikov, O., Fogla, P., Lee, W. and Gong, W., Anomaly detection using call stack information. Proceedings of the 2003 IEEE Symposium on Security and Privacy. IEEE Computer Society, 2003, pages 62–75.

[GAHL00] Grefen, P., Aberer, K., Hoffner, Y. and Ludwig, H., CrossFlow: Cross-organizational workflow management in dynamic virtual enterprises. International Journal of Computer System Sciences and Engineering, 15,5(2000), pages 277–290.

[Gam88] Gambetta, D., Can we trust trust? In Trust: Making and Breaking Cooperative Relations, Basil Blackwell, 1988, pages 213–237, URL http://www.sociology.ox.ac.uk/papers/gambetta213-237.pdf. Reprinted in electronic edition from Department of Sociology, University of Oxford, chapter 13.

[GJM02] Giffin, J. T., Jha, S. and Miller, B. P., Detecting manipulated remote call streams. Proceedings of the 11th USENIX Security Symposium. USENIX Association, 2002, pages 61–79.

[GMS00] Ghosh, A. K., Michael, C. and Schatz, M., A real-time intrusion detection system based on learning program behavior. RAID'00: Proceedings of the 3rd International Symposium on Recent Advances in Intrusion Detection, volume 1907 of Lecture Notes in Computer Science. Springer-Verlag, 2000, pages 93–109.

[GMT04] Grimsley, M., Meehan, A. and Tan, A., Managing internet-mediated community trust relations. iTrust'04: Proceedings of the 2nd International Conference on Trust Management, volume 2995 of Lecture Notes in Computer Science. Springer-Verlag, 2004, pages 277–290, URL http://www.springerlink.com/openurl.asp?genre=article&id=49K9QP1XUDJGLLD6.

[GPH03] Golbeck, J., Parsia, B. and Hendler, J., Trust networks on the semantic web. CIA VII: Proceedings of the 7th International Workshop on Cooperative Information Agents, volume 2782 of Lecture Notes in Computer Science. Springer-Verlag, 2003, pages 238–249, URL http://www.mindswap.org/papers/CIA03.pdf.

[GR99] Gunetti, D. and Ruffo, G., Intrusion detection through behavioral data. Proceedings of the 3rd International Symposium on Advances in Intelligent Data Analysis, volume 1642 of Lecture Notes in Computer Science. Springer-Verlag, 1999, pages 383–394, URL http://www.springerlink.com/app/home/contribution.asp?wasp=g19tay35wlcyyn9c0vtp&referrer=parent&backto=issue,32,44;journal,1577,1828;linkingpublicationresults,1:105633,1.

[GS02] Grandison, T. and Sloman, M., Specifying and analysing trust for internet applications. I3E'02: Towards The Knowledge Society: The 2nd IFIP Conference on E-Commerce, E-Business, E-Government, volume 233 of IFIP Conference Proceedings. Kluwer, 2002, URL http://citeseer.ist.psu.edu/grandison02specifying.html.

[GS03] Grandison, T. and Sloman, M., Trust management tools for internet applications. iTrust'03: Proceedings of the 1st International Conference on Trust Management, volume 2692 of Lecture Notes in Computer Science. Springer-Verlag, 2003, pages 91–107.

[GSS99] Ghosh, A. K., Schwartzbard, A. and Schatz, M., Learning program behavior profiles for intrusion detection. Proceedings of the 1st Workshop on Intrusion Detection and Network Monitoring. USENIX Association, 1999, pages 51–62, URL http://www.usenix.org/publications/library/proceedings/detection99/full_papers/ghosh/ghosh_html/.

[GTM99] Glushko, R. J., Tenenbaum, J. M. and Meltzer, B., An XML framework for agent-based e-commerce. Communications of the ACM, 42,3(1999), pages 106–114.

[Haa05] Haataja, J.-P., Liiketoimintaverkostoja hallinnoiva väliohjelmisto ja automatisoitu valvontafunktio [Middleware for managing business networks and an automated monitoring function]. Master's thesis, University of Helsinki, Department of Computer Science, 2005.

[HBS+02] Hapner, M., Burridge, R., Sharma, R., Fialli, J. and Stout, K., Java Message Service, 2002. URL http://java.sun.com/products/jms/docs.html. Version 1.1.

[HFS98] Hofmeyr, S. A., Forrest, S. and Somayaji, A., Intrusion detection using sequences of system calls. Journal of Computer Security, 6,3(1998), pages 151–180.

[HK01] Herrmann, P. and Krumm, H., Trust-adapted enforcement of security policies in distributed component-structured applications. ISCC'01: Proceedings of the 6th IEEE Symposium on Computers and Communications. IEEE Computer Society, 2001, pages 2–8, URL http://ls4-www.cs.uni-dortmund.de/RVS/Pub/TS/ISCC01.pdf.

[HL03] Howard, M. and LeBlanc, D., Writing Secure Code, 2nd edition. Microsoft Press, 2003.

[HS05] Huhns, M. N. and Singh, M. P., Service-oriented computing: Key concepts and principles. IEEE Internet Computing, 9,1(2005), pages 75–81.

[IF02] Inoue, H. and Forrest, S., Anomaly intrusion detection in dynamic execution environments. NSPW'02: Proceedings of the 2002 New Security Paradigms Workshop. ACM Press, 2002, pages 52–60, URL http://portal.acm.org/ft_gateway.cfm?id=844112&type=pdf&coll=portal&dl=ACM&CFID=19905017&CFTOKEN=31765782.

[ISO96a] ISO/IEC IS 10746-2, Open distributed processing reference model, part 2, foundations, 1996.

[ISO96b] ISO/IEC IS 10746-3, Open distributed processing reference model, part 3, architecture, 1996.

[JL01] Jones, A. and Lin, Y., Application intrusion detection using language library calls. ACSAC'01: Proceedings of the 17th Annual Computer Security Applications Conference. IEEE Computer Society, 2001, pages 442–449, URL http://ieeexplore.ieee.org/xpl/abs_free.jsp?arNumber=991561.

[Jøs96] Jøsang, A., The right type of trust for distributed systems. NSPW'96: Proceedings of the 1996 ACM Workshop on New Security Paradigms. ACM, 1996, pages 119–131, URL http://security.dstc.edu.au/staff/ajosang/papers/trdsyst.ps.

[Jøs98] Jøsang, A., Modeling Trust in Information Security. Ph.D. thesis, Norwegian University of Science and Technology, Faculty of Electrical Engineering and Telecommunications, Trondheim, Norway, 1998.

[KFBK00] Ko, C., Fraser, T., Badger, L. and Kilpatrick, D., Detecting and countering system intrusions using software wrappers. Proceedings of the 9th USENIX Security Symposium. USENIX Association, 2000, URL http://ieeexplore.ieee.org/iel5/8932/28288/01264947.pdf.

[KFL94] Ko, C., Fink, G. and Levitt, K., Automated detection of vulnerabilities in privileged programs by execution monitoring. ACSAC'94: Proceedings of the 10th Annual Computer Security Applications Conference. IEEE Computer Society, 1994, pages 134–144, URL http://seclab.cs.ucdavis.edu/papers/pdfs/ck-gf-kl-94.pdf.

[KH97] Kosoresow, A. P. and Hofmeyr, S. A., Intrusion detection via system call traces. IEEE Software, 14,5(1997), pages 35–42.

[KM05] Kutvonen, L. and Metso, J., Services, contracts, policies and ecommunities – relationship to ODP framework. WODPEC'05: Workshop on ODP for Enterprise Computing, 2005, pages 62–69, URL http://www.lcc.uma.es/~av/wodpec2005/wodpec2005-Proceedings.pdf.

[KMR06] Kutvonen, L., Metso, J. and Ruohomaa, S., From trading to ecommunity population: Responding to social and contractual challenges. EDOC'06: Proceedings of the 10th International Enterprise Distributed Object Computing Conference. IEEE Computer Society, 2006, pages 199–210.

[KRL97] Ko, C., Ruschitzka, M. and Levitt, K., Execution monitoring of security-critical programs in distributed systems: A specification-based approach. Proceedings of the 1997 IEEE Symposium on Security and Privacy. IEEE Computer Society, 1997, pages 175–187, URL http://portal.acm.org/citation.cfm?id=884386.

[KRM07] Kutvonen, L., Ruokolainen, T. and Metso, J., Interoperability middleware for federated business services in web-Pilarcos. International Journal of Enterprise Information Systems, Special issue on Interoperability of Enterprise Systems and Applications, 3,1(2007), pages 1–21.

[KRRM08] Kutvonen, L., Ruokolainen, T., Ruohomaa, S. and Metso, J., Service-oriented middleware for managing inter-enterprise collaborations. In Global Implications of Modern Enterprise Information Systems: Technologies and Applications, Gunasekaran, A., editor, IGI Global, October 2008, pages 208–241.

[KS99] Kuperman, B. A. and Spafford, E., Generation of application level audit data via library interposition. Technical Report CERIAS TR 1999-11, COAST Laboratory, 1999. URL https://www.cerias.purdue.edu/tools_and_resources/bibtex_archive/archive/99-11.pdf.

[KT93] Krause, M. and Tipton, H. F., Handbook of Information Security Management. Auerbach Publications, 1993.

[Kut02] Kutvonen, L., Automated management of inter-organisational applications. EDOC'02: Proceedings of the 6th International Enterprise Distributed Object Computing Conference. IEEE Computer Society, 2002, pages 28–39, URL http://computer.org/proceedings/edoc/1742/17420027abs.htm.

[LASS00] Lazcano, A., Alonso, G., Schuldt, H. and Schuler, C., The WISE approach to electronic commerce. International Journal of Computer System Sciences and Engineering, 15,5(2000), pages 345–357.

[LB97] Lane, T. and Brodley, C. E., Sequence matching and learning in anomaly detection for computer security. AAAI-97: Proceedings of the Workshop on AI Approaches to Fraud Detection and Risk Management. AAAI Press, 1997, pages 43–49, URL http://citeseer.ist.psu.edu/lane97sequence.html.

[LS98] Lee, W. and Stolfo, S., Data mining approaches for intrusion detection. Proceedings of the 7th USENIX Security Symposium. USENIX Association, 1998, pages 79–94, URL http://citeseer.ist.psu.edu/article/lee98data.html.

[LSC97] Lee, W., Stolfo, S. J. and Chan, P. K., Learning patterns from Unix process execution traces for intrusion detection. AAAI-97: Proceedings of the Workshop on AI Approaches to Fraud Detection and Risk Management. AAAI Press, 1997, pages 50–56, URL http://www1.cs.columbia.edu/ids/publications/wenke-aaai97.ps.

[Mar94] Marsh, S., Formalising Trust as a Computational Concept. Ph.D. thesis, Department of Mathematics and Computer Science, University of Stirling, 1994. URL http://citeseer.ist.psu.edu/marsh94formalising.html.

[Mar00] Marceau, C., Characterizing the behavior of a program using multiple-length N-grams. NSPW'00: Proceedings of the 2000 New Security Paradigms Workshop. ACM Press, 2000, pages 101–110, URL http://delivery.acm.org/10.1145/370000/366197/p101-marceau.pdf?key1=366197&key2=8276292011&coll=GUIDE&dl=ACM&CFID=33734302&CFTOKEN=20018340.

[MC96] McKnight, D. H. and Chervany, N. L., The meanings of trust. Technical Report, University of Minnesota, MIS Research Center, 1996. URL http://misrc.umn.edu/workingpapers/fullPapers/1996/9604_040100.pdf. Tables included as attachments in the online version.

[MD05] Marsh, S. and Dibben, M. R., Trust, untrust, distrust and mistrust – an exploration of the dark(er) side. iTrust'05: Proceedings of the 3rd International Conference on Trust Management, volume 3477 of Lecture Notes in Computer Science. Springer-Verlag, 2005, pages 17–33, URL http://www.springerlink.com/link.asp?id=hlvhnt431yte3lay.

[MG98] Meltzer, B. and Glushko, R., XML and electronic commerce: Enabling the network economy. SIGMOD Record, 27,4(1998).

[MG02] Michael, C. C. and Ghosh, A., Simple, state-based approaches to program-based anomaly detection. TISSEC: ACM Transactions on Information and System Security, 5,3(2002), pages 203–237. URL http://portal.acm.org/ft_gateway.cfm?id=545187&type=pdf&dl=GUIDE&dl=ACM&CFID=11111111&CFTOKEN=2222222.

[MJDP02] Milosevic, Z., Jøsang, A., Dimitrakos, T. and Patton, M. A., Discretionary enforcement of electronic contracts. EDOC'02: Proceedings of the 6th International Enterprise Distributed Object Computing Conference. IEEE Computer Society, 2002, pages 39–51.

[MLG+04] Milosevic, Z., Linington, P. F., Gibson, S., Kulkarni, S. and Cole, J., Inter-organisational collaborations supported by e-Contracts. Building the E-Service Society: E-Commerce, E-Business, and E-Government, volume 9 of 14 of IFIP 18th World Computer Congress, Toulouse, France, August 2004, IFIP, Kluwer, pages 413–429, URL http://www.cs.kent.ac.uk/pubs/2004/1982.

[MMH02] Mui, L., Mohtashemi, M. and Halberstadt, A., A computational model of trust and reputation for e-businesses. HICSS'02: Proceedings of the 35th Annual Hawaii International Conference on System Sciences, volume 7. IEEE Computer Society, 2002, URL http://csdl.computer.org/comp/proceedings/hicss/2002/1435/07/14350188.pdf.

[OAS06] OASIS, Web services base notification 1.3 (WS-BaseNotification), 2006. URL http://docs.oasis-open.org/wsn/wsn-ws_base_notification-1.3-spec-os.pdf.

[OMG04] OMG, OWL full and UML 2.0 compared, 2004. URL http://www.omg.org/docs/ontology/04-03-01.pdf.

[Pap03] Papazoglou, M., Service-oriented computing: Concepts, characteristics and directions. WISE'03: Proceedings of the 4th International Conference on Web Information Systems Engineering. IEEE Computer Society, 2003, pages 3–12.

[PG03] Papazoglou, M. and Georgakopoulos, D., Service-oriented computing. Communications of the ACM, 46,10(2003), pages 25–28.

[Pos82] Postel, J. B., Simple mail transfer protocol, 1982. RFC 821.

[Pro03] Provos, N., Improving host security with system call policies. Proceedings of the12th USENIX Security Symposium. USENIX Association, 2003, pages 257–272,URL http://niels.xtdnet.nl/papers/systrace.pdf.

[RD00] Reichert, M. and Dadam, P., ADEPTflex – supporting dynamic changes of workflows without losing control. Journal of Intelligent Information Systems, Special Issue on Workflow Management, 10,2(2000), pages 93–129.

[RK05] Ruohomaa, S. and Kutvonen, L., Trust management survey. iTrust'05: Proceedings of the 3rd International Conference on Trust Management, volume 3477 of Lecture Notes in Computer Science. Springer-Verlag, 2005, pages 77–92, URL http://dx.doi.org/10.1007/11429760_6.

[RK06] Ruokolainen, T. and Kutvonen, L., Interoperability in service-based communities. Business Process Management Workshops: BPM 2005 International Workshops, BPI, BPD, ENEI, BPRM, WSCOBPM, BPS, Bussler, C. and Haller, A., editors, volume 3812 of Lecture Notes in Computer Science. Springer-Verlag, 2006, pages 317–328, URL http://dx.doi.org/10.1007/11678564_28.

[RK10] Ruohomaa, S. and Kutvonen, L., Trust and distrust in adaptive inter-enterprise collaboration management. Journal of Theoretical and Applied Electronic Commerce Research, Special Issue on Trust and Trust Management, 5,2(2010), pages 118–136. URL http://www.jtaer.com/aug2010/ruohomaa_kutvonen_p7.pdf.

[RKS02] Raz, O., Koopman, P. and Shaw, M., Semantic anomaly detection in on-line data sources. ICSE'02: Proceedings of the 24th International Conference on Software Engineering. IEEE Computer Society, 2002, pages 302–312, URL http://pag.csail.mit.edu/daikon/pubs-using/raz-icse-2002-abstract.html.

[SBDB01] Sekar, R., Bendre, M., Dhurjati, D. and Bollineni, P., A fast automaton-based method for detecting anomalous program behaviors. Proceedings of the 2001 IEEE Symposium on Security and Privacy. IEEE Computer Society, 2001, pages 144–155, URL http://citeseer.ist.psu.edu/sekar01fast.html.

[SBS99] Sekar, R., Bowen, T. and Segal, M., On preventing intrusions by process behavior monitoring. ID'99: Proceedings of the 1st USENIX Workshop on Intrusion Detection and Network Monitoring. USENIX Association, 1999, pages 29–40.

[SF00] Somayaji, A. and Forrest, S., Automated response using system-call delays. Proceedings of the 9th USENIX Security Symposium. USENIX Association, 2000, pages 185–197.

[SGF+02] Sekar, R., Gupta, A., Frullo, J., Shanbhag, T., Tiwari, A., Yang, H. and Zhou, S., Specification-based anomaly detection: A new approach for detecting network intrusions. CCS'02: Proceedings of the 9th ACM Conference on Computer and Communications Security. ACM, 2002, pages 265–274, URL http://doi.acm.org/10.1145/586110.586146.

[Sie99] Sielken, R., Application intrusion detection. Technical Report CS-99-17, Department of Computer Science, University of Virginia, USA, 1999. URL ftp://ftp.cs.virginia.edu/pub/techreports/CS-99-17.ps.Z.

[SKV03] Soman, S., Krintz, C. and Vigna, G., Detecting malicious Java code using virtual machine auditing. Proceedings of the 12th USENIX Security Symposium. USENIX Association, 2003, pages 153–168, URL http://cs.ucsb.edu/~ckrintz/papers/usenix03.pdf.gz.

[SMS99] Stillerman, M., Marceau, C. and Stillman, M., Intrusion detection for distributed applications. Communications of the ACM, 42,7(1999), pages 62–69.

[Som02] Somayaji, A. B., Operating System Stability and Security through Process Homeostasis. Ph.D. thesis, University of New Mexico, 2002.

[SS00] Sabater, J. and Sierra, C., REGRET: A reputation model for gregarious societies. Research Report 2000-06, Institut d'Investigacio i Intelligencia Artificial, 2000. URL http://citeseer.ist.psu.edu/sabater00regret.html.

[Sun94] Sun Microsystems, SunSHIELD Basic Security Module Guide. Sun Microsystems, 1994. URL http://docs.sun.com/app/docs/doc/801-6636/6i10gcto4?q=BSM&a=view.

[SVSM03] Sivashanmugam, K., Verma, K., Sheth, A. and Miller, J., Adding semantics to Web Services standards. ICWS'03: Proceedings of the International Conference on Web Services. CSREA Press, 2003, pages 395–401.

[TCL90] Teng, H. S., Chen, K. and Lu, S. C., Adaptive real-time anomaly detection using inductively generated sequential patterns. Proceedings of the 1990 IEEE Symposium on Security and Privacy. IEEE Computer Society, 1990, pages 278–284, URL http://ieeexplore.ieee.org/xpl/abs_free.jsp?arNumber=63857.

[vdHLP05] van den Heuvel, W.-J., Leune, K. and Papazoglou, M. P., EFSOC: A layered framework for developing secure interactions between web-services. Distributed and Parallel Databases, 18,2(2005), pages 115–145.

[Vil05] Viljanen, L., Towards an ontology of trust. TrustBus'05: Proceedings of the 2nd International Conference on Trust, Privacy and Security in Digital Business, volume 3592 of Lecture Notes in Computer Science. Springer-Verlag, 2005, pages 175–184.

[W3C02] W3C, Web service choreography interface (WSCI) 1.0, 2002. URL http://www.w3.org/TR/wsci/. W3C Note.

[W3C04a] W3C, OWL web ontology language overview, 2004. URL http://www.w3.org/TR/owl-features/. W3C Recommendation.

[W3C04b] W3C, Web service choreography description language version 1.0, 2004. URLhttp://www.w3.org/TR/ws-cdl-10/. W3C Working Draft.

[WD01] Wagner, D. and Dean, D., Intrusion detection via static analysis. Proceedings of the 2001 IEEE Symposium on Security and Privacy. IEEE Computer Society, 2001, pages 156–168, URL http://ieeexplore.ieee.org/xpl/abs_free.jsp?arNumber=924296.

[WDD00] Wespi, A., Dacier, M. and Debar, H., Intrusion detection using variable-length audit trail patterns. RAID'00: Proceedings of the 3rd International Symposium on Recent Advances in Intrusion Detection, volume 1907 of Lecture Notes in Computer Science. Springer-Verlag, 2000, pages 110–129, URL http://www.cs.fit.edu/~pkc/id/related/wespi-raid00.pdf.

[WFP99] Warrender, C., Forrest, S. and Pearlmutter, B. A., Detecting intrusions using system calls: Alternative data models. Proceedings of the 1999 IEEE Symposium on Security and Privacy. IEEE Computer Society, 1999, pages 133–145, URL http://citeseer.ist.psu.edu/article/warrender98detecting.html.

[Wor03] World Wide Web Consortium, W3C recommendation: Simple object access protocol (SOAP) 1.2, 2003. URL http://www.w3.org/TR/soap12-part1/.

[Wor04] World Wide Web Consortium, W3C recommendation: Extensible markup language (XML) 1.0, 2004. URL http://www.w3.org/TR/REC-xml.

[XDC04] Xu, H., Du, W. and Chapin, S. J., Context sensitive anomaly monitoring of process control flow to detect mimicry attacks and impossible paths. RAID'04: Proceedings of the 7th International Symposium on Recent Advances in Intrusion Detection, volume 3224 of Lecture Notes in Computer Science. Springer-Verlag, 2004, pages 21–38, URL http://citeseer.ist.psu.edu/702165.html.

[YKB93] Yahalom, R., Klein, B. and Beth, T., Trust relationships in secure systems – a distributed authentication perspective. Proceedings of the 1993 IEEE Symposium on Security and Privacy. IEEE Computer Society, 1993, pages 150–164, URL http://citeseer.ist.psu.edu/yahalom93trust.html.

[Zam01] Zamboni, D., Using Internal Sensors for Computer Intrusion Detection. Ph.D. thesis, Purdue University, 2001. URL http://www.cerias.purdue.edu/homes/zamboni/pubs/thesis-techreport.pdf.

Helsinki 2011