A Short Review of Multi-Criteria Decision Analysis Techniques Richard Goyette December 2013





Contents

List of Figures
List of Tables
1. Multi-Criteria Decision Analysis
1.1 Introduction
1.2 Overview of MCDA Methodologies
1.2.1 Classical Methods
1.2.1.1 Implicit (Outranking) Methods
1.2.1.2 Explicit Methods
1.2.1.2.1 Methods Based on Utility/Value Theory
1.2.1.2.2 Methods Based on Analytic Hierarchy Process
1.2.2 Other Methods
1.2.3 Hybrid Methods
1.3 Explicit Aggregation Based on Utility/Value Theory
1.3.1 Overview
1.3.2 Preference and Preference Structures
1.3.3 Criteria
1.3.4 Single Criteria Value Functions
1.3.5 Ordinal Value Functions
1.3.6 Cardinal Value Functions
1.3.7 Multi-Criteria Value Functions
1.3.8 Evaluating Component Value Functions
1.3.8.1 Standard Sequences
1.3.8.2 Direct Rating
1.3.8.3 Mid-value Splitting
1.3.8.4 Bisection
1.3.8.5 Linear Programming
1.3.9 Evaluating Scaling Constants
1.3.9.1 Direct Point Allocation
1.3.9.2 Swings
1.3.9.3 Trade-off
1.3.9.4 Rank Order Centroid
1.3.10 Summary
1.4 Explicit Aggregation Based on Analytic Hierarchy Process
1.4.1 Overview
1.4.2 An Example
1.4.3 Theoretical Basis of Original AHP
1.4.4 Criticism
1.4.4.1 Validity of Theoretical Framework
1.4.4.2 Calculation of Priorities
1.4.4.3 Rank Reversal
1.4.4.4 Choice of Fundamental Scale
1.4.5 Variations of AHP
1.4.6 Summary
1.5 A Note on Paired Comparison in Classical and AHP Methods
1.6 Other Methods
1.7 Group MCDA
1.7.1 Active versus Passive Group Elicitation
1.7.2 Group Consensus
1.7.3 Preference Aggregation
1.7.3.1 Aggregation of Individual Judgments (AIJ)
1.7.3.1.1 Geometric and Weighted Arithmetic Means
1.7.3.1.2 Ordered Weighted Average
1.7.3.1.3 Aggregation by Dempster-Shafer Theory
References


List of Figures

Figure 1-1: Multi-Criteria Decision Analysis Methods
Figure 1-2: Additive Model Building Process
Figure 1-3: Analytic Hierarchy Process in Relative Mode
Figure 1-4: Candidate Selection Problem Using AHP
Figure 1-5: Candidate Selection Example Using AHP in Absolute Measurement Mode
Figure 1-6: AHP Process Summary
Figure 1-7: Multi-Expert Aggregation Using OWA

List of Tables

Table 1-1: Binary Preference Operators
Table 1-2: Saaty's Fundamental Scale (Table 1.1 of [77])
Table 1-3: Alternative Scales Proposed for AHP (Table 2 from [80])


1. Multi-Criteria Decision Analysis

1.1 Introduction

In this section, we provide an overview of the theoretical and practical aspects of

Multi-Criteria Decision Analysis (MCDA). In an MCDA problem, one or more

decision makers (DMs) are faced with the task of choosing a “best” single

alternative from a set of possible alternatives. This selection can be performed by

ranking the alternatives directly against each other with respect to some

consideration (e.g. two cars with respect to cost), or the alternatives can be scored

individually with respect to some consideration whose levels have already been

ranked (e.g. the comfort level of the car, where high = 1.0, medium = 0.5, and

low = 0.0).

1.2 Overview of MCDA Methodologies

Because MCDA is a broad domain of active research, creating a concise yet

comprehensive taxonomy is challenging. At its highest level, [51] divides the

domain of research into the two broad categories shown in Figure 1-1. The

methods within these categories are differentiated depending on whether they

originate from what the authors call “classical” approaches to MCDA or whether

they are based on completely different (“other”) ideas.


Figure 1-1: Multi-Criteria Decision Analysis Methods

1.2.1 Classical Methods

Classical methods, as described in [52], are based on the notion that a DM’s

choice preference with respect to a given criterion can be represented by a

monotone increasing utility function. That is, the more preferred the alternative,

the larger its numeric representation. Furthermore, when more than one criterion

is involved, classical methods suppose that a composite preference can be

constructed from the sum or product of the individual utilities.

Within the category of classical MCDA, a distinction is sometimes made with

respect to the “school” from which the methods originate. Lootsma [53] notes that the

“American” school is based primarily on utility and value theory as popularized

by Keeney and Raiffa [54] whereas the “French” school is based primarily on a

class of outranking methods. Figueira et al. [52] distinguish these schools by


noting whether the aggregation approach is implicit (French) or explicit

(American).

1.2.1.1 Implicit (Outranking) Methods

In the implicit approach, the objective is to provide a complete set of preference

relations between the alternatives such that they may be ranked in an ordinal

sense. For this reason, they are often referred to as outranking methods. In

outranking, the concept of dominance plays a key role. One alternative A

dominates another alternative B if it performs better than B on at least one

criterion and at least equally well on all others. In such a case, B no longer needs

to be considered if the objective is to find the “best” alternative. If a ranking is

desired, it can be left in its established, ordinal position.

In some cases, simple dominance may not be enough to produce an ordered list.

Outranking methods can go further by applying weights to some (or all) of the

criteria in order to give some of them more influence than others [55] thereby

resulting in an ordered ranking. As shown in Figure 1-1, popular outranking

methods include ELECTRE, ELECTRE I, and PROMETHEE.
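The dominance relation described above can be checked mechanically. Below is a minimal Python sketch; the `dominates` helper, the criterion layout, and all scores are invented for illustration, and higher scores are assumed better on every criterion:

```python
def dominates(a, b):
    """True if alternative a dominates b: a performs at least as well as b
    on every criterion and strictly better on at least one
    (higher scores assumed better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Hypothetical scores on (cost_score, comfort, safety).
car_a = (0.8, 0.6, 0.9)
car_b = (0.7, 0.6, 0.5)
car_c = (0.9, 0.2, 0.4)

print(dominates(car_a, car_b))  # True: car_a is at least as good everywhere
print(dominates(car_a, car_c))  # False: car_c is better on the first criterion
```

Under this definition a dominated alternative such as `car_b` can be discarded when searching for the “best” alternative, exactly as described above.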

1.2.1.2 Explicit Methods

In the explicit approach, the objective is to find a way of ordering a finite set of

alternatives when each alternative may perform differently on any one of

several criteria [56]. Hence, explicit approaches rely on assessing the

performance of each alternative on an appropriate scale for each criterion

followed by an aggregation operation that provides a numerical ranking of


alternatives. The technical methods within the explicit approach “differ primarily

according to how they (a) evaluate performances on each attribute [criteria], and

(b) aggregate evaluations across attributes to arrive at an overall global

evaluation” [57].

Explicit methods consist of two approaches that are often at odds with each other

in the literature. These include approaches based on utility or value theory and

those based on the Analytic Hierarchy Process (AHP).

1.2.1.2.1 Methods Based on Utility/Value Theory

Utility-based methods are derived from a large body of theoretical and practical

work in the area of utility theory [58]. Alternatives are scored quantitatively, most

often using an additive or multiplicative aggregation rule to compute an overall

score based on individual attribute scores. In classical methods, numerical scoring

determines the ranking of alternatives (where the highest score indicates the most

preferred alternative). Depending on the specific methods employed, the numeric

score associated with each alternative may have ordinal or cardinal meaning.

1.2.1.2.2 Methods Based on Analytic Hierarchy Process

The Analytic Hierarchy Process (AHP) is another approach that uses quantitative

methods to rank alternatives and is often identified as one of the classical

methods. However, AHP is not derived from the axiomatic framework of utility

theory and, as a result, it has been sharply criticized on several levels [59][60].

Regardless of the debate over perceived shortcomings, the AHP has achieved

success as a popular approach for MCDA. The questioning procedure is relatively

intuitive for experts to understand (although the number of questions required


does not scale well with an increasing number of criteria). In addition, AHP can be

implemented using standard spreadsheet software whereas many other classical

methods require software support for linear programming in order to be

implemented correctly.

1.2.2 Other Methods

A second broad category is introduced in [51] as a catch-all for MCDA methods that

do not have explicit models of aggregation or that do not fit neatly into one of the

classical sub-categories described above. These include, for example, approaches

based on Evidential Reasoning (ER) (see [61] for a review), Fuzzy Set Theory,

etc. Many of these methods have been proposed to deal with uncertainty in the

data provided during elicitation.

1.2.3 Hybrid Methods

Some methodologies extract the most useful features from the domain of MCDA

approaches and, where appropriate and formally justified, merge them. We

represent this class of methods in Figure 1-1 as the intersection of specific

domains (although the intersections shown are not exhaustive). For example, AHP

has been extended to handle incomplete information and epistemic uncertainty

through the use of both Dempster-Shafer theory [62] as well as fuzzy methods

(e.g. [63][64]).


1.3 Explicit Aggregation Based on Utility/Value Theory

1.3.1 Overview

In this section, we describe classical methods that are derived from utility theory.

We set the stage by providing an overview of preference theory and preference

structures, which are used to provide order among alternatives. We then describe

mathematical representations for preferences with respect to a single criterion

and then multiple criteria and, in doing so, describe the additive model for

multi-criteria decision making. Next, we describe existing

techniques for evaluating the component elements of the additive model (value

functions and scaling constants). We provide a very brief summary to conclude

the section.

1.3.2 Preference and Preference Structures

Preference theory is defined in [58] as follows:

Preference theory studies the fundamental aspects of individual

choice behaviour, such as how to identify and quantify an

individual’s preferences over a set of alternatives, and how to

construct appropriate preference representation functions for

decision making. An important feature of preference theory is that it

is based on rigorous axioms which characterize an individual’s

choice behavior. These preference axioms … provide the rationale

for the quantitative analysis of preference. ([58] p. 267)


The axiomatic framework of preference theory is based on the set of basic binary

relations shown in Table 1-1.¹

Table 1-1: Binary Preference Operators

∼ Indifference – two alternatives are equally preferred

≻ Strict preference – one alternative is strictly preferred to another

≽ Weak preference – one alternative is preferred or indifferent to

another

If a DM prefers A over B without any hesitation, it is represented as A≻B.

Conversely, where no preference is expressed, we write A∼B. If the DM weakly

prefers A to B (i.e. has some hesitation), then we may write A≽B.

If we can capture most or all of a DM’s preferences for various alternatives using

the relations in Table 1-1, then we can begin to formulate at least an ordinal

ranking of the alternatives. A cardinal ranking is preferred because it allows for

algebraic operations on the preferences although the requirements necessary to

arrive at this are more stringent. In either case, [58] indicates that, if the set of all

preferences constitutes a weak order, then a unique ordering of the alternatives is

possible. A weak order has the following properties [58]:

a.) Transitivity: If A≻B and B≻C, then A≻C.

b.) Indifference is transitive, reflexive, and symmetric.

c.) Only one of the following holds for every pair: A≻B, B≻A, or A∼B.

¹ We follow the development of preference functions using the same approximate notation as found in Dyer [58].


d.) Weak preference is transitive and complete (for every A, B, either A≽B or

B≽A).
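The weak-order conditions can be verified mechanically over a set of elicited judgments. A minimal sketch (the function name, data representation, and alternatives are invented for this example); it checks trichotomy over every pair and transitivity of strict preference:

```python
from itertools import permutations

def is_weak_order(alts, strict, indiff):
    """Check the weak-order conditions over elicited judgments.
    strict: set of (a, b) pairs meaning a ≻ b.
    indiff: set of frozenset({a, b}) meaning a ∼ b."""
    for a, b in permutations(alts, 2):
        # Exactly one of a≻b, b≻a, a∼b must hold for every pair.
        holds = [(a, b) in strict, (b, a) in strict, frozenset((a, b)) in indiff]
        if sum(holds) != 1:
            return False
    # Transitivity of strict preference: a≻b and b≻c imply a≻c.
    for a, b in permutations(alts, 2):
        for c in alts:
            if c in (a, b):
                continue
            if (a, b) in strict and (b, c) in strict and (a, c) not in strict:
                return False
    return True

alts = ["jaguar", "taurus", "ferrari"]
strict = {("ferrari", "jaguar"), ("ferrari", "taurus"), ("jaguar", "taurus")}
print(is_weak_order(alts, strict, set()))  # True: a consistent ranking exists
```

If the check fails (for example, when the elicited strict preferences contain a cycle), no unique ordering of the alternatives can be constructed.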

1.3.3 Criteria

When considering reasonably simple alternatives (or when making decisions

where the stakes are low), a DM may be able to compare them directly. However,

when alternatives are complex, when the decision is very important, or when all

of the alternatives are not available for consideration, a DM can use a set of

criteria to help with the decision process. A criterion is a characteristic of an

alternative that can be used for assessment purposes. For example, a car can be

assessed on its mileage, handling, color, fuel economy, etc. Typically, several

criteria are selected for assessment in an MCDA context.

1.3.4 Single Criteria Value Functions

A value function is a quantitative representation of a preference structure when

there is no risk in the outcome of a decision (the decision results are deterministic)

[54]. A value function that considers only one assessment criterion is a

single-attribute value function. Such functions are the first step toward developing an

ordinal scale of preference.

A single-criterion value function can be represented by v(x), where x is a particular

level of the criterion or attribute. For example, x could represent the cost of a car in

dollars, in which case v(x) would be continuous; or x could represent the colour

of the car, where v(x) would most likely be discrete.

A closely related concept is the utility function. Unlike a value function, it models


preference where there is an element of risk in the outcome (e.g. the decision

results are stochastic) [54][58]. For example, when comparing two cars, one may

be more preferred than the other but it may also have a chance of not being

delivered on time. The shape of a utility function is related to the DM’s risk

tolerance (risk-seeking, risk-neutral, or risk-averse). Here, we assume choice

under conditions of certainty and therefore do not address utility theory except

where elements of that theory apply to our context.

1.3.5 Ordinal Value Functions

An ordinal value function associates a number to each level of an attribute or

criterion such that if A≻B, then v(A) > v(B). The values assigned by v(·) have

no meaning outside of ordering the alternatives. The questioning procedure

needed to develop an ordinal ranking of alternatives can be as simple as asking

the expert to assign a number to each alternative.
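As a minimal sketch (with invented alternatives), an ordinal value function can be built by simply numbering a ranked list; only the ordering of the resulting values carries meaning:

```python
# Alternatives ranked from least to most preferred (hypothetical).
ranked = ["beige sedan", "blue coupe", "red roadster"]

# Assign any increasing numbers; the magnitudes themselves mean nothing.
v = {alt: rank for rank, alt in enumerate(ranked)}

print(v["red roadster"] > v["blue coupe"])  # True: red roadster ≻ blue coupe
```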

1.3.6 Cardinal Value Functions

A cardinal value function associates values to each level of a criterion such that if

A≻B, then v(A) > v(B) and, in addition, the difference v(A) − v(B) has meaning. That is, the

numbers assigned by v(·) lie on an interval or ratio scale. Thus, in addition to

ranking the alternatives, an interval scale also provides information on how much

more one alternative is preferred over another.

Cardinal value functions (also called “measurable” by [58]) have the property that

if A≽D and C≽D, then A≽C implies that

v(A) − v(D) ≥ v(C) − v(D).


Thus, the questioning procedure requires that preference differences be evaluated

in relation to a common element. For example, an expert might be asked to assess

his difference in preference for a Jaguar compared to a Taurus versus a Ferrari

compared to a Taurus. However, it may not be necessary to explicitly include the

reference element as part of the questioning procedure. For example, the use of

paired comparisons (discussed later) constructs the interval scale of preference

with respect to the least preferred alternative.

Value functions can also be constructed on a ratio scale. On such a scale, the ratio

of two points has meaning (e.g. A is twice as large as B). A ratio scale requires an

absolute zero reference. For example, a Celsius scale is not a ratio scale because

there is no absolute zero whereas a Kelvin scale has a theoretical zero point. The

existence of ratio scales and, specifically, the requirement for “zero” within

preference theory has been debated in the literature [59][65].

1.3.7 Multi-Criteria Value Functions

As we noted above, an alternative can be assessed on more than one criterion.

When this occurs we require a way of combining the assessment results. Using

notation from Stewart [66], we can represent the combined or multi-criteria value

function for an alternative, , as a function of several criteria, , by

( ) (

) (

) (

) ∑ (

) (2-1)

where (

) represents the individual “values” or “performances” of alternative

on criteria [54][66].


Equation (2-1) is referred to as an additive value function in the classical

literature. It is preferred over the more general multiplicative value function (from

which it is derived) because of its apparent simplicity. However, the additive form

is valid only under certain conditions [54]:

a.) Preference independence: The choices made in the levels of one attribute

are not affected by choices made in the levels of any other attribute. For

example, the choice of a car’s colour would very likely be preference

independent of the car’s handling ability. A change in preference from red

to blue does not affect a prior choice of four-wheel drive over two-wheel

drive.

b.) Mutual preference independence: Every proper subset of A is preference

independent of its complementary subset ([54] p. 111).

Demonstrating mutual preference independence can be difficult in practice for

even a moderate number of attributes. However, [54] points to earlier results to

show that it is sufficient to test the preference independence of adjacent pairs of

attributes to determine mutual preference independence on the whole set. That is,

given the set of attributes A = (a₁, a₂, …, aₙ), testing the subset pairs (a₁, a₂),

(a₂, a₃), …, (aₙ₋₁, aₙ) is sufficient to establish mutual preference independence

over the entire set.

It is common practice to bound V(a) on [0,1] by re-writing (2-1) as

V(a) = ∑ᵢ₌₁ⁿ λᵢvᵢ(a)   (2-2)

where the λᵢ are scaling constants for each attribute such that 0 ≤ λᵢ ≤ 1 and

∑ᵢ₌₁ⁿ λᵢ = 1, and each vᵢ(a) is on [0,1].

It is important to note that, while (2-2) appears similar in form to a simple

weighted average, the two are entirely different entities. The scaling constants

shown in (2-2) are often referred to as “weights” but these are not to be arbitrarily

chosen. Rather, they are compensation factors that take into

consideration the magnitude of the scales of the individual criteria prior to

bounding them and thus they are “intimately related to the ranges of the scales”

[54]. Assigning them in an arbitrary fashion (as is often done in the literature)

“makes no sense” [67].

Using (2-2), the classical solution to a multi-attribute problem becomes a two-step

process in which the individual value functions vᵢ(a) are first evaluated and then

the scaling constants are determined. Some of the details of this two-step process

are summarized below [54]:

a.) Scale each single-criterion value function. For each criterion, find the best

and worst performance and set the value of these to 1 and 0 respectively.

For example, the fuel economy for a car selection problem might range

from 30 mpg to 56 mpg for all of the cars under consideration. We would

set v(30) = 0 and v(56) = 1. More generally (and once again

using notation from [66]), for each attribute i, if the worst performance is

represented as xᵢ⁻ and the best performance as xᵢ⁺, then vᵢ(xᵢ⁻) = 0 and

vᵢ(xᵢ⁺) = 1.


b.) Determine the shape of each single-criterion value function. Obtain discrete

or continuous value functions for each attribute between its best and worst

performance.

c.) Derive scaling constants. Find a scaling constant λᵢ for each attribute such

that 0 ≤ λᵢ ≤ 1 and ∑ᵢ₌₁ⁿ λᵢ = 1.

Several techniques for doing this in a non-arbitrary fashion exist. Each

trades complexity (and burden on the DM) for theoretical correctness.
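The steps above can be sketched in a few lines of Python. This is a hedged illustration only: the criteria, ranges, and scaling constants are invented, and linear single-criterion value functions are assumed for simplicity:

```python
def linear_value(worst, best):
    """Linear single-criterion value function scaled so v(worst)=0 and v(best)=1."""
    return lambda x: (x - worst) / (best - worst)

# Steps a/b: scale and shape each single-criterion value function.
v_economy = linear_value(30.0, 56.0)   # fuel economy, mpg
v_comfort = linear_value(0.0, 10.0)    # comfort rating

# Step c: scaling constants (these must be elicited, e.g. by swing
# weighting, not chosen arbitrarily; they must sum to 1).
weights = {"economy": 0.6, "comfort": 0.4}

def V(economy_mpg, comfort):
    """Additive value function in the form of equation (2-2)."""
    return (weights["economy"] * v_economy(economy_mpg)
            + weights["comfort"] * v_comfort(comfort))

print(round(V(56.0, 10.0), 3))  # best on both criteria -> 1.0
print(round(V(43.0, 5.0), 3))   # midpoint on both criteria -> 0.5
```

The highest-scoring alternative under V is the most preferred, as in any additive value model.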

Figure 1-2 summarizes the overall process of building a value function, V(a),

using the additive value function in (2-2).

Figure 1-2: Additive Model Building Process

When the additive value function in (2-1) is constructed individually by more than

one expert for the same model, a process for merging their results into a single


representation is required. Two approaches – aggregation and consensus – are

shown as the final steps in the process. These are described presently.

Figure 1-2 shows functions identified as consistency validation. Consistency

validation in classical methods is the process of ensuring that the preferences

made by the DM satisfy the conditions required of a weak order (see Section

1.3.2) and that it is possible to construct a unique value function over the

preferences.

Figure 1-2 also identifies several methods relevant for constructing attribute value

functions and eliciting attribute weights (steps 4 and 5 respectively). In the

following sections, we elaborate on these in more detail.

1.3.8 Evaluating Component Value Functions

Component value functions can be evaluated using several techniques (step 2 in

Figure 1-2). Some techniques are easier to implement than others with respect to

the cognitive effort required by the DM when making judgments. However, as

[67] notes, ease-of-use is often obtained at the expense of theoretical rigor. In the

following sections, we provide an overview of the methods identified in Figure

1-2.

1.3.8.1 Standard Sequences

The standard sequence approach to constructing additive value functions is one of

a group of conjoint measurement techniques originating in the Social Sciences.

We summarize the description of standard sequences provided by [67].


The method of standard sequences seeks to replicate the theory of measurement

by establishing the equivalent of “unit length” and “concatenation” operations

(i.e. addition) for the structuring of preferences. The concept of length is equated

to preference – greater lengths are conceptually similar to greater preference. In

order to establish a standard unit of preference (i.e. “unit length”), preference

intervals on one attribute are measured using a preference interval on another

attribute.

As [67] notes, and despite its apparent elegance, there are several critical

drawbacks to the practicality of standard sequences. The most significant of these

include:

a.) Standard sequences may not be accurate or even possible when criteria are

discrete. This is due to the fact that a standard length measured in a base

criterion may not “fit” properly into the criterion being measured. That is,

its terminal point may lie somewhere between discrete points. Although

there are ways to compensate, they further complicate the process.

b.) The process uses indifference judgments rather than preference judgments

which can be difficult for experts to understand and work with. An

indifference judgment requires an expert to trade one quantity for another

until the expert no longer prefers one over the other.

c.) The process often requires experts to make judgments on fictitious or

constructed alternatives which is cognitively challenging for the decision

maker.


1.3.8.2 Direct Rating

Direct rating generates a value function by asking the expert to assess preference

values in relation to a pair of arbitrary points (generally assigned to the minimum

and maximum expected level of an attribute). Direct rating is very appealing

from a practical perspective due to its simplicity. However, it is not as

theoretically grounded as standard sequences since no standard of measure is

assessed across all attributes [67].

There are a number of variations on direct rating that result in value functions

[68]. However, Edwards advocates simply finding the minimum and maximum

plausible levels of a criterion and connecting them by a straight line without

spending too much time worrying about the shape of the value function [69]. The

argument for this approach is that adjusting the value function in between the

minimum and maximum is not likely to cause a significant deviation from a

straight-line function and will, in any case, be averaged out in aggregate. In the

case of discrete attributes (e.g. preference for color), Edwards suggests a simple

allocation to a scale of 0-100 [69].
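As a minimal sketch (with illustrative numbers, not taken from [69]), Edwards' straight-line approach amounts to nothing more than linear rescaling between the minimum and maximum plausible levels:

```python
# Edwards' pragmatic direct rating: anchor the minimum plausible level at 0 and
# the maximum at 100, and connect them with a straight line.
def direct_rating(x, x_min, x_max):
    """Linear value of level x on a 0-100 scale between x_min and x_max."""
    return 100.0 * (x - x_min) / (x_max - x_min)

print(direct_rating(5, 2, 9))  # level 5 on an attribute ranging 2..9
```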

1.3.8.3 Mid-value Splitting

This approach, demonstrated in Keeney et al. [54], is used to compute a value

function by determining equal trades between attributes. To illustrate, we follow

the example in Section 3.7 of [54]. Assume there are four value functions to be

assessed (v1, v2, v3, v4) and the minimum and maximum of attribute x1 are 2.0 and 9.0
respectively. If we let v1(2.0) = 0 and v1(9.0) = 1, then mid-value splitting
starts with finding the level xm for which v1(xm) = 0.5. The task of the expert is then to find:


(2.0, x2', x3', x4') → (xm, x2'', x3'', x4'')

such that this exchange is indifferent to

(xm, x2', x3', x4') → (9.0, x2'', x3'', x4'')

In other words, the judge should be able to find a value for xm such that going
from 2.0 to xm while trading from (x2', x3', x4') to (x2'', x3'', x4'') on the other three remaining
attributes is the same as going from xm to 9.0 while making the same trade in
the remaining attributes.

As Stewart notes, this procedure (and the lock-step method also identified by

[54]) can be “tedious and mystifying to the decision maker” so they are rarely

used in practice [66].

1.3.8.4 Bisection

Bisection is similar to mid-value splitting except that it considers only one

attribute at a time. As described in [68], the process of bisection involves a

recursive procedure of selecting the endpoints of an attribute range, say xL and xH,
and asking the expert to identify the level xM of the attribute at which
v(xM) − v(xL) = v(xH) − v(xM).
The value of xM is then the mid-value between xL and xH.

To illustrate, we use an example from [70] on the construction of a value function

for an office space problem. In this problem, there exists a choice among

locations for a decision maker’s business and one of the attributes is available

floor space. The smallest available area is 400 square feet while the largest is

1500 square feet. At the outset, the decision maker is asked to find the area that is


half-way between the most and least preferred (i.e. between 1500 and 400 square

feet). If the decision maker chooses, say, 700 square feet, he asserts that the

increase in value by going from 400 to 700 square feet is approximately equal to

the increase in value by going from 700 to 1500 square feet. The value at 700

square feet would be assigned 0.5. This procedure would be repeated for each of

the halves and then for each of the quarters as necessary to sketch a curve.

Finally, using this curve, the relative value of any sized area could be obtained.
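The curve sketching described above can be approximated in code. The sketch below is an illustration, not part of [70]; it linearly interpolates between the elicited points v(400) = 0, v(700) = 0.5, and v(1500) = 1:

```python
# Build a piecewise-linear value function from bisection-elicited points.
def make_value_function(points):
    """points: list of (attribute_level, value) pairs."""
    pts = sorted(points)
    def v(x):
        if x <= pts[0][0]:
            return pts[0][1]
        if x >= pts[-1][0]:
            return pts[-1][1]
        # linear interpolation on the segment containing x
        for (x0, v0), (x1, v1) in zip(pts, pts[1:]):
            if x0 <= x <= x1:
                return v0 + (v1 - v0) * (x - x0) / (x1 - x0)
    return v

v = make_value_function([(400, 0.0), (700, 0.5), (1500, 1.0)])
print(v(700))   # 0.5, the elicited mid-value
print(v(550))   # 0.25, halfway along the lower segment
```

Further bisection judgments (on each half, then each quarter) would simply add more points to the list, refining the curve.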

Shortcomings of the bisection approach include the assumption of a continuous

attribute as well as the fact that it, too, requires the expert to focus on choices that

may not be natural [68].

1.3.8.5 Linear Programming

With the correct problem formulation, linear programming techniques can be used

to obtain additive value functions. For a multi-attribute, additive value function, it

follows from (2-1) that if there are two alternatives X and Y that can be assessed

on n criteria, and if X ≻ Y, then

∑i wi vi(xi) > ∑i wi vi(yi)    (2-3)

and if X ∼ Y, then

∑i wi vi(xi) = ∑i wi vi(yi)    (2-4)

where vi(xi) and vi(yi) are the assessed scores of X and Y on the i-th criterion,

respectively. Given a set of constraints provided by the decision maker, the

problem reduces to finding the conditions under which (2-3) and (2-4) have a


solution [67]. That is, each inequality in (2-3) and (2-4) helps to define a feasible

region in n-dimensional space. If such a region exists, then any point within it

will satisfy all constraints. Without picking a single point, it is possible to obtain a

minimum and maximum value for each variable; these are found at the vertices that define the

feasible region [55]. In some cases, a feasible region may not be possible given

the set of constraints.

Linear programming is the approach taken by MACBETH (Measuring

Attractiveness by a Categorical Based Evaluation Technique) [71] as well as

UTilité Additive (UTA) methods. To illustrate the approach, we describe an

example (Example 1) provided in [71], in which an expert has provided pairwise
judgments of difference of attractiveness over a set of alternatives (the
judgments are tabulated in [71]).

A linear programming (LP) test can be conducted on these preferences to see if a

solution exists. In fact, MACBETH executes two LP tests; one to confirm ordinal

ranking and a second to confirm that an interval scale is feasible. The second test

will fail if the strength of attractiveness identified by the judge causes judgment

inconsistency.

For this set of judgments, LP constraints are established over the alternative
scores, with auxiliary variables introduced to demark the feasible region for
each score (the full set of inequalities is reproduced in Example 1 of [71]).

As [71] notes, an LP test on the constraints above will fail to find a solution

because the judgments contain an inconsistency associated with the difference of

attractiveness assigned to one or more of the judgments. MACBETH tests for

consistency after each change is made by the expert and suggests up to four

ways of addressing it.

The advantage of MACBETH is that the technique is built around ensuring that

the expert being questioned is not cognitively challenged to provide preference

information. Questions are restricted to comparing pairs of alternatives and, rather
than seeking numerical information, a set of linguistic representations is used

(e.g. “very strong”, “weak”). In addition, the information sought is not preference

differences but rather differences of attractiveness. Although the wording is subtle

(and the resulting information similar), the approach is easier for the expert to

comprehend.
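MACBETH's actual LP formulation is given in [71]. As a simplified stand-in, the sketch below tests whether a set of strength-of-attractiveness judgments admits any consistent scale by treating each judgment as a difference constraint and checking for negative cycles (a Bellman-Ford style test, not the MACBETH procedure itself; all judgments are illustrative):

```python
# Each judgment is encoded as a constraint v(x) - v(y) <= k. The system is
# feasible (an interval scale exists) exactly when the constraint graph has
# no negative cycle, which Bellman-Ford relaxation detects.
def feasible(nodes, constraints):
    """constraints: list of (x, y, k) meaning v(x) - v(y) <= k."""
    dist = {n: 0 for n in nodes}          # virtual source at distance 0
    for _ in range(len(nodes)):
        for x, y, k in constraints:
            if dist[y] + k < dist[x]:
                dist[x] = dist[y] + k
    # any remaining violated constraint implies a negative cycle
    return all(dist[x] <= dist[y] + k for x, y, k in constraints)

# v(a)-v(b) >= 2 and v(b)-v(c) >= 2 force v(a)-v(c) >= 4, so the judgment
# v(a)-v(c) <= 3 is inconsistent:
cons = [("b", "a", -2), ("c", "b", -2), ("a", "c", 3)]
print(feasible(["a", "b", "c"], cons))    # False
cons[2] = ("a", "c", 5)                   # relax the third judgment
print(feasible(["a", "b", "c"], cons))    # True
```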

1.3.9 Evaluating Scaling Constants

In this section, we describe four of the most prevalent methods for determining

the scaling constants in a classically based MCDA problem. These methods,


shown at Step 3 in Figure 1-2, include direct point allocation, swings, trade-off,

and rank-order-centroid.

1.3.9.1 Direct Point Allocation

In this approach, the expert is simply asked to distribute a fixed set of points

among each of the attributes. The allocation is then normalized to obtain scaling

constants that sum to one [72]. Although this method is quite simple from a

practical perspective, it is also the least rigorous because, as mentioned

previously, scaling constants should be related to the ranges of the criteria that

they represent [67].

An interesting approach to compensate for this lack of theoretical rigour is noted

in [72] where the ranges of the attributes are described to each DM prior to

eliciting the points.

1.3.9.2 Swings

The Swings method generally proceeds in two phases as described in [73]. In the

first part, the DM rank orders the criteria as follows:

a.) The DM is asked to imagine the worst possible alternative. This is the

alternative that is at its worst level in each criterion. For example, if the set

of alternatives were cars, the worst possible car would be the one with the

least preferred colour, the highest cost, the least fuel economy, etc.

b.) The DM is then asked to select a single criterion that would have the most

effect on the outcome of the decision when it goes from its worst to its


best possible level (e.g. least preferred to most preferred colour). This

criterion is ranked as 1st and is removed from further consideration for the

moment.

c.) The DM is asked to repeat the exercise for all remaining criteria.

When complete, the DM will have placed the criteria in an ordinal ranking

based on the effect each has on the outcome when moving across its entire

range.

When the ordinal ranking is complete, a process similar to direct rating can be

used to assign each criterion onto a scale. Alternatively, a set of 100 points can be

distributed amongst each criterion to reflect the degree to which the range of the

criterion affects the overall score [73].
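The arithmetic of the second phase is simple normalization. A sketch with assumed point allocations (the criteria names and points are illustrative only):

```python
# Swings, phase two: the DM gives 100 points to the highest-ranked swing and
# fewer to the others; normalizing yields scaling constants that sum to one.
swing_points = {"cost": 100, "fuel_economy": 60, "colour": 20}  # assumed DM input
total = sum(swing_points.values())
weights = {c: p / total for c, p in swing_points.items()}
print(weights)  # cost ≈ 0.556, fuel_economy ≈ 0.333, colour ≈ 0.111
```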

1.3.9.3 Trade-off

The trade-off method was introduced by [54] and involves considering “two

hypothetical alternatives that differ in two attributes only” [72]. To illustrate this

approach, we use an example presented in [72] in which there are two

alternatives, X and Y, and the DM is required to consider them with respect to a

pair of attributes that differ by some value. Letting a and b refer to these

attributes, and considering all other attributes equal, the expert must adjust one of the

attributes until both alternatives are equally preferred. As [72] notes, this results in

the following equation:

wa · va(Xa) + wb · vb(Xb) = wa · va(Ya) + wb · vb(Yb)


where wa and wb are the attribute weights. Repeating this procedure over all attributes
provides a system of equations that can be used to solve for the n weights of n
attributes.

The trade-off approach appears to be rarely used in practice. Although it is

theoretically sound, it can be cognitively challenging for DMs to find the right set

of equalities. More importantly, the shape of the individual value functions must

be known because specific attribute values (those at the elicited indifference points) are used to compute the

scaling constants [72].
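For two attributes, the indifference equation above fixes the ratio of the two weights, and normalization then resolves both. A sketch with assumed value-function scores (all numbers illustrative):

```python
# Indifference between X and Y implies wa*(va(Xa) - va(Ya)) = wb*(vb(Yb) - vb(Xb)),
# which fixes wa/wb; normalizing wa + wb = 1 then yields the scaling constants.
va_X, va_Y = 1.0, 0.4      # values of X and Y on attribute a (assumed)
vb_X, vb_Y = 0.2, 0.5      # values of X and Y on attribute b (assumed)
ratio = (vb_Y - vb_X) / (va_X - va_Y)   # wa / wb
wb = 1.0 / (1.0 + ratio)
wa = ratio * wb
print(round(wa, 3), round(wb, 3))       # 0.333 0.667
```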

1.3.9.4 Rank Order Centroid

The Rank Order Centroid, or ROC, weighting scheme [74] is an approximate

solution to determining weights. It begins in a manner similar to Swings in that

the DM must somehow place the criteria in rank order. However, the allocation of

points is performed by asserting that all scaling constants are greater than

zero and sum to one [73].

Combining a basic rank order and the constraint that scaling constants add up to

one defines a feasible region in n-dimensional space. However, precise weights

are required rather than just weight ranges, so the approach is to find the centroid

of the feasible region. The centroid is the error minimizing solution. The smaller

the feasible region, the smaller the error will be.

A general equation for each weight is given by [73] as


wk = (1/n) ∑ from i = k to n of (1/i)    (2-4)

where n is the number of attributes and wk is the weight of the attribute ranked
k-th. Edwards [73] notes that the ROC approach

makes the elicitation process easier because generation of the weights does not

require DM input (the only task the DM is required to do is rank order the

attributes which is, in any case, cognitively less challenging than determining the

actual weights on a scale). Edwards [73] points to simulations in [74] to suggest

that the ROC approach will locate the best weight set 75%-80% of the time and,

for those occasions in which it does not, the average loss is less than 2% [55].
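The ROC weights can be computed directly from the rank order. A short sketch of the formula above:

```python
# Rank Order Centroid weights: for n attributes, the weight of the k-th ranked
# attribute is w_k = (1/n) * sum_{i=k}^{n} 1/i.
def roc_weights(n):
    return [sum(1.0 / i for i in range(k, n + 1)) / n for k in range(1, n + 1)]

w = roc_weights(4)
print([round(x, 4) for x in w])  # [0.5208, 0.2708, 0.1458, 0.0625]
print(round(sum(w), 10))         # 1.0
```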

1.3.10 Summary

This section provided an overview of classical MCDA and some of the techniques

used to form the components of an additive value function model. In the next

section, we consider the Analytic Hierarchy Process which, while considered under

the umbrella of classical approaches, is a significant departure from utility and

value theory.

1.4 Explicit Aggregation Based on Analytic Hierarchy Process

1.4.1 Overview

The Analytic Hierarchy Process (AHP) is an approach to MCDA that was

introduced by Saaty [75]. It has garnered considerable attention and is widely

used in decision theory “presumably because it elicits preference information

from the decision makers in a manner which they find easy to understand” [56].


AHP has similarities to classical methods for the solution of multi-criteria

decision problems. For example, it involves decomposing the decision problem

into component assessment criteria, performing comparative judgements on those

criteria, and then synthesizing an overall ranking using an additive rule [76]. The

general concept is shown for a three level hierarchy in Figure 1-3 (modified Fig.

1.1. from [77]). This figure shows the comparative or relative mode of AHP [78]

in which all of the alternatives are compared to each other (by pairs) under each

criterion and ranked on a ratio scale. For example, under the criterion colour,

every car would be compared to each other to find the preference order for the

cars with respect to colour. An important side-effect of the relative mode is that

every potential alternative is required at evaluation time. Insertion of a new

alternative at a later time requires that the process be redone.

Figure 1-3: Analytic Hierarchy Process in Relative Mode

Each criterion in Figure 1-3 is considered to be an important aspect or attribute of

the decision problem. Similar to classical methods, each criterion is assumed to be


preference independent of all others. When dependence between criteria is

unavoidable, the Analytic Network Process (ANP) is appropriate [78].

During the process of evaluation, each criterion is assigned a priority (often

referred to as a weight) and each alternative is assigned a local priority within a

criterion. Both the criteria priorities and the local priorities are assigned on [0,1],
and each set sums to 1. Therefore, the computation of an alternative’s ranking R(ai) is
given by

R(ai) = ∑j wj · lj(ai)    (2-5)

where there are n alternatives, wj is the priority of criterion j, and lj(ai) is the local
priority of alternative ai on criterion j [79].

Although there are fundamental similarities between (2-5) and (2-2), they are

nevertheless quite different. The comparison component of AHP returns judgment

information on a ratio scale whereas almost all utility-based methods deal with

interval scales of measurement. Salo et al. [79] determined that (2-5) and (2-2)

could be made consistent if the questioning procedure in AHP was modified to

seek ratios of preference differences using a common element rather than using

just ratios of elements.

While the AHP is both popular and widely used, it has received a significant

amount of criticism in the literature. We touch upon this criticism briefly at the

end of this section.

1.4.2 An Example


To illustrate the AHP, we borrow and simplify an example from [78]. In this

example, the context for the decision problem is to select the best employee. The

initial problem decomposition is shown in Figure 1-4.

Figure 1-4: Candidate Selection Problem Using AHP

A decision maker (DM) has determined that the criteria for “best employee” are

Education, Dependability, and Experience. There are three candidates for the job.

Priorities (or weights) for each criterion are determined using a process of pairwise

comparison (described shortly). Pairwise comparison is also used to determine the

local priorities for each candidate under each criterion. These priorities are shown

in Figure 1-4 enclosed in brackets. Note that the criteria and local priorities each

sum to 1. The ranking for each candidate is given by (2-5) as follows:

The score of each candidate is obtained by multiplying the candidate's local
priority under each criterion by that criterion's priority and summing over the
three criteria; the numeric priorities are those shown in Figure 1-4.
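Since the numeric priorities from [78] are not reproduced here, the aggregation in (2-5) can be sketched with assumed criteria and local priorities (all values illustrative):

```python
# Illustration of (2-5): weighted sum of local priorities. All numbers below
# are assumed for the sketch, not taken from [78].
criteria_w = {"education": 0.5, "dependability": 0.3, "experience": 0.2}
local = {  # local priorities of three candidates under each criterion
    "education":     {"c1": 0.6, "c2": 0.3, "c3": 0.1},
    "dependability": {"c1": 0.2, "c2": 0.5, "c3": 0.3},
    "experience":    {"c1": 0.3, "c2": 0.3, "c3": 0.4},
}
ranking = {c: sum(criteria_w[j] * local[j][c] for j in criteria_w)
           for c in ("c1", "c2", "c3")}
print(ranking)  # c1 ≈ 0.42, c2 ≈ 0.36, c3 ≈ 0.22; the scores sum to 1
```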


Note that the example in Figure 1-4 is a relatively simple decision problem. In

some cases, multiple levels of criteria could occur. In addition, it may be

necessary to break certain criteria up so that alternatives can be scored on criteria

that are similar in magnitude.

In Figure 1-5 (modified from [78]), we show the absolute measurement mode of

AHP for the same example. In this case, a set of performance scales are

considered at the bottom layer instead of the candidates themselves. Each level

within the performance scale (e.g. outstanding, above average, etc.) is given a

numeric ranking and then each candidate is assigned to one of the performance levels

by the DM. This allows the preference structure to be independent of the

candidate so that any number of candidates can be evaluated at any time.

Computing the score for each candidate is performed as before.

Figure 1-5: Candidate Selection Example Using AHP in Absolute Measurement Mode


1.4.3 Theoretical Basis of Original AHP

In the original version of the AHP proposed by Saaty [75], alternatives were

scored and weights determined using a process of pairwise comparison. The

fundamental concept being exploited by the AHP is that the human mind is

capable of directly comparing a limited number of items at the same time and

within a given magnitude. Pairwise comparison relates only two elements at a

time.

In a pairwise comparison, the DM is asked to describe “how many times more

preferred is one element over another”. Such questioning results in the creation of

a ratio scale of measurement. For each comparison, the “lesser” of the two

elements being compared is the unit of measure and the larger is estimated to be

some multiple of that unit [78].

If there are n criteria and m alternatives, then the questioning procedure will result
in n comparison matrices of the form

A = [aij],  i, j = 1, …, m

where aij are individual comparisons of alternative i to alternative j. Saaty noted

that such a comparison matrix is reciprocal – elements on the diagonal (aij
where i = j) equal 1 and elements reflected in the diagonal are reciprocals (aji =
1/aij). Therefore, to complete a matrix, a maximum of m(m − 1)/2
comparisons are required.
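The reciprocal structure means only the upper-triangle judgments need to be elicited. A sketch (the judgment values are illustrative):

```python
# Complete a reciprocal comparison matrix from the m(m-1)/2 upper-triangle
# judgments: diagonal entries are 1 and a_ji = 1 / a_ij.
def reciprocal_matrix(upper):
    """upper: dict {(i, j): a_ij} for i < j, indices 0..m-1."""
    m = max(j for _, j in upper) + 1
    A = [[1.0] * m for _ in range(m)]
    for (i, j), a in upper.items():
        A[i][j] = float(a)
        A[j][i] = 1.0 / a
    return A

A = reciprocal_matrix({(0, 1): 2, (0, 2): 4, (1, 2): 2})
print(A)  # [[1.0, 2.0, 4.0], [0.5, 1.0, 2.0], [0.25, 0.5, 1.0]]
```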


In the original version of AHP, Saaty observed that, in a cardinally consistent

matrix of comparisons (i.e. aij · ajk = aik for all i, j, k), the scale can be recovered by

computing the principal right eigenvector. To illustrate, we cite an example

provided in [78]. Assume that a set of stones with known weights
w = [w1, w2, …, wn] is provided to us and we form matrix A from the weight
ratios:

A = [wi/wj]

This matrix is consistent. If we let w represent the vector of stone
weights, then the following is a valid transformation:

(A w)i = ∑j (wi/wj) wj = n wi,  that is,  A w = n w

If we were provided with the ratio values aij = wi/wj rather than the weights
themselves (i.e. w) for the perfectly consistent matrix A, then we could recover
w by solving the system of equations A w = n w, or (A − nI) w = 0. The vector w
is the principal right eigenvector of the matrix A with an associated eigenvalue n.

However, in the case of the AHP, the DM can only provide estimates of the ratios

in A. Thus, A is not likely to be a perfectly consistent matrix. Saaty proposed that

perturbation theory would allow the eigenvalue approach to be used even with

inconsistent matrices (e.g. aij · ajk ≠ aik) as long as a certain consistency

threshold could be maintained [80]. Saaty proposed a consistency index (CI)


which would allow one to measure the matrix of comparisons for validity. The

consistency index is computed as

CI = (λmax − n) / (n − 1)

where n is the matrix dimension and λmax is the maximum eigenvalue. A
consistency ratio (CR) is formed by computing

CR = CI / RI

where RI is a Random Index – the average CI of 500 randomly populated

matrices. Saaty provided the RI values for various values of n, along with
an arbitrary consistency threshold of CR = 0.1. Consistency

ratios above this threshold were considered to be unacceptable.
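A sketch of the eigenvector method together with the CI and CR computations (power iteration in plain Python; RI = 0.58 is Saaty's published value for n = 3, and the matrix is an illustrative, perfectly consistent one):

```python
# Derive priorities via the principal right eigenvector (power iteration),
# then compute Saaty's consistency index and ratio for a 3x3 matrix.
def priorities_and_cr(A, iters=100, ri=0.58):
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):                  # power iteration
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam_max = sum(Aw[i] / w[i] for i in range(n)) / n
    ci = (lam_max - n) / (n - 1)
    return w, ci / ri                       # (priorities, CR)

A = [[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]]  # consistent: a12 * a23 = a13
w, cr = priorities_and_cr(A)
print([round(x, 3) for x in w])  # [0.571, 0.286, 0.143], i.e. 4/7, 2/7, 1/7
print(abs(cr) < 1e-6)            # True: a consistent matrix has CR ≈ 0
```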

In the context of preference theory, each aij is an estimate made by the DM of the
ratio pi/pj, where pi and pj represent the relative positions of elements i and j on

a “fundamental scale of natural numbers.” This scale represents the “intensity of

importance” for subjective criteria and was defined by Saaty to be on the integer

range [1,9]. The scale only goes up to 9 because it is based on the notion that

people have a “limit to their ability to compare the very small with the very large”

[78] and that comparisons should be made between elements that are within an order of

magnitude of preference to avoid errors in accuracy [65]. When objects differ by

an order of magnitude, they should be grouped separately and compared using

pivots [78].


1.4.4 Criticism

The AHP has received much criticism since its introduction. There have been

several animated debates in the literature concerning the theoretical validity of

several aspects of the methodology. We summarize the most significant criticisms

below.

1.4.4.1 Validity of Theoretical Framework

It has been argued by Barzilai that the AHP is invalid for a number of reasons –

not the least of which is that it is based on the work of J.R. Miller who “was not a

mathematician and his methodology is based on mathematical errors” [59].

Barzilai also argues that ratio scales, upon which the AHP is based, cannot exist

for the concept of preference because such scales require an absolute zero and

subjective concepts have no absolute zero. However, Wedley [65] asserts that

such a point does exist on preference intensity scales but that it is rarely

acknowledged during preference elicitation possibly due to the fact that it is not

explicitly represented on the derived scales. Wedley argues that the zero reference

takes the form of an “absence of preference” and that, by “making zero more

explicit, AHP/ANP methodology can be made more immune to errors.”

1.4.4.2 Calculation of Priorities

Some authors (e.g. [81]) have cast some doubt on the validity of the eigenvector

method (EM) for determination of priorities (weights). However, [80] notes that

EM is not the only way to compute the weights – Cho and Wedley [82] have

performed a comparative analysis of 18 different methods for computing weights


from a comparison matrix. In 13 of those methods, various distance minimization

algorithms are applied knowing that an estimate of a ratio, aij, should be equal to
the actual ratio of weights, wi/wj, if the estimate is perfect. Imperfect estimates

lead to errors which are measured using distance minimization. The remaining

methods use algorithms that will produce correct results when there are no

estimation errors (including the right eigenvector method).

Based on their analysis, [82] recommends the use of either the logarithmic least

squares (i.e. simple geometric mean) or the normalized column sum method. They

base their recommendation on good performance given matrices containing both

small and large errors between comparison estimates and actual values.
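The logarithmic least squares solution reduces to the row geometric mean, normalized to sum to one. A sketch (the comparison matrix is illustrative):

```python
import math

# Logarithmic least squares / row geometric mean: each priority is the
# geometric mean of its row in the comparison matrix, normalized.
def geometric_mean_priorities(A):
    gms = [math.prod(row) ** (1.0 / len(row)) for row in A]
    total = sum(gms)
    return [g / total for g in gms]

A = [[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]]
print([round(w, 3) for w in geometric_mean_priorities(A)])  # [0.571, 0.286, 0.143]
```

For a consistent matrix like this one, the geometric mean and the eigenvector method agree; they diverge only as inconsistency grows.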

An additional approach for computing AHP priorities is to use a method that is

similar to the ROC approach described previously. This approach is described by

[83] and has the advantage that less work needs to be done by the DMs (ROC

computes the weights over each criterion so that the DMs do not have to).

1.4.4.3 Rank Reversal

The possibility of rank reversals was first exposed by Belton and Gear [84] in

1983 and has been a significant concern since then. In its simplest form, a rank

reversal is a situation in which the ranking of a set of alternatives changes due to

an artefact of the methodology instead of an actual change in ranking preferences.

For example, an existing rank structure should not be affected by either the

addition or removal of an alternative or the addition of a criterion for which all

existing alternatives score equally [85]. Note, however, that rank reversals do not


affect AHP in absolute comparison mode because performance levels are being

assessed instead of the alternatives themselves.

1.4.4.4 Choice of Fundamental Scale

During pairwise comparison, a linguistic scale is used to elicit “intensity of

importance” from the DM. These verbal assessments must be converted to

numeric form in order for mathematical operations to be carried out. In the

original version of AHP, a “fundamental scale of absolute numbers” on [1,9] was

introduced by Saaty along with a set of linguistic descriptors as shown in Table

1-2 (Table 1.1 from [77]).

Since its introduction, the appropriateness of the scale has been the subject of

considerable debate. For example, it has been argued that the introduction of an

arbitrary upper bound (i.e. 9) may force inconsistent judgments to be made [80].

For example, because the judgments are assumed to be transitive, aik = aij · ajk.
But if the DM chooses two judgments whose product exceeds 9 (e.g. aij = 4 and
ajk = 3), then the proper transitive response is aik = 12, which is outside of the
given scale.

Table 1-2: Saaty's Fundamental Scale (Table 1.1 of [77])

Intensity of Importance   Definition                                Explanation

1                         Equal importance                          Two activities contribute equally to the objective

2                         Weak

3                         Moderate importance                       Experience and judgment slightly favor one activity over another

4                         Moderate plus

5                         Strong importance                         Experience and judgment strongly favor one activity over another

6                         Strong plus

7                         Very strong or demonstrated importance    An activity is favored very strongly over another; its dominance demonstrated in practice

8                         Very, very strong

9                         Extreme importance                        The evidence favoring one activity over another is of the highest possible order of affirmation

However, Harker [76] points out that any scale is appropriate – even one without

an upper bound. He argues that the theoretical framework of AHP is not

invalidated by an arbitrary choice of scale and that further research should be

conducted to find optimal scales for various problem formulations. A number of

other numeric scales have been proposed and are summarized in Table 1-3 (Table

2 from [80]).

Table 1-3: Alternative Scales Proposed for AHP (Table 2 from [80])

Scale Type                                          Definition                       Parameters

Linear (Saaty, 1977)                                c = a · x                        a > 0

Power (Harker & Vargas, 1987)                       c = x^a                          a > 1

Geometric (Lootsma, 1989)                           c = a^(x − 1)                    a = 2, √2, or other step

Logarithmic (Ishizaka, Balkenborg, & Kaplan, 2010)  c = log_a(x + a − 1)             a > 1

Root square (Harker & Vargas, 1987)                 c = x^(1/a)                      a > 1

Asymptotical (Dodd & Donegan, 1995)                 c = tanh⁻¹(√3 (x − 1) / 14)

Inverse linear (Ma & Zheng, 1991)                   c = 9 / (10 − x)

Balanced (Salo & Hamalainen, 1997)                  c = w / (1 − w)                  w = 0.45 + 0.05x

Ishizaka et al. note that there does not appear to be a scientific consensus as to

which scale or set of linguistic terms is the most appropriate in a given context

[80]. A study conducted by Elliot showed that the choice of scale has a significant

effect on attribute weights and can therefore affect the decision outcome in an

MCDA problem [86]. The study compared integer, balanced, and power scales in

both a hypothetical and real engineering setting. Elliot concludes that the choice

of scale would benefit by letting the DM choose a scale appropriate to his or her

“way of thinking” during pairwise comparison of elements. In order to avoid

complicating the elicitation process, the author recommends introducing weights

calculated on all three scales and letting the DM choose which set is most

reflective of his or her preferences. This is possible when paired comparisons are

made using linguistic terms (i.e. the terms in Table 1-2) instead of direct

numerical coding by the DM.
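The sensitivity Elliot describes is easy to see numerically. The sketch below contrasts the linear scale with the balanced scale, using the w = 0.45 + 0.05x form commonly attributed to Salo and Hamalainen (treat the formula as an assumption of this sketch):

```python
# Map the same verbal scale point x in 1..9 through two numeric scales; the
# resulting comparison values (and hence derived weights) differ.
def linear(x):
    return x

def balanced(x):                  # assumed Salo-Hamalainen form
    w = 0.45 + 0.05 * x
    return w / (1 - w)

for x in (1, 3, 5, 9):
    print(x, linear(x), round(balanced(x), 3))
# Both scales agree at the endpoints (x = 1 -> 1, x = 9 -> 9), but x = 5 maps
# to 5 on the linear scale and to roughly 2.333 on the balanced scale.
```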


1.4.5 Variations of AHP

Several variations of AHP have been proposed to make use of different scales,

different priorities derivation methods and different aggregation methods. For

example, the Ratio Estimation in Magnitudes or deci-Bells to Rate Alternatives
which are Non-DominaTed (REMBRANDT) method (described in [87] in relation to AHP) uses a logarithmic

scale for making comparisons, employs a geometric mean to avoid rank reversals,

and replaces the arithmetic mean aggregation with a product-based aggregation

rule. Other variations include the integration of Dempster-Shafer theory [62] and

the use of linear programming to find the criteria weights [88].

1.4.6 Summary

The process of constructing a preference model using AHP is shown in Figure

1-6. Like the classical approaches, it begins by structuring the problem into a

hierarchy of criteria. This is followed by elicitation of local priorities which is

fundamentally similar to obtaining a value function when performing AHP in

absolute comparison mode. As shown in Figure 1-6, there are a number of scale

options that can be used for representing preference comparisons.


Figure 1-6: AHP Process Summary

In step 3, after all of the paired comparisons between elements of a criterion have

been made, a consistency check is performed. If the consistency index exceeds a

certain (arbitrary) value, then the DM is required to revisit his or her matrix of

comparisons and find out where the inconsistency originates. This repetitive

process is represented by the dotted arrow.
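As an illustration of this consistency check, the following is a minimal Python sketch. It is not taken from the text: the example matrix is invented, priorities are approximated by normalized geometric row means, and the random indices (RI) and the commonly cited 0.1 threshold follow Saaty's published conventions.

```python
from math import prod

# Saaty's random consistency indices for n = 3, 4, 5.
RI = {3: 0.58, 4: 0.90, 5: 1.12}

def consistency_ratio(A):
    """Consistency ratio CR = CI / RI for a pairwise comparison matrix A."""
    n = len(A)
    # Approximate the priority vector by normalized geometric row means.
    gm = [prod(row) ** (1.0 / n) for row in A]
    total = sum(gm)
    w = [g / total for g in gm]
    # Approximate the principal eigenvalue lambda_max via (A w)_i / w_i.
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)  # consistency index
    return ci / RI[n]

# A perfectly consistent matrix (a_ij = w_i / w_j) yields CR close to 0;
# a CR above roughly 0.1 would send the DM back to revisit the comparisons.
weights = [0.5, 0.3, 0.2]
A = [[wi / wj for wj in weights] for wi in weights]
```

A matrix containing contradictory judgments (e.g. A preferred to B, B to C, but C strongly preferred to A) produces a CR well above the threshold.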

When all local priorities and criteria weights have been elicited, results can be

aggregated or a consistency measure can be calculated. Aggregation is most often

carried out for AHP using either arithmetic or geometric mean of individual

pairwise comparison matrix judgments [89][80].

1.5 A Note on Paired Comparison in Classical and AHP Methods


Paired comparison is a procedure that is fundamental to AHP but it has also been

used in classical contexts – specifically within the MACBETH process.

Therefore, it may be compelling to believe that the same linguistic preferences

can be applied to either (or both) methodologies. However, there is a fundamental

difference between MACBETH and AHP with respect to the interpretation of

scales. As we noted earlier, the original AHP collects preferences on a ratio scale

whereas MACBETH generates an interval scale. Thus, the manner in which

questions are asked during elicitation will differ depending on which scale is

being used. In AHP, questions reflect how many times more one option is

preferred over another [65] whereas MACBETH asks how much more one option

is preferred over another [71]. The former elicits a ratio whereas the latter elicits a

difference.

The linkage between paired comparisons for interval and ratio scales has been

explored by [72] who show that the additive model in AHP can generate points on

an interval scale if the DMs are asked to compose ratios of differences. For

example, a DM might be asked which of two cars is more preferable in relation to

a base car [72]. If we label the cars under consideration C1, C2, and B (the base

car), the DM is then asked to make the following ratio judgement (assuming that

the DM chooses C1 as most preferred):
\[ \frac{v(C_1) - v(B)}{v(C_2) - v(B)} \]

That is, the DM judges how many times greater the difference in preference between C1 and the base car is than the difference between C2 and the base car.

While this approach provides a linkage between AHP and interval based methods,

it still requires a shift in the way the questioning procedure is performed. Guh et


al. [90] attempt to make the questioning procedure transparent by redefining the paired comparison approach in AHP to use interval comparisons directly. In doing so, they provide a roadmap for direct comparison between AHP and interval methods that also use paired comparisons (e.g. MACBETH).

1.6 Other Methods

Figueira et al. [51] consider a set of MCDA techniques that do not fit neatly under

classical methods. In Figure 1-1, we have taken their cue and have included a

grouping called “Other Methods” which we briefly consider in this section.

For the most part, [51] describes methods that are concerned with integrating

uncertainty into the MCDA process. This is necessary because preference elicitation in the classical approaches is generally crisp. That is, regardless

of whether linguistic or numerical representations are being used by the DM, a

quantitative representation of a DM’s preference usually results in a crisp number.

As [61] notes, results using models based on crisp numbers may be misleading

simply because there is no mechanism to capture a DM’s preference uncertainty.

A number of approaches have been proposed in the literature to deal with

such uncertainty. Some are incremental modifications of existing methods while

others offer substantially new techniques. For example, Fuzzy AHP (e.g. [91]) is a

widely used incremental modification of AHP that uses fuzzy sets to address uncertainty. AHP

has also been used with Dempster-Shafer theory (DST) [92] to allow for

uncertainty or lack of knowledge during the comparison of alternatives [93].


Broadly, the question of using DST in MCDA was discussed in [108]. A novel

(and widely used) approach that also uses DST for decision making is Evidential

Reasoning (ER) [94]. ER uses the concept of a belief structure from DST to

model uncertainty and is capable of dealing with the absence of data as well.


1.7 Group MCDA

1.7.1 Active versus Passive Group Elicitation

When more than one expert is consulted to develop a preference model, there are

two approaches that can be taken to arrive at a final result. The first is to conduct

an interactive group preference elicitation session. We define this to be a session

in which all experts take part simultaneously in both time and space. They sit in

the same room and discuss each elicitation task together as a group. A key

advantage of an active elicitation session is that it does not need an aggregation

process. That is, since each item is considered by the group at the same time, the

result is representative of the entire group.

However, there are a number of drawbacks that bear consideration. For example,

[95] notes that such group elicitation processes risk losing important information

contained in dissenting views or preferences. More importantly, group dynamics

(over which there may be little control) can play a significant part in shaping the

outcome:

Each group member brings decision-making baggage (e.g., academic

and socio-economic back-grounds, an agenda, bias, and ego) that

affects the process. Participants may view group members as friends,

allies, or foes. Some participants may have more power in the

organization than others, so that strong and weak voices may not be

heard equally. Furthermore, group members may conceal true agendas

or try to distort their pairwise comparisons. ([89], p. 1436)


In addition, each time there is a requirement to update the model provided by the

group’s preferences, another group session must be arranged. This is a natural but

significant deterrent toward practical application.

A second group approach is to conduct a passive preference elicitation session.

We define this to be a session in which a set of elicitation questions is considered

individually by each member of a group of experts. This approach has the

advantage of reducing the effects of group dynamics and allowing the elicitation

process to be spread out over space and time. However, it does require an

aggregation method – a way of combining the individual inputs of each expert

into a representative group elicitation. The Delphi technique [99] is an approach

that shares some characteristics of a passive group elicitation methodology.

1.7.2 Group Consensus

In both the active and passive approaches, a consensus is usually sought.

Consensus seeking will typically require the elicitation approach to be iterated

several times since consensus among a group of experts with varying experience

cannot be expected immediately.

Consensus is normally associated with total agreement but this is not always

feasible or even desirable. Therefore, Kacprzyk et al. [96] introduced the notion

of “soft” consensus – the degree to which experts agree on a particular topic. The

consensus measure and a pre-defined threshold will determine whether an

elicitation approach needs to be reiterated by a group at the end of each round.

Consensus models have been developed for many of the MCDA approaches that


can be extended for use in passive elicitation. For example, Saaty and Vargas

[103] suggest a consensus measure for the geometric mean at the level of a single

judgement within a matrix of judgements. This measure calculates the “dispersion

of the judgments around their geometric mean”. If the dispersion is “too large”,

then the judgement is deemed not representative of the group and can result in

violations of Pareto Optimality. Dong et al. present two consensus models for

AHP when row geometric mean is used to calculate priorities [104]. Herrera-

Viedma et al. [98] provide consensus measures on individual matrix judgments

for a number of preference elicitation formats. The consensus measures used

depend largely on the structure of the problem and the specific MCDA approach

taken to address it. Therefore, a wide variety of proposed measures exist in the

literature.

The collection of preferences may happen over extended periods of time and

space and it may not be feasible to conduct multiple rounds of preference

adjustment in order to increase the measure of consensus. In this situation,

consensus can be used to provide confidence assessments about the preference

information available. The “softer” the consensus, the less confidence can be

placed in the information available.
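To make the notion of dispersion concrete, the following sketch computes an ordinary geometric standard deviation of a set of judgments around their geometric mean. This is offered only as an illustration of "dispersion of the judgments around their geometric mean"; it is not claimed to be the exact measure proposed in [103].

```python
from math import exp, log, sqrt

def geometric_dispersion(judgments):
    """Geometric standard deviation of positive judgments around their
    geometric mean; 1.0 indicates perfect agreement among experts."""
    n = len(judgments)
    gm = exp(sum(log(x) for x in judgments) / n)
    return exp(sqrt(sum(log(x / gm) ** 2 for x in judgments) / n))
```

A group whose judgments on one pairwise comparison are identical scores 1.0; widely spread judgments score higher, signalling that an aggregate of that judgment may not be representative of the group.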

1.7.3 Preference Aggregation

Mathematical aggregation of group preferences (whether or not consensus is

sought) can be performed at the level of individual judgments if the preference

structures used to capture the preferences are equivalent (or can be made


equivalent through transformation). This strategy is known as aggregation of

individual judgments (AIJ) [100]. In some cases, depending on the MCDA

process being used, aggregation can be deferred. For example, in the AHP, the

weight vectors computed from each judgment matrix may be aggregated into a

group weight vector (this has been termed aggregation of individual priorities

(AIP) in [100]).

1.7.3.1 Aggregation of Individual Judgments (AIJ)

1.7.3.1.1 Geometric and Weighted Arithmetic Means

In the AHP, individual judgments are captured in a multiplicative preference

structure (i.e. pairwise comparison matrices using ratio judgments). Two methods

of aggregation that have been used widely in the past for this structure include the

geometric and weighted arithmetic means [50]. Using notation from [50], if there are n experts and a_k (k = 1, ..., n) represents expert k's judgment on an attribute, then the geometric aggregate for the attribute is

\[ a^{G} = \left[ \prod_{k=1}^{n} a_k \right]^{1/n} \]

In the case of the weighted arithmetic mean, some process is used to determine and assign a weight w_k to each of the experts, with w_1 + ... + w_n = 1. Then, the weighted arithmetic aggregate for the attribute is

\[ a^{W} = \sum_{k=1}^{n} w_k a_k \]
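The two aggregation rules can be sketched in Python as follows (the judgment values in the usage example are hypothetical):

```python
from math import prod

def geometric_aggregate(judgments):
    """Geometric mean of the experts' judgments on one attribute."""
    n = len(judgments)
    return prod(judgments) ** (1.0 / n)

def weighted_arithmetic_aggregate(judgments, weights):
    """Weighted arithmetic mean; the expert weights are assumed to sum to 1."""
    return sum(w * a for w, a in zip(weights, judgments))
```

For two experts judging the same attribute as 2 and 8, the geometric aggregate is 4, while a weighted arithmetic aggregate with weights 0.25 and 0.75 gives 6.5.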

It has been argued in [101] that the arithmetic mean is not a valid approach

because it does not preserve the reciprocal property of each comparison. Debate

about the validity of the geometric mean has also occurred with respect to

violations of Pareto Optimality (i.e. if each expert finds that A ≻ B, then the aggregation should find that A ≻ B) [80][50]. However, such violations are

understood to be a result of judgment dispersion [80]. In AIP, a common

approach has been to use the weighted arithmetic mean [80][89].

1.7.3.1.2 Ordered Weighted Average

The ordered weighted average (OWA) is an aggregation operator introduced by

Yager [102]. Given a set of objects (a_1, ..., a_n) and a set of weights (w_1, ..., w_n) with w_1 + ... + w_n = 1, an OWA function computes the ordered weighted average by: i.) reordering the elements of (a_1, ..., a_n) from largest to smallest to obtain (b_1, ..., b_n); and ii.) computing

\[ \mathrm{OWA}(a_1, \ldots, a_n) = \sum_{i=1}^{n} w_i b_i \]
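A minimal sketch of the OWA operator (the weight and value vectors in the usage example are invented):

```python
def owa(weights, values):
    """Ordered weighted average (Yager): reorder the values from largest
    to smallest, then take the weighted sum with the positional weights."""
    b = sorted(values, reverse=True)
    return sum(w * x for w, x in zip(weights, b))

# The weight vector [1, 0, ..., 0] recovers the "or" (max) extreme
# and [0, ..., 0, 1] the "and" (min) extreme.
```

For example, owa([0.4, 0.3, 0.3], [7, 10, 5]) reorders the values to [10, 7, 5] and returns 7.6.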

Yager [102] introduced the concept of OWA operators in the context of

aggregating criteria in MCDA problems. Specifically, he noted that there exist

two extremes when considering the selection of candidate alternatives. We either

want all criteria to be satisfied (“and” operator) or we want any of the criteria to

be satisfied (“or” operator). The focus of [102] was to introduce an operator that could

describe the space between these extremes. Such an operator would allow for the

selection of alternatives that satisfy “most” of the criteria. Indeed, [102] shows


that the weights in the OWA operator can be interpreted as a quantifier, Q, that expresses this fuzzy relationship.

Herrera-Viedma et al. [98] developed a consensus model for a group MCDA

scenario that uses an OWA-based aggregator for combining expert judgments.

The weights that they used with the OWA operator were based on the fuzzy

majority criterion using the fuzzy quantifier “most”. Figure 1-7 depicts the

aggregation process.

Figure 1-7: Multi-Expert Aggregation Using OWA

The model in [98] allows DMs to express preferences in four different preference

structures (i.e. utility functions, ordinal rankings, multiplicative preference

relations, and fuzzy preference relations). However, they use a set of transforms

to convert all preference information into fuzzy preference matrices. The

converted fuzzy preferences for eight DMs are shown in Figure 1-7 (the data in

the figure is taken from the example presented in [98]).


For simplicity, we show the computation of a corner value in the aggregate

preference structure. This begins with the extraction of the corner values from all

DM individual preference relations which forms an 8 element vector. This vector

is reordered from largest to smallest and then multiplied by a weight vector. The

resulting value becomes the corner value in the aggregate preference structure.

As [102] notes, the “weighting vector … is a manifestation of the quantifier

underlying the aggregation process”. In [98], the fuzzy quantifier “most” is

defined using the expression:

\[ Q(r) = \begin{cases} 0 & r < a \\ \dfrac{r - a}{b - a} & a \le r \le b \\ 1 & r > b \end{cases} \]

where a = 0.3, b = 0.8, and r is some value on (0,1). It is easy to see that Q(r) has a non-zero value only when the combined preference of all experts is at least 0.3. Similarly, Q(r) is a maximum when r is greater than 0.8. Using this quantifier, individual weights are computed as follows [98][102]:

\[ w_i = Q(i/n) - Q((i-1)/n) \]

where i = 1, ..., n and n is the number of DMs. The weights define a marginal increase in Q as one iterates through i. Our expectation is that the weights

computed in this manner, using the fuzzy quantifier “most”, will eliminate high and low values (i.e. those that contribute too much and too little preference). Indeed, this is the case here: for n = 8, w_1 = w_2 = w_8 = 0.
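The weight computation can be sketched as follows, with n = 8 matching the eight DMs in the example from [98]. The resulting vector assigns zero weight to the two largest and the smallest ordered positions, which is what eliminates the extreme values:

```python
def Q(r, a=0.3, b=0.8):
    """Fuzzy quantifier "most": 0 below a, 1 above b, linear in between."""
    if r < a:
        return 0.0
    if r > b:
        return 1.0
    return (r - a) / (b - a)

n = 8  # number of decision makers
owa_weights = [Q(i / n) - Q((i - 1) / n) for i in range(1, n + 1)]
```

The weights sum to 1, and the first two and the last entries are exactly zero.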

1.7.3.1.3 Aggregation by Dempster-Shafer Theory

In the Evidential Reasoning (ER) approach [61], Dempster-Shafer theory (DST)

[93] is used to manage uncertainty at the level of individual judgments. However,

the ER approach can be extended to include multiple experts in a reasonably

straightforward manner (e.g. see [105]).

ER uses the concept of a belief structure to model uncertainty and is based on the

Dempster-Shafer theory of evidence. The basic elements of DST are described

below.

a.) Frame of Discernment: In DST, the set that contains all possible answers to the question at hand is known as the frame of discernment, Θ. Using an example from [92] that considers the location of a ship, Θ might be described by

Θ = {PortX, PortY, At Sea, Drydock}

if the ship could be in any one of the locations PortX, PortY, At Sea, or in

Drydock.

b.) Power Set: The power set of Θ, defined as 2^Θ, contains all possible subsets of the elements of Θ [106]. Since the empty set is not possible (the answer must lie in 2^Θ just as the ship must be in one of the locations), there will be


k = 2^n - 1 elements in 2^Θ when there are n elements in the frame of discernment. In our ship example, n = 4 so there will be k = 15 sets in 2^Θ.

c.) Belief Function: A belief function distributes “belief” among the elements

of the power set. Belief is distributed according to the weight of evidence

provided by a source which could be a sensor, an expert, a database, etc.

This belief distribution process is variously described as mass assignment,

basic probability assignment, or simply basic assignment [106]. Mass is

distributed to an element of the power set p_i by means of a basic assignment function, m, defined as follows [106]:

\[ m : 2^{\Theta} \rightarrow [0,1], \qquad m(\emptyset) = 0, \qquad \sum_{p_i \in 2^{\Theta}} m(p_i) = 1 \]

We draw attention to “limited division of belief” [107] which is a key

concept in DST. Mass can be assigned directly to the elements of Θ or it

can be assigned to sets of elements from Θ. In the former case, the answer

to our question (“where is the ship?”) becomes clearer because we increase

our belief in an exact location. In the latter case, we have evidence that

narrows the range of possibilities but does not give an exact location.

Rather than discarding the evidence, we use it to advantage by assigning

mass to the set of locations in 2^Θ to which it does apply. For example, we

may have strong evidence that a ship’s location is PortX which is an exact

location within the frame of discernment Θ. Alternatively, we may have

strong evidence that the ship is in a port but not which one. In this case,

strong belief would be attributed to the set {PortX, PortY} from 2^Θ.


Without further evidence, we can say nothing about any of the subsets of {PortX, PortY}.

The total belief that we have in element p_i is the sum of the mass assigned

directly to that element and the masses assigned to any subsets it contains

as shown below:

\[ \mathrm{Bel}(p_i) = \sum_{q \subseteq p_i} m(q) \]

Returning to our ship example, the belief that it is in a port is obtained by

summing the masses assigned to the sets {PortX}, {PortY}, and {PortX, PortY}.

d.) Plausibility: Finally, the plausibility of an element p_i is defined as being

equivalent to the lack of direct evidence or belief against it. This is

represented as

\[ \mathrm{Pl}(p_i) = 1 - \mathrm{Bel}(\overline{p_i}) \]

In our example, the plausibility that the ship is in PortX is the sum of the masses assigned to {PortX} and {PortX, PortY}. The first mass is the direct belief in PortX whereas the second mass contributes to the plausibility that we might find the ship in PortX.
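The belief and plausibility definitions can be sketched directly on the ship example. The mass values below are invented for illustration, and sets are represented as frozensets of locations:

```python
# Hypothetical mass assignment over the ship example.
m = {
    frozenset({"PortX"}): 0.5,           # direct evidence for PortX
    frozenset({"PortX", "PortY"}): 0.3,  # evidence only that the ship is in a port
    frozenset({"At Sea"}): 0.2,
}

def bel(p):
    """Belief in p: total mass assigned to p and to its subsets."""
    return sum(v for s, v in m.items() if s <= p)

def pl(p):
    """Plausibility of p: total mass on sets that intersect p."""
    return sum(v for s, v in m.items() if s & p)
```

Here bel({"PortX", "PortY"}), the belief that the ship is in a port, is 0.5 + 0.3 = 0.8, and pl({"PortX"}) is likewise 0.8 because the "in a port" mass counts toward the plausibility of PortX.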

DST is “a generalization of Bayesian inference that provides a strategy for

reasoning with incomplete information” [49]. As explained by Parsons [109],

Dempster’s original intent was to “free probability theory from the need to attach a

measure of uncertainty to every hypothesis under consideration”. If belief and


plausibility are considered to be upper and lower bounds on the probability of an

event, then “the results of applying the Dempster-Shafer theory are consistent with

any probabilistic analysis of the same problem, they just make less assumptions”

[109].

When there are multiple sources contributing evidence to a belief structure, then

there must be a way of merging the masses that they assign. If the sources are

independent, we can use Dempster’s rule [106] to determine the combined

assignment m_c that supports a given element p as shown in (2-6).

\[ m_c(p) = \frac{\sum_{A \cap B = p} m_1(A)\, m_2(B)}{1 - \sum_{A \cap B = \emptyset} m_1(A)\, m_2(B)} \qquad (2\text{-}6) \]

Here, A and B are sets drawn from the power set whose conjunction results in p. The masses m_1(A) and m_2(B) represent the mass assignments made to A and B by a pair of evidence sources.
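Dempster's rule for two independent sources can be sketched as follows (the two mass assignments in the usage example are invented):

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: multiply masses over all pairs of sets, keep the
    non-empty intersections, and renormalize by 1 - K, where K is the
    total mass assigned to conflicting (empty-intersection) pairs."""
    raw, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            raw[inter] = raw.get(inter, 0.0) + a * b
        else:
            conflict += a * b
    return {s: v / (1.0 - conflict) for s, v in raw.items()}

# Source 1 backs X, source 2 backs Y; both hedge by assigning mass to Theta.
THETA = frozenset({"X", "Y", "Z"})
m1 = {frozenset({"X"}): 0.6, THETA: 0.4}
m2 = {frozenset({"Y"}): 0.5, THETA: 0.5}
mc = combine(m1, m2)
```

The conflicting mass is K = 0.6 × 0.5 = 0.3, so the combined mass on {X} is 0.3/0.7 ≈ 0.43, and the combined masses again sum to 1.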

DST is well suited to the task of aggregating over experts because it treats

the input from each expert as a “sensor” providing evidence toward or against a

particular conclusion. In addition to dealing well with uncertainty, DST may also

provide a natural consensus measure because it is possible to capture both the

amount of ignorance associated with an assessment as well as the disagreement

that each expert holds with respect to a judgment.


References

[1] E. Keller, R. B. Lee, and J. Rexford, “Accountability in hosted virtual

networks,” Barcelona, Spain, 2009, pp. 29-36.

[2] Jaeger, T and Schiffman, J., “Outlook: Cloudy with a Chance of Security

Challenges and Improvements,” Security & Privacy, IEEE, vol. 8, no. 1,

pp. 77-80, Feb. 2010.

[3] http://gigaom.com/2013/01/06/bad-news-for-amazon-could-be-good-

news-for-other-cloud-providers/

[4] G. Cybenko and C. E. Landwehr, “Security Analytics and Measurements,”

IEEE Security Privacy, vol. 10, no. 3, pp. 5 –8, Jun. 2012.

[5] J. Viega, ‘Ten Years On, How Are We Doing? (Spoiler Alert: We Have

No Clue)’, IEEE Security Privacy, vol. 10, no. 6, pp. 13 –16, Dec. 2012.

[6] A. J. A. Wang, “Information security models and metrics,” in Proceedings

of the 43rd annual Southeast regional conference - Volume 2, Kennesaw,

Georgia, 2005, pp. 178–184.

[7] S. L. Pfleeger, ‘Security Measurement Steps, Missteps, and Next Steps’,

IEEE Security Privacy, vol. 10, no. 4, pp. 5 –9, Aug. 2012.

[8] R. Böhme, ‘Security Metrics and Security Investment Models’, in

Advances in Information and Computer Security, vol. 6434, Springer

Berlin / Heidelberg, 2010, pp. 10–24.

[9] M. A. Davidson, ‘The Good, the Bad, And the Ugly: Stepping on the

Security Scale’, in Computer Security Applications Conference, 2009.

ACSAC ’09. Annual, 2009, pp. 187 –195.

[10] J. McHugh, ‘Quality of protection: measuring the unmeasurable?’, in

Proceedings of the 2nd ACM workshop on Quality of protection, New

York, NY, USA, 2006, pp. 1–2.

[11] L. Krautsevich, F. Martinelli, and A. Yautsiukhin, “Formal approach to

security metrics.: what does ‘more secure’ mean for you?,” in Proceedings

of the Fourth European Conference on Software Architecture: Companion

Volume, New York, NY, USA, 2010, pp. 162–169.

[12] S. Stolfo, “Measuring Security,” IEEE Security & Privacy Magazine, vol.

9, no. 3, pp. 60–65, May 2011.

[13] Global Environment for Network Innovation (GENI). Available:

www.geni.net.

[14] Future Internet Assembly (FIA). Available: www.future-internet.eu.

[15] European Future Internet Portal. Available: www.future-internet.eu.

[16] European Commission Seventh Framework Programme (FP7) 4Ward

Project. Available: www.4ward-project.eu.

[17] L. M. Correia, T. R. Banniza, A. M. Biraghi, J. Goncalves, M. Kind, T.

Monath, D. Sebastiao, K. Wuenstel, and J. Salo, “Evaluation of business


use cases in the Future Internet,” in Future Network and Mobile Summit,

2010, 2010, pp. 1–8.

[18] G. Schultz, “Security Aspects and Principles,” in Architecture and Design

for the Future Internet, L. M. Correia, H. Abramowicz, M. Johnsson, and

K. Wünstel, Eds. Springer Netherlands, 2011, pp. 115–131.

[19] N. Feamster, L. Gao, and J. Rexford, “How to lease the internet in your

spare time,” SIGCOMM Comput. Commun. Rev., vol. 37, no. 1, pp. 61–

64, 2007.

[20] A. Feldmann, “Internet clean-slate design: what and why?,” SIGCOMM

Comput. Commun. Rev., vol. 37, no. 3, pp. 59–64, 2007.

[21] M. Johnsson, “A System Overview,” in in Architecture and Design for the

Future Internet, Springer Netherlands, 2011, pp. 15–27.

[22] C. Irvine and T. Levin, “Quality of security service,” Proceedings of the

2000 workshop on New security paradigms, Ballycotton, County Cork,

Ireland: ACM, 2000, pp. 91-99.

[23] G. Frankova and A. Yautsiukhin. Service and protection level agreements

for business processes. In Proceedings of the 2nd Young Researchers

Workshop on Service Oriented Computing, Springer, 2007.

[24] Karabulut, Kerschbaum, Robinson, Massacci, and A. Yautsiukhin,

“Security and trust in it business outsourcing: a manifesto.,” Proceedings

of the 2nd International Workshop on Security and Trust Management,

vol. 179, 2006, pp. 47-58.

[25] D. Gollmann, F. Massacci, A. Yautsiukhin (eds.). Quality of Protection

Security Measurements and Metrics. New York, NY: Springer, 2006, p.

vii.

[26] F. Bolger and G. Wright, “Assessing the quality of expert judgment,”

Decision Support Systems, vol. 11, no. 1, pp. 1–24, Jan. 1994.

[27] K. Leung and S. Verga, “Expert Judgement in Risk Assessment,”

Technical Memorandum DRDC-CORA-TM-2007-57, Dec. 2007.

[28] F. Joerin, G. Cool, M. Rodriguez, M. Gignac, and C. Bouchard, “Using

multi-criteria decision analysis to assess the vulnerability of drinking

water utilities,” Environmental Monitoring and Assessment, vol. 166, no.

1, pp. 313–330, Jul. 2010.

[29] J. Ryan, T. Mazzuchi, D. Ryan, J. Lopez de la Cruz, and R. M. Cooke,

“Quantifying information security risks using expert judgment elicitation,”

Computers and Operations Research, vol. 39, no. 4, pp. 774–784, Apr.

2012.

[30] http://en.wikipedia.org/wiki/Wine_tasting.

[31] I. Houidi, W. Louati, D. Zeghlache, and S. Baucke, “Virtual Resource

Description and Clustering for Virtual Network Discovery,” in


Communications Workshops, 2009. ICC Workshops 2009. IEEE

International Conference on, 2009, pp. 1 –6.

[32] É. Renault, W. Louati, I. Houidi, and H. Medhioub, “A Framework to

Describe and Search for Virtual Resource Objects,” in Future Generation

Information Technology, vol. 6485, Springer Berlin / Heidelberg, 2010,

pp. 208–219.

[33] V. Verendel, “Quantified security is a weak hypothesis: a critical survey

of results and assumptions,” in Proceedings of the 2009 workshop on New

security paradigms workshop, Oxford, United Kingdom, 2009, pp. 37–50.

[34] K. Julisch, “Security compliance: the next frontier in security research,” in

Proceedings of the 2008 workshop on New security paradigms, Lake

Tahoe, California, USA, 2008, pp. 71–74.

[35] “Recommended Security Controls for Federal Information Systems and

Organizations,” National Institute of Standards and Technology, U.S.

Department of Commerce, Computer Security Division, Information

Technology Laboratory, Gaithersburg, MD, NIST Special Publication

800-53, Aug. 2009.

[36] S. Brocklehurst, B. Littlewood, T. Olovsson, and E. Jonsson, “On

measurement of operational security,” IEEE Aerospace and Electronic

Systems Magazine, vol. 9, no. 10, pp. 7 –16, Oct. 1994.

[37] S. Natarajan and T. Wolf, “Security issues in network virtualization for the

future Internet,” in Computing, Networking and Communications (ICNC),

2012 International Conference on, 2012, pp. 537 –543.

[38] A. Fischer and H. De Meer, “Position paper: Secure virtual network

embedding,” in Praxis der Informationsverarbeitung und Kommunikation,

vol. 34, no. 4, pp. 190-193, 2011.

[39] S. Chhibber, G. Apostolakis, and D. Okrent, “A taxonomy of issues

related to the use of expert judgments in probabilistic safety studies,”

Reliability Engineering & System Safety, vol. 38, no. 1–2, pp. 27–45,

1992.

[40] D. Kahneman, Thinking, Fast and Slow, 1st ed. Farrar, Straus and Giroux,

2011.

[41] T. Krueger, T. Page, K. Hubacek, L. Smith, and K. Hiscock, “The role of

expert opinion in environmental modelling,” Environmental Modelling &

Software, vol. 36, pp. 4–18, Oct. 2012.

[42] M. Meyer and M. Booker, Eliciting and analyzing expert judgment: a

practical guide. Philadelphia, PA: Society for Industrial and Applied

Mathematics: American Statistical Association, 2001.


[43] S. C. Hora, “Nuclear Waste and Future Societies: A Look into the Deep

Future,” Technological Forecasting and Social Change, vol. 56, no. 2, pp.

155–170.

[44] C. E. H. Beaudrie, M. Kandlikar, and G. Ramachandran, “Chapter 5 -

Using Expert Judgment for Risk Assessment,” in Assessing Nanoparticle

Risks to Human Health, Oxford: William Andrew Publishing, 2011, pp.

109–138.

[45] V. V. Tsyganok, S. V. Kadenko, and O. V. Andriichuk, “Significance of

expert competence consideration in group decision making using AHP,”

International Journal of Production Research, vol. 50, no. 17, pp. 4785–

4792, 2012.

[46] F. Flandoli, E. Giorgi, W. P. Aspinall, and A. Neri, “Comparison of a new

expert elicitation model with the Classical Model, equal weights and

single experts, using a cross-validation technique,” Reliability Engineering

& System Safety, vol. 96, no. 10, pp. 1292–1310, Oct. 2011.

[47] J. N. Matthews, W. Hu, M. Hapuarachchi, T. Deshane, D. Dimatos, G.

Hamilton, M. McCabe, and J. Owens, “Quantifying the performance

isolation properties of virtualization systems,” in Proceedings of the 2007

workshop on Experimental computer science, New York, NY, USA, 2007.

[48] K. Hayes, “Uncertainty and uncertainty analysis methods,” Australian

Centre of Excellence for Risk Analysis (ACERA), Australia, EP102467,

Feb. 2011.

[49] O. K. Ngwenyama, “Generating belief functions from qualitative

preferences: An approach to eliciting expert judgments and deriving

probability functions,” Data & Knowledge Engineering, vol. 28, no. 2, pp.

145–159.

[50] N. Bolloju, “Aggregation of analytic hierarchy process models based on

similarities in decision makers’ preferences,” European Journal of

Operational Research, vol. 128, no. 3, pp. 499–508.

[51] J. Figueira, S. Greco, M. Ehrogott, and B. Roy, “Paradigms and

Challenges,” in in Multiple Criteria Decision Analysis: State of the Art

Surveys, vol. 78, Springer New York, 2005, pp. 3–24.

[52] J. Figueira, S. Greco, and M. Ehrogott, “Introduction,” in Multiple Criteria

Decision Analysis: State of the Art Surveys, vol. 78, Springer New York,

2005, pp. xxi–xxxiv.

[53] F. Lootsma, “The French and the American school in multi-criteria decision analysis,” Recherche opérationnelle/Operations Research, vol. 24, pp. 263–285, 1990.

[54] R. Keeney and H. Raiffa, Decisions with multiple objectives: preferences

and value tradeoffs. New York; Toronto: Wiley, 1976.


[55] “Multi-Criteria Analysis: A Manual,” Department for Communities and Local Government, London, Jan. 2009.

[56] F. Lootsma, “The AHP, Pairwise Comparisons,” in Multi-Criteria Decision Analysis via Ratio and Difference Judgement, vol. 29, Springer US, 1999, pp. 53–92.

[57] I. N. Durbach, “Modeling uncertainty in multi-criteria decision analysis,” European Journal of Operational Research, vol. 223, no. 1, pp. 1–14, Nov. 2012.

[58] J. Figueira, S. Greco, M. Ehrgott, and J. Dyer, “MAUT – Multiattribute Utility Theory,” in Multiple Criteria Decision Analysis: State of the Art Surveys, vol. 78, Springer New York, 2005, pp. 265–292.

[59] J. Barzilai, “Preference Function Modelling: The Mathematical Foundations of Decision Theory,” in Trends in Multiple Criteria Decision Analysis, vol. 142, Springer US, 2010, pp. 57–86.

[60] V. Belton and T. Gear, “On a short-coming of Saaty’s method of analytic hierarchies,” Omega, vol. 11, no. 3, pp. 228–230, 1983.

[61] D.-L. Xu, “An introduction and survey of the evidential reasoning approach for multiple criteria decision analysis,” Annals of Operations Research, vol. 195, no. 1, pp. 163–187, May 2012.

[62] M. Beynon, “DS/AHP method: A mathematical analysis, including an understanding of uncertainty,” European Journal of Operational Research, vol. 140, no. 1, pp. 148–164, Jul. 2002.

[63] J. J. Buckley, “A fast method of ranking alternatives using fuzzy numbers,” Fuzzy Sets and Systems, vol. 30, no. 3, pp. 337–338.

[64] H. Deng, “Multicriteria analysis with fuzzy pairwise comparison,” in FUZZ-IEEE ’99, 1999 IEEE International Fuzzy Systems Conference Proceedings, 1999, vol. 2, pp. 726–731.

[65] W. Wedley, “AHP/ANP – Where is Natural Zero?,” in Proceedings of the Ninth International Symposium on the Analytic Hierarchy Process, Viña del Mar, Chile, Aug. 2007.

[66] T. J. Stewart, “A critical survey on the status of multiple criteria decision making theory and practice,” Omega, vol. 20, no. 5–6, pp. 569–586.

[67] J. Figueira, S. Greco, M. Ehrgott, D. Bouyssou, and M. Pirlot, “Conjoint Measurement Tools for MCDM,” in Multiple Criteria Decision Analysis: State of the Art Surveys, vol. 78, Springer New York, 2005, pp. 73–112.

[68] P. Farquhar and L. Keller, “Preference intensity measurement,” Annals of Operations Research, vol. 19, pp. 205–217, 1989.


[69] W. Edwards, “How to Use Multiattribute Utility Measurement for Social Decisionmaking,” IEEE Transactions on Systems, Man and Cybernetics, vol. 7, no. 5, pp. 326–340, May 1977.

[70] D. von Winterfeldt and W. Edwards, Decision Analysis and Behavioral Research. Cambridge: Cambridge University Press, 1986.

[71] J. Figueira, S. Greco, M. Ehrgott, C. Bana e Costa, J.-M. De Corte, and J.-C. Vansnick, “On the Mathematical Foundation of MACBETH,” in Multiple Criteria Decision Analysis: State of the Art Surveys, vol. 78, Springer New York, 2005, pp. 409–437.

[72] M. Pöyhönen and R. P. Hämäläinen, “On the convergence of multiattribute weighting methods,” European Journal of Operational Research, vol. 129, no. 3, pp. 569–585, Mar. 2001.

[73] W. Edwards, “SMARTS and SMARTER: Improved Simple Methods for Multiattribute Utility Measurement,” Organizational Behavior and Human Decision Processes, vol. 60, no. 3, pp. 306–325.

[74] F. H. Barron, “Decision Quality Using Ranked Attribute Weights,” Management Science, vol. 42, no. 11, pp. 1515–1523, Nov. 1996.

[75] T. L. Saaty, The Analytic Hierarchy Process. New York: McGraw-Hill, 1980.

[76] P. T. Harker, “The Theory of Ratio Scale Estimation: Saaty’s Analytic Hierarchy Process,” Management Science, vol. 33, no. 11, pp. 1383–1403, Nov. 1987.

[77] T. Saaty and L. Vargas, “How to Make a Decision,” in Models, Methods, Concepts & Applications of the Analytic Hierarchy Process, vol. 175, Springer US, 2012, pp. 1–21.

[78] J. Figueira, S. Greco, M. Ehrgott, and T. Saaty, “The Analytic Hierarchy and Analytic Network Processes for the Measurement of Intangible Criteria and for Decision-Making,” in Multiple Criteria Decision Analysis: State of the Art Surveys, vol. 78, Springer New York, 2005, pp. 345–405.

[79] A. Salo and R. Hämäläinen, “On the measurement of preferences in the analytic hierarchy process,” Journal of Multi-Criteria Decision Analysis, vol. 6, pp. 309–319, 1997.

[80] A. Ishizaka and A. Labib, “Review of the main developments in the analytic hierarchy process,” Expert Systems with Applications, vol. 38, no. 11, pp. 14336–14345, Oct. 2011.

[81] C. A. Bana e Costa and J.-C. Vansnick, “A critical analysis of the eigenvalue method used to derive priorities in AHP,” European Journal of Operational Research, vol. 187, no. 3, pp. 1422–1428, Jun. 2008.


[82] E. U. Choo and W. C. Wedley, “A common framework for deriving preference values from pairwise comparison matrices,” Computers & Operations Research, vol. 31, no. 6, pp. 893–908, May 2004.

[83] D. L. Olson and V. Dorai, “Implementation of the centroid method of Solymosi and Dombi,” European Journal of Operational Research, vol. 60, no. 1, pp. 117–129.

[84] V. Belton and T. Gear, “On a short-coming of Saaty’s method of analytic hierarchies,” Omega, vol. 11, no. 3, pp. 228–230, 1983.

[85] Y. Liu, “Access Network Selection in a 4G Networking Environment,” Master’s thesis, University of Waterloo, Ontario, Canada, 2007.

[86] M. A. Elliott, “Selecting numerical scales for pairwise comparisons,” Reliability Engineering and System Safety, vol. 95, no. 7, pp. 750–763, Jul. 2010.

[87] D. Olson, G. Fliedner, and K. Currie, “Comparison of the REMBRANDT system with analytic hierarchy process,” European Journal of Operational Research, vol. 82, no. 3, pp. 522–539, May 1995.

[88] B. Chandran, B. Golden, and E. Wasil, “Linear programming models for estimating weights in the analytic hierarchy process,” Computers & Operations Research, vol. 32, no. 9, pp. 2235–2254, Sep. 2005.

[89] E. Condon, “Visualizing group decisions in the analytic hierarchy process,” Computers and Operations Research, vol. 30, no. 10, pp. 1435–1445.

[90] Y. Guh, R. Po, and K. Lou, “An additive scale model for the analytic hierarchy process,” International Journal of Information and Management Sciences, vol. 20, no. 1, pp. 71–88, Sep. 2009.

[91] K. Sugihara, H. Ishii, and H. Tanaka, “Fuzzy AHP with incomplete information,” in Joint 9th IFSA World Congress and 20th NAFIPS International Conference, 2001, vol. 5, pp. 2730–2733.

[92] J. Lowrance, T. Garvey, and T. Strat, “A Framework for Evidential-Reasoning Systems,” in Classic Works of the Dempster-Shafer Theory of Belief Functions, vol. 219, R. Yager and L. Liu, Eds. Springer Berlin / Heidelberg, 2008, pp. 419–434.

[93] M. Beynon, B. Curry, and P. Morgan, “The Dempster-Shafer theory of evidence: an alternative approach to multicriteria decision modelling,” Omega, vol. 28, no. 1, pp. 37–50, 2000.

[94] J.-B. Yang and M. G. Singh, “An evidential reasoning approach for multiple-attribute decision making with uncertainty,” IEEE Transactions on Systems, Man and Cybernetics, vol. 24, no. 1, pp. 1–18, 1994.

[95] S. French, “Aggregating expert judgement,” RACSAM, vol. 105, no. 1, pp. 181–206, Mar. 2011.


[96] J. Kacprzyk and M. Fedrizzi, “A ‘soft’ measure of consensus in the setting of partial (fuzzy) preferences,” European Journal of Operational Research, vol. 34, no. 3, pp. 316–325.

[97] F. Herrera, “A model of consensus in group decision making under linguistic assessments,” Fuzzy Sets and Systems, vol. 78, no. 1, pp. 73–87.

[98] E. Herrera-Viedma, F. Herrera, and F. Chiclana, “A consensus model for multiperson decision making with different preference structures,” IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, vol. 32, no. 3, pp. 394–402, 2002.

[99] V. W. Mitchell, “The delphi technique: an exposition and application,” Technology Analysis & Strategic Management, vol. 3, no. 4, pp. 333–358, Jan. 1991.

[100] L. Mikhailov, “Group prioritization in the AHP by fuzzy preference programming method,” Computers and Operations Research, vol. 31, no. 2, pp. 293–301.

[101] J. Aczél and T. L. Saaty, “Procedures for synthesizing ratio judgements,” Journal of Mathematical Psychology, vol. 27, no. 1, pp. 93–102, Mar. 1983.

[102] R. R. Yager, “On ordered weighted averaging aggregation operators in multicriteria decisionmaking,” IEEE Transactions on Systems, Man and Cybernetics, vol. 18, no. 1, pp. 183–190, 1988.

[103] T. L. Saaty, “Dispersion of group judgments,” Mathematical and Computer Modelling, vol. 46, no. 7–8, pp. 918–925.

[104] Y. Dong, “Consensus models for AHP group decision making under row geometric mean prioritization method,” Decision Support Systems, vol. 49, no. 3, pp. 281–289, Jun. 2010.

[105] F. Zandi and M. Tavana, “A group evidential reasoning approach for enterprise architecture framework selection,” Int. J. Inf. Technol. Manage., vol. 9, no. 4, pp. 468–483, Sep. 2010.

[106] U. Rakowsky, “Fundamentals of the Dempster-Shafer Theory and its Applications to System Safety and Reliability Modelling,” International Journal of Reliability, Quality & Safety Engineering, vol. 14, no. 6, pp. 579–601, Dec. 2007.

[107] L. Liu and R. Yager, “Classic Works of the Dempster-Shafer Theory of Belief Functions: An Introduction,” in Classic Works of the Dempster-Shafer Theory of Belief Functions, vol. 219, R. Yager and L. Liu, Eds. Springer Berlin / Heidelberg, 2008, pp. 1–34.

[108] M. Beynon, B. Curry, and P. Morgan, “The Dempster-Shafer theory of evidence: an alternative approach to multicriteria decision modelling,” Omega, vol. 28, no. 1, pp. 37–50, 2000.


[109] S. Parsons, “Some qualitative approaches to applying the Dempster-Shafer theory,” Information and Decision Technologies, vol. 19, pp. 321–337, 1994.

[110] T. R. Banniza, A. M. Biraghi, L. M. Correia, J. Goncalves, M. Kind, T. Monath, J. Salo, D. Sebastiao, and K. Wuenstel, “Final Assessment on the Non-technical Drivers,” Seventh Framework Programme, FP7-ICT-2007-1-216041-4WARD/D-1.4, Jul. 2010.

[111] H. Abramowicz, M. Achemlal, P. Aranda, J. Carapinha, C. Foley, J. Jiménez, M. Johnsson, K. Holger, N. Niebert, R. Rembarz, G. Schultz, M. Soellner, and C. Tschudin, “Introduction to a 4WARD Architectural Approach,” Seventh Framework Programme, FP7-ICT-2007-1-216041-4WARD/D-0.4, Feb. 2009.

[112] P. Gutierrez, M. Zitterbart, Z. Boudjemil, M. Ghader, G. H. Garcia, M. Johnsson, A. Karouia, G. Lazar, M. Majanen, P. Mannersalo, D. Martin, M. T. Nguyen, S. Perez Sanchez, P. Phelan, M. Ponce de Leon, G. Schultz, M. Sollner, Y. Zaki, and L. Zhao, “D-2.3.1 Final Architectural Framework,” Seventh Framework Programme, FP7-ICT-2007-1-216041-4WARD/D-2.3.1, Jun. 2010.

[113] M. Achemlal, T. Almeida, S. Baucke, R. Bless, M. Bourguiba, J. M. Cabero, L. Caeiro, J. Carapinha, F. D. Cardoso, L. M. Correia, M. Dianati, A. Feldmann, C. Gorg, I. Grothues, I. Houidi, J. Jiménez, M. Kind, Y. Lemieux, W. Louati, L. Mathy, O. Maennel, P. Papadimitriou, J. Sachs, S. Perez Sanchez, A. Serrador, I. Seskar, C. Werle, F. Wolff, A. Wundsam, Y. Zaki, D. Zeghlache, and L. Zhao, “Virtualisation Approach: Concept,” Seventh Framework Programme, FP7-ICT-2007-1-216041-4WARD/D-3.1.1, Sep. 2010.

[114] J. Carapinha, P. Feil, P. Weissmann, S. Thorsteinsson, Ç. Etemoğlu, Ó. Ingþórsson, S. Çiftçi, and M. Melo, “Network Virtualization - Opportunities and Challenges for Operators,” in Future Internet - FIS 2010, vol. 6369, Springer Berlin / Heidelberg, 2010, pp. 138–147.

[115] G. Schultz, “Security Aspects and Principles,” in Architecture and Design for the Future Internet, L. M. Correia, H. Abramowicz, M. Johnsson, and K. Wünstel, Eds. Springer Netherlands, 2011, pp. 115–131.

[116] É. Renault, W. Louati, I. Houidi, and H. Medhioub, “A Framework to Describe and Search for Virtual Resource Objects,” in Future Generation Information Technology, vol. 6485, Springer Berlin / Heidelberg, 2010, pp. 208–219.

[117] A. Nieto and J. Lopez, “Security and QoS Tradeoffs: Towards a FI Perspective,” in Advanced Information Networking and Applications Workshops (WAINA), 2012 26th International Conference on, 2012, pp. 745–750.

[118] L. Mekouar, Y. Iraqi, and R. Boutaba, “Incorporating Trust in Network Virtualization,” in 2010 IEEE 10th International Conference on Computer and Information Technology (CIT), 2010, pp. 942–947.

[119] M. Achemlal, J. Pailles, and C. Gaber, “Building Trust in Virtualized Networks,” in 2010 Second International Conference on Evolving Internet (INTERNET), 2010, pp. 113–118.

[120] A. M. Azab, P. Ning, Z. Wang, X. Jiang, X. Zhang, and N. C. Skalsky, “HyperSentry: enabling stealthy in-context measurement of hypervisor integrity,” in Proceedings of the 17th ACM Conference on Computer and Communications Security, New York, NY, USA, 2010, pp. 38–49.

[121] J. Wang, A. Stavrou, and A. Ghosh, “HyperCheck: a hardware-assisted integrity monitor,” in Proceedings of the 13th International Conference on Recent Advances in Intrusion Detection, Berlin, Heidelberg, 2010, pp. 158–177.

[122] J. Harris and R. L. Hill, “StaticTrust: A Practical Framework for Trusted Networked Devices,” in 2011 44th Hawaii International Conference on System Sciences (HICSS), 2011, pp. 1–10.

[123] N. L. Petroni, Jr., T. Fraser, J. Molina, and W. A. Arbaugh, “Copilot - a coprocessor-based kernel runtime integrity monitor,” in Proceedings of the 13th USENIX Security Symposium, 2004, pp. 179–194.

[124] A. Seshadri, M. Luk, E. Shi, A. Perrig, L. van Doorn, and P. Khosla, “Pioneer,” in Proceedings of the Twentieth ACM Symposium on Operating Systems Principles - SOSP ’05, Brighton, United Kingdom, 2005, p. 1.

[125] C. Irvine and T. Levin, “Quality of Security Service,” Ballycotton, County Cork, Ireland, 2000, pp. 91–99.

[126] C. Irvine, T. Levin, E. Spyropoulou, and B. Allen, “Security as a dimension of quality of service in active service environments,” in Third Annual International Workshop on Active Middleware Services, 2001, pp. 87–93.

[127] S. Lindskog and E. Jonsson, “Adding Security to Quality of Service Architectures,” in Proceedings of the Scuola Superiore G. Reiss Romoli 2002 Summer Conference (SSGRR-2002s), 2002.

[128] C. S. Ong, K. Nahrstedt, and W. Yuan, “Quality of protection for mobile multimedia applications,” in 2003 International Conference on Multimedia and Expo (ICME ’03) Proceedings, 2003, vol. 2, pp. II-137–II-140.

[129] S. Duflos, V. Gay, B. Kervella, and E. Horlait, “Integration of Security Parameters in the Service Level Specification to Improve QoS Management of Secure Distributed Multimedia Services,” 2005, pp. 145–148.

[130] S. Duflos, B. Kervella, and V. Gay, “Considering Security and Quality of Service in SLS to Improve Policy-Based Management of Multimedia Services,” in Sixth International Conference on Networking (ICN ’07), 2007, p. 39.

[131] M. A. Chalouf, X. Delord, and F. Krief, “Introduction of Security in the Service Level Negotiated with SLNP Protocol,” presented at New Technologies, Mobility and Security (NTMS ’08), 2008, pp. 1–5.

[132] M. A. Chalouf, I. Djama, T. Ahmed, and F. Krief, “On tightly managing end-to-end QoS and security for IPTV service delivery,” Leipzig, Germany, 2009, pp. 1030–1034.

[133] F. C. Colon Osorio, K. McKay, and E. Agu, “Comparison of security protocols in mobile wireless environments: tradeoffs between level of security obtained and battery life,” in 1st International Conference on Security and Privacy for Emerging Areas in Communication Networks, 2005, pp. 275–287.

[134] A. Agarwal and W. Wang, “On the Impact of Quality of Protection in Wireless Local Area Networks with IP Mobility,” Mobile Networks and Applications, vol. 12, no. 1, pp. 93–110, Feb. 2007.

[135] M. Alia, M. Lacoste, R. He, and F. Eliassen, “Putting together QoS and security in autonomic pervasive systems,” New York, NY, USA, 2010, pp. 19–28.

[136] I. Cervesato, “Towards a Notion of Quantitative Security Analysis,” in Quality of Protection, D. Gollmann, F. Massacci, and A. Yautsiukhin, Eds. Springer US, 2006, pp. 131–143.

[137] H. Chung and C. Neuman, “Modelling the relative strength of security protocols,” Alexandria, Virginia, USA, 2006, pp. 45–48.

[138] R. Lundin, S. Lindskog, A. Brunstrom, and S. Fischer-Hübner, “Using Guesswork as a Measure for Confidentiality of Selectively Encrypted Messages,” in Quality of Protection, 2006.

[139] L. Volker, C. Werle, and M. Zitterbart, “Decision process for automated selection of security protocols,” 2008, pp. 223–229.

[140] “Recommendation for Key Management - Part 1: General,” National Institute of Standards and Technology, U.S. Department of Commerce, Computer Security Division, Information Technology Laboratory, Gaithersburg, MD, NIST Special Publication 800-57, Jul. 2012.

[141] X. Cuihua and L. Jiajun, “An Information System Security Evaluation Model Based on AHP and GRAP,” in Web Information Systems and Mining (WISM 2009), International Conference on, 2009, pp. 493–496.

[142] Common Criteria Portal. [Online]. Available: http://www.commoncriteriaportal.org/

[143] C. Yameng, S. Yulong, M. Jianfeng, C. Xining, and L. Yahui, “AHP-GRAP Based Security Evaluation Method for MILS System within CC Framework,” in Computational Intelligence and Security (CIS), 2011 Seventh International Conference on, 2011, pp. 635–639.

[144] J. Chae, J. Kwon, and M. Chung, “Computer Security Management Model Using MAUT and SNMP,” in Computational Science and Its Applications – ICCSA 2005, vol. 3481, O. Gervasi, M. Gavrilova, V. Kumar, A. Laganà, H. Lee, Y. Mun, D. Taniar, and C. Tan, Eds. Springer Berlin / Heidelberg, 2005, pp. 147–156.

[145] T. Taleb, Y. H. Aoul, and A. Benslimane, “Integrating Security with QoS in Next Generation Networks,” presented at GLOBECOM 2010, 2010 IEEE Global Telecommunications Conference, 2010, pp. 1–5.

[146] R. Goyette and A. Karmouch, “A Virtual Network topology security assessment,” in Wireless Communications and Mobile Computing Conference (IWCMC), 2011 7th International, 2011, pp. 974–979.

[147] R. Goyette and A. Karmouch, “A dynamic model building process for virtual network security assessment,” in 2011 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PacRim), 2011, pp. 482–487.

[148] C. Hwang, “A new approach for multiple objective decision making,” Computers and Operations Research, vol. 20, no. 8, pp. 889–899.

[149] R. M. Cooke, “Expert judgement elicitation for risk assessments of critical infrastructures,” Journal of Risk Research, vol. 7, no. 6, pp. 643–656, Sep. 2004.

[150] K. Sentz and S. Ferson, “Combination of Evidence in Dempster-Shafer Theory,” Sandia National Laboratories, Albuquerque, New Mexico / Livermore, California, SAND 2002-0835, Apr. 2002.

[151] M. M. Hasan, H. Amarasinghe, and A. Karmouch, “Network virtualization: Dealing with multiple infrastructure providers,” in 2012 IEEE International Conference on Communications (ICC), 2012, pp. 5890–5895.