A Separating Choice Hyperplane Algorithm for Evaluating Multiattribute Decisions


A Separating Choice Hyperplane Algorithm for Evaluating Multiattribute Decisions. Author: Ted Dintersmith. Source: Operations Research, Vol. 32, No. 6 (Nov.-Dec., 1984), pp. 1328-1344. Published by: INFORMS. Stable URL: http://www.jstor.org/stable/170951

Accessed: 08/05/2014 17:00


TED DINTERSMITH
Analog Devices, Inc., Norwood, Massachusetts

(Received June 1982; accepted August 1983)

This paper presents an iterative algorithm for optimizing multiattribute decisions. It develops a separating choice hyperplane algorithm, which allows decision-makers to identify their most preferred point from a nonlinear convex set of feasible outcomes. In this interactive algorithm, the decision-maker reveals information about preferences by solving a series of elementary resource allocations. The resulting structure integrates a theoretically rigorous optimization procedure with intuitive assessment tasks. A practical application of the approach is presented.

ANALYZING DECISIONS with multiattribute consequences has proven most vexing. One source of complexity is developing a computationally tractable model relating decision (or control) variables to a multiattribute (or multiple objective function) outcome. Often, a more significant barrier, and the focus of this paper, is the identification by the decision-maker (DM) of his most preferred point from the set of feasible outcome vectors.

Multiattribute decisions are characterized by outcomes that the DM values on the basis of several incommensurable measures which generally cannot be simultaneously maximized. Examples of such decisions abound. The designer of an integrated circuit is concerned with its speed, functionality, power consumption, and cost. Nonprofit organizations seek to allocate resources to accomplish a multitude of worthy, but conflicting, goals. Or, a DM acting under uncertainty is concerned with his payoff in each of several mutually exclusive states of the world.

This paper presents an iterative, interactive algorithm for optimizing multiattribute decisions. The kernel procedure is a choice point algorithm (CPA), which utilizes a sequence of resource allocations performed by the DM to identify the optimal point of his quasiconcave noncardinal preference function over a convex polytope. By embedding the CPA in a convergence testing and cutting-plane structure, a separating choice hyperplane algorithm (SCHA) is developed for the more general case of nonlinear convex feasible regions.

Subject classification: 91 decision analysis: multiattribute, 641 multiple criteria programming: interactive algorithms, 644 nonlinear programming using cutting planes: interactive multiattribute.

Operations Research Vol. 32, No. 6, November-December 1984
0030-364X/84/3206-1328 $01.25 © 1984 Operations Research Society of America

This work's principal objective is to arrive at a multiattribute decision algorithm that obtains information about the DM's preferences using intuitive and reliable assessment tasks, without compromising theoretical rigor. The proposed algorithm requires simply that the DM solve a series of elementary resource allocations. The cost of these easily performed assessments is increased algorithmic complexity which, fortunately, can be entirely absorbed by the computer. In contrast, existing multiattribute decision algorithms generally have simpler algorithmic structures, but place unrealistic assessment demands on the DM (e.g., directly assessing the objective function gradient).

Section 1 of this paper reviews the relevant literature surrounding this multiattribute decision algorithm. In Sections 2 and 3, the SCHA algorithm is illustrated in two dimensions, stated formally, and developed; these sections are supplemented by the Appendix, which develops the CPA. A practical application is reviewed in Section 4, and Section 5 presents the paper's conclusions.

1. RELATED LITERATURE

The work presented in this paper utilizes the problem-solving constructs of decision analysis (see Howard [1968]). A model relates decision and state variables to an outcome measure; then, a structured procedure enables the DM to select his most preferred point from the set of feasible outcomes. This most preferred point is the outcome that maximizes the DM's preference function, denoted V, over the feasible outcome set.

Iterative, interactive multiattribute decision algorithms were first advanced independently by Boyd [1970] and Geoffrion et al. [1972], who argued that a single closed-form expression cannot adequately represent the DM's preferences over the entire multiattribute outcome space. The authors employ a standard ascent algorithm, with the gradient of the DM's objective function defining each iteration's direction of improvement. This gradient is obtained interactively by directly assessing the DM's marginal rates of substitution among the attributes. The impracticality of this assessment technique, as underscored by the work of Wallenius [1975], severely hampers the effectiveness of this class of algorithms.

In an important philosophical departure from previous work, Morris and Oren [1980] developed an iterative algorithm that circumvents the direct assessment of marginal rates of substitution. Instead, the DM solves resource allocations, a task which the authors argue is a much more natural way for the DM to reveal information about his preferences. Unfortunately, the structure of the Morris-Oren algorithm is such that


the choice set for these resource allocations is an n-dimensional cone, introducing complicated interdependencies among attributes due to kinked exchange "prices." Consequently, the desired simplicity of a resource allocation is not fully realized.

Numerous other approaches to multiattribute decisions have been advanced. For a review and critique of many of these algorithms, refer to Roy [1971], Oppenheimer [1977], Hemming [1978], and Dintersmith [1982].

In addition to multiattribute decision theory, the present work draws on another important field of mathematical science: the class of nonlinear programming methods referred to as cutting plane algorithms (see Kelley [1960] and Veinott [1967]). In particular, the SCHA's structure parallels Veinott's Supporting Hyperplane Algorithm.

2. ILLUSTRATION OF THE ALGORITHM

This paper presents a new algorithm, termed the separating choice hyperplane algorithm (SCHA), that enables the DM to identify his most preferred point, denoted c*, from the feasible set Z of outcome vectors z; that is, to determine

c* = arg max_{z∈Z} V(z(d)).

The DM's preferences are assumed to be transitive, connected, continuous, nonsatiating, and to exhibit nonincreasing marginal rates of substitution. Such preference structures are characterized by indifference curves that are convex with respect to the origin. These conditions ensure that the DM's preferences are quasiconcave; that is,

V(tx + (1 - t)y) ≥ min{V(x), V(y)}; 0 ≤ t ≤ 1.

Neither cardinality nor differentiability is required of this preference function.
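As a concrete check, the quasiconcavity inequality can be verified numerically. The Cobb-Douglas form below is purely a hypothetical stand-in for V (the paper deliberately assumes no closed form for the DM's preferences); it is concave on the positive orthant, hence quasiconcave:

```python
import numpy as np

def V(z):
    # Hypothetical Cobb-Douglas preference function (illustrative only);
    # concave for exponents summing to one, hence quasiconcave.
    return z[0] ** 0.3 * z[1] ** 0.7

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.uniform(0.1, 10.0, 2)
    y = rng.uniform(0.1, 10.0, 2)
    t = rng.uniform()
    # V(tx + (1 - t)y) >= min{V(x), V(y)} must hold for every sampled pair
    assert V(t * x + (1 - t) * y) >= min(V(x), V(y)) - 1e-9
print("quasiconcavity inequality held on all sampled pairs")
```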

In the SCHA, the DM's preference function V is not represented by a closed form, which likely would not be accurate over the entire n-dimensional outcome space. Rather, sufficient information about the DM's preferences is revealed during the course of this interactive algorithm to ensure convergence.

The model mapping the feasible decision settings d to outcome variables z is assumed to result in a convex frontier for the set Z of feasible outcome points. The set Z is restricted to the positive orthant, a qualification that can be trivially modified to reflect any bounded feasible set.

The SCHA does not require an explicit characterization of the boundary of Z. Rather, in the course of converging, the algorithm selectively probes the boundary of the feasible set. This structure affords significant computational efficiencies if the decision model is complex.


The basic strategy of the SCHA is as follows. The feasible set Z is bounded at each iteration by a convex polytope. The DM, using a series of resource allocations, identifies his most preferred point from this convex polytope. If this choice point proves feasible, it must be optimal, terminating the algorithm. Otherwise, the choice point is separated from the feasible set by a cutting plane, which is appended to the previous bounding polytope, providing a better approximation to Z and setting the stage for the next iteration. The resulting sequence of choice points converges to the DM's optimum over Z. This same structure can be found in Veinott's Supporting Hyperplane Algorithm for problems where the objective function has a known analytical representation.

Figure 1. Separation and choice steps of the algorithm.

Figure 1 illustrates in two dimensions the essence of the SCHA. First, a nonnegative vector p1 is selected and the linear functional p1·z is maximized over the feasible set Z. The point w1 denotes the Pareto point of Z producing this maximum. The polyhedron defined by P1 = {z: p1·z ≤ p1·w1; z ≥ 0} bounds the feasible set.
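Probing the boundary in this way reduces to a standard linear program. The sketch below uses a hypothetical polytope standing in for Z (the paper leaves the decision model unspecified) and recovers the Pareto point w1 for a given weight vector p1:

```python
import numpy as np
from scipy.optimize import linprog

def pareto_point(p, A_ub, b_ub):
    # w = argmax_{z in Z} p.z; linprog minimizes, so negate the objective.
    res = linprog(-np.asarray(p), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * len(p))
    return res.x

# Toy feasible set Z = {z >= 0 : z1 + 2 z2 <= 10, 3 z1 + z2 <= 12}
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([10.0, 12.0])
p1 = np.array([1.0, 1.0])
w1 = pareto_point(p1, A, b)
# The bounding polyhedron is then P1 = {z >= 0 : p1.z <= p1.w1}.
print(w1, p1 @ w1)
```

For these data the maximizing vertex is w1 = (2.8, 3.6), giving the budget p1·w1 = 6.4.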


Using the computer-based interactive resource allocation procedure of Figure 2, the DM selects his most preferred point from the set P1. Since preferences are nonsatiating, this point lies on the boundary of P1. Consequently, the DM simply allocates the budget of p1·w1 among the attributes z1 and z2, where the components of p1 serve as prices determining the exchange rates between the attributes.

Figure 2. Indifference curves interpretation of a resource allocation.

In so doing, he solves the problem:

Maximize V(z)

Subject to: p1·z = p1·w1; z ≥ 0.
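The resource allocation above can be simulated by handing the budget problem to a numerical optimizer, with a hypothetical preference function standing in for the DM (in the actual algorithm, of course, the DM answers interactively and V is never written down):

```python
import numpy as np
from scipy.optimize import minimize

def V(z):
    # Hypothetical quasiconcave preference function simulating the DM.
    return z[0] ** 0.4 * z[1] ** 0.6

def allocate(p, budget):
    # The DM's problem: maximize V(z) subject to p.z = budget, z >= 0.
    cons = {"type": "eq", "fun": lambda z: p @ z - budget}
    z0 = np.full(2, budget / p.sum())  # start from an equal split
    res = minimize(lambda z: -V(z), z0, constraints=[cons],
                   bounds=[(0, None), (0, None)])
    return res.x

p = np.array([1.0, 1.0])
c = allocate(p, 10.0)
print(c)  # analytic optimum spends budget shares 0.4 and 0.6: z = (4, 6)
```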


Ceilings and floors are posted beside each attribute to limit the DM's choice set to the positive orthant.

In Figure 1, the DM's choice point is denoted by c1, lying on the choice hyperplane defined by p1·z = p1·w1. Since c1 is infeasible, it is possible to separate it from Z with a second hyperplane, defined by p2·z = p2·w2. The polyhedron

P2 = {z: pi·z ≤ pi·wi, i = 1, 2}

provides a tighter bound to the feasible set. The DM, through a resource allocation, can select his most preferred point c2 from the set P2. Nonsatiety ensures that c2 lies on the boundary denoted ABD; quasiconcave preferences ensure that the point B (weakly) dominates any other point on BD. Hence, c2 can be assumed to lie on the segment AB. New prices p2 and a new budget of p2·w2 are presented to the DM, along with ceilings (z1+ and z2+) and floors (0 and z2-) on the attributes; the DM interactively constructs his most preferred point c2 from P2.

At this point, upper and lower bounds to c* are readily available. Since c2 is the DM's most preferred point over a set P2 ⊇ Z, it holds that V(c2) ≥ V(c*). Also, the DM is assured of doing no worse than the more preferred of the two feasible points, w1 and w2, that have been identified. Defining w2max = arg max{V(w1), V(w2)}, then V(c*) ≥ V(w2max).

The SCHA continues with its sequence of choice and separation steps until ci is feasible or V(ci) = V(wimax). The sequence of choice points ci converges monotonically to the DM's optimum, c*. The assessment burden for the DM involves intuitive resource allocations; a small number of point-to-point comparisons can be used to test for convergence and bound V(c*). Finally, the SCHA is computationally efficient, requiring only selected maximizations of linear functionals in probing the boundary of Z.

3. FORMAL STATEMENT OF THE ALGORITHM

The following definitions are needed before formally stating the SCHA:

D = the set of feasible decision settings;

Z = the set of feasible outcome vectors, defined as {z: z ≤ z(d) for some d ∈ D, z ≥ 0};

Pk = a polyhedron bounding Z, defined as {z: pi·z ≤ pi·wi, i = 1, ..., k, z ≥ 0}, where the pi are nonnegative vectors and wi = arg max_{z∈Z} pi·z;

Fk = the most recently added face of Pk, defined as {z: pk·z = pk·wk, z ∈ Pk}; and

wkmax = the most preferred feasible outcome point at iteration k, defined recursively as arg max{V(wk-1max), V(wk)}.


Separating Choice Hyperplane Algorithm

Step 0. Initialize: Set k = 1. Select p1 and compute w1. Set w1max = w1.

Step 1. Choice: The DM, using an interactive resource allocation procedure, selects his most preferred point ck from the set Pk.

Step 2. Compare: If the DM is indifferent between ck and the lower bound wkmax, set c* = wkmax, and stop.

Step 3. Separate: Seek a hyperplane, defined by pk+1, separating ck from Z. If impossible, set c* = ck, and stop. Otherwise, set k = k + 1, assess wkmax, and return to Step 1.
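Under strong simplifying assumptions, the choice-separate loop can be sketched end to end. Everything below is hypothetical: a known V simulates the DM, Z is a toy quarter-disk whose supporting hyperplanes are available analytically, and the normal c/||c|| replaces the separation algorithm of Section 3 (valid here only because Z is a ball centered at the origin):

```python
import numpy as np
from scipy.optimize import minimize

def V(z):
    # Hypothetical quasiconcave preference function simulating the DM.
    return np.sqrt(z[0] * z[1])

def boundary_point(p):
    # argmax_{z in Z} p.z for Z = {z >= 0 : ||z|| <= 5} is w = 5 p / ||p||.
    return 5.0 * p / np.linalg.norm(p)

cuts = []                       # half-spaces {z : q.z <= b} defining P^k
p = np.array([1.0, 3.0])        # initial weights p1
c = np.ones(2)
for k in range(30):
    w = boundary_point(p)       # probe the boundary of Z
    cuts.append((p, p @ w))
    # Choice step: the simulated DM's most preferred point over P^k.
    cons = [{"type": "ineq", "fun": lambda z, q=q, b=b: b - q @ z}
            for q, b in cuts]
    c = minimize(lambda z: -V(z), np.ones(2), constraints=cons,
                 bounds=[(0, None), (0, None)]).x
    if c @ c <= 25.0 + 1e-3:
        break                   # feasible choice point is optimal
    p = c / np.linalg.norm(c)   # separating normal for the infeasible c
print(c)
```

For this toy Z the optimum is z1 = z2 = 5/√2 ≈ 3.536, which the loop reaches after a few cuts; note that each cut stored in `cuts` is exactly the choice-hyperplane data p and p·w of the text.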

Initialize Step

While the theoretical soundness of the SCHA is independent of the initialize step, its near-term convergence characteristics are influenced by the choice of p1. For example, in the unlikely event that p1 equals the DM's marginal rates of substitution at the optimum, the SCHA would converge in just one step. In practice, interaction with the DM, as well as examination of the decision's structure, can help identify a p1 that moves the SCHA quickly to the DM's optimum. For example, the DM could solve an initial resource allocation, from which the local marginal rates of substitution are inferred and used as the initial set of weights p1.

Choice Step

Section 2 illustrated how the choice step could be implemented for two-dimensional outcome vectors. Two issues must be resolved for higher-dimensional outcome vectors. First, Lemma 1 proves that at the kth iteration of the SCHA the DM's choice point will lie on the most recently added face Fk. Second, a resource allocation procedure used by the DM to select his most preferred point from Fk must be constructed; the Appendix develops this choice point algorithm.

LEMMA 1. At any iteration k, ck can be assumed to belong to Fk.

Proof. By construction, ck-1 ∈ Pk-1, and pk·ck-1 > pk·wk. Also, since Pk ⊆ Pk-1, V(ck-1) ≥ V(ck). Assume pk·ck < pk·wk and V(ck) > V(y) for all y ∈ Fk. Then there exists some t, 0 ≤ t < 1, such that pk·(tck + (1 - t)ck-1) = pk·wk. Since ck, ck-1 ∈ Pk-1, the point tck + (1 - t)ck-1 ∈ Fk. By quasiconcavity, V(tck + (1 - t)ck-1) ≥ V(ck), contradicting the initial assumption.

The set Fk is a convex polyhedron lying on the hyperplane {z: pk·z = pk·wk}. Unfortunately, the DM cannot generally identify his most preferred point from this set in a single resource allocation. The choice point algorithm (CPA), described and developed rigorously in the Appendix, enables the DM to select his most preferred point ck from Fk. While the precise method of implementing the CPA is complex, the key point is that the complexity is transparent to the DM, who simply solves a finite sequence of resource allocations.

Compare Step

This step simply requires the DM to compare his most recent choice point ck to the lower bound wkmax. If the DM is indifferent between these two outcomes, the outcome wkmax is, for practical purposes, the optimum c*. The reason is that V(wkmax) ≤ V(c*) ≤ V(ck), so if V(wkmax) = V(ck), then V(wkmax) = V(c*).

Separation Step

Each iteration of the SCHA attempts to separate the most recent choice point ck from the feasible set Z. With a complete characterization of the boundary of Z, this step is trivial. The SCHA, however, addresses more general decision models where the computation of each boundary point is costly and a complete characterization of the boundary of Z is not practical.

In seeking a separating hyperplane, the SCHA utilizes all information available about the boundary of the feasible set. Let U = {ui}, where each ui is a boundary point of Z computed previously in the algorithm. Observing that

max_{p≥0, Σpj=1} min_{ui∈U} (p·ck - p·ui) ≥ max_{p≥0, Σpj=1} min_{z∈Z} (p·ck - p·z),

the separating hyperplane defined by pk+1 must, at the very least, separate ck from the set of points U. Consequently, the separation algorithm selects the set of weights p leading to the maximal separation of ck from U, hoping that this p produces a correspondingly ample separation of ck from Z.

Separation Algorithm

Step 1. Determine p = arg max_{p≥0, Σpj=1} min_{ui∈U} (p·ck - p·ui). If the max min equals 0, stop, since ck is feasible.

Step 2. Compute u = arg max_{z∈Z} p·z. If p·u < p·ck, set pk+1 = p, set wk+1 = u, and stop. Otherwise, add u to the set U, and return to Step 1.

Clearly, if a trial p fails to separate ck from Z, the new point u is distinct from all previous members of U. Consequently, the algorithm cannot fail to converge simply because of cycling. A more general proof of convergence can be found in Dintersmith. In solving for pk+1 in Step 1, the max min problem can be solved through a linear program, facilitating computation.
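The max min of Step 1 becomes a linear program by introducing a scalar s for the inner minimum: maximize s over p ≥ 0, Σpj = 1, subject to p·ck - p·ui ≥ s for every ui in U. A sketch with scipy.optimize.linprog, on illustrative data:

```python
import numpy as np
from scipy.optimize import linprog

def max_min_separation(ck, U):
    # Decision vector x = [p_1 .. p_n, s]; maximize s (minimize -s).
    ck, U = np.asarray(ck, float), np.asarray(U, float)
    n, m = len(ck), len(U)
    cost = np.zeros(n + 1)
    cost[-1] = -1.0
    # s - p.(ck - u_i) <= 0 for each known boundary point u_i
    A_ub = np.hstack([-(ck - U), np.ones((m, 1))])
    b_ub = np.zeros(m)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])  # sum(p) = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]              # p >= 0, s free
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    p, s = res.x[:-1], res.x[-1]
    return p, s  # s = 0 signals that ck cannot be separated from U

p, s = max_min_separation([3.0, 3.0], [[4.0, 0.0], [0.0, 4.0], [2.0, 2.0]])
print(p, s)  # p = (0.5, 0.5) with maximal separation s = 1
```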

Convergence

The following fundamental convergence theorem, which is proven in Dintersmith, guarantees the convergence of the SCHA:

FUNDAMENTAL CONVERGENCE THEOREM. Consider P1, P2, ... as previously defined, where each Pi is a closed, convex polyhedron containing the closed convex set Z. Let c1, c2, ... be the DM's most preferred points from the sets P1, P2, ..., respectively. If any ci is feasible, it is optimal. If no ci is feasible, then the sequence {ci} has a subsequence converging to a limit point c', where c' is feasible and optimal.

An important aspect of any interactive procedure is its near-term convergence characteristics. The SCHA is a member of a class of cutting plane algorithms, a class that has been demonstrated by Wolfe [1970] to converge slowly. However, two steps can be taken to speed the SCHA's convergence. First, an informed choice of p1 may move the DM expeditiously to a region of high preference. Second, a more complete characterization of the boundary of Z can produce separating hyperplanes that cut away a bigger portion of Pk, hastening convergence.

From a practical perspective, the presence of upper and lower bounds allows the DM to answer the question, "When has the algorithm converged?" A comparison of the current upper bound, ck, to the current lower bound, wkmax, indicates to the DM an upper bound on the value to be gained by proceeding. If this differential is insignificant to the DM, the algorithm has, for practical purposes, converged.

4. PRACTICAL APPLICATION

Three steps were taken to assess the practicality of the SCHA. First, a general APL software package was developed to implement the algorithm. Then, a multiattribute decision facing a university research group was modeled and integrated into the software package. Finally, each of two DMs from the research group used the SCHA to identify optimal strategies. A more detailed discussion of these interactive sessions can be found in Dintersmith.

The research group participating in this analysis sought to allocate funds and personnel to achieve an optimal outcome. The effectiveness of an allocation could not be measured strictly in terms of a single criterion. A set of five criteria (published reports, working group meetings convened, industrial affiliates recruited, students supported, and discretionary activities funded) comprised a comprehensive, yet tractable, outcome measure.


For the time period in question, the DMs were not certain what the available budget level would be. Consequently, the analysis was conducted for several different budget scenarios. In total, the two DMs completed seven distinct analyses. The SCHA was easy enough to use that the DMs could carry out several analyses in a single 1-hour sitting.

In a follow-up questionnaire, the DMs indicated that both constrained and unconstrained resource allocations were easy to perform. In the actual sessions, the DMs solved constrained resource allocations more quickly than the unconstrained problems; evidently, reduced degrees of freedom expedite the assessment process. Comparing a choice point to a lower bound (the convergence test) was also judged to be elementary.

The most difficult assessment task of the SCHA was, according to the participants in this experiment, the point-to-point comparison needed to update the lower bound. Apparently, selecting the more preferred from two undesirable outcome points can impose cognitive strain (Janis and Mann [1977]). This observation has significant implications for any iterative algorithm whose convergence depends on the DM's ability to discriminate between arbitrary outcomes.

By allowing the DM to directly construct his outcome vector, the SCHA gives the DM an active role in shaping his strategy. Specifically, the structure of the SCHA is such that the DM will generally directly construct his optimal outcome. This aspect provides the SCHA with a practical advantage over those algorithms that require the DM to respond to (often puzzling) assessment questions and await the computer's identification of his "optimum."

At the conclusion of the SCHA interactions, each DM's marginal rates of substitution were assessed at the SCHA optima. The DMs viewed this task as being more difficult than either resource allocations or pointwise comparisons. Also, feasible points in the direction defined by this gradient to the DM's preference function were uniformly regarded as inferior to the SCHA optimum, suggesting a discordance between the assessed and actual marginal rates of substitution.

5. CONCLUSION

A multiattribute decision algorithm has been developed that provides a structured approach for identifying an individual's multiattributed optimum. This separating choice hyperplane algorithm's principal contribution is that it integrates a meaningful assessment procedure into a theoretically rigorous nonlinear optimization algorithm. Decision-makers using the approach expressed much greater confidence in their ability to perform the utilized assessment task (a resource allocation) than in the methods relied on extensively by most other multiattribute decision algorithms (point-to-point comparisons or directly assessed marginal rates of substitution). Further, these decision-makers found the overall approach effective and easy to use, suggesting further investigation of the practical utility of the SCHA is warranted.

The SCHA has been advanced in the context of multiattribute deterministic decisions. The approach can be extended to uncertain decisions by interpreting the DM's payoffs in different states of the world as distinct elements of a multiattribute outcome vector. The DM's resource allocations reveal information about his probability assessments and risk attitude; the algorithm can converge to the DM's most preferred lottery without explicitly representing these subjective entities. For this framework to be valid, the DM's indifference curves must be convex (which Arrow [1964] demonstrates is the case for risk neutral or risk averse preferences satisfying the von Neumann-Morgenstern axioms) and the outcome frontier must be convex. The SCHA's applicability to such decisions is further restricted by the number of states of the world that can be addressed. Because of these limitations, further research into applying iterative, interactive algorithms to multiattribute uncertain decisions is warranted.

APPENDIX: CHOICE POINT ALGORITHM

At each iteration k of the separating choice hyperplane algorithm (SCHA), the DM is required to identify his most preferred point ck from the most recently added face Fk. This Appendix develops the choice point algorithm (CPA), which enables the DM to arrive at ck by solving a sequence of resource allocations. Before stating the CPA, the following constructs are introduced.

Let I be a subset of the indices 1, ..., k. Then the set FI = {z: pi·z = pi·wi, i ∈ I, z ∈ Pk} is a set defined by the set I of equality constraints and lying within the original polyhedron Pk.

Extending a previous construct, let SI = {z: pi·z = pi·wi, i ∈ I, z- ≤ z ≤ z+}, where

zi+ = max_{z∈FI} zi and zi- = min_{z∈FI} zi.

These maxima and minima can be solved with linear programs over the feasible set FI. The set SI is the intersection of supporting hyperplanes with a rectangular region, aligned with the axes, of minimum size containing the face FI.
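These bounds reduce to 2n small linear programs, one ceiling and one floor per attribute. The face below is hypothetical (one equality and one inequality constraint chosen purely for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Toy face F = {z >= 0 : z1 + z2 + z3 = 6, z1 + 2 z3 <= 8}
A_eq, b_eq = np.array([[1.0, 1.0, 1.0]]), np.array([6.0])
A_ub, b_ub = np.array([[1.0, 0.0, 2.0]]), np.array([8.0])
n = 3
ceil, floor = np.empty(n), np.empty(n)
for i in range(n):
    e = np.zeros(n)
    e[i] = 1.0
    # zi+ = max zi over F (negate for linprog's minimization); zi- = min zi
    ceil[i] = linprog(-e, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * n).x[i]
    floor[i] = linprog(e, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                       bounds=[(0, None)] * n).x[i]
print(ceil, floor)  # ceilings (6, 6, 4); floors (0, 0, 0)
```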

Finally, let QIg = {z: pg·z = pg·wg, z ∈ SI}. The set QIg represents a set of points obeying the constraints of the set {I, g} as equality constraints, and bounded by the zi+ and zi-. In the context of the CPA, the constraint g is a trial equality constraint.

Figure 3 illustrates the geometric significance of these definitions and the basic strategy used in the CPA. The objective of the CPA is to identify the DM's most preferred point from the set Fk. First, Fk is embedded in a larger set Sk. The DM, through a resource allocation similar to that of Figure 2 with attribute bounds of zi+ and zi-, selects his most preferred point c from Sk. If c ∈ Fk, then ck = c. If c lies outside of Fk, as in Figure 3, quasiconcavity of the DM's preferences ensures that ck lies on one of the constraints violated by c (on ABD in Figure 3).

Figure 3. Example of an embedded set Fk.

A sequential procedure determines where on ABD the DM's choice point ck lies. First, a resource allocation over the segment ABE determines which violated inequality constraint holds as an equality. In this figure, the set ABE corresponds to the construct QIg. The point c is now defined as the DM's preferred point from ABE. If c lies on BE, quasiconcave preferences ensure that point B dominates all other points on AB. Hence, the DM's most preferred point from ABD can be assumed to lie on BD. An additional constrained resource allocation, with updated bounds, identifies the DM's most preferred point from BD and, by inference, from ABD and Fk.

If the DM's preferred point c from ABE lies on AB, quasiconcavity ensures that this point is optimal for all of ABD and, hence, Fk. The chain of reasoning is that c dominates all points on BE. By quasiconcavity and the location of c, each point on BD is dominated by some point on BE. So, by transitivity, c is optimal over ABD and is the DM's most preferred point ck.

The set ABE is defined by two equality constraints (one defining the hyperplane and one defining the edge AB). The presence of two equality constraints reduces the dimensionality, or the degrees of freedom, of the DM's choice set. The DM selects his most preferred point from ABE using the constrained resource allocation of Figure 4. By allocating all of the initial budget M0 to z1, z2, and z3, the DM selects a point on the hyperplane passing through P. Removing one attribute from the DM's direct control and expressing it as a linear combination of the other attributes (a relationship posted to the right of the constrained attribute) ensures that the DM's choice point lies on the line passing through ABE. Appropriate ceilings and floors limit the DM's choice set to the segment ABE.

Figure 4. Constrained resource allocation display.

The following choice point algorithm makes rigorous the approach described above for determining the DM's optimum over Fk, a convex n-dimensional polytope.

Choice Point Algorithm (CPA)

Step 0. Set I = {k}.

Step 1. The DM, through a resource allocation, selects his most preferred point c from the set SI. Let J = {j: pj·c > pj·wj}.


Step 2. If J = ∅, set ck = c, and stop. If J contains just one member, set I = {I, J}, and return to Step 1. Otherwise, select a member g of J. The DM, through a constrained resource allocation, selects his most preferred point c' from the set QIg. If c' ∈ FI, set ck = c', and stop.

Step 3. Let J' = {j: j ∈ J; pj·c' > pj·wj}. If J' = ∅, set I = {I, g}, and go to Step 1. If J' contains just one member, set I = {I, J'}, and go to Step 1. Otherwise, set J = J', and go to Step 2.

The key to this algorithm is the quasiconcavity of the DM's preferences, which ensures that a choice point from an approximation to the set of concern must, if infeasible, violate one of the inequality constraints that the true choice point lies on. Steps 2 and 3 filter through those violated constraints to identify an additional constraint that can be enforced as an equality.

In Step 1, the DM selects his most preferred point c from a set larger than and containing the most recently added face of the polyhedron bounding the feasible set. If c is feasible with respect to FI, then it is optimal. Otherwise, Lemma A.1 demonstrates that an infeasible c has implications for the ultimate location of ck.

LEMMA A.1. Suppose that the DM's optimal choice point, denoted ck, is known to belong to the set FI. Further suppose that c is the DM's preferred point over SI, but c ∉ FI. Then ck can be assumed to lie on one of the constraints defining FI that is violated by c.

Proof. Assume that ck is an interior point with respect to the set of constraints J that were violated by c; that is, pj·ck < pj·wj for every j ∈ J. Then there exists a tj such that pj·(tjc + (1 - tj)ck) = pj·wj for each j ∈ J. Let t0 equal the smallest tj. Then y = t0c + (1 - t0)ck is such that y ∈ FI, and pj·y = pj·wj for some j ∈ J.

Now, V(c) > V(c^k), since c is optimal over a set containing F^I. Consequently, by quasiconcavity of the DM's preferences, it holds that y, a point that satisfies as an equality at least one of the constraints violated by c, is such that V(y) > V(c^k), contradicting the earlier hypothesis.
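For linear constraints, the t_j in the proof of Lemma A.1 can be computed in closed form: t_j = (p_j · w_j − p_j · c^k) / (p_j · c − p_j · c^k). A small numerical check of the construction (all data invented for illustration):

```python
# Numerical illustration of Lemma A.1's construction (data invented):
# ck strictly obeys both constraints, c violates both. Solving
# p_j . (t_j*c + (1 - t_j)*ck) = p_j . w_j for t_j and taking the smallest
# t_j yields a point y that lies on one violated constraint while still
# obeying the other.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

P = [(1.0, 0.0), (1.0, 1.0)]    # normals of the constraints violated by c
W = [(2.0, 5.0), (2.0, 2.0)]    # their boundary points w_j
ck = (1.0, 1.0)                 # interior: p_j . ck < p_j . w_j for both j
c = (3.0, 2.5)                  # violates both constraints

ts = [(dot(p, w) - dot(p, ck)) / (dot(p, c) - dot(p, ck))
      for p, w in zip(P, W)]
t0 = min(ts)
y = tuple(t0 * ci + (1 - t0) * cki for ci, cki in zip(c, ck))

print(t0, y)   # y satisfies constraint 0 with equality, obeys constraint 1
```

Here t_0 = 0.5 and y = (2.0, 1.75): constraint 0 binds at y (p_0 · y = p_0 · w_0 = 2), while constraint 1 is still strictly obeyed, exactly as the proof requires.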

If c violates just one constraint, Step 2 adds this constraint to the set I and returns to Step 1. In so doing, the dimensionality of the choice set F^I has been reduced by one. Qualitatively, the CPA has progressed in pinpointing the particular region of F^k in which c^k lies.

If c violates more than one constraint, Step 2 selects a trial equality constraint, denoted g. Then the DM's most preferred point from the set Q^{Ig} is ascertained. Step 2 claims that if this point belongs to F^I, it is optimal over F^I, an assertion that is demonstrated by Lemma A.2.


LEMMA A.2. Suppose, during some iteration of the CPA, c^k is known to belong to F^I. Further suppose that c, the DM's most preferred point from S^I, violates constraints indexed by the set J. Let c' be the DM's most preferred point from the set Q^{Ig}, where g is a member of the set J. If c' is a member of F^I, then V(c') = V(c^k).

Proof. Assume otherwise. Then the choice point c^k, an element of F^I, is such that V(c^k) > V(c'), and p_g · c^k < p_g · w_g.

Since c is preferred over a set containing F^I, it must hold that V(c) > V(c^k). Moreover, there exists a t such that y = t c^k + (1 − t) c satisfies p_g · y = p_g · w_g. And, since both c and c^k belong to S^I, a convex set, y is also a member of S^I. Consequently, y ∈ Q^{Ig}.

By quasiconcavity, V(y) > V(c^k). Also, V(c') ≥ V(y), since c' is the DM's most preferred point from Q^{Ig}, a set containing y. So V(c') > V(c^k), contradicting the hypothesis.

Step 3 filters through the set of violated inequality constraints, denoted J, using resource allocations over sets containing trial equality constraints, denoted Q^{Ig}. Lemma A.3 demonstrates that, if any constraint belonging to J is strictly obeyed by the DM's choice point from Q^{Ig}, it can be eliminated as a candidate for the enforced equality constraint.

LEMMA A.3. Assume that c^k is known to belong to F^I. Let c' be the DM's most preferred point from Q^{Ig}. Let H be defined as the set of constraints strictly obeyed by c': H = {h : p_h · c' < p_h · w_h}. Let R^{Im} denote the set {z : z ∈ S^I; p_m · z = p_m · w_m for at least one m ∉ H}. Then c^k can be assumed to belong to R^{Im}.

Proof. Assume V(c^k) > V(z) for all z ∈ R^{Im}. Lemma A.2 implies that V(c') > V(c^k). Consider y = t c' + (1 − t) c^k, 0 ≤ t ≤ 1. Since both c' and c^k obey all constraints of the set H, it must hold that y obeys these constraints as well. For t = 0, all constraints not belonging to the set H will be strictly obeyed. For t = 1, some of these constraints will be violated. Hence, there exists some t_0, 0 < t_0 ≤ 1, such that y = t_0 c' + (1 − t_0) c^k will belong to R^{Im}. Using the quasiconcavity of the DM's preferences, V(y) > V(c^k), contradicting the initial hypothesis and proving the lemma.

In cases more complex than Figure 3, an infeasible choice point c may violate numerous inequality constraints, represented by the set J. Careful selection of a trial inequality constraint can eliminate several members of J in just a single resource allocation, speeding the search for an element of J to enforce as an equality constraint.

Together, Lemmas A.1, A.2, and A.3 suffice to motivate the CPA. By exploiting the quasiconcavity of the DM's preferences, we can develop


an algorithm to identify the DM's most preferred point from a polyhedron, requiring only the solution of resource allocations.

A finite upper bound on the number of allocations required by the CPA can be established. In general, the dimension of the outcome space (the number of attributes in the outcome vector) is some number n greater than three. In this case, each additional enforced equality constraint (added in Step 1) will reduce the dimensionality of the set of points to which the choice point c^k can be inferred to belong.

If n − 1 constraints are enforced as equalities, the set S^I will be a line segment coinciding with F^I (and with an edge of P^k), and the CPA will terminate with the DM's selection of his most preferred point from S^I. So Step 1 cannot occur more than n − 1 times. Also, if k < n − 1, Step 1 is limited to being performed k times, since no more than k constraints can be enforced as equalities.

On the ith pass through Step 1, the resource allocation of Step 2 can be performed at most k - i - 1 times. The reason is that each pass through Steps 1 and 2 eliminates at least one inequality constraint as a candidate for the enforced equality constraint.

Finally, on the (n - 1)st (or kth if k < n - 1) pass through Step 1, Steps 2 and 3 will not be needed. The net effect is that the CPA requires at most T iterations, where

T = 1 + Σ_{i=1}^{L} (k − 1 − i) = 1 + L·(k − 1) − L·(L + 1)/2,

with L defined as min {n - 2, k - 1}. For k = 7 and n = 5, T = 13. This upper bound is a worst case estimate; the procedure used to select the trial inequality constraint g in Step 2 (for instance, by selecting the constraint containing feasible points closest to the most recently selected choice point) can greatly reduce the number of iterations needed in practice. Also, this upper bound only grows linearly with k, so the number of resource allocations required of the DM does not grow excessively as the algorithm progresses.
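The closed form and the worked example can be checked directly; a short sketch (the function name is mine, not from the paper):

```python
# Sketch verifying the iteration bound T = 1 + sum_{i=1}^{L} (k - 1 - i)
#                                        = 1 + L*(k - 1) - L*(L + 1)/2,
# with L = min(n - 2, k - 1). The function name is illustrative.

def cpa_bound(k, n):
    L = min(n - 2, k - 1)
    return 1 + L * (k - 1) - L * (L + 1) // 2   # L*(L+1) is always even

# The closed form agrees with the explicit sum over a grid of (k, n).
for k in range(2, 12):
    for n in range(4, 10):
        L = min(n - 2, k - 1)
        assert cpa_bound(k, n) == 1 + sum(k - 1 - i for i in range(1, L + 1))

print(cpa_bound(7, 5))   # -> 13, as in the text
```

For fixed n ≥ 4 and k ≥ n − 1, the bound reduces to 1 + (n − 2)·(k − 1) − (n − 2)(n − 1)/2, making the linear growth in k explicit.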

The CPA represents an algorithm that can identify the DM's most preferred point from any arbitrary convex n-dimensional polyhedron. As such, it can be applied to problems where the feasible outcome set can be expressed directly in terms of linear inequality constraints. Seen in this light, the iterative superstructure of the SCHA is needed only to deal with the complexity of a general, nonlinear outcome frontier.

ACKNOWLEDGMENTS

This work was performed while the author was enrolled in Stanford University's Department of Engineering-Economic Systems, where he benefited from many discussions, particularly with Ronald Howard, Shmuel Oren, David Luenberger and William Linvill.


