

Decision Support

Human-Centered Processes: Individual and Distributed Decision Support

Gilles Coppin and Alexandre Skrzyniarz, National Graduate School of Telecommunications (ENST), Bretagne

Researchers have largely neglected the evolutionary aspect of human-machine relations over time. To account for this and thus better design decision-support systems, the authors developed a framework and tested it with individual and distributed decision makers.

Since at least the 1950s, researchers have shown recurring interest in human-centered systems (HCS) and human-machine cooperation. Frederick Taylor’s goal was to customize the workplace to human features, and such attempts have succeeded in enhancing what we now call ergonomics: human comfort, user interfaces, and working conditions in general.1 However, from the human-machine perspective, the customization goals have largely failed because the resulting systems intrinsically ignored learning and adaptation paradigms.

Until recently, researchers did not account for the system’s “real life,” particularly the slow and complex evolution of the human-machine association. In knowledge-based and decision-support systems, the real-life factor is even more critical (and criticized) because neither system has a fixed or unanimously accepted form. Recent work has offered initial reference points and theories that clarify, for example, how to elicit expertise, how to model an expert’s decision-making skills, and so on. However, while such elements are key, they do not represent the whole system.

We’ve developed a methodology aimed at offering developers a holistic view of knowledge-based and decision-support systems. Here we illustrate this method in a framework for analyzing and designing HCSs for decision support. We’ve tested our method on two types of decision support systems: those for a single expert and those for a team of experts and machines interoperating in a distributed environment. Although our generic approach has many features, its key emphasis is on a global ethical and systemic principle: HCSs must reinforce the human role in processes by involving users in the system’s design, evolution, and refinement.

Human-centered processes and tools

We define human-centered processes as those that rely essentially on the system’s human element. To refine this definition, we define four basic features (P1 through P4). Because knowledge-based processes are prominent among human-centered processes, we take these as P1, the first element of our definition. Knowledge-based processes arise from empirical evaluation of processes and user roles. We thus associate them with human-centered processes and tools, which constitute the three remaining features of our HCS framework:

• P2: Expert-centered computing. We must design the HCS to enhance, or at least maintain, the user’s role in the process. Take the case of an expert decision maker in an industrial process. A tool designed to assist with decisions should not automate them (through, for example, a supposedly efficient expertise extraction) but rather should assist the expert by making his or her decision-making tasks easier and more efficient. We must consider P2 from an ethical viewpoint. Classically, designers could use the results of user behavior analysis as a normative tool (and, in a way, produce results against the experts) by looking to external references for “the truth” rather than to individual experts. We propose instead not to evaluate users’ behavior, but rather to understand it “from the inside” and possibly help users better control their own current and future performance, without assuming we know what that performance should be.

• P3: Accounting for cognitive constraints. This cognitive engineering approach holds that human-centered tools must rely on models that account for users’ needs and abilities. Although this is the most commonly used and best-known of the properties (referring, indirectly, to ergonomics and the like), it remains a key issue in HCS design.

• P4: Interactive design. Human-centered tool design must directly involve users. Prototyping approaches to software design are an obvious example here. However, we must also consider more complex issues, such as the need for direct user involvement in system operations (such as filtering relevant cases for analysis in an inductive learning approach, or validating or eliciting preferences).

We further analyze these general features in our concrete examples later. But first, we must address the key element of time in HCS. Ongoing and future HCS developments must account for time within the system, especially through learning paradigms. It’s now widely accepted to consider the user and machine as a whole instead of as separate entities. But this (relatively) new vision of a hybrid human-machine system must also consciously consider the pairing’s internal evolution over time. To that end, we base our HCS analysis and design on a model of system dynamics at two different levels.

The first modeling level is the human-machine association (or, more generally, the human-environment association). We propose a model elsewhere that automatically distinguishes between expert and novice users and identifies routine action-reaction sequences between users and the system.2

At the second modeling level, our emphasis is on the continual tuning of knowledge within the human-machine coupling over time. This is primarily associated with knowledge revision and updating, but, according to P4, we must also consider it from an interactive, rather than purely logical, viewpoint. We can express this second level as a four-part model:

• M1 represents the user’s “mental model” of the task in the spirit of cognitive engineering. The model’s form might vary considerably, depending on which facets of cognitive psychology the designer chooses to emphasize (decision models, task models, and so on).

• M2 represents the user’s view of how he or she uses the system to perform the task.

• M3 represents the designer’s view of how the user uses the system to perform the task.

• M4 represents the designer’s mental model of the task, mirroring M1.

Given this multimodel description, a designer might view prototyping as a limited loop between M2 and M3, while the old Taylorist approach would be to fix M3 and M4 while M2 (and possibly M1) evolved to keep the global system viable. Introducing the learning paradigm within HCS means viewing M3 (and, if possible, M4) as intrinsically evolutionary; human and machine are jointly evolving entities, or, better yet, a single global entity that tunes its internal rules to perform better in its environment.

We applied these generic principles and models to two concrete cases. The first case is a decision-support system for solitary decision makers in different production systems. In the second case, we look at group decision making in complex and critical systems.

Decision support for a single expert

In the first case, we targeted processes that rely on one expert decision maker (or a team of experts considered as a single “epistemic” decision maker) in three high-tech companies. Figure 1 shows an example of such a system. The HCS’s purpose is to offer nonintrusive expert feedback that raises the decision maker’s awareness of his or her strategies and, beyond that, helps the user adapt these strategies when necessary. All models in this case represent decision-related information, and we thus limit them to the following classical multicriteria decision framework.

The algorithm

One of the system’s most important features is its ability to extract expertise in a nonintrusive way. This minimizes biased behavior by monitoring expert decision makers directly (through the information system) during their daily tasks, without changing anything about their choices or strategies. The point here is to understand behaviors, not to evaluate them in terms of performance. (Such performances are typically rated according to the usual quality measurements, which are devoted to controlling process performance.) Our focus is on the expert’s own strategies, which we assume are sufficient to ensure good process efficiency.

Figure 1. Human-centered production system. The aim of such systems is to offer expert feedback that raises decision makers’ awareness of their own decision-making strategies.

Figure 2. Knowledge extraction principles. The system uses customized decision trees to account for cognitive constraints.

As Figure 2 shows, the HCS offers feedback on the control situations’ history and on the current state of rules as the HCS extracts and understands them. To extract rules, the system uses customized decision trees to account for cognitive constraints (a limited number of attributes per rule, for example, lets it match users’ cognitive limits). The system continually updates the corresponding rule set based on the new cases that the expert processes daily. As Figure 3 shows, the tool displays only its current state of understanding of the expert strategies, instantly color coding cases to be processed (we further describe this “check as you decide” protocol3 later) and synthetically representing currently extracted rules. The tool simultaneously indicates the subset of relevant attributes, the related value intervals, and the associated decisions that define the rules. In essence, our rules-updating methodology splits, extends, restricts, or creates rules on the basis of statistically significant new inputs.4
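The article leaves the updating cycle at this level of description. As a rough illustration only, the extend-or-create part of such a cycle might look like the following Python sketch; the Rule class, the slack parameter, and the matching logic are our assumptions, not the method of Le Saux et al.,4 which also covers splitting and restricting rules.

# Hypothetical sketch of incremental rule maintenance: confirm a rule that
# already covers a new case, stretch a nearby rule to absorb it, or create
# a new rule otherwise. Not the authors' implementation (see ref. 4).
from dataclasses import dataclass

@dataclass
class Rule:
    intervals: dict   # attribute -> (low, high) value interval
    decision: str     # associated expert decision
    support: int = 0  # number of observed cases covered

    def matches(self, case):
        return all(lo <= case[a] <= hi for a, (lo, hi) in self.intervals.items())

def update_rules(rules, case, decision, slack=0.1):
    """Fold a newly observed (case, decision) pair into the rule base."""
    for rule in rules:
        if rule.decision != decision:
            continue
        if rule.matches(case):
            rule.support += 1  # the case confirms an existing rule
            return
        if all(lo - slack <= case[a] <= hi + slack
               for a, (lo, hi) in rule.intervals.items()):
            # The case lies just outside the rule: extend its intervals.
            rule.intervals = {a: (min(lo, case[a]), max(hi, case[a]))
                              for a, (lo, hi) in rule.intervals.items()}
            rule.support += 1
            return
    # No compatible rule: create a degenerate one centered on the case.
    rules.append(Rule({a: (v, v) for a, v in case.items()}, decision, 1))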

Links to properties and models

Beginning with M1, we assume that the expert decision maker’s behaviors can be described thanks to cognitive psychology and, in particular, in the spirit of Henry Montgomery’s dominance structures.5 These structures are attribute subsets that serve as the basis for modeling an individual decision. When choosing a new car, for example, you might indicate subsets such as “price under $10,000 and maximum speed over 100 mph,” or “red color and reasonable gasoline consumption.” If an option validates one of those combinations of attributes, it’s enough to trigger a positive decision. Researchers have also proposed a general framework, the Moving Basis Heuristics, to describe such a decision process.6
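To make the car example concrete, dominance structures can be read as a disjunction of conjunctive attribute tests, as in this minimal Python sketch; the dictionary keys and the consumption threshold are our illustrative assumptions.

# Each structure is a conjunction of attribute tests; satisfying any one
# structure is enough to trigger a positive decision.
dominance_structures = [
    lambda car: car["price"] < 10_000 and car["max_speed"] > 100,
    lambda car: car["color"] == "red" and car["consumption"] <= 8.0,  # assumed unit: l/100 km
]

def positive_decision(car):
    return any(structure(car) for structure in dominance_structures)

print(positive_decision({"price": 9_500, "max_speed": 110,
                         "color": "blue", "consumption": 9.5}))  # True: first structure holds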

As for M2, corresponding operational decision structures are typically viewed as multicriteria vectors (context and situation description) that the observed decision completes. We therefore think of M3 and M4 as mirrors of these user hypotheses. Figure 3’s interface shows this: each line of the table corresponds to one control situation that requires the user’s decision. The color coding indicates whether the machine “understands” the user’s current decision according to its knowledge state and rule set (which result from clustering past cases). For example:

• Red indicates that either the user’s behavior is different from past cases or the machine’s understanding has yet to reach the adapted level.

• Green indicates that the expert decision is compatible with previous analyses.

• Blue indicates that the configuration is completely unknown to the machine.

The user can interactively choose to drop cases and whether to validate proposed rules. In any case, the expert’s final decision is unconstrained. He or she remains fully responsible for the decision and its consequences.
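Reusing the hypothetical Rule sketch given earlier, the “check as you decide” color coding reduces to a few lines; the function and the plain color strings are ours, not the system’s.

def color_for(case, decision, rules):
    """Map a (case, expert decision) pair to the interface's color code."""
    applicable = [r for r in rules if r.matches(case)]
    if not applicable:
        return "blue"   # configuration completely unknown to the machine
    if any(r.decision == decision for r in applicable):
        return "green"  # decision compatible with previous analyses
    return "red"        # behavior differs, or understanding not yet adapted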

Figure 3. The human-machine interface. Each line of the table corresponds to a control situation that requires the user’s decision. The color coding indicates where each case is in the process: red denotes incompatibility, green indicates that the user’s decision is compatible with previous analyses, and blue indicates a new case. The orange shading is a selected line.

Distributed decision support

When a single decision maker is responsible for a process, it is naturally human-centered. However, decision aid and decision making have greatly changed with the emergence of information and communication technology (ICT). Decision makers are now far less statically located; on the contrary, they typically play their role in a distributed way. This fundamental methodological change creates a new set of requirements7:

• Distributed decisions are necessarily based on incomplete data. “Distributed decision” means that several entities (humans and machines) cooperate to reach an acceptable decision, and that these entities are distributed and possibly mobile along networks. Given physical and semantic limitations, such entities exchange only partial information with each other and the environment. These limitations arise from cognitive constraints (bounded rationality,8 for example); consequently, each entity processes limited information.

• Distributed decisions must be robust. Given the continuous changes that the network provokes and supports, and the resulting questions of urgency and security, distributed decision makers must reach robust decisions.

• Distributed decisions must tolerate and react to evolution. Although similar to the previous point, this one extends it by acknowledging that evolutions are indeterminate and thus unplanned for within the decision process.

• Distributed decisions must be secure. Distributed decision making includes domains that involve possibly extreme dangers and great security needs. Such applications are becoming increasingly common, especially given the nonlocalized and cooperative decision modalities that ICT allows.

• Distributed decisions must be multitimescaled. Distributed decision making must be possible at any moment; it might be necessary to interrupt a decision process and provide another, more viable decision. This constraint of course reinforces the need for reactivity and security features.

In addition to these requirements, distributed decision makers can have many different objectives. The system therefore needs a metagoal that would try, for the sake of all decision makers, to keep the process viable.

Distributed classification: Existing approaches

According to cognitive psychology, we can identify different decision tasks: binary choice, selection, and categorization. We propose here to reformulate and instantiate the quite abstract and generic distributed decision requirements into the more specialized field of categorization, which consists of associating an object with a predefined class based on an analysis of its attributes.

In extending our modeling and analysis from one decision maker to many, we focus here on information processing from a classification viewpoint and don’t deal with the communication issues that the process could entail. In other words, we consider here the paradigm of distributed decision-support systems, in which several decision makers who deal with partial, uncertain, and possibly exclusive information must reach a common decision. We assume that we can represent each individual decision-making process through a classification process, identifying one of the main issues as the need to “fuse” the different classifier structures. According to Dymitr Ruta and Bogdan Gabrys,9 there are two ways to achieve classifier fusion: the first operates on classifiers’ outputs; the second deals with the classifiers themselves. In the latter case, there are three primary approaches:

• Dynamic classifier selection (DCS) typically extracts a single best classifier rather than mixing many of them.

• Classifier structuring and grouping (CSG) organizes classifiers and combination functions in parallel or into groups, then applies different fusion methods to each group at each stage (the classifiers’ outputs are taken as inputs for various combination functions).

• Hierarchical mixture of experts (HME) uses the divide-and-conquer principle, organizing expert networks into a tree-like structure whose leaves each try to solve a local, supervised problem. At each node, a gating network combines the leaves’ outputs, and the expert networks are trained to increase the posterior probability according to Bayes’ rule.

Here, we propose a new approach based on restructuring classifier internals to harmonize outputs. To reduce conflicts, we use a consensus algorithm and optimization methods derived from basic game theory.

Application context

We developed our approach as part of the Multiagent System for Airborne Maritime Patrol research project (SMA2). SMA2 is funded by the French Army DGA and the French Ministry of Research, and includes Thales Airborne Systems (the program’s coordinator), CRIL Technology, and ENST-Bretagne, where we work. SMA2’s aim is to design a distributed decision-support tool for an airborne maritime tactical situation awareness system. The system will be embedded in an aircraft whose crew consists of various sensor operators, one or two tactical operators, and two pilots (see Figure 4).

Figure 4. Crew of a maritime-awareness aircraft. In addition to two pilots, the crew consists of various sensor operators and one or two tactical operators.

Each sensor operator helps identify targets, which (for system development purposes) we view as time-persistent sensor signals. Of course, each operator uses a local and partial view of the environment through the “filter” of his or her own sensor. These sensors (radar, electromagnetic signal measurement, forward-looking infrared, identification friend or foe, and optics) have different perception modes, some of which provide redundant information. The tactical operator can access all information that sensor operators provide; he or she can thus make the decision about target identification and how to manage the mission.

In managing and controlling sensor information and tactical situations, the crew faces two primary challenges. First, identification tasks are intrinsically difficult. Operator workload is high, and aircraft working conditions are rather uncomfortable (due to the noisy environment, long missions, and regular bouts of bad weather). Operator efficiency is thus not optimal. Second, sensors provide noisy data. Because of weather and sea conditions, perceptions are error prone. Each sensor typically has a data-acquisition quality measure, but again, depending on the context, the measure itself might prove unreliable. Moreover, in military missions, some information is often unavailable and impossible to attain (for example, stealth may be required, preventing the use of efficient active sensors such as synthetic aperture radar imaging facilities).

Such conditions can lead to conflicts in the decision process and identification failures. Because of the process’s distributed aspect, correct local decisions can lead to global incoherencies, or conflicts. Three types of conflicts exist:

• Contradictory parameters. Two acquisitions can lead to different values for the same parameter.

• Incoherent data. Although data might seem correct, it could, for example, have undetected flaws that make target identification impossible.

• Conflicting classifiers. A classifier’s identification rules or target description might be false (each classifier has its own global efficiency, in terms of error rates).

Moreover, we must respect reactivity and security constraints. To be properly reactive, a system should propose results in a timely fashion relative to the environment’s dynamics (usually within 10 to 15 seconds); hard real-time constraints do not apply. From a decision viewpoint, security consists of never dropping assumptions that could lead to the identification of a dangerous ship. Estimating error rates prevents users from making blind decisions (that is, simply going with the system’s decision with total confidence). The ability to point out flawed data significantly decreases the conflict level.

In addition to creating sensor-fusion algorithms, we propose to directly assist users, especially tactical operators, in managing these difficulties. Currently, we assume that onboard expert operators process ambiguous and conflicting cases, and that they need an assisting tool that accounts for the distributed and partial features of information and decision.

Methodological choices

We propose using the “cognitively constrained” decision trees as M1 models for all expert decision makers, and corresponding production rules as M2 models. In a decision tree, each interior node represents a feature, each arrow leaving an interior node represents the node feature’s value, and each leaf represents a category. As Figure 5a shows, we build decision trees from training data using inductive methods and use them to predict categories for new events (a list of feature values). So, given an event, we start from the root node and, for each interior node, follow the outgoing arrow for the feature value that matches the event. When we reach a leaf node, we return the category.

Decision trees are fast and easy to interpret, letting us highlight important issues in new domains and offering an easy reference for how decisions are reached. Figure 5b shows an example decision tree with the categories “bee,” “ant,” “dog,” “cow,” and “goat.” The available features are number of legs, wings, horns, and mass. Bold arrows show how a corresponding event (for example, legs = 4, wings = no, horns = yes, and mass = 50) leads to the correct category (“goat”).

Figure 5. An example decision tree. (a) The decision tree helps categorize new events by comparing them with training data and existing categories. (b) Feature values lead to the correct categories, “goat” (left) and “bee” (right).
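As a concrete illustration, Figure 5b’s tree can be encoded as nested dictionaries and walked exactly as described; the encoding is ours, with the numeric split on mass hardcoded for brevity.

# Nested-dict encoding of Figure 5b. Interior nodes name a feature and map
# each value to a subtree; leaves are category strings.
tree = {
    "feature": "legs",
    "branches": {
        6: {"feature": "wings", "branches": {"yes": "bee", "no": "ant"}},
        4: {"feature": "horns",
            "branches": {"no": "dog",
                         "yes": {"feature": "mass",
                                 "branches": {"<100": "goat", ">100": "cow"}}}},
    },
}

def classify(node, event):
    """Follow the arrow matching each feature value until a leaf is reached."""
    while isinstance(node, dict):
        value = event[node["feature"]]
        if node["feature"] == "mass":  # numeric feature: discretize the value
            value = "<100" if value < 100 else ">100"
        node = node["branches"][value]
    return node

print(classify(tree, {"legs": 4, "wings": "no", "horns": "yes", "mass": 50}))  # goat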

Using decision trees to account for cognitive constraints gives us a generic approach that fuses individual decision tree structures. We can algorithmically justify choosing this classification structure with the following supplementary arguments:

• It’s easy to mix heterogeneous attributes together in the same tree structure.

• Tree structures have proven perfectly suitable for classification problems.

• Numerous metrics exist for trees. We can easily evaluate their efficiency in terms of error rates and can apply the Quinlan entropy measure for choosing locally better selectors.10 Such evaluations will be useful for feeding the consensus algorithm; a minimal form of the measure appears in the sketch below.
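For reference, here is the entropy-based measure in its textbook (ID3) form;10 this is the standard formulation, not project code.

# Shannon entropy of a label sample, and the information gain (entropy
# reduction) obtained by splitting the sample on one attribute.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attribute):
    base, n, split = entropy(labels), len(labels), {}
    for row, label in zip(rows, labels):
        split.setdefault(row[attribute], []).append(label)
    return base - sum(len(g) / n * entropy(g) for g in split.values())

rows = [{"legs": 4}, {"legs": 4}, {"legs": 6}, {"legs": 6}]
labels = ["dog", "goat", "bee", "ant"]
print(information_gain(rows, labels, "legs"))  # 1.0 bit of the 2.0-bit base entropy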

With HCS for single decision makers, we would embed M3 and M4 models that mimic the expert’s own decision models and use these structures as the basis for human-machine communication. But with the distributed process, we must add to the HCS models a complementary functionality devoted to consensus setting. To this end, we developed a new method inspired by Sascha Ossowski’s consensus algorithm.11

Models for individual agents. All of a team’s individual decision makers should fit with a cognitive decision tree model. From this starting point, we propose taking into account the limited capabilities of the (human or machine) sensors that feed the decision processes by weighting each tree path with a probabilistic measure (resulting from the intersections of path-defining intervals with intervals attached to the uncertain measure). We also attach a second weight to each potential tree path, related to the associated leaf’s intrinsic discriminating power. From these two operations, we can order possible conclusions (such as individual plans) according to the set preferences of each expert decision maker.
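One plausible reading of these two weights, sketched under our own assumptions (interval overlap as the probabilistic measure, independence across attributes, and multiplication as the combination rule):

def overlap(a, b):
    """Length of the intersection of two closed intervals (lo, hi)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def path_weight(path_intervals, sensor_intervals):
    """Fraction of each uncertain sensor reading that falls inside the
    interval the path requires, multiplied across attributes."""
    w = 1.0
    for attr, required in path_intervals.items():
        lo, hi = sensor_intervals[attr]
        if hi == lo:  # exact reading: either inside the interval or not
            w *= 1.0 if required[0] <= lo <= required[1] else 0.0
        else:
            w *= overlap(required, (lo, hi)) / (hi - lo)
    return w

def rank_conclusions(paths, sensor_intervals):
    """paths: (leaf_label, path_intervals, discriminating_power) triples."""
    scored = [(label, path_weight(iv, sensor_intervals) * power)
              for label, iv, power in paths]
    return sorted(scored, key=lambda s: s[1], reverse=True)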

Model for team consensus. Ossowski created his algorithm to enable emergent structural coordination between agents and based it on individual rationality. From the local viewpoint, each agent uses its own metrics to evaluate potential environment states. This evaluation can rely on multiple attributes of one decision maker’s decision. The model takes into account objective metrics (related to qualities of perceptions) and subjective metrics (related to tree structures). The algorithm’s formalism lets us look at the problem from a “social” viewpoint: we can describe relations between plans that agents might carry out.

To achieve this part of the algorithm, we set metrics that represent agents’ “efforts” when they decide to change their initial choices into more compatible ones. (Agents typically do this when evaluating the probability of joint validation of the individual tree paths, selector by selector.) To represent additional constraints that are unrelated to hypothesis incompatibility, we use the deontic level, which lets us represent tactical constraints and describe specific and contextual users. (Deontic logic is a subset of modal logic used to describe normative systems, such as a legal system in human societies.)

We base the last part of our algorithm on an aggregation method that lets us build a sorted list of multiplans, which are combinations of agents’ individual plans. The method uses each agent’s local metrics to compute a utility vector for each multiplan and sorts the resulting vectors according to their distance from an “ideal result” (all agents maximize their profit) and a worst-case result (all agents minimize their profit).
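A sketch of one way to realize this ranking; the compromise criterion (distance to the ideal point minus distance to the worst case) is our assumption, not the authors’ exact formula.

from math import dist  # Euclidean distance, Python 3.8+

def rank_multiplans(utilities):
    """utilities: dict mapping multiplan -> tuple of per-agent utilities."""
    vectors = list(utilities.values())
    dims = range(len(vectors[0]))
    ideal = tuple(max(v[i] for v in vectors) for i in dims)  # all agents maximize
    worst = tuple(min(v[i] for v in vectors) for i in dims)  # all agents minimize
    # Prefer multiplans close to the ideal and far from the worst case.
    return sorted(utilities,
                  key=lambda mp: dist(utilities[mp], ideal) - dist(utilities[mp], worst))

print(rank_multiplans({"mp1": (0.9, 0.2), "mp2": (0.6, 0.7), "mp3": (0.3, 0.4)}))
# ['mp2', 'mp1', 'mp3']: mp2 best balances both agents' utilities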

In keeping with our overall approach, our assisting system is clearly a facilitating tool rather than a “decision prosthesis,” for several reasons. First, all decision trees representing individual expert decision makers must be built from the expert’s decision set, and they remain under the expert’s control. Second, the consensus mechanism is for consultation only; it has no priority over a possible human decision, and its propositions are clearly explained (decision trees are easily understood models for decision processes). Finally, at the deontic level, the consensus mechanism is under the control of high-level rules that might deeply influence its selections. These rules are validated and activated by the expert decision makers themselves (interactive property P4).

System architecture

We designed our system as a module in a larger system that consists of pluggable components, including sensor modules and a tactical processing unit. Our component’s goal is to reduce conflict in track identification tasks by working on other components’ symbolic data, which is archived in a distributed database. A communication module collects user requests and forwards them to a user agent. These agents use actor agents to perform tasks on classification structures.

When a conflict arises, the system calls a consensus agent, which is a group of Ossowski’s agents. Each actor agent in the decision process declares its plan, and the consensus agent executes the consensus process we described earlier. The process meets reactivity constraints (anytime decision making) in that it’s interruptible between each execution of Ossowski’s process.
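In outline, the anytime property could look like the following sketch; the function names are ours, and the time budget echoes the 10-to-15-second reactivity figure given earlier.

import time

def anytime_consensus(consensus_round, initial_multiplan, budget_s=10.0):
    """Run interruptible consensus rounds within a time budget, always
    keeping the best multiplan found so far available."""
    best = initial_multiplan
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        improved = consensus_round(best)  # one Ossowski-style round
        if improved is None:              # no further improvement: stop early
            break
        best = improved
    return best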

All agents have their own control thread. To limit over-parallelization, we treat tracks as passive objects (without associated control threads). An update agent monitors symbolic data from the database. When new data is available or available data is updated, the update agent uses rules to compute the associated selector’s value. If the selector is new or its value has been modified, the update agent sends a notify message to the user agents involved with the selector.

A track includes various data, such as a rule set for update agents, currently available selectors, global identification (if it’s been computed), and small decision trees to handle various users’ diagnostics. The local decision is the class given by the tree’s leaf; the global decision results from an algorithm that searches for consensus among local decisions.
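As a data-structure illustration only, a track as described might carry fields like these; the names are ours, not the project’s.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Track:
    update_rules: list = field(default_factory=list)  # rules driving update agents
    selectors: dict = field(default_factory=dict)     # currently available selectors
    global_id: Optional[str] = None                   # consensus identification, if computed
    local_trees: dict = field(default_factory=dict)   # small per-operator decision trees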

Our project is currently in the validation phase. We’ve integrated the assisting tool into an existing operational platform and are now working with expert decision makers in military operations to identify how to merge the tool with their environment and decision-making protocol. Our current developments are particularly focused on the human-machine interface for the consensus mechanism. We also have a few outstanding challenges. For example, some local agents might have to learn and continuously update each operator’s classifier structure. We also need to address cross-operator exchange issues and possibly develop new protocols for cooperative teamwork. Finally, at some point we might extend the architecture to enable plan scheduling and execution based on equilibrium so that operators can delegate sensor actions to the assisting system.

Acknowledgments

Our work is part of the Algorithmic and Material Handling of Communication, Information and Knowledge (TAMCIC) research unit at the French National Center for Scientific Research (CNRS).

References

1. F. Taylor, The Principles of Scientific Management, Harper & Brothers, 1911.

2. G. Coppin, Pour une Contribution à l’Analyse Anthropocentrée des Systèmes d’Action, doctoral dissertation, School for Advanced Studies in Social Sciences (EHESS), Paris, 1999.


3. R. Bisdorff, “Cognitive Support Methods for Multicriteria Expert Decision Making,” European J. Operational Research, vol. 119, no. 2, Dec. 1999, pp. 379–387.

4. E. Le Saux, P. Lenca, and P. Picouet, “Dynamic Adaptation of Rules Bases under Cognitive Constraints,” European J. Operational Research, vol. 136, no. 1, Jan. 2002, pp. 299–310.

5. H. Montgomery, “Decision Rules and the Search for a Dominance Structure: Toward a Process Model of Decision Making,” Analysing and Aiding Decision Processes, North-Holland, 1983, pp. 343–369.

6. J.P. Barthélemy and E. Mullet, “Choice Basis: A Model for Multiattribute Preferences,” British J. Math. and Statistical Psychology, vol. 39, 1986, pp. 419–437.

7. J.P. Barthélemy, R. Bisdorff, and G. Coppin, “Human-Centered Processes and Decision Support Systems,” European J. Operational Research, vol. 136, no. 1, Jan. 2002, pp. 233–253.

8. H.A. Simon, The Sciences of the Artificial,MIT Press, 1969.

9. D. Ruta and B. Gabrys, “An Overview ofClassifier Fusion Methods,” Computing andInformation Systems, vol. 7, no. 1, Feb. 2000,pp. 1–10.

10. J.R. Quinlan, “Induction of Decision Trees,”Machine Learning, vol. 1, no. 1, Mar. 1986,pp. 81–106.

11. S. Ossowski, Co-ordination in Artificial Agent Societies: Social Structure and Its Implications for Autonomous Problem-Solving Agents, Springer-Verlag, 1999.


The Authors

Gilles Coppin is head of the Department of Artificial Intelligence and Cognitive Science at ENST Bretagne. His research interests include human-machine cooperation, knowledge management, and decision making. He received a PhD in computer science and mathematics from the School of Advanced Studies in Social Sciences (EHESS), Paris. He is chair of the Human-Centered Processes Euro Working Group. Contact him at [email protected].

Alexandre Skrzyniarz is a PhD student in the Department of Artificial Intelligence and Cognitive Science at ENST Bretagne. His research interests include distributed systems and decision support. He received a degree in telecommunication engineering from ENST Bretagne. Contact him at [email protected].

