Human-centered processes: individual and distributed decision support



Gilles Coppin and Alexandre Skrzyniarz, National Graduate School of Telecommunications (ENST), Bretagne

Since at least the 1950s, researchers have shown recurring interest in human-centered systems (HCS) and human-machine cooperation. Frederick Taylor's goal was to customize the workplace to human features, and these attempts have succeeded in enhancing what we now call ergonomics: human comfort, user interfaces, and working conditions in general.1 However, from the human-machine perspective, the customization goals have largely failed because the resulting systems intrinsically ignored learning and adaptation paradigms.

Until recently, researchers did not account for the system's real life, particularly the slow and complex evolution of the human-machine association. In knowledge-based and decision-support systems, the real-life factor is even more critical (and criticized) because neither system has a fixed or unanimously accepted form. Recent work has offered initial reference points and theories that clarify, for example, how to elicit expertise, how to model an expert's decision-making skills, and so on. However, while such elements are key, they do not represent the whole system.

We've developed a methodology aimed at offering developers a holistic view of knowledge-based and decision-support systems. Here we illustrate this method in a framework for analyzing and designing HCSs for decision support. We've tested our method on two types of decision support systems: those for a single expert and those for a team of experts and machines interoperating in a distributed environment. Although our generic approach has many features, its key emphasis is on a global ethical and systemic principle. HCSs must reinforce the human role in processes by involving users in the system's design, evolution, and refinement.

Human-centered processes and tools

We define human-centered processes as those that rely essentially on the system's human element. To refine this definition, we define four basic features (P1 through P4). Because knowledge-based processes are prominent among human-centered processes, we take these as P1, the first element of our definition. Knowledge-based processes arise from empirical evaluation of processes and user roles. We thus associate them with human-centered processes and tools, which constitute the three remaining features of our HCS framework:

P2: Expert-centered computing. We must design the HCS to enhance or at least maintain the user's role in the process. Take the case of an expert decision maker in an industrial process. A tool designed to assist with decisions should not automate them through, for example, a supposedly efficient expertise extraction, but rather should assist the expert by making his or her decision-making tasks easier and more efficient. We must consider P2 from an ethical viewpoint. Classically, designers could use the results of user behavior analysis as a normative tool (and, in a way, produce results against the experts) by looking to external references for the truth rather than to individual experts. We propose instead not to evaluate users' behavior, but rather to understand it from the inside and possibly help users better control their own current and future performance without assuming we know what that performance should be.

P3: Accounting for cognitive constraints. This cognitive engineering approach holds that human-centered tools must rely on models that account for users' needs and abilities. Although this is the most commonly used and known of the properties (referring, indirectly, to ergonomics and the like), it remains a key issue in HCS design.

Researchers have largely neglected the evolutionary aspect of human-machine relations over time. To account for this and thus better design decision-support systems, the authors developed a framework and tested it with individual and distributed decision makers.

P4: Interactive design. Human-centered tool design must directly involve users. Prototyping approaches to software design are an obvious example here. However, we must also consider more complex issues, such as the need for direct user involvement in system operations (such as filtering relevant cases for analysis in an inductive learning approach, or validating or eliciting preferences).

We further analyze these general features in our concrete examples later. But first, we must address the key element of time in HCS. Ongoing and future HCS developments must account for time within the system, and do so especially through learning paradigms. It's quite accepted now to consider the user and machine as a whole instead of as separate entities. But this (relatively) new vision of a hybrid human-machine system must also consciously consider the pairing's internal evolution over time. To that end, we base our HCS analysis and design on a model of system dynamics at two different levels.

The first modeling level is the human-machine association (or, more generally, the human-environment association). We propose a model elsewhere that automatically distinguishes between expert and novice users, and identifies routine action-reaction sequences between users and the system.2

At the second modeling level, our emphasis is on the continual tuning of knowledge within the human-machine coupling over time. This is primarily associated with knowledge revision and updating, but, according to P4, we must also consider this from an interactive, rather than just purely logical, viewpoint. We can express this second level as a four-part model:

M1 represents the user's mental model of the task in the spirit of cognitive engineering. The model's form might vary considerably, depending on which facets of cognitive psychology the designer chooses to emphasize (decision models, task models, and so on).

M2 represents the user's view of how he or she uses the system to perform the task.

M3 represents the designer's view of how the user uses the system to perform the task.

M4 represents the designer's mental model of the task, mirroring M1.

Given this multimodel description, a designer might view prototyping as a limited loop between M2 and M3, while the old Taylorist approach would be to fix M3 and M4, while M2 (and possibly M1) evolved to keep the global system viable. Introducing the learning paradigm within HCS means viewing M3 (and if possible, M4) as intrinsically evolutionary; human and machine are jointly evolving entities, or, better yet, a single global entity that tunes its internal rules to perform better in its environment.
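As a purely illustrative sketch (not code from the article), the following Python fragment shows one way to represent the four-model coupling and the learning step that lets M3 evolve from observed behavior; the class name, the model contents, and the update policy are all assumptions.

```python
# Illustrative sketch of the M1-M4 coupling; model contents and the update
# policy are assumptions made for this example, not taken from the article.
from dataclasses import dataclass, field

@dataclass
class HybridSystem:
    m1_user_task_model: dict = field(default_factory=dict)       # user's mental model of the task
    m2_user_usage_model: dict = field(default_factory=dict)      # user's view of how the system is used
    m3_designer_usage_model: dict = field(default_factory=dict)  # designer's view of how the user uses the system
    m4_designer_task_model: dict = field(default_factory=dict)   # designer's mental model of the task

    def observe_interaction(self, situation, user_action):
        """Learning step: let M3 evolve from observed behavior instead of staying fixed."""
        self.m3_designer_usage_model[situation] = user_action

    def prototyping_gap(self):
        """Prototyping viewed as a limited loop comparing M2 and M3."""
        return {k: (self.m2_user_usage_model.get(k), v)
                for k, v in self.m3_designer_usage_model.items()
                if self.m2_user_usage_model.get(k) != v}
```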

We applied these generic principles and models to two concrete cases. The first case is a decision-support system for solitary decision makers in different production systems. In the second case, we look at group decision making in complex and critical systems.

    Decision support for a single expert

In the first case, we targeted processes that rely on one expert decision maker (or a team of experts considered as a single epistemic decision maker) in three high-tech companies. Figure 1 shows an example of such a system. The HCS's purpose is to offer nonintrusive, expert feedback that raises the decision maker's awareness of his or her strategies and, beyond that, helps the user adapt these strategies when necessary. All models in this case represent decision-related information, and we thus limit them to the following classical multicriteria decision framework.

The algorithm

One of the system's most important features is its ability to extract expertise in a nonintrusive way. This minimizes biased behavior by monitoring expert decision makers directly (through the information system) during their daily tasks, without changing anything about their choices or strategies. The point here is to understand behaviors and not evaluate them in terms of performance. (Such performances are typically rated according to the usual quality measurements, which are devoted to controlling process performance.) Our focus is on the experts' own strategies, which we assume are sufficient to ensure good process efficiency.


[Figure 2 (diagram). Knowledge extraction principles: initial knowledge extraction builds a control rules base as a decision tree; online expert control rules extraction updates the rules base with online rules; conflict resolution produces the updated decision tree. The system uses customized decision trees to account for cognitive constraints.]

[Figure 1 (diagram). Human-centered production system: an operator's actual control strategy drives a process/subprocess with a quality check; strategy control, extraction, and adaptation link quality evaluation, experience, proposed strategies, and a display and handling service. The aim of such systems is to offer expert feedback that raises decision makers' awareness of their own decision-making strategies.]


As Figure 2 shows, the HCS offers feedback on the control situations' history and on the current state of rules as the HCS extracts and understands them. To extract rules, the system uses customized decision trees to account for cognitive constraints (a limited number of attributes per rule, for example, lets it match users' cognitive limits). The system continually updates the corresponding rule set based on the new cases that the expert processes daily. As Figure 3 shows, the tool displays only its current state of understanding of the expert strategies, instantly color coding cases to be processed (we further describe this "check as you decide" protocol3 later) and synthetically representing currently extracted rules. The tool simultaneously indicates the subset of relevant attributes, the related value intervals, and the associated decisions that define the rules. In essence, our rules-updating methodology splits, extends, restricts, or creates rules on the basis of statistically significant new inputs.4
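The following Python sketch is one hedged way to realize that idea with an off-the-shelf learner: a depth-limited decision tree stands in for the cognitive constraint on rule length, and the tree is re-extracted once enough new expert cases have accumulated. The attribute limit, the update threshold, and all names are assumptions, not the authors' exact algorithm (which, per Figure 2, also resolves conflicts between initial and online rules).

```python
# Illustrative sketch only: a depth-limited decision tree as a stand-in for
# cognitively bounded rule extraction (limits and thresholds are assumptions).
from sklearn.tree import DecisionTreeClassifier, export_text

MAX_ATTRIBUTES_PER_RULE = 3   # assumed cognitive limit on rule length

class RuleExtractor:
    def __init__(self, feature_names):
        self.feature_names = feature_names
        self.cases, self.decisions = [], []
        self.tree = None

    def observe(self, case, decision):
        """Record one expert decision, monitored nonintrusively."""
        self.cases.append(case)
        self.decisions.append(decision)

    def update_rules(self, min_cases=20):
        """Re-extract rules once enough observed cases have accumulated."""
        if len(self.cases) < min_cases:
            return
        self.tree = DecisionTreeClassifier(max_depth=MAX_ATTRIBUTES_PER_RULE)
        self.tree.fit(self.cases, self.decisions)

    def current_rules(self):
        """Return a human-readable view of the currently extracted rules."""
        if self.tree is None:
            return "(no rules extracted yet)"
        return export_text(self.tree, feature_names=self.feature_names)
```

In the article's terms, incrementally splitting, extending, restricting, or creating rules would replace this naive full re-fit.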

Links to properties and models

Beginning with M1, we assume that the expert decision maker's behaviors can be described, thanks to cognitive psychology and, in particular, to the spirit of Henry Montgomery's dominance structures.5 These structures are attribute subsets that serve as the basis for modeling an individual decision. When choosing a new car, for example, you might indicate subsets such as price under $10,000 and maximum speed over 100 mph, or red color and reasonable gasoline consumption. If an option validates one of those combinations of attributes, it's enough to trigger a positive decision. Researchers have also proposed a general framework, the Moving Basis Heuristics, to describe such a decision process.6
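To make the dominance-structure idea concrete, here is a minimal Python sketch in which each structure is a conjunction of attribute tests, and satisfying any one structure triggers a positive decision. The car attributes restate the example above; the exact thresholds (for instance, what counts as reasonable gasoline consumption) are assumptions.

```python
# Minimal sketch of dominance structures: each structure is a conjunction of
# attribute tests; satisfying any one structure triggers a positive decision.
def cheap_and_fast(car):
    return car["price"] < 10_000 and car["max_speed_mph"] > 100

def red_and_frugal(car):
    return car["color"] == "red" and car["mpg"] >= 35  # "reasonable consumption" assumed as 35 mpg

DOMINANCE_STRUCTURES = [cheap_and_fast, red_and_frugal]

def positive_decision(option):
    """Return True if the option validates at least one dominance structure."""
    return any(structure(option) for structure in DOMINANCE_STRUCTURES)

# Example use:
car = {"price": 9_500, "max_speed_mph": 110, "color": "blue", "mpg": 28}
print(positive_decision(car))  # True: the cheap-and-fast structure is satisfied
```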

As for M2, corresponding operational decision structures are typically viewed as multicriteria vectors (context and situation description) that the observed decision completes. We therefore think of M3 and M4 as mirrors of these user hypotheses. Figure 3's interface shows this: each line of the table corresponds to one control situation that requires the user's decision. The color coding indicates whether the machine understands the user's current decision according to its knowledge state and rules set (which result from clustering past cases). For example,

Red indicates that either the user's behavior is different from past cases or that the machine's understanding has yet to reach the adapted level.

Green indicates that the expert decision is compatible with previous analyses.

Blue indicates that the configuration is completely unknown to the machine.

The user can interactively choose to drop cases and whether to validate proposed rules. In any case, the expert's final decision is unconstrained. He or she remains fully responsible for the decision and its consequences.
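Below is a minimal sketch of how such a color code might be computed, assuming an extracted-rule classifier with a predict method and a record of previously seen configurations; the function and parameter names are illustrative, not taken from the system.

```python
# Illustrative sketch of the "check as you decide" color coding, assuming an
# extracted-rule classifier with a predict() method; names are not from the article.
def color_for_case(case, expert_decision, rule_model, known_configurations):
    """Map one control situation to the interface color described in the text."""
    if tuple(case) not in known_configurations:
        return "blue"    # configuration completely unknown to the machine
    predicted = rule_model.predict([case])[0]
    if predicted == expert_decision:
        return "green"   # expert decision compatible with previous analyses
    return "red"         # behavior differs, or the machine's understanding lags behind
```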

[Figure 3 (screenshot). The human-machine interface. Each line of the table corresponds to a control situation that requires the user's decision. The color coding indicates where each case is in the process: red denotes incompatibility, green indicates that the user's decision is compatible with previous analyses, and blue indicates a new case. The orange shading is a selected line.]

Distributed decision support

When a single decision maker is responsible for a process, it is naturally human-centered.

However, decision aid and decision making have greatly changed with the emergence of information and communication technology (ICT). Decision makers are now far less statically located; on the contrary, they typically play their role in a distributed way. This fundamental methodological change creates a new set of requirements7:

Distributed decisions are necessarily based on incomplete data. Distributed decision means that several entities (humans and machines) cooperate to reach an acceptable decision, and that these entities are distributed and possibly mobile along networks. Given physical and semantic limitations, such entities exchange only partial information with each other and the environment. These limitations arise from cognitive constraints (bounded rationality,8 for example), and consequently, each entity processes limited information.

Distributed decisions must be robust. Given the continuous changes that the network provokes and supports, and the resulting questions of urgency and security, distributed decision makers must reach robust decisions.

Distributed decisions must tolerate and react to evolution. Although similar to the previous point, this point extends it by acknowledging that evolutions are indeterminate and thus unplanned for within the decision process.

Distributed decisions must be secure. Distributed decision making includes domains that involve possibly extreme dangers and great security needs. Such applications are becoming increasingly common, especially given the nonlocalized and cooperative decision modalities that ICT allows.

Distributed decisions must be multitime-scaled. Distributed decision making must be possible at any moment; it might be necessary to interrupt a decision process and to provide another, more viable decision. This constraint of course reinforces the need for reactivity and security features.

In addition to these requirements, distributed decision makers can have many different objectives. The system therefore needs a metagoal that would try, for the sake of all decision makers, to keep the process viable.

    Distributed classification: Existing approaches

According to cognitive psychology, we can identify different decision tasks: binary choice, selection, and categorization. We propose here to reformulate and instantiate the quite abstract and generic distributed decision requirements into the more specialized field of categorization, which consists of associating an object with a predefined class based on an analysis of its attributes.
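Purely as an illustration of categorization under the incomplete-data requirement (not the authors' architecture), the sketch below lets each entity classify an object from only the attributes it can see and aggregates the partial decisions by a simple majority vote; all entities, local rules, and thresholds are invented for the example.

```python
# Illustrative only: distributed categorization with partial attribute views.
# Each entity applies its own (assumed) local rule; a majority vote aggregates.
from collections import Counter

def make_entity(visible_attributes, local_rule):
    """An entity sees only some attributes and classifies from that partial view."""
    def classify(obj):
        partial_view = {k: obj[k] for k in visible_attributes if k in obj}
        return local_rule(partial_view)
    return classify

def aggregate(entities, obj):
    """Combine the entities' partial decisions into a single class label."""
    votes = Counter(entity(obj) for entity in entities)
    return votes.most_common(1)[0][0]

# Example with three entities and assumed local rules:
sensor_a = make_entity(["temperature"],
                       lambda v: "alarm" if v.get("temperature", 0) > 80 else "normal")
sensor_b = make_entity(["pressure"],
                       lambda v: "alarm" if v.get("pressure", 0) > 5 else "normal")
operator = make_entity(["temperature", "pressure"],
                       lambda v: "alarm" if v.get("temperature", 0) > 90 or v.get("pressure", 0) > 6 else "normal")

print(aggregate([sensor_a, sensor_b, operator], {"temperature": 85, "pressure": 4}))  # "normal" (2 of 3 entities vote normal)
```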

In extending our modeling and analysis from one decision maker to many, we focus here on information processing from a classification viewpoint and don't deal with the communication issues that the process co...
