

Knowledge Acquisition (1990) 2, 301-343

Improving explanations in knowledge-based systems: RATIONALE†

SUHAYYA ABU-HAKIMA

Knowledge Systems Laboratory, Institute for Information Technology, National Research Council of Canada, Ottawa, Canada, K1A 0R8

AND

FRANZ OPPACHER

School of Computer Science, Carleton University, Ottawa, Canada, K1S 5B6

(Received 10 October 1989 and accepted in revised form 27 July 1990)

The paper describes a framework, RATIONALE, for building knowledge-based diagnostic systems that explain by reasoning explicitly. Unlike most existing explanation facilities that are grafted onto an independently designed inference engine, RATIONALE behaves as though it has to deliberate over, and explain to itself, each refinement step. By treating explanation as primary, RATIONALE forces the system designer to represent explicitly knowledge that might otherwise be left implicit. This includes knowledge as to why a particular hypothesis is preferred, why an exception is ignored, and why a global inference strategy is chosen. RATIONALE integrates explanations with reasoning by allowing a causal and/or functional description of the domain to be represented explicitly. Reasoning proceeds by constructing a hypothesis-based classification tree whose root hypothesis contains the most general diagnosis of the system. Guided by a focusing algorithm, the classification tree branches into more specific hypotheses that explain the more detailed symptoms provided by the user. As the system is used, the classification tree also forms the basis for a dynamically generated explanation tree which holds both the successful and failed branches of the reasoning knowledge. RATIONALE is implemented in Quintus Prolog with a hypertext and graphics oriented interface under NeWS.§ It provides an environment for tying together the processes of knowledge acquisition, system implementation and explanation of system reasoning.

1. Introduction
The ability to construct explanations is widely regarded as a crucial feature of knowledge-based systems, and much has been expected of it (Ganascia, 1984; Josephson, Chandrasekaran & Smith, 1984; Hardman, 1985; Steels, 1985; Bobrow, Mittal & Stefik, 1986). However, explanations generated by current knowledge-based systems are often terse, include bookkeeping information irrelevant to most users, are not well integrated with the reasoning processes of the system, and are poorly designed from a user interface point of view.

Explanations have been variously described as: a means of debugging knowledge- based programs and of providing dynamic justifications for unexpected system behavior; as a means of verifying the reasoning of the domain expert; and as training aids for novice users (Hasling, Clancey & Rennels, 1984). Hayes and

† This paper is published with permission of the National Research Council of Canada. NRC number 31663.

§ NeWS is the Network Windowing System from Sun Microsystems Inc.



Reddy (1983) have defined them as "a means of capturing the knowledge-based program behavior of a system that reasons symbolically".

At the very least, explanations are expected to assure the user that the system based its decisions or recommendations on all available relevant knowledge and all available data. Few explanation facilities also assure the user that sound strategies were used in the derivation of rules from a causal domain model.

It is obvious that such strong expectations are very difficult to live up to and that existing, off-the-shelf explanation facilities fall far short of realizing them. It is less obvious why this should be so, but it is likely that the reason lies in the fact that an explanation facility is usually added to an inference engine as a mere "afterthought" instead of being built first or at least concurrently with it.

By contrast, RATIONALE is designed to reason explicitly for the purpose of clearly explaining its reasoning strategies, i.e. its inference engine coincides with its explanation module. Explicit reasoning is supported by knowledge as to: why a particular hypothesis is preferred in a given situation to an alternative, why exceptions may be overruled in some situations, and why global inference strategies succeed in certain circumstances but fail in others.

By aligning the functioning of the inference engine with that of the explanation module, the latter has immediate access to all the domain knowledge and strategic information that drives the former.

RATIONALE's explanations answer basic user questions, at user-determined levels of detail, about strategic or heuristic decisions and, in general, about the knowledge-based behavior of the system and about the system's domain model. Moreover, to make it easy for the user to request explanations, RATIONALE provides a carefully designed graphical hypermedia user interface which displays the available explanation options with selectable icons.

In the remainder of the paper: section 1.1 presents some key concepts in human explanation and shows how they are implemented in a knowledge-based system by giving an informal description of RATIONALE. The mechanism which tracks reasoning in order to explain is the main distinguishing characteristic of current explanation facilities surveyed in section 2. This section surveys explanation types that are usually associated with three widely used tracking mechanisms: successful rule tracking, executed program trace, and reasoning trace. Section 2.4 summarizes the key design considerations in constructing knowledge-based systems that can explain their reasoning by integrating explanations with their reasoning processes. This section also discusses some criteria that are extracted from our review of existing explanation facilities and are shown to underlie the design of RATIONALE. Section 3 describes RATIONALE's reasoning algorithm, its explanation generator, and the types of explanations it provides. Sections 3.4 and 3.5 contain many examples of our hypertext and graphics oriented interface. Future extensions as well as discussions of reasoning-based explanation conclude the paper.

1.1. HUMAN VERSUS KNOWLEDGE-BASED SYSTEM EXPLANATION

In order to better describe how the approach in RATIONALE was arrived at, this section outlines some of the key aspects of human explanation and how they were carried over to explanation in knowledge-based systems.


1.1.1. Aspects of human explanation
Although there is much research on explanation in philosophy of science (Hempel, 1965; Achinstein, 1971), human explanation as a model for knowledge-based explanation has unfortunately not been studied thoroughly to date. Important exceptions to this are described in (Scott, Clancey, Davis & Shortliffe, 1977; Weiner, 1980; Hayes & Glasner, 1982; Wallis & Shortliffe, 1982; Hayes & Reddy, 1983; Chandrasekaran, Josephson & Keuneke, 1986; Swartout & Smoliar, 1987).

Humans explain with one of three objectives in mind (Hayes & Reddy, 1983). The first is to clarify a particular intent. Clarifications can be given in any situation where information is being gathered (for example, when a person is asked a question but does not quite understand the question, its objectives, or the terminology used to pose the question).

Another reason one explains is to teach or instruct a listener. Here the person explaining is required to judge the knowledge state and the level of understanding of the listener and to construct an explanation to fit that level of understanding as well as to provide the instructive information.

A third reason one explains is to convince a listener. This is one of the more difficult forms since the person explaining has to provide a solid argument for the hypothesis that the listener is to be convinced of. The argument itself is made up of supporting information as to why a particular hypothesis is true and why competing hypotheses are false.

Goguen, Weiner and Linde (1983), after careful analysis of naturally occurring explanations, have identified three major modes of explanatory justifications: giving examples, giving reasons and eliminating alternatives.

It would seem to us that to clarify, humans often use examples; to convince, they often give reasons; and to teach, clarify or convince they often eliminate alternatives. In several studies of human explanations it has been shown that people often explain on the basis of similarities and differences (Weiner, 1980; Goguen et al. 1983; Hayes & Reddy, 1983). Thus, in explaining something, people often describe something else that is analogous to what they wish to explain and point out the differences. They present similar or alternative ideas and describe how they are related. In our approach, we also generate explanations that point out why a potentially satisfactory alternative explanation was in fact not chosen. (See the description of Why-not and What-if explanations in section 3.5).

1.1.2. Aspects of knowledge-based explanation
In order to achieve explanations that clarify, convince and teach users, one must construct a knowledge-based system that can reason explicitly. By "reasoning explicitly" we do not simply mean pattern-matching and firing If-Then-rules that directly compile an expert's knowledge. Rather, "reasoning explicitly" refers to the system's activity of accessing both a causal domain model and explicitly represented strategic or heuristic knowledge. While reasoning explicitly about its hypotheses, the system gives justifications for its decisions (Wallis & Shortliffe, 1982; Chandrasekaran, Josephson & Keuneke, 1986; Swartout & Smoliar, 1987). Conversely, a system which reasons implicitly is one that does not use strategic information about its domain model while reasoning. A production or rule-based system is an example of a system which reasons implicitly, because the knowledge used to write its rules,


including their justification, ordering, and planned execution, is not captured for the system to reason with (Swartout, 1981; Neches, Swartout & Moore, 1985; Abu-Hakima, 1988a, 1988b). Accordingly, such a system cannot meet the above explanation objectives.

In order to support explicit reasoning in a domain that is structured as a hypothesis hierarchy, and thereby to achieve the explanation objectives, all pertinent knowledge about the hypotheses must be captured.

This knowledge includes: what observable states suffice to activate a given hypothesis, what observable states in turn deactivate an activated hypothesis, what observable states constitute exceptions to hypothesis activation, what observable states in one hypothesis activate an alternative or similar hypothesis, what observable states in one hypothesis activate any of its subhypotheses, and what hypothesis has priority over another hypothesis. (See section 3 for a detailed description of RATIONALE's explicit reasoning and explanation generation algorithm).
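As an informal illustration, and not a transcription of RATIONALE's actual knowledge base, the per-hypothesis knowledge just listed could be written down as Prolog facts along the following lines; the predicate names, hypotheses and symptoms are invented for the example:

```prolog
% Sketch only: one predicate per kind of per-hypothesis knowledge listed above.
% Hypothesis and symptom names are hypothetical examples.

% Observable states that suffice to activate a hypothesis.
activates(power_supply_fault, [no_power_led, fan_not_spinning]).

% Observable states that deactivate an already activated hypothesis.
deactivates(power_supply_fault, [battery_mode_active]).

% Observable states that constitute tolerated exceptions to deactivation.
exception(power_supply_fault, battery_known_dead).

% Observable states in one hypothesis that activate an alternative hypothesis.
alternative(power_supply_fault, mainboard_fault, [burning_smell]).

% Subhypotheses reachable by refinement, and relative hypothesis priority.
subhypothesis(power_supply_fault, blown_fuse).
subhypothesis(power_supply_fault, faulty_regulator).
priority(blown_fuse, 2).
priority(faulty_regulator, 1).
```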

1.1.3. Types of knowledge-based questions
Event questions: Answers to the first set of questions explain why particular observable states are or are not required for activating hypotheses. Another set explain why particular solutions are or are not included in the solution space. Yet another set explain how a particular solution was arrived at. These three sets can be classified as answers to event questions and are intended to address a user's questions about the dynamic solution space of the system.

Hypothetical questions: Answers to these questions explain hypothetical situations, i.e. a user can ask and view how the current solution space is affected by what-if questions. What-if questions allow the user to include hypothetical observed states under the current solution. They address a user's needs to examine similarities and differences in the solution space and act as both an instruction and justification mechanism for explaining the reasoning strategies of the system.

Ability questions: Answers to these questions explain the methods and capabilities of the system. They explain the static hypothesis knowledge and its predicted interaction with other hypotheses assuming the system's reasoning strategies. They act as instructional and clarification explanations.

Factual questions: These questions can be likened to database queries in that they allow the user to ask for descendents and alternates of a particular hypothesis as well as hypothesis parameters such as activation conditions, weightings and priorities.

Current explanation facilities support one of three methods of requesting explanations: natural language questions, canned text questions, or question selections off menus. Accepting free form natural language queries may be an ideal solution. However, the complexity of implementation of a robust natural language parser prevents its use as a feasible mode for requesting explanation in current knowledge-based systems (Brady & Berwick, 1983). Accepting canned text questions raises the problem of which questions to include and how to indicate their availability to the user. Accepting template-based questions off menus allows the user to view the questions within the current context and to ask about the dynamic solution space.


In addition, it provides the user with a guide as to what questions can be asked and reduces the burden on the system. In our system, we have implemented a graphic hypertext interface illustrated in the examples provided in section 3.

1.1.4. Explanation generation
An explanation is planned given the knowledge of the currently executed reasoning strategies. This knowledge resides in the leaves of the dynamic solution hierarchy. The solution hierarchy is a subset of the domain hierarchy which holds each hypothesis as a potential solution. Due to the explicit knowledge representation at each local hypothesis, the explanation generation itself becomes a very simple task (Abu-Hakima, 1988b). The interaction between hypotheses, whether they are alternatives or descendents of each other, as well as why the local hypothesis does or does not contribute to the current solution space, can be derived directly from the dynamic solution hierarchy. Thus, the local reasoning strategies, such as what observable states and resultant weightings activate or deactivate each hypothesis in the solution hierarchy, provide the knowledge for very powerful explanations.
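A minimal sketch of how such an explanation might be read straight off the dynamic solution hierarchy is given below; solution_node/3 and explain_why/2 are illustrative names, not RATIONALE's implementation:

```prolog
% Sketch: each node of the dynamic solution hierarchy records its parent and
% the observed states that caused it to be included in the solution space.
solution_node(power_supply_fault, root, [no_power_led, fan_not_spinning]).
solution_node(blown_fuse, power_supply_fault, [fuse_continuity_open]).

% A "why" explanation is derived directly from the node, with no extra search.
explain_why(Hypothesis,
            explanation(Hypothesis, because_of(Observed), refined_from(Parent))) :-
    solution_node(Hypothesis, Parent, Observed).

% ?- explain_why(blown_fuse, E).
% E = explanation(blown_fuse, because_of([fuse_continuity_open]),
%                 refined_from(power_supply_fault)).
```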

RATIONALE (Abu-Hakima, 1988b) results from an attempt to identify the generic activities that permit a system to reason explicitly. This shell-like system is described in section 3.

Explanation facilities have generally been added to knowledge-based systems to explain the inferences that occur as the system reasons. Explaining in such systems takes the form of tracking inferences. The mechanism which tracks reasoning in order to explain is the main distinguishing characteristic of current facilities. For purposes of comparison of other research in explanation to RATIONALE, section 2 describes the explanation types that are usually associated with three widely used tracking mechanisms. Section 2 is followed by a detailed description of RATIONALE.

2. Survey of explanation in knowledge-based systems

2.1. SUCCESSFUL RULE TRACKING AND CANNED TEXT EXPLANATIONS

Canned text explanations are used in most systems whose tracking mechanisms only maintain a list of successfully triggered rules or satisfied goals. Canned text explanations are by far the most common type of explanation in current knowledge-based systems because they are so easy to implement.

Strings of canned text are attached to all rules so that premises and conclusions are explainable in terms of the programmer's predicted execution of the program, and the text string associated with any successful rule is emitted when an explanation is requested. A knowledge-based system builder is faced with the difficult task of having to try to anticipate the many possible questions a user might want to ask and of maintaining the consistency of the evolving system with the canned text. Such an approach is bound to ignore the system's reasoning about its recommendations and strategic choices. Moreover, it does not provide explanations for decisions taken in refining hypotheses according to the validity or invalidity of the user's observed symptoms versus the system's expected symptoms.
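The canned-text scheme criticized here can be sketched in a few lines of Prolog (an illustration with invented rule identifiers and text, not any particular system's code); note that the output says nothing about why one rule was preferred over another:

```prolog
:- use_module(library(lists)).

% Sketch of canned-text explanation: a fixed string is attached to each rule,
% and "explaining" a session amounts to emitting the strings of fired rules.
canned_text(rule_12, 'Rule 12 fired because the power LED is off.').
canned_text(rule_27, 'Rule 27 concluded a blown fuse from the open fuse test.').

explain_session(FiredRules) :-
    forall(member(Rule, FiredRules),
           ( canned_text(Rule, Text),
             format("~w~n", [Text]) )).

% ?- explain_session([rule_12, rule_27]).
```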


2.2. EXECUTED PROGRAM TRACE AND TEMPLATE EXPLANATIONS

2.2.1. MYCIN
The executed program trace is the basis for another common type of explanation in knowledge-based systems. This approach to explanation was pioneered by the builders of MYCIN (Buchanan & Shortliffe, 1984; Scott et al., 1977). MYCIN's static knowledge consists of production rules and general facts about its consultation domain, that of advising physicians on the method of treatment for patients with microbial infections. Its dynamic knowledge is the knowledge gathered in a production run. This knowledge encompasses dynamic facts about the problem input by the user as well as current assertions or deductions made by the system, which are accessed by both the rule interpreter and the explanation capability mechanism.

The explanation mechanism is used to answer the user's questions about deductions in the current session. Note that it does not use knowledge of the rule interpreter or inference mechanism for explanation. The explanation mechanism can only access the static and dynamic knowledge components of MYCIN, thus excluding explanation of the reasoning mechanism.

MYCIN tracks successfully triggered rules in forward chaining and it maintains a goal-oriented trace in backward chaining. Knowledge is represented in attribute-object-value triples. In response to the system's questions, the users input details about objects with which MYCIN establishes contexts (i.e. the patients, their infections, cultures taken and drugs administered). Contexts are uniquely named and their interrelationships are represented in the context tree. The consultation dialogue is used to establish values of the attributes or clinical parameters of contexts (e.g. the age of the patient, the identity of the organism with streptococcus as the possible value, etc.). These clinical parameters are established by asking the user or by inference from the rules.
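The attribute-object-value representation and the context tree can be pictured with a small Prolog sketch (illustrative names and values; MYCIN itself was written in Lisp):

```prolog
% Sketch: attribute-object-value triples with certainty factors, and a context
% tree linking the patient, infection, culture and organism contexts.
triple(gram_stain,  organism_1, gramneg,     1.0).
triple(morphology,  organism_1, rod,         1.0).
triple(aerobicity,  organism_1, facultative, 0.8).

context(infection_1, patient_1).
context(culture_1,   infection_1).
context(organism_1,  culture_1).

% Walk up the context tree from any context to the patient.
context_chain(patient_1, [patient_1]).
context_chain(Context, [Context|Rest]) :-
    context(Context, Parent),
    context_chain(Parent, Rest).

% ?- context_chain(organism_1, Chain).
% Chain = [organism_1, culture_1, infection_1, patient_1].
```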

MYCIN can explain its procedural knowledge in two ways. The first matches user questions to conceptual primitives attached to static rules. The primitives have associated strings of canned text which are output as explanations. The second form of procedural explanation uses specialists or demons dedicated to answering a unique question. The explanation specialists access a record of the consultation by using MYCIN's history tree. The tree is goal-directed and it forms a dynamic trace of the current consultation. Each node in the tree represents a goal and contains information about rules used by the system to try to meet the goal. Associated with each goal is a record of whether the rule succeeded or not and what inference it failed on.

MYCIN accepts two forms of user questions for explanation of session reasoning. The first type are described as reasoning-status checker questions (Scott et al., 1977). These are why and how single word questions. Why questions are available for the user to ask the system about the relevance of information the system is directed to ask the user for. How questions are designed to explain how particular goals are achieved. An example of why and how explanations proceeds as follows:

... preceded by the first 14 questions in the consultation

14) Is the patient's illness with ORGANISM-1 a hospital-acquired infection?
**WHY
[i.e. why is it important to determine whether or not the infection with ORGANISM-1 was acquired while the patient was hospitalized?]


[1.0] This will aid in determining the category of ORGANISM-1. It has already been established that

[1.1] the gram stain of ORGANISM-1 is gramneg, and
[1.2] the morphology of ORGANISM-1 is rod, and
[1.3] the aerobicity of ORGANISM-1 is facultative

Therefore, if [1.4] the infection with ORGANISM-1 was not acquired while the patient was hospitalized

then there is strongly suggestive evidence (0.8) that the category of ORGANISM-1 is enterobacteriaceae

ALSO: there is weakly suggestive evidence (.1) that the identity of ORGANISM-1 is pseudomonas-aeruginosa [RULE037]

**HOW 1.3
[i.e. how was it established that the aerobicity of ORGANISM-1 is facultative?]

The following were used:
[3.1] RULE027 indicated there is strongly suggestive evidence (0.8) that the aerobicity of ORGANISM-1 is facultative
[3.2] RULE027 indicated there is weakly suggestive evidence (0.2) that the aerobicity of ORGANISM-1 is anaerobic
Since this gave a cumulative CF of (0.8) for facultative, and (0.2) for anaerobic, it has been established that the aerobicity of ORGANISM-1 is facultative

**HOW 1.1
[i.e. how was it established that the gram stain of ORGANISM-1 is gramneg?]

You said so [question 11].

An apparent problem with MYCIN's above explanations is the assumed level of intimacy with the rules that the user must have to follow the dialogue. Such intimacy cannot always be assumed, especially with less experienced users.

The second form of user questions is processed by MYCIN's general question-answerer or GQA facility. The GQA answers template-based questions about both static and dynamic knowledge. A list of question fragments of root words serves as a parser dictionary for user questions. A question is parsed to determine whether it requires a static or dynamic explanation demon. Once the question is identified and the specialist has retrieved the knowledge for the explanation, a template of the answer is filled. The questions accepted by the system are matched against a set of explanation schemas which are filled with appropriate parts of the translated trace of the session.

A template has the following structure:

Question: How do you know the value of (parameter) of (context)?

Answer 1 Template: I used (rule number) to conclude that (parameter) of (context) is (value). This gave a certainty factor (CF). The last question asked before the conclusion was (Q-number).

Answer 2 Template: In answer to (Q-number) you said that (parameter) of (context) is (value).
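A rough sketch of how such a template might be filled from a session record follows; conclusion/5 and said/4 are invented stand-ins for MYCIN's dynamic knowledge, used only to make the mechanism concrete:

```prolog
% Sketch: session facts recorded during a consultation (hypothetical values).
conclusion(rule037, category, organism_1, enterobacteriaceae, 0.8).
said(q11, gram_stain, organism_1, gramneg).

% Fill Answer 1 Template from a recorded conclusion, or Answer 2 Template
% from a recorded user answer.
answer_how(Parameter, Context) :-
    conclusion(Rule, Parameter, Context, Value, CF),
    format("I used ~w to conclude that ~w of ~w is ~w. This gave a certainty factor of ~w.~n",
           [Rule, Parameter, Context, Value, CF]).
answer_how(Parameter, Context) :-
    said(Question, Parameter, Context, Value),
    format("In answer to ~w you said that ~w of ~w is ~w.~n",
           [Question, Parameter, Context, Value]).

% ?- answer_how(gram_stain, organism_1).
% In answer to q11 you said that gram_stain of organism_1 is gramneg.
```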

Note that the GQA, unlike the reasoning-status checker, does not use the history tree. This restricts GQA explanations to those that can be pattern matched rather


than using the goals in the history tree as the basis for the explanations. Although MYCIN uses its trace to explain session sensitive reasoning, it does not explain its reasoning strategies. Much of the strategic information is implicit in the ordering of the rule clauses and is not accessible for explanation (Hasling et al., 1984; Clancey, 1986). Therefore, information that is used to write the rules, including justification, ordering, and planning, is lost or left implicit. This inexplicable information is, in essence, a large part of the strategy employed to do the diagnosis.

A shortcoming of the implementation of MYCIN was that its explanations included a fair amount of extraneous bookkeeping information and required its users to be familiar with the details of the representation of its rules in the knowledge base. As a consequence, ordinary users and even computationally naive domain experts could not always follow its reasoning or understand its explanations based on MYCIN's "state of the world". Such a strategy is becoming less and less acceptable in the design of human-machine dialogue, where the users are deliberately being placed outside the application to shield them from the internals of the application (Hayes & Reddy, 1983). In RATIONALE, the explanation capability is integrated with both the reasoning mechanism as well as the knowledge of the system. As a result of this integration, user interaction is performed through a single unified interface which serves as a shield from the internals of the application. Despite its deficiencies however, MYCIN's explanation facility has served as a model for explanation in knowledge-based systems for well over a decade and its explanation capability remains better than many others.

2.2.2. GUIDON and NEOMYCIN
MYCIN's shortcomings were somewhat alleviated with GUIDON (Clancey, 1983, 1986, 1987). GUIDON attempted to extend MYCIN's capabilities with the objective of tutoring medical students. GUIDON was designed to be domain-independent. It models the student's knowledge as a subset of what the expert knows. It has four user levels ranging from beginner to expert with enhanced mechanisms for user tutoring. However, like MYCIN, GUIDON is not built on the basis of a causal domain model. As a result, user explanations merely describe actions that happen without indicating why they are reasonable.

NEOMYCIN, MYCIN's successor, and another medical diagnosis program, expanded MYCIN's disease knowledge to include competing alternatives. This expansion provided the opportunity for diagnostic strategy tutoring.

NEOMYCIN's knowledge base was developed to facilitate the recognition and explanation of diagnostic strategies. This required that its reasoning strategies be separable from the domain and be explicit enough for explanation. NEOMYCIN accomplished its objectives of explaining strategies by representing its control knowledge explicitly as metarules. This allows NEOMYCIN to give strategic explanations and make clear the plans and methods it uses to reach its goals (Hasling et al., 1984). Citing the specifics of a situation while keeping the strategy implicit is the first form of its action explanations. For example:

I'm asking whether the patient is receiving any medications in order to determine if she's receiving penicillin.

NEOMYCIN makes the underlying strategy explicit in its second form of system action explanations. Continuing with the above example:


I'm asking whether the patient is receiving any medications because I'm interested in determining whether she's receiving penicillin. I ask a general question before a specific one when possible.

NEOMYCIN's task structure represents a general diagnostic problem solving method designed to be portable to other domains. An ordered collection of metarules constitutes a procedure for achieving a task. The premise of the metarule examines either the domain-dependent knowledge base or the current problem history for findings and hypotheses such as follow-up questions for a recent finding or a subgoal of an active hypothesis. Associated metarule actions may ask the user a question or call a task to refine the hypothesis under consideration. An example consultation session which illustrates NEOMYCIN's strategic explanations proceeds as follows:

... Please enter information about the patient.
3) How severe is Mary's headache (on a scale of 0 to 4 with 0 for very mild and 4 for very severe)?
**WHY

[i.e. why is it important to determine the severity of Mary's headache?]

[1.0] We are trying to characterize the data just obtained and develop the differential. Headache severity is a characterizing feature of a headache.

[back to question 3...]

To explain why NEOMYCIN is performing an action, the question is restated as "why are we trying to do this task?" The query is answered by stating the task and the metarule that invoked the task in question since at the lowest level, there are implicit tasks of requesting data and applying rules.

The user is given the option to ask for an abstract explanation as follows:

**ABSTRACT
[i.e. why is it important to determine the severity of Mary's headache?]

[1.0] We are trying to characterize the data just obtained and develop the differential. A datum that can be requested is a characterizing feature of the recent finding that is currently being considered.

When subsequent why's are input, the strategy tree is ascended much as MYCIN's history tree was ascended. How is asked in terms of previously mentioned tasks. The explanation includes any completed metarules associated with the current instance of the task as well as the current metarule being executed. How explanations state the action of the metarule task whereas why explanations state the premise of the metarule task.

NEOMYCIN uses explanation in a manner similar to MYCIN's. The main difference is that in MYCIN, rules invoke subgoals through their premises, while in NEOMYCIN metarules invoke subtasks through their actions. This results in the latter explaining at the level of general strategies instantiated with domain knowledge, when possible, to make them concrete. NEOMYCIN shares some of MYCIN's shortcomings in explanation. Its explanation detail includes all the bookkeeping information associated with every task and metarule which could again overwhelm users.
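As an illustration of the task/metarule organization described above (invented predicate and task names, not NEOMYCIN's actual representation), a task can be sketched as a collection of metarules whose premises examine the problem history and whose actions ask the user or invoke further tasks:

```prolog
% Sketch: problem history facts examined by metarule premises.
recent_finding(headache).
characterizing_feature(headache, headache_severity).

% A metarule of the hypothetical task characterize_data: if there is a recent
% finding with a characterizing feature, ask the user for that datum.
metarule(characterize_data,
         premise(recent_finding(Finding),
                 characterizing_feature(Finding, Datum)),
         action(ask_user(Datum))).

% Achieve a task by attempting each of its metarules in turn.
run_task(Task) :-
    forall(metarule(Task, premise(P1, P2), action(Action)),
           ( call(P1), call(P2) -> perform(Action) ; true )).

perform(ask_user(Datum)) :-
    format("Please enter a value for ~w:~n", [Datum]).

% ?- run_task(characterize_data).
% Please enter a value for headache_severity:
```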

NEOMYCIN does not explain why metarules fail, only why they succeed. Explanations of failure may illustrate errors in system reasoning and deficiencies in knowledge compilation. This is important to users who are not familiar with


the system's reasoning strategies or to domain experts evaluating system failures as well as successes. Such an explanation mechanism is used in RATIONALE to explain why the system failed to make a deduction expected by the user, and is described in section 3.

Clancey's work with NEOMYCIN and GUIDON has evolved into GUIDON-WATCH, GUIDON-DEBUG, and GUIDON-MANAGE. GUIDON-WATCH is a graphic interface to NEOMYCIN that uses multiple windows to allow the user to browse through the knowledge base and view reasoning processes during a consultation. GUIDON-DEBUG is a system that allows the developer to roll back the consultation display and view the windows that trace changes in system values and their resultant inferences.

GUIDON-MANAGE is planned as a tutoring aid for medical students which views diseases as processes, and a diagnosis as a network that causally links manifestations and states to processes.

A criticism that carries over from MYCIN to NEOMYCIN and to the GUIDON subsystems is the level and nature of detail that users are presented with. No strict distinction is made between users and developers.

2.3. REASONING TRACE AND EXPLANATIONS BASED ON REASONING

The third class of explanations is directly related to the reasoning taking place in the session. "Reasoning" does not simply imply pattern-matching and firing If-Then-rules that directly compile an expert's knowledge. Rather, "reasoning" refers to the system's activity of accessing both a causal domain model and explicitly represented strategic or heuristic advice. The domain model is captured as a hierarchy of hypotheses which are refined by applying heuristics to observed symptoms. Separating domain knowledge from strategic knowledge has the well known advantage of enhancing the system's modifiability. However, from RATIONALE's perspective, the greatest advantage lies in the fact that control decisions are not buried in the knowledge base, i.e. they are not hidden by details of the ordering of rules or of the premises within rules, but are immediately accessible to the explanation generating module. As a result, genuine explanations can be given: instead of regurgitating the actions taken during a session, an explanation can be planned to indicate why an action is reasonable in the light of available symptoms, heuristics and domain principles, and why, for instance, an alternative action was not performed.

2.3.1. BLAH
One system that explains on the basis of a reasoning trace is BLAH by Weiner (1980). BLAH constructs explanations that are modeled after psycholinguistic studies carried out with humans. These studies determined that people often explain decisions, choices and plans by emphasizing differences between alternatives rather than similarities. BLAH is a question-answering system whose domain is the filing of income tax returns. It uses a data base of assertions and justifications to handle three types of questions. Users can ask it whether and why some assertion is believed, or how to make a choice between two alternatives. The system resolves the choice by generating a hypothetical instance of each choice with justifications


for its selection or rejection. This is similar to the system answering two what-if questions and interpreting its own answer.

RATIONALE takes the results of such choices into account by providing hypothetical explanations which justify an event by pointing out why otherwise likely alternatives did not occur.

BLAH has a structured knowledge base which is segmented into worlds, a reasoning component, and an explanation generator as illustrated in Figure 1. Its ability to create new worlds based on some assumption enables BLAH to reason hypothetically.

The output of the reasoning component is a tree representing a statement and its justifications. The explanation generator translates the reasoning tree into text, which is generated using templates associated with each assertion.

BLAH compares favourably with other explanation facilities but its primary limitation is the restriction it places on the format of the user's question. It requires that the user type in a question in a highly artificial form which is very much like a pattern-directed program call. Since BLAH lacks a domain model, the user must be familiar with the knowledge base structure and organization. To guide BLAH in its reasoning, the user must supply a list of knowledge base partitions, again in restrictive syntax.

FIGURE 1. Organization of BLAH: a knowledge base and reasoning component feeding an explanation generator, which produces a reasoning tree.

2.3.2. XPLAIN and EES
A very ambitious system constructed to explain on the basis of a reasoning trace is Swartout's XPLAIN (Swartout, 1981, 1983). XPLAIN is based on two key principles: explicitly distinguishing different types of domain knowledge represented in the knowledge base and formally recording the system development process. XPLAIN is an automatic programmer which uses a domain model and domain principles to write the knowledge-based system itself. The domain model is a network of causal relationships and the domain principles express reasoning strategies that guide the knowledge refinement process, as illustrated in Figure 2. Its knowledge separation enables the system to provide justifications for executed rules, allow diagnostic methods to be stated more abstractly as well as allow the domain model and principles to be modified independently of one another.

The general structure of XPLAIN is illustrated in Figure 3. The knowledge needed for its automatic programmer to generate the code is abstracted through refinement by a hierarchical planner from the domain model and principles.


FIGURE 2. XPLAIN's knowledge separation: descriptive (domain model) and prescriptive (domain principles) knowledge driving the performance program.

The trace of writing the knowledge-based system with the generated code becomes the refinement structure. This structure documents all strategies taken in writing the code. The top refinement structure becomes the top goal and is refined into subgoals. The leaves of the refinement tree form the performance program.

The automatic programmer starts off with partial specifications that are expanded by matching against the domain model. The instantiated goals of a method are placed by the writer into the refinement structure as children of the goal being refined. XPLAIN uses the generated refinement structure for explaining. It answers user questions about its abilities, the domain principles, its session reasoning as well as its static domain knowledge.
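The goal/subgoal refinement recorded in XPLAIN's refinement structure can be caricatured with a few Prolog clauses; the principles and goals below are invented, and the real system of course matches principles against a causal domain model rather than a fixed table:

```prolog
% Sketch: each domain principle refines a goal into subgoals; goals with no
% subgoals become leaves, i.e. steps of the performance program.
principle(anticipate_toxicity, [check_sensitivities, adjust_dose]).
principle(check_sensitivities, []).
principle(adjust_dose, []).

% refine(+Goal, -RefinementTree): the tree records every refinement step and
% is exactly the structure later consulted to explain the generated program.
refine(Goal, node(Goal, SubTrees)) :-
    principle(Goal, SubGoals),
    refine_all(SubGoals, SubTrees).

refine_all([], []).
refine_all([Goal|Goals], [Tree|Trees]) :-
    refine(Goal, Tree),
    refine_all(Goals, Trees).

% ?- refine(anticipate_toxicity, T).
% T = node(anticipate_toxicity, [node(check_sensitivities, []),
%                                node(adjust_dose, [])]).
```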

Unlike the systems previously described, XPLAIN parses natural language questions and uses a language generator to plan the explanations. It generates answers with an English phrase generator which converts the refinement structures into templates. Its explanations are sensitive to the current context the user is in and to knowledge of what it has already explained to set the level of detail of the explanation.

FIGURE 3. General structure of XPLAIN.


XPLAIN demonstrates a powerful concept in knowledge-based system development, but it is not without limitations (Neches et al., 1985). One of the primary objectives of XPLAIN is to use domain justifications to indicate correspondences between terms at different levels of refinement. Since a term definition provided by a domain principle is uniquely associated with that specific piece of problem solving knowledge, term definition knowledge in the domain model cannot be shared across domain principles.

Another shortcoming of XPLAIN is the restriction placed on the explanation generator. It consists of fixed procedures capable of answering a limited number of question types. Thus, the system can only answer a very limited set of questions quite well. This limitation illustrates that Swartout's facility is similar to those explanation facilities which are not fully integrated with the system and, as a result, track rather than explain reasoning (Swartout, 1983). Another limitation of XPLAIN is that its automatic program writer is restricted to predefined goal/subgoal refinement. Thus, if no defined principle can be found to refine a goal, the system stops without explanation. A further problem is the reliance on the program writer's predefined structures; this reliance prevents new explanation types from being added to XPLAIN's repertoire.

Swartout and Neches (Neches et al., 1985) have defined EES, the Explainable Expert System approach, illustrated in Figure 4. EES is directed at two tasks: the task of generating explanations to clarify or justify the behavior and conclusions of the system; and the task of extending or modifying the system's knowledge base or capabilities. This system is their response to the shortcomings of XPLAIN.

The domain model in EES (as in XPLAIN) will describe how the domain works by using typological and causal links. Domain principles represent the problem solving strategies and are used by the program writer to drive the refinement process. They associate tradeoffs with domain principles to indicate the advantages and disadvantages of a choice of strategy. Similarly, preferences are associated with goals to set priorities based on tradeoffs. EES is planned to address XPLAIN's term definition limitation by separating the definition of terms from the contexts that use them.

f "%

KNOWLEDGE BASE

( Domain model )

( D~main pr~nc~oles )

( Tradeoffs I preferences )

( Term definitions )

L Integration knowledge )

( Optimization knowledge )

~,. generator J ~

FIGURE 4. Planned structure of EES.


Integration knowledge will serve to resolve potential conflicts among knowledge sources as a basis for a single recommendation. Optimization knowledge is intended to direct the execution of the knowledge-based system in the most efficient way given a particular context. The knowledge base will be represented in NIKL, a refinement of KL-ONE (Neches et al., 1985), a semantic network-based representational formalism. Given a new concept to add to an existing network, the NIKL classifier automatically places the concept in the subsumption hierarchy of the network.

EES will generate a runnable knowledge-based system by applying the program writer to the knowledge base. The steps taken in generating the code record the development history (or refinement structure, as defined previously for XPLAIN). While refining, EES will allow goal/subgoal refinement as in XPLAIN; Covering Reformulation, or generating a subgoal hierarchy from the original goal to cover the possibilities presented by the original goal, is a planned enhancement. As stated earlier, a limitation of XPLAIN is that its automatic program writer is restricted to predefined goal/subgoal refinement. If no principle can be found to refine a goal, the system stops. Such a restriction reduces the reusability of the system's problem solving knowledge in new situations. In EES, the program writer is being defined to be capable of reformulating a goal when a match cannot be found. The reformulation is intended to effect a more explicit development history which is expected to aid explanation.

EES, unlike XPLAIN, is viewing explanation as a planning task while expressing knowledge as a set of declarative explanation strategies. As in RATIONALE, EES will classify user questions into question types about system behaviour such as justification and appropriateness of system action, as well as definition of, and capabilities in, system concepts. Unlike RATIONALE, however, EES is not addressing "what-if" or hypothetical explanations.

2.3.3. MDX

In early work on MDX, Chandrasekaran proposes "conceptual structures" as the knowledge representation scheme (Chandrasekaran et al., 1979). He defines concepts as labels which organize how-type or methodology knowledge, by representing them as a collection of diagnostic production rules (Gomez & Chandrasekaran, 1981). The intent of this early scheme is to group diagnostic rules into an accessible knowledge structure, with this knowledge structure evolving into a frame in later work on MDX (Chandrasekaran, 1986). The structure is dominantly a hierarchical one, with the successors of a conceptual node representing subconcepts which refine that node. Associated with each concept is a set of procedural "experts" which attempt to apply the relevant knowledge to the current case. The procedural experts also have the ability to relinquish control to selected subconcepts for more detailed analysis. The conceptual structure can be viewed as a means of organizing knowledge so as to allow purposeful, focused access to concepts relevant to a particular context in a user session. The problem solving strategy is one of


hypothesize and test: given a set of user symptoms, generate hypotheses that match the symptoms and test whether these hypotheses give a valid diagnosis.

Chandrasekaran draws a relationship between the principles that lead to effective communication amongst physicians and those used to organize and structure knowledge within a medical concept hierarchy. One of these stipulates that there exists a hierarchical organization in which control transfer is achieved (e.g. between a General Practitioner and a Specialist). Also, the expert contemplating control transfer is knowledgeable enough of the domain subexperts to know which are relevant to a particular problem; the decision to consult a particular subexpert is based on the observed symptom data and how well it matches a subexpert's knowledge. Moreover, experts are knowledgeable enough about their domain of specialty to conclude when they have been mistakenly called. RATIONALE takes this one step further by encoding the knowledge to suggest immediate alternative hypotheses in the expert's domain knowledge. These were some of the main principles applied to organize the knowledge for MDX, a system that diagnoses liver diseases or Cholestasis. Its conceptual structure forms a taxonomic tree, where a node represents a concept and its refinements represent its subconcepts. The implied process of recognition is intrinsically top-down and the problem solving strategy is one of hypothesize and test. Note that the principles of organization of conceptual structures use knowledge efficiently, guide the knowledge acquisition process, and expose underlying structure that is hidden in single level systems like MYCIN (Chandrasekaran et al., 1979; Chandrasekaran et al., 1982).

MDX has no global calculus for handling uncertainty (Chandrasekaran et al., 1982), as offered by Bayesian techniques, or as there is in MYCIN. This is a consequence of having no separation between the knowledge base and the problem solver. The diagnostic specialists represented as concepts in the hierarchy work under the hypothesize and test strategy, hence, they hold distributed knowledge and represent distributed problem solvers. This approach provides the flexibility to make each specialist combine the uncertainties of its constituent data or knowledge in a manner appropriate to the context (Chandrasekaran & Tanner, 1986; Bylander & Mittal, 1986; Punch III, Tanner & Josephson, 1986). Most medical decision-making systems are faced with the task of reasoning from uncertain knowledge of various types and arriving at a credible definitive decision.

MDX's manner of handling uncertainty appears to be a promising one and is similar to the one included in the overall framework of RATIONALE. In RATIONALE, a hypothesis is satisfied by successfully matching the observed symptoms to those the hierarchy specialist expects, according to a degree of match that the domain expert sets. For example, if the degree of match to refine a particular hypothesis is 50%, then at least 50% of the symptoms required to refine the hypothesis must be observed. The element of uncertainty is introduced in the hypothesis within the affected context. If the hypothesis is refined, the new uncertainty would be that of the next hypothesis to refine, with a degree of match independent of that of its parent. Thus, like MDX, RATIONALE handles uncertainty on a discrete scale of distinct values that are hypothesis-specific.
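The degree-of-match test described above amounts to a simple ratio check; a minimal Prolog sketch, with invented hypothesis, symptom names and threshold, is:

```prolog
:- use_module(library(lists)).

% Sketch: expected symptoms and the expert-set match threshold (in percent).
expected_symptoms(blown_fuse,
                  [no_power_led, fuse_continuity_open, recent_surge, burnt_smell]).
match_threshold(blown_fuse, 50).

% A hypothesis is refinable when the observed symptoms cover at least the
% required fraction of its expected symptoms.
refinable(Hypothesis, Observed) :-
    expected_symptoms(Hypothesis, Expected),
    match_threshold(Hypothesis, Percent),
    intersection(Expected, Observed, Matched),
    length(Expected, NExpected),
    length(Matched, NMatched),
    NMatched * 100 >= Percent * NExpected.

% ?- refinable(blown_fuse, [no_power_led, fuse_continuity_open, loud_fan]).
% true, since 2 of the 4 expected symptoms (50%) were observed.
```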

MDX is one of the first successful diagnostic knowledge-based systems to be implemented with the notion of conceptual hierarchies. It uses its knowledge in context; once a context is established, it can reason about data at the appropriate


level in the concept hierarchy. However, it lacks a good user dialogue mechanism. User "explanations" are given as a programmer trace output, while user input is like that in BLAH, pattern-invoked program calls.

Explanation of system reasoning to the user seems to have taken a secondary role in Chandrasekaran's work. The term "explanation" has been used as meaning reasoning by hypotheses that "explain" the symptoms, rather than informing users of the strategies that the system follows as well as answering their questions about reasoning, as RATIONALE, XPLAIN and BLAH attempt to. However, Chandrasekaran presents a powerful framework for reasoning explicitly for the purpose of explanation which has been key in developing the framework for reasoning in RATIONALE.

As in MDX and its successors, reasoning in RATIONALE follows a hypothesis tree. The hypothesis tree is built from the top down, i.e. from the most general diagnosis of a problem to the most specific. As the user inputs more symptoms for a diagnosis, a focusing algorithm guides the refinement of the hypothesis tree. RATIONALE's explanations of the reasoning trace are directly related to triggered hypotheses. These are used to explain to the user the validity of the hypothesis and any related refinements. Explanations of session events and abilities aimed to answer user questions about current reasoning are given on the basis of the generated hypothesis trace. These are elaborated upon in section 3.5.

The key difference between RATIONALE's approach and Chandrasekaran's is that in RATIONALE, the diagnosis requires that the system reason explicitly so that it may explain to itself, and if asked, explain to the user, its decisions in reasoning to refine any hypotheses. Thus, all validating and invalidating conditions for hypothesis refinement as well as alternative points of diagnosis on the hypothesis tree are tracked to allow the system to reason explicitly and as a result explain. Although MDX and its successors have the framework to provide such explanations, they do not as yet.

The reasoning trace of the executed program seems to provide a very promising basis for constructing helpful, genuine explanations. As Chandrasekaran notes (Chandrasekaran, 1986), reasoning in order to explain hypotheses directly affects the processes of knowledge acquisition and domain modelling. The assumption of the approach in RATIONALE is that building a knowledge-based system is inseparable from building an explanation facility. After all, a knowledge-based system can explain its reasoning only to the extent to which it has access to a sufficiently rich record of its reasoning.

2.4. DESIGN CONSIDERATIONS FOR KNOWLEDGE-BASED EXPLANATIONS

To adequately summarize the key points in current explanation facilities arising from systems such as MYCIN, NEOMYCIN, GUIDON, XPLAIN, EES, BLAH, MDX and RATIONALE, one needs to consider a set of system criteria. The criteria fall into the four categories described below.

Aspect 1, knowledge organization and representation: It is important to recognize at this stage that "deep explanation" facilities cannot be added to an existing inference mechanism but should be developed concurrently. Knowledge should be organized in a manner that makes it easily accessible. The various systems surveyed above have taken different approaches to knowledge organization.


A summary of the different approaches is given in Table 1. For example, MYCIN acknowledges only a flat set of rules without any internal organization, i.e. there are no explicit structures that relate the various rules. Much of the information required to formulate MYCIN's rules, including justification, ordering, and planning, is either lost or made implicit. NEOMYCIN and GUIDON use meta-rules in addition to MYCIN's rules. These meta-rules are again not internally structured and have been criticized as a partial solution to the problems of domain modeling (Gomez & Chandrasekaran, 1981).

In XPLAIN and EES, knowledge is organized into descriptive and prescriptive knowledge. Each of the latter two types of knowledge is internally structured as a semantic net. The prescriptive knowledge in XPLAIN and EES is used to reason about the descriptive knowledge. In MDX and RATIONALE, both types of knowledge are organized into a hierarchy of hypothesis or concept frames to group knowledge relevant to a particular aspect of the domain together.

Chandrasekaran states that not all the deep knowledge may be required for a particular problem of diagnosis (Chandrasekaran & Mittal, 1983). While this may be debatable, deep knowledge is certainly required for good explanations. RATIONALE uses such knowledge not only to explain but also to reason.

For the representation of knowledge to be adequate, a model of the domain with a clear history of the interactions between its components during reasoning needs to be accessible. This criterion is satisfied by both semantic networks and frame hierarchies.

Aspect 2, explicitness of tracking the reasoning mechanism: An important design consideration is the explicitness of the mechanism which tracks the reasoning in the system. In MYCIN and its successors, there exists no explicit mechanism to track reasoning, and explanations of system behavior are generated using a program trace. However, in BLAH, XPLAIN, EES and RATIONALE, the mechanism generates a refined trace which holds only the reasoning information relevant for explanation. As illustrated above, the best explanations are those derived from the most explicit refined trace. BLAH uses a reasoning trace of assertions with belief and disbelief justifications that are attached to its production rules to explain its actions. XPLAIN and EES use a refinement trace generated by their respective automatic programmers to explain their hierarchical goals. Chandrasekaran has not published information about how explanations are generated in MDX. As illustrated in Table 1, RATIONALE has the most explicit reasoning trace.

RATIONALE's refined trace includes detailed knowledge of hypothesis refinement. Hypotheses are triggered by a set of enabling conditions which are checked against a set of invalidating conditions. If a hypothesis becomes invalid, a set of special case tolerance conditions are examined that may override the invalidation. Once the hypothesis is valid again, it is refined. In addition, a set of conditions for an alternative hypothesis are checked, and, if one is enabled, a trace is kept of its relationship to the original hypothesis and the alternative is later refined once the original hypothesis refinement is complete. Thus, RATIONALE's refinement trace holds information about enabled, invalidated, special case, alternative and refinement hypotheses. Such a rich reasoning trace is used to generate powerful explanations.
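A sketch of the validation step that feeds such a trace is given below; the status term returned is the kind of record the trace would hold. Predicate, hypothesis and symptom names are again invented for illustration:

```prolog
:- use_module(library(lists)).

% Sketch: enabling, invalidating and special-case tolerance conditions for one
% hypothetical hypothesis.
enabling(power_supply_fault, [no_power_led, fan_not_spinning]).
invalidating(power_supply_fault, [battery_mode_active]).
tolerance(power_supply_fault, battery_known_dead).

% A hypothesis is enabled by its enabling conditions; an invalidation may then
% be overridden by a tolerance condition. The status term is what gets traced.
validate(H, Observed, status(H, Status)) :-
    enabling(H, Enable),
    subset(Enable, Observed),
    (   invalidating(H, Invalid),
        intersection(Invalid, Observed, [Inv|_])
    ->  (   tolerance(H, Tol),
            member(Tol, Observed)
        ->  Status = tolerated(invalidated_by(Inv), overridden_by(Tol))
        ;   Status = invalidated_by(Inv)
        )
    ;   Status = enabled
    ).

% ?- validate(power_supply_fault,
%             [no_power_led, fan_not_spinning, battery_mode_active, battery_known_dead],
%             S).
% S = status(power_supply_fault,
%            tolerated(invalidated_by(battery_mode_active),
%                      overridden_by(battery_known_dead))).
```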

Aspect 3, question types: Another design consideration concerns the variety of question types the system handles.


TABLE 1
Comparison of explanation facility design decisions

MYCIN. Knowledge organization: rules. Knowledge representation: rules. Mechanism tracking reasoning: none (program trace). Question types: event, ability, factual. Method of explanation: templates, canned text.

NEOMYCIN. Knowledge organization: meta-rules. Knowledge representation: meta-rules. Mechanism tracking reasoning: meta-rule hierarchy of goals (program trace). Question types: MYCIN's. Method of explanation: MYCIN's.

GUIDON. Knowledge organization: meta-rules. Knowledge representation: meta-rules. Mechanism tracking reasoning: MYCIN's. Question types: MYCIN's. Method of explanation: MYCIN's.

BLAH. Knowledge organization: worlds, represented by rules. Knowledge representation: rules. Mechanism tracking reasoning: reasoning trace of assertions and belief/disbelief justifications. Question types: hypothetical, event, factual. Method of explanation: templates.

XPLAIN. Knowledge organization: descriptive and prescriptive knowledge, represented by a semantic network. Knowledge representation: semantic network. Mechanism tracking reasoning: refinement trace of the automatic programmer. Question types: event, ability, factual. Method of explanation: explanation generator.

EES. Knowledge organization: descriptive and prescriptive knowledge, represented by a semantic network. Knowledge representation: semantic network. Mechanism tracking reasoning: XPLAIN's. Question types: event, ability, factual. Method of explanation: explanation generator.

MDX. Knowledge organization: conceptual hierarchies, represented by frames. Knowledge representation: frames. Mechanism tracking reasoning: none published. Question types: none published. Method of explanation: none published.

RATIONALE. Knowledge organization: conceptual hierarchies, represented by frames. Knowledge representation: frames. Mechanism tracking reasoning: reasoning trace of enabling/invalidating/special case/refinement/alternate conditions. Question types: hypothetical, event, ability, factual. Method of explanation: templates.

Refer to section 3.5.2 for a full description of the question types.

Aspect 3, question types: Another design consideration concerns the variety of question types the system handles. Table 1 summarizes the types of questions that are handled by the systems discussed previously. Ideally, question types should include: event or why and how questions; hypothetical or what-if questions; system ability questions; and factual questions (see section 3.6.2 for a full description of these question types and their use in RATIONALE). The question types affect the knowledge representation in the knowledge-based system, because richer knowledge structures are required to answer a wider range of questions. The question types also impact the human-machine interface of the knowledge-based system. In RATIONALE, dialogue questions of unrestricted text were avoided due to the overhead associated with the implementation of a rich natural language interface. Instead, we have opted to implement a graphical hypertext user interface (see section 3.4).


Aspect 4, method of explanation generation: Explanation generation is another important design consideration. Some of its more common forms are canned text, templates, or natural language responses. MYCIN, BLAH and RATIONALE use templates, whereas XPLAIN and EES use natural language generation. Natural language generation is complex and is likely to be restricted to the question types the system can handle; thus, it may be simpler to implement templates.

Adhering to the above criteria will result in a system that can explain its reasoning and may also facilitate the process of knowledge acquisition by offering explanations during the construction of a knowledge-based system.

3. RATIONALE

(i) Objectives: There are two main objectives behind constructing a tool such as RATIONALE: to provide an end user of a diagnostic knowledge-based system with a comprehensive explanation facility; and to construct a knowledge-based tool that allows a domain to be modeled, modified and run as an application in a single integrated environment.

(ii) Approach: In RATIONALE, domain knowledge is hierarchically organized into hypotheses and subhypotheses. Reasoning proceeds by constructing a hypothesis-based classification tree whose root hypothesis contains the most general diagnosis of the system. Guided by a focusing algorithm, the classification tree branches into more specific hypotheses that explain the more detailed symptoms provided by the user. As the system is used, the classification tree also forms the basis for a dynamically generated explanation tree which holds both the successful and failed branches of the reasoning knowledge.

RATIONALE is designed to reason explicitly for the purpose of clearly explaining its reasoning strategies, i.e. its inference engine coincides with its explanation module. Explicit reasoning is supported by knowledge as to why a particular rule is preferred in a given situation to an alternative, why exceptions may be overruled in some situations, and why global inference strategies succeed in certain circumstances but fail in others.

By aligning the functioning of the inference engine with that of the explanation module, the latter has immediate access to all the domain knowledge and strategic information that drives the former.

To align the functioning of the inference engine with that of the explanation module, the hierarchy of the domain hypotheses in RATIONALE is represented by frames, as illustrated in Figure 5. At the root of the hierarchy is the top level goal or the overall aim of a particular knowledge-based system domain. RATIONALE could be used to define a multi-domain knowledge-based system with each domain having a unique aim and a unique hypothesis tree. The top level goal serves as an initial entry point to the domain under consideration. For example, if several domains were defined in RATIONALE, e.g. a heart, a liver and a kidney domain, and a patient's symptoms related to a heart problem, then only the domain hierarchy for heart problems would be searched for a hypothesis explanation.

The second level of the hypothesis tree classifies the problem under consideration into the top level subgoals of the domain. For a system that diagnoses problems with computers, such top level subgoals could be as general as whether the problem is a hardware, software or power problem.

FIGURE 5. Hypothesis tree for workstation example (top level goal: workstation; top level subgoals: software, hardware, power; refined subgoals: processor problem, memory problem, disk crash).

The user may focus the search of the reasoning mechanism by specifying at which subgoal the diagnosis should start. If the user has no idea, RATIONALE presents to the user the activating symptoms for the top level subgoals to decide which branch of the tree to refine. In the absence of specific directions by the user, the hypothesis tree is traversed left-to-right, breadth-first and the user is questioned about observed symptoms until a match is found. In this case, the search is implicitly directed since the knowledge engineer has arranged the hypotheses in decreasing order of priority, with the leftmost hypothesis having the highest priority.
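A minimal sketch of this default traversal, assuming the hierarchy is stored as children/2 facts whose child lists are already ordered by decreasing priority (the predicate names and the workstation facts below are illustrative only):

    % children(Goal, OrderedSubgoals): leftmost subgoal has the highest priority.
    children(workstation, [software_problem, hardware_problem, power_problem]).
    children(hardware_problem, [processor_problem, memory_problem, disk_crash]).

    % Left-to-right, breadth-first enumeration of hypotheses starting at the root.
    bfs([], []).
    bfs([Goal | Rest], [Goal | Ordered]) :-
        ( children(Goal, Subgoals) -> true ; Subgoals = [] ),
        append(Rest, Subgoals, Queue),
        bfs(Queue, Ordered).

    % ?- bfs([workstation], Order).
    % Order = [workstation, software_problem, hardware_problem, power_problem,
    %          processor_problem, memory_problem, disk_crash].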

3.1. DESIGN

RATIONALE has been designed to operate in two modes. The first mode provides the user with a graphical knowledge base editor which facilitates the task of knowledge acquisition. In the second mode the created knowledge-based system is run and the end user is given explanations for the observed data and reasoning strategies used.

RATIONALE's top level structure, as illustrated in Figure 6, consists of four modules. The main control for the program resides in the finite state machine module. This module controls the states the program goes to as specified by the user's interaction with the interface module. The interface module, described in section 3.4, is based on GoodNeWS and HyperNeWS, both of which are windowing facilities that run under NeWS (Arden, Gosling & Rosenthal, 1989). These windowing facilities communicate with the finite state machine by message passing which is triggered by user selections.

The other modules that interact with the state machine are the frame generator and the explanation generator. The frame generator, described in section 3.3, instantiates the frames that represent the hypotheses and updates them according to user edits. The user interacts with the editor in a hypermedia environment implemented under NeWS.† In it the user has access to a knowledge browser which displays the hypothesis hierarchy thus far. The user can open up as many frame editor windows as desired by simply selecting a hypothesis name off the browser. Each frame editor window allows the user to fill in the slots of the selected hypothesis. This user environment is shown in Figures 8b and 9b.

† NeWS is a multi-tasking graphical windowing facility from Sun Micro Systems.

FIGURE 6. RATIONALE's top level structure.

The explanation generator constructs explanations from the reasoning trace and answers questions about the session, the system's capabilities and the knowledge base. The explanation generator and its explanation types are discussed in section 3.6.

3.2. STATE GENERATOR

The state machine traverses the 16 states illustrated in Figure 7. State 1 allows users to set a level (novice, experienced or expert) for their interaction with the system (novice is the default user level) and State 2 allows them to work with an existing domain or to define a new one. If they wish to work with an existing domain, they select its name from the menu in State 3, and indicate in State 4 whether they wish to edit or run it by selecting the browser icon.

FIGURE 7. RATIONALE's finite state machine.

If they choose to edit, they can indicate in State 5 whether they wish to edit the top goal or one of the subgoals by simply selecting the nodes off the hypothesis tree illustrated in the browser window. The top goal frame is edited in State 6 and subgoals are named and edited in States 7 and 8.

Had the users chosen to define a new domain, they would be asked, in State 9, to specify the name of the new knowledge base in an edit window, to name, in State 10, the new top level goal, and to edit it in State 6. Once the top level goal is defined, they are asked to name the first top level subgoal and indicate its priority in State 11 relative to the other subgoals. This subgoal is then edited in State 8.

If they opt to run a system, they are asked, in State 12, to select the top level problem the system should pursue, or to specify unknown to indicate that they do not know where to start the refinement for the diagnosis.

If a problem is selected, the possible symptoms required for refining the subgoal are output and the users are prompted for the observed symptoms in State 13.


Otherwise, a set of symptoms that allow RATIONALE to activate a particular problem are output and the users are asked to select observed ones. The problem refinement mechanism is described in more detail below in section 3.6.1. State 14, the explanation facility state, can be entered from States 12, 13 and 15 by the user's selection of the explanation icon. Users are placed in this state whenever they wish for an explanation of symptoms or subproblems. Once the system has enough symptoms for refinement from State 13, a list of resultant diagnoses can be given.
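The transitions walked through above could be stored as simple facts consulted by the state machine; the fragment below is a hypothetical, partial encoding (state numbers follow the description in the text, event names are ours):

    % transition(CurrentState, UserEvent, NextState): a partial, illustrative
    % encoding of the dialogue control described in this section.
    transition(1,  user_level_set,     2).    % novice, experienced or expert
    transition(2,  existing_domain,    3).    % work with an existing knowledge base
    transition(3,  domain_selected,    4).    % choose to edit or run it
    transition(4,  edit_selected,      5).    % pick the top goal or a subgoal
    transition(5,  top_goal_selected,  6).    % edit the top goal frame
    transition(2,  new_domain,         9).    % name a new knowledge base
    transition(9,  kb_named,          10).    % name the new top level goal
    transition(10, top_goal_named,     6).
    transition(4,  run_selected,      12).    % select the top level problem
    transition(12, problem_selected,  13).    % prompt for observed symptoms
    transition(12, explanation_icon,  14).    % explanation facility state
    transition(13, explanation_icon,  14).
    transition(15, explanation_icon,  14).

    next_state(State, Event, Next) :- transition(State, Event, Next).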

3.3. FRAME GENERATOR

RATIONALE's frame generator module generates templates for two types of objects, i.e. top goals and subgoals. This is done in an object-oriented style as described by Stabler (1986). The facility allows objects and their specific methods, which can be either facts or rules, to be defined. In addition, the facility provides a parent-sibling relationship between the objects. This is very useful for the construction of classification hierarchies for organizing and representing the knowledge in the domain. The top goal template is shown in Figure 8a and its RATIONALE window is shown in Figure 8b.

The predicate add_object(Superclass, Object, Methods) adds an object to the tree. Superclass is the root, i.e. the knowledge-based system, and Object is the top goal. Methods include the knowledge-based system's aim, input as text in the definition stage, and an enabling, an invalidating and a tolerance condition. The enabling condition is used to discriminate between domains in a multiple domain system. The invalidating condition disables the top goal on the basis of an ill-suited problem type. The tolerance condition overrides the invalidating condition for special cases of the problem type.
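For example, a top goal for the workstation domain of Figure 5 might be added with a call such as the following (the condition values and the workstation_kb name are invented for illustration):

    % Hypothetical instantiation of the generalized top goal of Figure 8(a).
    ?- add_object(workstation_kb,        % Superclass: the knowledge-based system
                  workstation,           % Object: the top level goal
                  [es_name(workstation_kb),
                   topG_name(workstation),
                   es_aim('Diagnose workstation hardware, software and power problems'),
                   enabling_condition(problem_type(workstation_fault)),
                   invalidating_condition(problem_type(network_fault)),
                   tolerance_condition(problem_type(diskless_workstation))]).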

The second template (shown in Figures 9a and 9b) is used to generate instances of subgoals. Its Superclass is its parental goal, which could be either a top goal, a top level subgoal or a refined subgoal. The subgoal frame includes three explanation slots for the hypothesis it represents. The novice, experienced and expert explanation slots are input by the knowledge engineer when the subgoals are being defined. An explanation detail slot particular to the hypothesis holds the expert's numeric estimate of the conceptual complexity of the hypothesis and of its explanatory importance relative to the other hypotheses in the domain hierarchy. This provides a customizable range of detail for outputting explanations at the different user levels.

The set of all possible symptoms for a hypothesis is grouped into an enabling, an invalidating and a tolerance subset. These subsets are matched against the user's observed symptoms and their conditions are applied according to the degree of match between the expected symptoms and the ones observed. The subgoal has an alternative-subgoal slot which is activated when a set of symptoms is shared with another hypothesis. An urgency indication is also included in the subgoal frame, thus allowing an urgent problem to be highlighted as such to both the user and RATIONALE. An urgent hypothesis to be refined is placed at the front of the refinement queue. The final slot in the subgoal is the refinement slot. This points to the subgoal to be refined next and to the sufficient symptoms required for its refinement. The refinement mechanism is outlined in section 3.6.1.
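A hypothetical instantiation of such a subgoal frame, a memory_problem refined from the hardware_problem of Figure 5, might look as follows (the explanation texts, symptom names and degrees of match are invented for illustration):

    % Hypothetical instantiation of the generalized subgoal of Figure 9(a).
    ?- add_object(hardware_problem,
                  memory_problem,
                  [subG(refined_from, hardware_problem),
                   subG_name(memory_problem),
                   novice_explanation('There may be a problem with workstation memory.'),
                   experienced_explanation('Memory access failures were observed; run the memory diagnostics.'),
                   expert_explanation('The error pattern suggests a faulty memory bank or segment.'),
                   explanation_detail(3, 2),      % complexity, importance ranking
                   enabling_conditions([memory_error, memory_access_denied], 0.5),
                   invalidating_conditions([power_error], 1.0),
                   tolerance_conditions([battery_backup_active], 1.0),
                   alternate_subG(communication_problem,
                                  shared_symptoms([failed_server_access]), 0.5),
                   urgency_indication(not_urgent),
                   refinement_subG(memory_bank_problem, [memory_bank_error], 0.75)]).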


3.4. INTERFACE MODULE

RATIONALE has a multi-window interface, an extensive explanation facility and a knowledge browser/editor implemented under GoodNeWS and HyperNeWS, two powerful NeWS-based tools intended to simplify the task of developing good interfaces for knowledge-based systems.

RATIONALE accomplishes its objective of ease of use by providing the user with a stack-based interface (similar to a HyperCard interface). Its first stack allows the user to select options by "pushing iconic buttons" for: displaying the browser/editor; asking for explanations; changing the user level; or running a domain.

Top goal:

    add_object(ES_name,
        TopG_name,
        [es_name(EN),
         topG_name(TGN),
         es_aim(AIM),
         enabling_condition(problem_type(EC)),
         invalidating_condition(problem_type(DC)),
         tolerance_condition(problem_type(RC))]).

FIGURE 8(a). RATIONALE's generalized top goal.

FIGURE 8(b). RATIONALE's top goal editor.


The second stack, the explanation stack, is displayed when the user selects the "?" icon and provides the user with a "map" of the available explanations by means of menus.

RATIONALE's main stack HomeCard is illustrated in Figure 10. Off this main stack the browser/editor icon can be selected by the user to create or modify a knowledge base. The browser is illustrated in Figures 8b and 9b.

Sub goal:

    add_object(SubG_name,
        Refined_subG_name,
        [subG(refined_from, refined_SG),
         subG_name(refined_SG),
         novice_explanation(NE),
         experienced_explanation(EE),
         expert_explanation(EXE),
         explanation_detail(Complexity_of_Subgoal, Importance_Ranking),
         enabling_conditions(Enable_Symptoms, Enabled_degree),
         invalidating_conditions(Invalid_Symptoms, Invalid_degree),
         tolerance_conditions(Special_Case_Symptoms, Tolerance_degree),
         alternate_subG(SubG_name, shared_symptoms(SS), Alternate_degree),
         urgency_indication(Urgency),
         refinement_subG(Refine_to_subG_name, Suff_Symps, Refine_degree)]).

FIGURE 9(a). RATIONALE's generalized subgoal.

FIGURE 9(b). RATIONALE's subgoal editor.


FIGURE 10. RATIONALE's main stack HomeCard.

Users may focus the search of the reasoning mechanism by specifying at which subgoal the diagnosis should start as illustrated in the problem selection card (window) in Figure 11. The user can then select from relevant symptoms as illustrated in Figure 12.

Once the symptoms are selected, the hypothesis is refined until a diagnosis can be presented, as illustrated in Figure 13.

The user can select the explanation icon, which invokes the explanation stack, at any point during the session. The user can then ask for event and hypothetical explanations that make use of the dynamic reasoning strategies used in the current context. The second two categories, ability and factual explanations, address the user's questions about the methods and facts represented statically in the domain hierarchy. The explanations address the user queries described in section 3.6.2 and the explanation stack is illustrated in Figures 15-21.

3.5. RATIONALE'S INTERFACE DEVELOPMENT ENVIRONMENT

To better appreciate the interface to RATIONALE, it is worthwhile to examine briefly its development environment. The four development tools used are Quintus Prolog, NeWS, GoodNeWS and HyperNeWS. The reasoning strategies and domain representations in RATIONALE have all been written as Quintus Prolog predicates. The multiple stack interface of RATIONALE runs under HyperNeWS, which itself runs under the Sun Micro Systems Network Windowing System (NeWS). A brief description is offered to clarify GoodNeWS and HyperNeWS, the two NeWS-based tools.

GoodNeWS: GoodNeWS is a NeWS-based window interface that has been developed by van Hoff at the Turing Institute in Glasgow, Scotland and is described in (van Hoff & Abu-Hakima, 1988a). It is written in PostScript and makes use of NeWS graphics primitives.

FIGURE 11(a). RATIONALE's problem selection card showing a HyperNeWS object value.

FIGURE 11(b). RATIONALE's problem selection card showing a HyperNeWS object action.


FIGURE 12. RATIONALE's symptom selection card.

GoodNeWS provides a complete windowing environment which includes terminal emulation, graphics drawing tools (colour and black and white), use of captured images and a LaTeX previewer.

HyperNeWS Stack, PostScript and Prolog Messages: HyperNeWS (van Hoff & Abu-Hakima, 1988b) is a GoodNeWS tool which is similar in some ways to Apple's HyperCard, but which can be interfaced to Quintus Prolog and C. HyperNeWS stacks are created using the GoodNeWS drawing tool (hence developers can draw any stack shape they want).

FIGURE 13. RATIONALE's diagnosis card.

An important difference between the two hypermedia tools, HyperCard and HyperNeWS, is that HyperNeWS runs under the Unix multi-tasking environment, which allows one to have several interactive HyperNeWS stacks at once. The multiple stacks can send PostScript messages to be handled by HyperNeWS or by a client process. Quintus Prolog is such a client process. Messages from various objects off the stacks can be handled using Prolog predicates (Clocksin & Mellish, 1984; Sterling & Shapiro, 1986). The types of objects that can be defined on a HyperNeWS stack include: text objects (static and dynamic); check boxes; push buttons; arrow buttons (previous and next buttons); and user created iconic buttons (created using the GoodNeWS drawing tool).

An example of a PostScript message, programmed as an action of an object, is given in Figure 11. The particular message allows the user to select an item off a menu (a menu of problems) and have it appear in a selection box. The form of a HyperNeWS-Prolog message predicate is: hn_message('StackName':'Object'('ObjectName'), 'Action'(A)). Thus, in Figure 11, when the user selects the 'ListAllProblems' button, the following message is sent to the Prolog client process:

    hn_message('ratstack':'Button'('ListAllProblems'), _) :-
        what_subPs_exist(NamesL),
        append([unknown], NamesL, NamesList),
        hn_set_text('ratstack':'poss_probs', NamesList).

The message is sent from the ListAllProblems object to the ratstack stack. The predicate what_subPs_exist instantiates a list of problems from Prolog. Predicates such as hn_get_text, hn_goto_card, and hn_set_text are defined in the HyperNeWS-Prolog interface and allow one, respectively, to read text from the stack, go to a particular card within the stack and set new text in a text object. Thus, hn_set_text sends a message to display the list of problems in the poss_probs object menu.
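A second, purely illustrative handler shows the same pattern in the other direction; the 'ShowSymptoms' button, the possible_symptoms text object and the enabling_symptoms/2 lookup are all hypothetical, and hn_get_text is assumed to address objects the same way hn_set_text does:

    % Hypothetical handler: read the problem currently selected in the
    % poss_probs object, look up its enabling symptoms in the frame base and
    % display them in another text object.
    hn_message('ratstack':'Button'('ShowSymptoms'), _) :-
        hn_get_text('ratstack':'poss_probs', Problem),
        enabling_symptoms(Problem, Symptoms),     % assumed frame-base lookup
        hn_set_text('ratstack':'possible_symptoms', Symptoms).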

3.6. EXPLANATION GENERATOR

The explanation generator has two related tasks. The first is to generate hypothesis explanations to advise the user on actions that resolve observed symptoms. The second task is to explain the system's reasoning strategies to the user. This second task is outlined in section 3.6.2.

3.6.1. RATIONALE's reasoning algorithm

A top level subgoal is refined into more specific hypotheses on the basis of various relationships among the following sets of conditions:

• Se ⊆ S, Se contains all symptoms that enable H, i.e. conditions that partially satisfy a goal in the goal hierarchy. The enabling conditions activate a branch of the goal hierarchy for subsequent refinement. These conditions can be viewed as providing positive evidence for belief in a particular hypothesis.

• Si ⊆ S, Si contains the symptoms that invalidate H, i.e. conditions that override the enabling conditions of a goal. The invalidating conditions deactivate an activated branch of the goal hierarchy. These conditions can be viewed as providing negative evidence for belief in a particular hypothesis.

• St ⊆ S, St contains the tolerance or special case symptoms that override Si, i.e. conditions that override the invalidating conditions of a goal. These conditions can be viewed as special case conditions which, when satisfied, provide exceptions against negative evidence. The tolerance conditions reactivate a deactivated branch of the goal hierarchy, provided that the deactivation was caused by the invalidating conditions.

• Ss ⊆ S, Ss contains a sufficient set of symptoms necessary to refine H to Hr, i.e. conditions placed on a higher level goal to activate a lower level goal. A goal may have several subgoals that it can be refined to. Each of these subgoals has an associated set of sufficient symptoms for refinement and a required percentage of match.

• Sa ⊆ S, Sa contains a set of symptoms that enable alternate hypothesis H', i.e. conditions placed on a goal to activate a goal in another branch of the goal hierarchy. A goal may have several alternative goals that it can activate. The alternative problem branch is refined in a manner identical to that of the original problem.

• Sobserved ⊆ S, Sobserved contains all the symptoms observed by the user and presented to the system for activating and refining a hypothesis.

• Hr, one of the subgoals that can be refined from H.

• S, the set of all possible symptoms that are relevant to hypothesis H, i.e. S consists of the union of Se, Si, St, Ss and Sa.

[Figure 14 flowchart, recoverable steps: RATIONALE outputs the top level subgoal problems of the current problem domain; the user either selects a problem to refine or selects 'unknown'; if unknown, pairs of problems and their associated enabling symptoms are output and the user is asked for observed symptoms; the user selects the relevant observed symptoms; the system checks whether the observed symptoms enable the selected problem, whether any symptoms invalidate the problem, and whether special case symptoms override the invalidation; it then checks whether the observed symptoms are sufficient to refine the selected problem to one of its refinement problems and refines the problem as far as the refinement hierarchy allows; it checks whether the symptoms point to an alternate problem and, if so, refines the alternate as far as the hierarchy allows; finally, the collected problem explanations are prioritized and output to the user.]

FIGURE 14. RATIONALE's reasoning algorithm.


The algorithm in Figure 14 is used to generate advice or hypothesis explanations. The user is first given a set of top level problems defined as top level subgoals in the domain hierarchy running in RATIONALE. If the user picks a problem to refine, it becomes the current hypothesis and its relevant symptoms are output for selection.

To refine a particular hypothesis, a flexible match mechanism has been designed. Associated with each set of symptoms required to satisfy an enabling, invalidating, tolerance, refinement or alternative problem condition is a required degree of match. This degree of match can be specified and modified by the developer. The degree of match is represented as a combined weighting of observed symptoms, Sobserved, against the symptoms required to satisfy a condition. Thus, to enable or activate hypothesis H, a degree of match for the enabling symptom set Se against Sobserved has to be satisfied. If the hypothesis is enabled from the observed symptoms, the algorithm proceeds. If, however, the hypothesis is not enabled, the system proceeds as if the user had selected the hypothesis or problem as unknown. If the user selects the problem as unknown, the enabling symptom-problem pairs of all the top level problems are output. The user is thus prompted for enabling symptoms relevant to a particular problem.
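A minimal sketch of such a test, assuming for simplicity that every symptom in a condition set carries equal weight (RATIONALE's actual weighting scheme may differ):

    % degree_of_match(+RequiredSymptoms, +ObservedSymptoms, -Degree)
    % Degree is the fraction of the required symptoms that were observed.
    degree_of_match(Required, Observed, Degree) :-
        intersection(Required, Observed, Shared),   % library list/set predicate
        length(Required, NRequired), NRequired > 0,
        length(Shared, NShared),
        Degree is NShared / NRequired.

    % condition_satisfied(+Required, +Observed, +RequiredDegree)
    condition_satisfied(Required, Observed, RequiredDegree) :-
        degree_of_match(Required, Observed, Degree),
        Degree >= RequiredDegree.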

If the enabling condition Se is satisfied, the degree of match of Sobserved against the invalidating symptom set Si is checked to confirm that the observed symptoms do not contradict the hypothesis. If the hypothesis is contradicted, the degree of match of the special case or tolerance symptom set St is calculated. If the invalidating symptoms are overridden by the tolerance symptoms, H remains enabled and is refined.

To refine H to Hr, the degree of match between the sufficient symptoms for refinement, Ss, and the observed symptoms, Sobserved, must be satisfied. If the degree of match between Ss and Sobserved is lower than the required degree, the hypothesis is not refined. Similarly, a degree of match between Sobserved and the symptoms that enable an alternative hypothesis, Sa, has to be satisfied before the alternative hypothesis can be activated. The shared alternative symptoms allow the alternative subgoal to be activated and placed on a queue for subsequent refinement. Thus, the objective of RATIONALE is to continue refining hypotheses until all possible sufficient sets of symptoms are satisfied and a hypothesis can explain the symptoms presented by the user.
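Reusing the condition_satisfied/3 sketch above, the enable, invalidate, tolerate, refine and alternate checks could be strung together roughly as follows (condition/4 is a hypothetical accessor into the subgoal frames, not RATIONALE's actual predicate):

    % condition(Hypothesis, Kind, Symptoms, RequiredDegree), where Kind is one
    % of enabling, invalidating, tolerance, refinement(To) or alternate(To).

    active(H, Observed) :-
        condition(H, enabling, Se, De),
        condition_satisfied(Se, Observed, De),
        \+ invalidated(H, Observed).

    invalidated(H, Observed) :-
        condition(H, invalidating, Si, Di),
        condition_satisfied(Si, Observed, Di),
        \+ ( condition(H, tolerance, St, Dt),          % special case override
             condition_satisfied(St, Observed, Dt) ).

    refine(H, Observed, Hr) :-
        active(H, Observed),
        condition(H, refinement(Hr), Ss, Ds),
        condition_satisfied(Ss, Observed, Ds).

    alternate(H, Observed, HAlt) :-
        condition(H, alternate(HAlt), Sa, Da),
        condition_satisfied(Sa, Observed, Da).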

Once all explanation texts associated with the problem, its refinements and its alternates are collected, they are ordered and output to the user (as illustrated in Figure 13). Note that after refinement is complete, the original and alternative problem diagnosis texts are output according to the user level. Thus, if the user is a novice, the text in the novice explanation slot of the associated hypothesis frame is shown.
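Selecting the text to show can then be sketched as a lookup keyed on the user level, using the slot names of Figure 9(a) (the explanation_for/3 helper itself is ours):

    % explanation_for(+UserLevel, +SubgoalSlots, -Text)
    explanation_for(novice,      Slots, Text) :- member(novice_explanation(Text), Slots).
    explanation_for(experienced, Slots, Text) :- member(experienced_explanation(Text), Slots).
    explanation_for(expert,      Slots, Text) :- member(expert_explanation(Text), Slots).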

3.6.2. Explaining the reasoning strategies

For a knowledge-based system to explain its reasoning, it should be built with four objectives in mind. It should be able to:

(1) tell the user what the system can or cannot do;
(2) tell the user what the system has done;


(3) explain system objectives by telling the user what the system is trying to do, and

(4) tell the user why it is doing what it does by responding to the user's clarification questions.

For RATIONALE to meet these objectives, it has been designed to give the user four types of explanation. The first two types are session sensitive, and they are event and hypothetical explanations. The second two types explain system capabilities, and they are ability and factual questions. Note that event questions, unlike system capability questions, are dynamic and thus are displayed only after the user has started a session.

3.6.2.1. Session sensitive questions

System event explanations are directly related to the reasoning in the current system session. Event questions that a user is able to ask are directly related to the symptoms and hypotheses the knowledge-based system is working with. Event questions can be categorized into why, how and what-if questions.

Event questions: Why questions address immediate system actions. How questions ask for explanations of diagnosis methods related to the symptoms and hypotheses within the current context. The emphasis in these questions is the system's objectives as they are related to its methods. Event questions in RATIONALE also include why-not questions. These give the user an idea of why the system failed to use particular symptoms or deduce particular hypotheses. The types of event questions and respective examples are illustrated in Figures 15, 16, 17, 18 and 19.

FIGURE 15. Example: For my current problem, why do you need these symptoms?

FIGURE 16. Example: For my current problem, why do you NOT need these symptoms?

FIGURE 17. Example: For my current problem, why did you conclude this related problem?

FIGURE 18. Example: For my current problem, why did you NOT conclude this related problem?

FIGURE 19. Example: For my current problem, how did you conclude this related problem?

Hypothetical questions: Hypothetical explanations are often referred to as what-if explanations. They explain to the user the results of adding new symptoms to an established problem or of replacing an established problem with a new one with a modified set of symptoms. Hypothetical explanations can be both event dependent and independent. In the event dependent case, new symptoms are associated with an already established problem. In the event independent case, a new problem with new symptoms could replace the established problem. In RATIONALE, hypothetical explanations are generated using a parallel trace of the session in which the user defined hypothetical symptoms and hypotheses. The difficulty in providing such a facility is the ability to return to the original user trace without confusing the two sets of symptoms and hypotheses. Such a facility is valuable for users testing the knowledge of the system. Hypothetical explanations, unlike event explanations, are not readily available in most knowledge-based systems. This is mainly due to the overhead associated with tracking a user's current and hypothetical sessions. The types of hypothetical questions and respective examples are illustrated in Figures 20 and 21.

3.6.2.2. System capability questions. Questions about system capabilities allow the user to get explanations about reasoning strategies and the use of symptoms and problems, independently of any particular context.

Ability questions: Ability questions, illustrated in Figures 22, 23 and 24, are independent of user sessions and can be asked at any time, thus giving the user an insight into what the system is capable of.

FIGURE 20. Example: What-if these symptoms were also true for my current problem?

FIGURE 21. Example: What-if this problem replaced my current problem with these symptoms?

FIGURE 22. Example: How do you deduce this problem?

FIGURE 23. Example: How do you deduce an alternative problem for this problem?

FIGURE 24. Example: How do you refine this problem?

These explanations are useful in aiding the user to test as well as to learn the reasoning strategies of the system. Ability questions allow the user to ask questions about what the system can do with particular symptoms to arrive at particular hypotheses.

Factual questions: Factual questions are similar to ability questions. They differ in that they allow the user to query the system's static knowledge directly and to ask about the system's capability in using certain symptoms and problems. The types of factual questions and respective examples are illustrated in Figure 25.

3.6.2.3. Generated explanations

Generated explanations in current knowledge-based systems are frequently clumsy and hide the reasoning strategies. Ideally, generated explanations should be treated as an important part of the explanation capability itself. There are six important considerations for generated explanations that facilities would benefit from adhering to. Illustrations of these considerations are given below, demonstrating the strength of RATIONALE's explanations. RATIONALE's explanations are template-based. This form of explanation connects pieces of text to variables that are instantiated from the knowledge in the system. This allows explanation templates to be independent of the domain to which RATIONALE is applied. It also simplifies the task of generating dynamic explanations according to the current context.

FIGURE 25. Example: What symptoms and respective degrees of match are the relevant / the enabling / the invalidating / the special case / the focusing / the alternate symptoms and degrees of match for this problem?

Several examples that demonstrate the use of the explanation templates for an example domain are given below.

Contextually dependent references: Once a context is introduced in a user question, the generated explanation should reference it much as a human explaining within a particular context would. An example of the explanation templates in RATIONALE for a session dependent question follows. (Note that "/" separates the template phrase choices; the choices are made based on the current context of the session.)

User Question: Why did you not deduce subproblem B?

Explanation: Subproblem B could not be deduced since /
the alternate subproblem symptoms Sa of current subproblem A could not be concluded /
the sufficient focusing symptoms Ss of current subproblem A could not be concluded /
none of the selected symptoms could lead to it.

The use of the template can be illustrated using another example from the diagnosis of Figures 10 to 13.

User Question 1: Why did you not deduce a hardware problem?

Explanation: A hardware problem could not be deduced since the alternate subproblem symptom memory error of the current software problem could not be concluded.

User Question 2: Why did you not deduce a processor software problem?

Explanation: A processor software problem could not be deduced since the sufficient focusing symptoms failed boot and memory error of the current software problem could not be concluded.
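A rough sketch of how such a why-not template could be filled in follows; the failed_alternate/3 and failed_refinement/3 trace lookups are hypothetical, and the answer is returned as a list of template pieces and instantiated values rather than through RATIONALE's actual template machinery:

    % why_not_explanation(+Subproblem, +CurrentProblem, -Tokens)
    why_not_explanation(B, A, [B, 'could not be deduced since the alternate subproblem symptom',
                               S, 'of the current', A, 'could not be concluded']) :-
        failed_alternate(A, B, S).
    why_not_explanation(B, A, [B, 'could not be deduced since the sufficient focusing symptoms',
                               Ss, 'of the current', A, 'could not be concluded']) :-
        failed_refinement(A, B, Ss).
    why_not_explanation(B, _, [B, 'could not be deduced since none of the selected symptoms could lead to it']).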

Anaphoric references: These are references to an already introduced entity. Thus, users could ask questions with anaphoric references and the explanation facility would be expected to understand the reference and hence deduce the entity referred to. Continuing with the introduced example:

User Question: How did you use selected symptoms?

Explanation: Selected symptoms S were used as /
enabling symptoms for subproblem A /
tolerance symptoms for invalidating symptoms Si of subproblem A /
alternate symptoms for alternate subproblem B of subproblem A /
focusing symptoms for refining subproblem A to subproblem C.

Continuing with a diagnosis example:

User Question: How did you use selected symptoms?

Explanation: Selected symptoms failed boot and software process lost were used as:
→ enabling symptoms for the software problem,
→ alternate focusing symptoms for the alternate problem, hardware problem,
→ and focusing symptoms for refining the software problem to a processor software problem.

340 S. ABU-HAKIMA AND F. OPPACHER

Elliptic references: Such references involve indirect references to introduced entities. Again such interpretation of user questions attempts to follow the human model of explanation. An example of an elliptic reference is:

User Question 1: Why did you not deduce subproblem B?

User Question 2: What were its symptoms?

Explanation: Subproblem B's enabling symptoms are Se.

Echo user's question: This is the ability to include several messages in one generated explanation. Such an explanation could echo the user's question subtly to assure the user that their intentions were understood. This is illustrated in RATIONALE's explanations generated above.

Clarification questions: Questions about the user's intent should be specific and well directed, and understandable by the facility itself. Clarification is mostly applicable to a dialogue-based interface versus a menu-based interface such as RATIONALE's. An example of a clarification question would be:

User Question: What is the basis of hypothesis A?

Clarification: Do you mean, what are the symptoms for hypothesis A?

Versus: Clarify 'basis' please?

In the latter, the system's request for clarification is open ended, which may leave the user confused as to the system's expectations of a response. It is imperative that a facility which requires clarification of explanation questions be able to question the user in a meaningful manner.

4. Conclusions and future enhancements

We have described a framework, RATIONALE, for building knowledge-based diagnostic systems that explain by reasoning explicitly. The design of our system is based on criteria extracted from psychological studies of the nature of human explanation, from recent work on user interface design, and from a critical survey of explanation in knowledge-based systems. The design aims to primarily integrate context-sensitive explanation with reasoning in order to enhance the usefulness of the explanations given and, at the same time, to provide a better structure for the knowledge acquisition task. A secondary aim of our design is to construct a good hypermedia interface that can be profitably operated by both first-time users and seasoned experts.

In the present version of RATIONALE we have not attempted to build and maintain a user model, but we have included a facility to issue explanations at three levels of user expertise: novice, experienced and expert. These levels concern not just a choice as to which text templates to instantiate; more importantly, they are implemented to take into account the complexity and importance assignments of concepts and concept-linking rules or hypotheses, as provided by an expert.

Adhering to the considerations described in section 2.4 for generating explanations may seem to be ambitious and to place an excessive burden on the development of a knowledge-based system.


But apart from the fact that this type of context-sensitive help supplies knowledge-debugging aids that speed up the development process, we also believe that users will soon reject systems that generate clumsy and cryptic utterances for explanations in any knowledge-based application.

It was pointed out in section 1 that humans explain to others in order to clarify, instruct or convince, and that the approximation of such behavior in an explanation facility assumes a tight coupling between the explanation module and the inference engine. We have attempted to achieve this by giving our explanation module immediate access to all the domain knowledge and strategic information that drives the reasoner. For the present system, explicit reasoning is properly supported by having the inference engine react at each step to explicitly represented knowledge as to why a particular hypothesis is preferred in a given situation to an alternative, why exceptions may be overruled in some situations, and why global inference strategies succeed in certain circumstances but fail in others.

Explicit reasoning in the sense emphasized in this paper does not only lead to useful explanations. It gives the system the ability to partially inspect and monitor itself. The ability for a system to monitor its own operations is a precondition for it to be able to modify its own operations. Consider a decision point or hypothesis in RATIONALE with associated abstract control principles that are to be instantiated to match a specific situation, for example, "choose actions on the basis of their ease of implementation" or "always try to establish a special case first". The accessibility of such control principles produces, as we have shown, powerful explanations. More importantly, this accessibility could enable meta-processes to monitor the system's reasoning behavior by determining which situation-specific control strategies lead to successful solutions. While we have not attempted to implement such a reflective and self-modifying version of RATIONALE, it seems that this would be required in order to completely overcome the widely deplored brittleness and inflexibility of current knowledge-based systems.

Another extension of the research reported in this paper, and one we are currently working on, addresses the development of deep causal models versus elaborate hypothesis hierarchies. Mittal, Bobrow and de Kleer (1988) have pointed out that theory-based expert systems, because they are expensive to develop and maintain, are only useful for applications with a long, stable lifespan. For rapidly changing technologies, e.g. for tasks such as repairing computers, they recommend a plan-based approach. This approach to diagnostic and repair tasks is particularly appropriate for systems whose subsystems have complex implementations (especially those with combinations of mechanical and/or electrical equipment, such as aircraft engines), and requires the incremental development of repair plans. All repair plans share a common structure: they start with an initial test indicating some malfunction, run diagnostic tests to pinpoint a specific malfunction, apply a repair, verify that the problem is repaired, or repeat. The plan-based approach is supported by the metaphor of a community memory: different experts can incrementally formalize, extend and propagate their community knowledge base about the modeling and testing of some device in the form of repair plans. RATIONALE can readily accommodate this approach; after all, there is no formal need to represent the hypothesis hierarchy as a real tree. The existing alternates slot can be used to attach arbitrary plan structures to incrementally enrich the diagnostic and repair capabilities of the system. This results in a diagnostic network rather than a diagnostic tree.
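A minimal Prolog sketch of this idea is given below, assuming a hypothetical alternate/2 slot and stand-in predicates for interaction with the device; the plan term follows the common structure just described (initial test, diagnostic tests, repair, verification, repeat).

    % alternate(Hypothesis, Plan): a repair plan attached to a hypothesis
    % node through an alternates-style slot, turning the tree into a network.
    alternate(fuel_system_fault,
              repair_plan(initial_test(engine_will_not_start),
                          diagnostic_tests([check_fuel_pressure, check_pump_relay]),
                          repair(replace_pump_relay),
                          verify(engine_starts))).

    % execute_plan(+Plan): run the plan; if verification fails, repeat it.
    execute_plan(repair_plan(initial_test(T), diagnostic_tests(Ds),
                             repair(R), verify(V))) :-
        test_passes(T),
        run_tests(Ds),
        perform(R),
        (   test_passes(V)
        ->  true
        ;   execute_plan(repair_plan(initial_test(T), diagnostic_tests(Ds),
                                     repair(R), verify(V)))
        ).

    run_tests([]).
    run_tests([D|Ds]) :- perform(D), run_tests(Ds).

    % Stand-ins so the sketch can be loaded and traced; in a real system
    % these would query the user or the device under repair.
    test_passes(_).
    perform(Step) :- write(performing(Step)), nl.

    % ?- alternate(fuel_system_fault, Plan), execute_plan(Plan).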

Reasoning by examining previous cases is another activity we have started to work on. RATIONALE's cases are traces of reasoning which can be accessed to explain and justify why the user's current problem is one that has been reasoned about previously. What makes this case-based approach interesting is that we allow the reasoner, when it fails to diagnose a problem by examining previous cases, to fall back to refining a hypothesis. We also hope to use successful and failed cases to refine the underlying model of the domain by combining explanation-based learning with case-based reasoning techniques.
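The sketch below, again with hypothetical predicates rather than RATIONALE's actual case format, illustrates the intended control flow: reuse a stored reasoning trace when it covers the current symptoms, and fall back to ordinary hypothesis refinement when no stored case applies.

    % case(Symptoms, Diagnosis, Trace): a stored trace of earlier reasoning.
    case([no_ignition, low_fuel_pressure], fuel_pump_failure,
         [hypothesised(fuel_system_fault), refined_to(fuel_pump_failure)]).

    % diagnose(+Symptoms, -Diagnosis, -Justification): prefer a previous
    % case whose symptoms are covered by the current ones; otherwise fall
    % back to the ordinary refinement machinery.
    diagnose(Symptoms, Diagnosis, because(previous_case(Trace))) :-
        case(CaseSymptoms, Diagnosis, Trace),
        covered(CaseSymptoms, Symptoms), !.
    diagnose(Symptoms, Diagnosis, because(fresh_refinement)) :-
        refine_hypotheses(Symptoms, Diagnosis).

    covered([], _).
    covered([S|Ss], Symptoms) :- member_(S, Symptoms), covered(Ss, Symptoms).

    member_(X, [X|_]) :- !.
    member_(X, [_|T]) :- member_(X, T).

    % Stand-in for the classification-tree refinement described earlier.
    refine_hypotheses(_, unknown_fault).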

References

ABU-HAKIMA, S. (1988a). RATIONALE: a tool that reasons explicitly for the purpose of explanation. Proceedings of the AAAI '88 Workshop on Explanation.
ABU-HAKIMA, S. (1988b). RATIONALE: A Tool for Developing Knowledge-Based Systems That Explain by Reasoning Explicitly. Masters Thesis, Carleton University, Ottawa, Canada.
ACHINSTEIN, P. (1971). Law and Explanation. Oxford: Clarendon Press.
ARDEN, M. J., GOSLING, J. & ROSENTHAL, D. S. H. (1989). The NeWS Book. New York: Springer-Verlag.
BOBROW, D. G., MITTAL, S. & STEFIK, M. J. (1986). Expert systems: perils and promise. Communications of the ACM, 29, 880-894.
BRADY, M. & BERWICK, R. C. (1983). Computational Models of Discourse. Cambridge, MA: MIT Press.
BUCHANAN, B. G. & SHORTLIFFE, E. H. (1984). Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Cambridge, MA: Addison-Wesley.
BYLANDER, T. & MITTAL, S. (1986). CSRL: a language for classificatory problem solving and uncertainty handling. The AI Magazine, 7, 66-67.
CHANDRASEKARAN, B., GOMEZ, F., MITTAL, S. & SMITH, J. W. (1979). An approach to medical diagnosis based on conceptual structures. Proceedings of the Joint Conference on Artificial Intelligence, pp. 134-142.
CHANDRASEKARAN, B., MITTAL, S. & SMITH, J. W. (1982). Reasoning with uncertain knowledge: the MDX approach. Proceedings of the First Annual Joint Conference of the American Informatics Association, pp. 335-339.
CHANDRASEKARAN, B. & MITTAL, S. (1983). Deep versus compiled knowledge approaches to diagnostic problem-solving. International Journal of Man-Machine Studies, 19, 425-436.
CHANDRASEKARAN, B. (1986). Generic tasks in knowledge-based reasoning: high-level building blocks for expert system design. IEEE Expert, 1, 23-30.
CHANDRASEKARAN, B., JOSEPHSON, J. & KEUNEKE, A. (1986). Functional representations as a basis for generating explanations. IEEE International Conference Proceedings on Systems, Man and Cybernetics, pp. 726-731.
CHANDRASEKARAN, B. & TANNER, M. C. (1986). Uncertainty handling in expert systems: uniform vs task-specific formalisms. In L. N. Kanal & J. Lemmer, Eds. Uncertainty in Artificial Intelligence. Amsterdam: North-Holland.
CLANCEY, W. J. (1983). The epistemology of a rule-based expert system--a framework for explanation. Artificial Intelligence, 20, 215-251.
CLANCEY, W. J. (1986). From GUIDON to NEOMYCIN and HERACLES in twenty short lessons: ONR final report 1979-1985. The AI Magazine, 7, 40-60.
CLANCEY, W. J. (1987). Knowledge-Based Tutoring: The GUIDON Program. Cambridge, MA: MIT Press.
CLOCKSIN, W. F. & MELLISH, C. S. (1984). Programming in Prolog. 2nd edit. New York: Springer-Verlag.
GANASCIA, J. G. (1984). Explanation facilities for diagnosis systems. In Cybernetics and Systems Research 2: Proceedings of the 7th European Meeting, pp. 805-810.
GOGUEN, J. A., WEINER, J. L. & LINDE, C. (1983). Reasoning and natural explanation. International Journal of Man-Machine Studies, 19, 521-559.
GOMEZ, F. & CHANDRASEKARAN, B. (1981). Knowledge organization and distribution for medical diagnosis. IEEE Transactions on Systems, Man and Cybernetics, 11, 34-43.
HARDMAN, J. (1985). Expert user-friendly. Systems International (GB), 13, 75-76.
HASLING, D. W., CLANCEY, W. J. & RENNELS, G. (1984). Strategic explanations for a diagnostic consultation system. International Journal of Man-Machine Studies, 20, 3-19.
HAYES, P. J. & GLASNER, I. D. (1982). Automatic construction of explanation networks for a cooperative user interface. SIGSOC Bulletin (USA), 13, 6-14.
HAYES, P. J. & REDDY, D. R. (1983). Steps towards graceful interaction in spoken and written man-machine communication. International Journal of Man-Machine Studies, 19, 231-284.
HEMPEL, C. G. (1965). Aspects of Scientific Explanation. New York: Free Press.
JOSEPHSON, J. R., CHANDRASEKARAN, B. & SMITH, J. W. (1984). Assembling the best explanation. Proceedings of the IEEE Workshop on Principles of Knowledge-Based Systems, IEEE Computer Society, California, pp. 185-190.
MITTAL, S., BOBROW, D. G. & DE KLEER, J. (1988). DARN: toward a community memory for diagnosis and repair tasks. In J. A. Hendler, Ed. Expert Systems: The User Interface. Norwood, NJ: Ablex.
NECHES, R., SWARTOUT, W. R. & MOORE, J. D. (1985). Enhanced maintenance and explanation of expert systems through explicit models of their development. IEEE Transactions on Software Engineering, 11, 1337-1351.
PUNCH III, W. F., TANNER, M. C. & JOSEPHSON, J. (1986). Design considerations for PIERCE, a high-level language for hypothesis assembly. IEEE Conference Proceedings of the Expert Systems in Government Symposium, pp. 279-281.
SCOTT, A. C., CLANCEY, W. J., DAVIS, R. & SHORTLIFFE, E. H. (1977). Explanation Capabilities of Production-Based Consultation Systems. Report No. STAN-CS-77-593, Stanford University, Department of Computer Science.
STABLER, E. P. (1986). Object-oriented programming in Prolog. AI Expert, pp. 46-57.
STEELS, L. (1985). Second generation expert systems. Future Generation Computer Systems, pp. 213-221.
STERLING, L. & SHAPIRO, E. (1986). The Art of Prolog. Cambridge, MA: MIT Press.
SWARTOUT, W. R. (1981). Producing Improved Explanations and Justifications of Expert Consulting Programs Using an Automatic Programming Approach. PhD Thesis. Cambridge, MA: MIT Press.
SWARTOUT, W. R. (1983). XPLAIN: a system for creating and explaining consulting programs. Artificial Intelligence, 21, 285-325.
SWARTOUT, W. R. & SMOLIAR, S. W. (1987). On making expert systems more like experts. Expert Systems, 4, 196-207.
VAN HOFF, A. A. & ABU-HAKIMA, S. (1988a). Introducing 'GoodNeWS'. Unpublished Report, NRC, Ottawa, Canada.
VAN HOFF, A. A. & ABU-HAKIMA, S. (1988b). Introducing 'HyperNeWS'. Unpublished Report, NRC, Ottawa, Canada.
WALLIS, J. W. & SHORTLIFFE, E. H. (1982). Explanatory power for medical expert systems: studies in the representation of causal relationships for clinical consultations. Methods of Information in Medicine (Germany), 21, 127-136.
WEINER, J. L. (1980). BLAH, a system which explains its reasoning. Artificial Intelligence, 15, 19-48.