
An Approach to Verification and Validation in Large Scale System of Systems

Dr. Kirstie L. Bellman
Computer Systems Division/Director, Aerospace Integration Science Center
The Aerospace Corporation, El Segundo, California, 90009-2957

AIAA Infotech@Aerospace 2010, 20-22 April 2010, Atlanta, Georgia. AIAA 2010-3432.
Copyright © 2010 by Dr. Kirstie L. Bellman. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission.

We describe an approach that potentially could dramatically improve the ability to test Fault Management System (FMS) specifications and software by re-expressing them as a rule-base, and then applying previously developed verification and validation methods for rule-based software systems. We then generalize the strategy underlying this approach to other complicated aspects of verifying and validating space systems, from design and development to operations.

We begin our discussion of the type of verification and validation (V&V) approaches needed in very large, complex space systems by focusing first on some new approaches for the V&V of Fault Management Systems (FMS). This is because the problems of a FMS are representative of the problems of providing an integrated assessment of sufficiency and correctness across a large distributed system with enormously diverse components. We then generalize the strategy underlying this approach to other complicated aspects of verifying and validating space systems, from design and development to operations.

One critical function in modern flight software is to monitor hardware and software subsystem behaviors, to identify pre-defined anomalies, to provide the appropriate response from a limited repertoire of corrective actions, and to alert the operators on the ground to continue additional corrective actions as needed. Traditionally the FMS responded only to hardware failures and critical anomalies, but in several software-intensive space system designs, certain critical software errors are now being included. The FMS is expected to operate during all mission phases to protect the spacecraft (S/C), e.g., during ascent, transfer orbit, and mission orbit when fully deployed. One of the chief requirements of the FMS is to "safe" the S/C, which may include switching to redundant components as needed in subsystems such as GNC (Guidance, Navigation, and Control), power, thermal, etc., as well as placing the S/C in a sun-safe attitude and placing the payload into a safe configuration.

As can be seen from even this brief description, the FMS must correctly collect and integrate information from a broad range of conditions, involving the characterization of a complicated set of measures and fault conditions, and then it must decide and respond quickly enough with an ordered set of safing responses. In order to test the correctness and the adequacy of the FMS, developers spend hundreds of hours producing specifications and design documents and testing code, nearly matched in hours by the government reviewers. Because relevant FMS specifications may necessarily be scattered over a number of specification documents, it is especially difficult to assess overall completeness (have all pertinent forms of each fault case been covered?) or consistency (are any of the fault procedures mutually contradictory when conditions combine them?). Similarly, during code development, it may be difficult to test all the relevant FMS code, especially when that code includes both the flight software and distributed embedded code segments co-located with components throughout the space system.

Because of the time-intensive nature of this evaluation, many space systems end up running out of time, resulting in an increased risk that vital parts of the anomaly detection or corrective responses will not perform as desired during flight.

In this paper, we describe an approach that potentially could dramatically improve the ability to test FMS specifications and software by re-expressing them as a rule-base, and then applying previously developed verification and validation methods for rule-based software systems (1-10). GNC mode controllers have sometimes been characterized as a state machine, which means one defines initial states and then the rules for transitions. One of the supporting analyses for a FMS is the FMECA (Failure Mode, Effects, and Criticality Analysis), which produces decision trees that map failure symptoms to likely fault causes. Decision trees and state machines are mathematically equivalent to classes of simple rule-bases. In one direction, this translation is easy: conditions leading from one state to others can easily be written as clauses in rule hypotheses, as can the source state itself, and the target state can easily be written as a new value for a variable. Decision trees are even easier, taking the tree nodes as the states and the branch conditions as the transition rules. While it is also possible to map in the reverse direction, from rule-bases to state machines and from loop-free state machines to decision trees, it is not as useful (and the sizes can get very large).
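To make the forward translation concrete, here is a minimal Python sketch; the mode names and transition conditions are invented for illustration and do not come from any particular GNC controller:

    # A toy state machine: each transition is (source state, condition, target state).
    # The state names and conditions are hypothetical, for illustration only.
    transitions = [
        ("NOMINAL",  "attitude_error >= limit", "SUN_SAFE"),
        ("NOMINAL",  "battery_low",             "POWER_SAFE"),
        ("SUN_SAFE", "attitude_error < limit",  "NOMINAL"),
    ]

    # Each transition becomes one if-then rule: the source state and the branch
    # condition go into the hypothesis; the conclusion writes the target state
    # as the new value of the 'mode' variable.
    rules = [
        f"r{i}: IF mode = {src} AND {cond} THEN mode := {dst}"
        for i, (src, cond, dst) in enumerate(transitions, start=1)
    ]

    for rule in rules:
        print(rule)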

In 1988-1990, we developed new mathematical analysis methods for simple rule-bases, because they were the basis of increasingly-used knowledge-based approaches, such as expert systems. In that earlier work, we developed five principles of rule-base correctness: Consistency (no conflict), Completeness (no oversight), Irredundancy (no superfluity), Connectivity (no isolation), and Distribution of rules (neither overdevelopment nor underdevelopment of rule content). The Consistency criteria address the logical consistency of the rules, and can be considered a default mathematical notion of "correctness" when not superseded by domain-specific knowledge. The Completeness and Irredundancy criteria help identify oversights in specifications and redundancy in the rules. The Connectivity criteria concern the inference system defined by the rules, and are like completeness and irredundancy criteria for the inference system. Finally, the Distribution criteria are "esthetic" criteria for the content of the rule-base, and include the simplicity of the rules as well as the distribution of the content into rules.

For each principle we developed largely graph-theoretic mathematical methods that included incidence matrices, clause graphs, rule dynamics, and association matrices (see 1, 9, 10). To demonstrate the approach, we pull slightly simplified rules and fictionalized error examples from a real rule-base (11). The Manned Maneuvering Unit (MMU) is essentially a backpack unit for moving a human astronaut around a spacecraft in space. In order to provide maneuverability, there are several thrusters oriented in various directions, and Hand Control Devices for useful groups of them. The thrusters use nitrogen gas for motion. The FDIR rule-base, developed in the late '80s (see 11), is concerned with the problem of fault diagnosis, isolation, and recovery (FDIR) for this MMU. Like a FMS today, its purpose is to determine whether the MMU has a fault, to isolate the fault to a particular subsystem when that is possible, and to take corrective action when that is possible. The rule-base has 104 rules, written in the expert system shell CLIPS (see 7), the C Language Integrated Production System, developed at NASA's Johnson Space Center (JSC). The MMU FDIR rule-base was kindly provided to us by Chris Culbert of NASA JSC, as was CLIPS. The details of the MMU analysis are in 10.

For these examples, a rule-base is a finite set R of "if-then" rules in the form of pairs of assertions, to be interpreted as: IF hypothesis hyp, THEN conclusion conc. For example, a rule-base for anomaly detection and diagnosis might contain the rule:

r10: IF thruster = on AND thruster-command = off THEN signal anomaly.

An assertion is a Boolean combination of primitive assertions (or atomic formulas), using standard Boolean connectives (e.g., and, or, not, xor, if-then-else, and iff). A primitive assertion is a predicate expression that uses known predicates (e.g., equality, inequality, or other comparison operators).
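To make the matrix constructions that follow concrete, a rule can be held in a small data structure. The Python sketch below is one possible encoding, invented here for illustration; it is not the CLIPS representation used in the MMU rule-base:

    from dataclasses import dataclass

    # One possible in-memory encoding of an if-then rule: the hypothesis is a
    # tuple of (variable, operator, value) clauses, implicitly AND-ed, and the
    # conclusion is a tuple of clauses to assert.
    @dataclass(frozen=True)
    class Rule:
        name: str
        hypothesis: tuple  # e.g. (("thruster", "=", "on"), ...)
        conclusion: tuple  # e.g. (("anomaly", "=", "signaled"),)

    r10 = Rule(
        name="r10",
        hypothesis=(("thruster", "=", "on"), ("thruster-command", "=", "off")),
        conclusion=(("anomaly", "=", "signaled"),),
    )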

The simplest incidence matrix of a rule-base is the matrix RV indexed by rules R and variables V, with entry 1 if variable v occurs in rule r and 0 if not. Hence, the rule listed above would have a row for "r10" and columns for "thruster" and "thruster-command", with both entries 1. Using ordinary matrix multiplication, a large variety of incidence matrices were created with the rule-bases, including incidence matrices that showed the number of times a variable or variable value occurs in rules across a rule-base, the number of rules containing a variable across a rule-base, and the number of rules containing a clause across a rule-base. In the matrix RV, the (v,w) entry of the matrix product (RV-tr)(RV) (we use -tr to indicate transpose) is the number of pairs of instances of variable v and variable w contained in the same rule, and the (q,r) entry of the product (RV)(RV-tr) is the number of pairs of instances of rule q and rule r that contain the same variable v. For example, the entry in the "thruster" row and "thruster-command" column of the product (RV-tr)(RV) is at least 1, and will be more than 1 if the two variables occur together in other rules besides "r10".
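A small numpy sketch of these products; the occurrence pattern below is invented for illustration:

    import numpy as np

    rules = ["r10", "r11", "r12"]
    variables = ["thruster", "thruster-command", "anomaly"]

    # 0/1 incidence matrix RV: entry (r, v) is 1 if variable v occurs in rule r.
    RV = np.array([
        [1, 1, 1],   # r10 mentions all three variables
        [1, 1, 0],   # r11 mentions thruster and thruster-command
        [0, 1, 1],   # r12 mentions thruster-command and anomaly
    ])

    # (v, w) entry: number of rules in which variables v and w occur together.
    var_by_var = RV.T @ RV
    # (q, r) entry: number of variables shared by rules q and r.
    rule_by_rule = RV @ RV.T

    # The ("thruster", "thruster-command") entry is 2: they co-occur in r10 and r11.
    print(var_by_var[0, 1])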

Several other simple incidence matrices are useful for rule analyses. The counting incidence matrix RC for rules and clauses is indexed by rules R and clauses C, with the number of occurrences of clause c in rule r, and the counting incidence matrix CV for clauses and variables is indexed by C and V, with the number of occurrences of variable v in clause c. These matrices are analogous to the rule-variable incidence matrix RV. For this purpose, a clause can be considered as a predicate expression, and C is the set of clauses. For example, rule "r10" above has three clauses: "thruster = on", "thruster-command = off", and "signal anomaly". The clause incidence matrices can also separate clause occurrences into rule hypotheses and conclusions. The counting incidence matrix CCo for rule conclusions and clauses is indexed by R times C, with the number of occurrences of clause c in the conclusion of rule r, and the counting incidence matrix CH for rule hypotheses and clauses is indexed by R times C, with the number of occurrences of clause c in the hypothesis of rule r.

An ordinary incidence matrix is easily displayed as an undirected bipartite graph. In such graphs, vertices can be rules or variables or clauses, and the edges will equal their non-zero incidence. For example, one graph connects two terms if they appear in a rule together, and another graph connects two rules if they have a common variable. Furthermore, one can extend the incidence matrices to reflect ordering between variables and rules in a directed graph, such that, for example, there is an edge from q to r if a variable v is written by q and read by r. One can use these directed graphs to find out if any reachable state determines a variable (does the variable matter?) and if any reachable state sets or permits a rule to act (does the rule matter?). In clause graphs, more complicated and semantically interesting errors are discoverable. All these methods work with backward chaining or forward chaining rule-bases.

As an example, using r10, there will be an edge from rule "r10" to any other rule that contains the clause "signal anomaly", and an edge to rule "r10" from any other rule that contains either of the clauses "thruster = on" or "thruster-command = off". These graphs are used to analyze some dynamic properties of rules. Some rules will only act for a situation s if some other combination of rules has acted earlier, as an artifact of the possible input values. The dynamic behavior of rules can be treated only in terms of a particular inference engine, which defines the "order of application" of rules. The rule-base alone only defines possible orderings of rule application, and hence one must take into account the characteristics of the inference engine or rule interpreter.

The different possible graphs especially support the connectivity and irredundancy criteria. These criteria insist that everything in the rule-base is there for some good reason: the variables make a difference, the rules make a difference, and nothing is extraneous. Each path in the access R graph represents a read-write chain of rules, i.e., a sequence of rules as vertices and variables as edges, with each rule reading one variable and writing the next one, and each variable written by one rule and read by the next. It therefore gives an ordering of application for some rules, showing that some use values that others compute. Each cycle in the access R graph represents a recursively used rule: a sequence of rules and variables, with each rule reading one variable and writing the next one, and the last rule reading the last variable and writing the first one. There is therefore the same potential circularity as above in the use of the rules for evaluation. Dangling hypotheses and conclusions can be found very easily by looking for vertices in the clause graph (the inference C graph) that have no out-edges or no in-edges: dangling conclusions occur when a rule conclusion does not match any rule hypothesis, dangling hypotheses occur when a rule hypothesis does not match any rule conclusion, and dangling goals occur when a goal cannot be satisfied by any rule conclusion. As warranted, one can also readily define a matrix that describes which rules can follow which others in derivations.
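A sketch of these dangling-clause checks, assuming the illustrative Rule encoding above; matching here is exact clause equality, whereas a real checker would also handle variable bindings:

    # Inference edges: q -> r whenever some conclusion clause of q appears
    # among the hypothesis clauses of r (exact-match approximation).
    def inference_edges(rules):
        return {(q.name, r.name)
                for q in rules for r in rules
                if q is not r and set(q.conclusion) & set(r.hypothesis)}

    # Dangling conclusions: conclusion clauses matched by no hypothesis anywhere.
    # Dangling hypotheses: hypothesis clauses produced by no conclusion; some of
    # these are legitimate external inputs (e.g., sensor values), so they are
    # flagged as anomalies for human review, not reported as errors.
    def dangling_clauses(rules):
        all_hyps = {c for r in rules for c in r.hypothesis}
        all_concs = {c for r in rules for c in r.conclusion}
        dangling_concs = {c for r in rules for c in r.conclusion} - all_hyps
        dangling_hyps = all_hyps - all_concs
        return dangling_hyps, dangling_concs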

An association matrix is a covariance matrix computed from occurrence patterns across a set of possible locations. As described above, the matrix product (RV)(RV-tr) of the counting incidence matrix RV counts, for each pair of rules, the variables the two rules have in common. It defines the occurrence pattern of a rule by the set of variables the rule contains. Then the correlations can be computed from the covariances in the usual way: here, the q row of the counting incidence matrix RV is the occurrence pattern for rule q, so Avg(q) is the average number of occurrences of each variable in rule q, and Stdev(q) is the standard deviation of those numbers. There is no random variable here, so there is no point in using the "sample standard deviation". The correlation is a measure of similarity between rules, as measured by the variables in them. The correlation value is 1 if and only if the two rules use exactly the same variables with the same frequency of occurrence of each variable. It will be negative, for example, when the two rules use disjoint sets of variables, and -1 in rare cases only (not likely in a rule-base). See 9, 10.
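A numpy sketch of the rule-by-rule correlations over an invented counting matrix; note that a constant row (a rule using every variable equally often) has zero standard deviation and would need special handling, which is omitted here:

    import numpy as np

    # Counting incidence matrix RV (invented): each row is a rule's occurrence pattern.
    RV = np.array([
        [1, 1, 0],   # q1
        [1, 1, 0],   # q2: same variables, same counts as q1
        [0, 1, 1],   # q3
    ])

    # Pearson correlation between rule rows; 1.0 means two rules use exactly
    # the same variables with the same frequency of occurrence.
    rule_corr = np.corrcoef(RV)

    # Flag near-1 correlations between distinct rules as candidate anomalies
    # (possibly duplicated or overlapping rules worth a human look).
    n = rule_corr.shape[0]
    for q in range(n):
        for r in range(q + 1, n):
            if rule_corr[q, r] > 0.95:
                print(f"rules {q} and {r} have correlation {rule_corr[q, r]:.2f}")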

Similarly, as noted before, the matrix product (RV-tr)(RV) counts, for each pair of variables, the rules the two variables have in common, defining the occurrence pattern of a variable by the set of rules containing it. Correlations are computed as before. The other incidence matrices, CV for variables in clauses and RC for clauses in rules, can also be used in this way.

The best use of correlations is in detecting unusual ones. If a correlation is almost, but not exactly, 1.0 or 0.0, it is of special interest and can signal an anomaly. That is, if clause b almost always occurs with c, then something should be noted when they do not occur together. If variable v always occurs with w, then there may be a good reason for combining the variables. There should also be sufficient justification for unusual correlations. However, two rules that use the same set of variables are not necessarily redundant. There are often sets of rules that all use the same variables, giving the rule-base a natural clustering into groups. With association matrices, one can discover these clusters of related sets of rules for further analysis, including critical consistency and completeness checks.

Association matrices and graph methods allow one to uncover many different kinds of inconsistency, including rules using inconsistent combinations of variables, inconsistent value inputs (also meaningful coverage of value ranges), and inconsistencies over rule chains ("common source error"; see 1, 3, 4, 10). Strong type checking gives a simple way to check some of the value consistency and to apply consistent criteria. For every variable v, all rules r that assign a value to v are compared. For all the rules that act on any given situation, the values assigned must be equal. This check finds violations of the input consistency criterion.
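A sketch of this input-consistency check under the same illustrative Rule encoding; "acts on the same situation" is approximated crudely here by identical hypotheses, whereas the real criterion requires reasoning about jointly satisfiable hypotheses:

    from collections import defaultdict
    from itertools import combinations

    # Group rules by hypothesis, then compare the values each group member
    # assigns to the same variable; disagreements violate input consistency.
    def input_consistency_violations(rules):
        violations = []
        by_hypothesis = defaultdict(list)
        for r in rules:
            by_hypothesis[frozenset(r.hypothesis)].append(r)
        for group in by_hypothesis.values():
            for q, r in combinations(group, 2):
                assigned_q = {v: val for (v, op, val) in q.conclusion}
                assigned_r = {v: val for (v, op, val) in r.conclusion}
                for v in assigned_q.keys() & assigned_r.keys():
                    if assigned_q[v] != assigned_r[v]:
                        violations.append((q.name, r.name, v))
        return violations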

We have briefly described a set of methods for examining rule-bases. They include largely static analysis (the examination of the rules as separate symbolic expressions, without stringing them together), logical analysis (the form of the rules as mostly uninterpreted expressions), a limited version of dynamic analysis (the interactions of the rules during inference), and some statistical analysis (focusing on the correlations among variables, rules, and the values of the variables). The details of how rule-bases are treated mathematically, and where the challenges and exceptions lie, are provided in the referenced papers (see especially 9 and 10). The point here is that rules are at once very forgiving (they allow many diverse types of information to be brought together, and they allow qualitative as well as quantitative information to be used), but because of that very flexibility, they also can lack a coherent, consistent, overall information model to evaluate them by. The mathematical methods noted here provide default mathematical concepts of correctness and completeness that are superseded by any specific domain knowledge. Hence, with these methods, we place the emphasis on identifying anomalies rather than errors. These anomalies can then be examined by a human developer and acted upon or not, as appropriate.

It is relatively straightforward to create rules from many types of FMS and GNC specifications and code design documents. Using a simplified and fictionalized error example based on the real set of MMU rules (10), we can demonstrate some of the kinds of consistency and completeness checks that can be readily addressed based on clustered rules. So, for example, consider the following rules:

r10: IF thruster-signal = on AND thruster-command = off THEN report anomaly1
r11: IF thruster-signal = on AND thruster-command = on THEN report Thruster-On
r12: IF thruster-signal = off AND thruster-command = on THEN report anomaly2

The purpose of the specifications underlying these rules was to cover all cases of the device (here, the thruster) being on or off, and all cases of the command to the thruster (turn off, turn on), with an appropriate report to the anomaly detection and resolution (ADR) system. In particular, in this system there were to be two kinds of anomalies reported to the ADR for further corrective actions. A quick look shows that there is a missing fourth condition: IF thruster-signal = off AND thruster-command = off THEN what?

This may not be an error! Depending upon how this system is designed or the ADR code is written, there may be no need to specify this condition, but it is noted as a missing specification. This, of course, is a very simple example; the nice thing about these methods is that they can cover the appropriate clustering of related rules and the enumeration of very complicated combinations of IF clauses, as the sketch below suggests.
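That enumeration is easy to mechanize; a Python sketch, with the variable domains assumed (they are illustrative, not taken from the MMU rule-base):

    from itertools import product

    # Assumed domains for the condition variables (illustrative only).
    domains = {
        "thruster-signal":  ["on", "off"],
        "thruster-command": ["on", "off"],
    }

    # The hypotheses covered by rules r10-r12, as variable -> value maps.
    covered = [
        {"thruster-signal": "on",  "thruster-command": "off"},  # r10
        {"thruster-signal": "on",  "thruster-command": "on"},   # r11
        {"thruster-signal": "off", "thruster-command": "on"},   # r12
    ]

    # Enumerate every combination of values and report the uncovered ones.
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        case = dict(zip(names, values))
        if case not in covered:
            print("missing specification:", case)
    # prints: missing specification: {'thruster-signal': 'off', 'thruster-command': 'off'}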

Let's take nearly the same set of rules and consider a simple example of a consistency check. To do so, we have clustered all rules that have these same condition clauses (and variables); especially across multiple specification documents, it is not unusual to see duplicate rules that cause inconsistencies, particularly if assumptions about operational use were somehow omitted.

r10: IF thruster-signal = on AND thruster-command = off THEN report anomaly1
r11: IF thruster-signal = on AND thruster-command = on THEN report Thruster-On
r12: IF thruster-signal = off AND thruster-command = on THEN report anomaly2
r13: IF thruster-signal = on AND thruster-command = off THEN report Thruster-On

Here, r10 and r13 are logically inconsistent. If it turns out that these rules were written for distinct operational modes, then one needs to make sure only the applicable rules are invoked at the appropriate times. Also, note that as written, the "thruster-command = off" clause doesn't make a difference to any resulting output of the system.
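The clustered consistency check that reports this conflict can be sketched directly, using the same illustrative encoding as before: group rules by hypothesis and flag groups whose conclusions disagree.

    from collections import defaultdict

    # Rules r10-r13 as (hypothesis, report) pairs; hypotheses are frozensets of
    # (variable, value) pairs so that they can serve as dictionary keys.
    rules = {
        "r10": (frozenset({("thruster-signal", "on"),  ("thruster-command", "off")}), "anomaly1"),
        "r11": (frozenset({("thruster-signal", "on"),  ("thruster-command", "on")}),  "Thruster-On"),
        "r12": (frozenset({("thruster-signal", "off"), ("thruster-command", "on")}),  "anomaly2"),
        "r13": (frozenset({("thruster-signal", "on"),  ("thruster-command", "off")}), "Thruster-On"),
    }

    by_hypothesis = defaultdict(list)
    for name, (hyp, report) in rules.items():
        by_hypothesis[hyp].append((name, report))

    for hyp, group in by_hypothesis.items():
        if len({report for _, report in group}) > 1:
            print("logically inconsistent rules:", [name for name, _ in group])
    # prints: logically inconsistent rules: ['r10', 'r13']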

Using a few simple examples of rules that are similar to FMS and GNC specifications, we have emphasized the types of errors or anomalies that can be detected by checking the rule form of specifications, design documents, databases, and other developmental and operational materials. With these methods one can, for example, identify many types of inconsistencies (e.g., conflicting rules, rule chains, input paths, and variable values), as well as address whether or not rules and variables are redundant (or matter), and whether all key states are reachable with a set of rules. Although the error examples are fictitious, they are based on the types of specifications used in two ongoing space system developments. Also, based on this approach, we can give recommendations on the formation of the FMS, as well as on its implementation, so that with additional instrumentation it can be more readily checked.

We believe that this approach can be generalized to other V&V challenges in complex space systems, including the integration among different kinds of tests, components, and systems. As noted above, rules are a highly flexible representational device, permitting a diversity of conditions and actions to be represented and then reasoned about in a single rule-base. Some of this flexibility is very desirable when it comes to characterizing, integrating, analyzing, and testing the enormous variety of relationships and interfaces; expectations for roles, inputs, and desired performance; constraints on context and usage; signs and symptoms of fault conditions, etc., that are necessary to perform verification and validation on Systems of Systems (SoSs). Furthermore, qualitative as well as quantitative information can be critical when defining desirable global behavior and the integration among complex components and systems. Rules can be an important way of making qualitative information more computable and more amenable to analysis.

However, there are limitations to this approach, and one needs to complement these largely static analyses with more dynamic analyses and simulations, especially detailed timing and throughput analyses. The verification of dynamic properties can only be partially approached by these methods. In particular, as seen in several space programs, additional detailed timing and throughput analyses are critical for FMS, for flight software/hardware systems, and for the timing requirements of SoSs. Rules are forgiving and flexible, which is both a gift and a problem. We want to keep the most appropriate representation for different parts of the complex system. Rules can water down the benefits of better analytic methods or consistent, principled models, when they exist. Rule-bases (especially given the characteristics of their inference engines and interpreters) can be less efficient than other methods.

Hence, it is clear that during the course of designing and developing a space system, there are many different types of representational methods used to describe, define, model, and then implement the system that are better for a given use than a simple rule-base. One reason, noted above, is that a rule-base, with its forgiving flexibility, frequently lacks an overall, coherent, and understandable information model; as important is that, for specific purposes, traditional simulations, mathematical methods, and programming languages for code development will, of course, be more analytically powerful, computationally efficient, or more effective in a variety of ways. However, what we offer with this approach is the ability to retain the benefits of our original data and analysis structures, and yet gain the benefits of rule-bases by re-expressing selected structures as rules. That is, generalizing on the strategy for the FMS, we believe that one could beneficially take many different aspects of a complex System of Systems, re-express their existing form (be it text specifications, code, architectural drawings, parts of databases, or heterogeneous sets of logical expressions and equations) as a set of rules, and then run the V&V analyses described above for different kinds of consistency and completeness checks. This may be especially important for some material, like architectural drawings, that still requires a great deal of hand checking and does not currently lend itself to automated validation methods. When errors have been uncovered and fixed in these rule-bases, one can then return to the original documents (drawings, architectures, databases, specifications, code, etc.) and correct them as appropriate to that original form. In SoSs, the qualitative rules captured by an initial rule-base may support the creation of new specifications, interface and design documents, as well as indicate the need for additional core services and tests.

Of course, complex systems always need to be analyzed, tested, and evaluated using a variety of different methods. It is hoped that the use of rule-based V&V methods will become an important part of our repertoire of verification and validation methods for complex space systems and other Systems of Systems.

References:
1. Bellman, K. L., "Testing and Correcting Rule-Based Expert Systems," Proc. Space Quality Conference, Manhattan Beach, California, April 19-21, Washington, D.C.: NSIA, 1988.
2. Bellman, K. L., "The Modeling Issues Inherent in Testing and Evaluating Knowledge-Based Systems," Expert Systems with Applications, Vol. 1, No. 3, 1990, pp. 199-215.
3. Bellman, K. L., "Testing and Evaluating Knowledge-Based Systems," AI Expert Magazine, November 1990.
4. Bellman, K. L., and Landauer, C., "Testing Knowledge-Based Systems," Aerospace America, October 1991, pp. 43-46.
5. Bellman, K. L., and Landauer, C., "Developing Testable, Distributed Knowledge-Based Systems," Proceedings AIAA Computers in Aerospace 9 Conference, Workshop on Verification and Validation, 19-21 October, San Diego, 1993.
6. Bellman, K. L., and Landauer, C., "Designing Testable, Heterogeneous Software Environments," in Robert Plant (ed.), Special Issue: Software Quality in Knowledge-Based Systems, Journal of Systems and Software, Vol. 29, No. 3, June 1995, pp. 199-217.
7. Culbert, C., "CLIPS Reference Manual (Version 4.2)," NASA Johnson Space Center, April 1988.
8. Culbert, C., and Savely, R. T., "Expert System Verification and Validation," Proc. AAAI 88 Workshop on Validation and Testing Knowledge-Based Systems, AAAI, Minneapolis, Minnesota, 1990.
9. Landauer, C., "Principles of Rule-Base Correctness," in K. L. Bellman (ed.), Proceedings of the IJCAI 89 Workshop on Verification, Validation and Testing of Knowledge-Based Systems, Detroit, Michigan, 19 August 1989. Palo Alto, California: AAAI, 1989.
10. Landauer, C., "Correctness Principles for Rule-Based Expert Systems," Expert Systems with Applications, Vol. 1, No. 3, 1990, pp. 291-316.
11. Lawler, D. G., and Williams, L. J. F., "MMU FDIR Automation Task," Final Report, Contract NAS9-17650, Task Order EC87044, Houston, Texas: McDonnell-Douglas Astronautics Co., 3 February 1988.