
ELSEVIER Decision Support Systems 15 (1995) 1-18

Knowledge-based model validation support for end-user computing environments

Ritu Agarwal a,*, Mohan Tanniru b,1, Yimin Zhang c

a Department of MIS and Decision Sciences, University of Dayton, Dayton, OH 45469-2130, USA
b School of Management, Syracuse University, Syracuse, NY 13244-2130, USA

c East China University of Technology, Shanghai 200093, People's Republic of China

Abstract

Encouraging individuals to use corporate data and build computer-based decision models locally, while simultaneously ensuring that the modelling activity is consistent with corporate policies and guidelines, poses a challenge to many organizations. Although it is desirable to encourage user autonomy in decision making, it is equally imperative to assure appropriate quality of the decisions made. In this paper, interdependencies within the organizational decision making activity are used to identify some generic categories of support required to maintain consistency and quality in end-user model construction. Five distinct cases of model building activity in an end-user computing environment are described, and support for ensuring consistency in user constructed models for two of these cases is discussed. An object-oriented knowledge-based system that provides such support has been developed; the architecture of this system is described. System implementation and interaction is illustrated with the aid of a financial budgeting application.

Keywords: End-user computing; Model validation; Knowledge-based system; Object-oriented system

1. Introduction

The Pentagon estimates that, at current rates of demand for software, by the year 2010 the United States will require 60 million software professionals, approximately half the US labour force. With the increase in the number of software professionals not keeping pace with escalating demand, the corporate response has been to increasingly decentralize systems development activities to end-users, i.e., to actively encourage and facilitate the phenomenon of end-user computing. Fuelled by the availability of high-level, user-friendly software, firms are witnessing a significant growth in the development of information technology applications for supporting individual decisions by non-traditional developers such as managers.

* Corresponding author. [email protected]. Phone: (315) 443-3526.

0167-9236/95/$09.50 © 1995 Elsevier Science B.V. All rights reserved. SSDI 0167-9236(94)90039-5

The proliferation of end-user computing (EUC) has raised many concerns about the effective management of this process (Rockart and Flannery, 1983; Alavi et al., 1987; Munro et al., 1987; Brown and Bostrom, 1989). Prior studies have examined (both empirically and conceptually) many different aspects of EUC, including computing policies (Galletta and Hufnagel, 1992), structural alternatives for managing EUC (Brown and Bostrom, 1989), controls (Munro et al., 1987), support through information centres and other organizational initiatives (White and Christy, 1987), etc. A common thread underlying these studies has been the macro-level control of EUC; they all attempt to understand the phenomenon and prescribe methods to manage its organizational dissemination effectively. However, while broader organizational initiatives, such as the establishment of guidelines for hardware/software acquisitions in order to manage training and maintenance support effectively, are no doubt essential to the control of EUC, some micro-level control issues are of equal importance. Currently there are limited effective means to ensure that the actual processing done by individual managers, using these end-user tools, is correct. In this paper we focus on this specific micro-level aspect of end-user computing: the utilization of data and models by end-users as they build decision support systems (DSS) to support individual decision making.

The integrity of data accessed by many individuals for distributed processing is typically the responsibility of a data base administrator and DBMS software. However, such responsibilities do not address issues related to the accurate use of these data in models developed by end users. A dangerous consequence of decisions made using models which make wrong assumptions about key elements, such as organizational operations, environmental and temporal effects on decision variables, or the relevance of algorithms to certain problem conditions, is that they may yield unrealistic results. When the results of such models update corporate data bases, most procedures in force today cannot verify the validity of these data beyond simple format, range, and reasonableness checks. It is thus incumbent upon the user to ensure that the models and data used are valid and the decision outcomes realistic.

Some attempts have been made to ensure that a user considers the various assumptions underlying the algorithms used in problem solving, through automatic formulation and/or selection of such algorithms based on certain problem characteristics (Ghosh and Agarwal, 1991; Hong and Vogel, 1991; Krishnan, 1990; Krishnan, 1991; Murphy et al., 1992). Such approaches have limited domains of applicability; they are appropriate when the modelling formulation requires the use of algorithms such as forecasting and mathematical programming techniques. A significant proportion of modelling at the EUC level, however, is oriented towards spreadsheet-type formulations, where the model comprises several discrete computational steps that are often problem dependent and do not conform to any well defined structure for performing validation.

In a recent study Silver (1991) proposes a 'decision guidance' framework to analyze support requirements when significant judgements are involved in the decision process and its inputs. He defines decision guidance as proactive support where a system enlightens or sways its users as they structure and execute their decision making processes. The extent of guidance a system can provide depends on the degree of autonomy the user has in constructing models and making judgements, as well as on the nature of the task itself. The greater the autonomy and the more complex a user's perceptions of the decision making task, the greater the need for guidance. The type of guidance provided can vary from prescribing and proscribing an approach to solve a problem, to simply providing relevant information.

The objective of this paper is to describe methods that provide the needed guidance at appropriate times to ensure that individual managers are made aware of information that might affect the construction and use of models in their decision process. An underlying assumption behind these methods is that it is desirable to provide the user with autonomy in model construction. Specifically, we limit our discussion of guidance support to three questions: What types of guidance are essential for model validation, i.e., are there any generic support needs for decision guidance? When, during the modelling activity, should such guidance be provided? How does one incorporate this guidance when DSSs are built?


The issue of the impact of such guidance on the decision process and/or outcome is also important, but will not be considered in this research.

The decision process framework developed by Thompson (1967) is used in section 2 to establish the types of guidance that are considered appropriate for self-validating models built by the user. The decision of when to provide such guidance is a function of what knowledge the system has about the modelling activity at a specific point in time. While dynamic tracking of user interaction and behaviour for providing context sensitive help at appropriate junctures is most desirable, we limit our discussion here to providing support at specific modelling activity steps (a form of deliberate decision guidance). Section 3 provides a typology for classifying the model building activity so that we can isolate those modelling steps where validation support is appropriate. Section 4 discusses a knowledge-based approach to implement this support during the model building activity of a user. Note that the support provided assists a manager in executing the decision process (as opposed to structuring it) by providing relevant information (rather than suggesting a particular action) in a predefined (as opposed to dynamic or participative) mode of operation. Section 5 illustrates the use of this support for a financial budgeting application and compares our approach to previous work in model management. The final section provides some concluding comments and directions for future research.

2. Establishing support features for model validation

Model building is recognized as one of the three components in the design of DSS, the other two being dialogue management and data management (Sprague and Carlson, 1982). Although some research in DSS has focused on micro-level (end user) computing activities such as model formulation (Dolk and Konsynski, 1984; Murphy et al., 1992) and model construction (Krishnan, 1991), not much has been written about model validation, even though it has been discussed in the literature as an important component for ensuring overall consistency in the decision making process. Certain micro-level verification at the statement level has been proposed (Blanning, 1985), but such verification is cumbersome and ignores the issue of the decision maker's autonomy.

One may argue that micro-level validation of a user's model is a non-issue, as the use of data and the formulation of decision models, computer based or not, is the responsibility of individual managers. Since one's job performance is based on the decisions made, there is sufficient incentive for a manager to seek out the best and most accurate information possible to arrive at a decision. Several factors, however, make this claim somewhat suspect.

First, the perception of validity that comes with computer based information may inhibit a manager from continuously questioning the model's underlying assumptions, especially when the judgements made in constructing the model are temporally sensitive. Second, the speed with which information is transmitted and communicated to others via networks makes it easy to propagate an incorrect decision faster, even if the error is later recognized and corrected. Third, the sheer volume of information being made available to end users to support decisions can make it difficult to track and challenge the assumptions and judgements that went into model construction. This situation is further aggravated when the model user is often not the model builder. Finally, even if the ultimate responsibility for the decision rests with the manager, it is often difficult to isolate the source of a bad decision, due to the interdependent nature of many of today's complex decisions and the multiple sources of data used to support them. Under these conditions it is easy to blame the system for improper decisions, rather than the individual involved in making them.

These difficulties should not lead one, however, to resort to centralized decision making, where a single model or a group of models at the corporate level supports all the decision processes, as this detracts from the autonomy of the individual decision maker. What is required is a guidance or support mechanism that provides managers with relevant information and makes them aware of the nuances of the model built and the assumptions and judgements made about the data used in model execution, so that these are taken into consideration each time a decision is made. This type of guidance is critical specifically in situations such as budgeting and resource allocation, where the data generated by one individual has a significant and immediate impact on others. In a recent article Senge and Sterman (1992) evocatively characterize this problem in the following statement: "local decision making and individual autonomy lead to management anarchy unless managers account for the interconnections and long-term side effects of their local decisions". Although the group decision support system literature addresses the issue of providing information for member interaction, assumption surfacing, etc. (Gray and Nunamaker, 1989), its focus is more geared to resolving conflicts and reaching a consensus, and not on ensuring that the models used by individual managers to arrive at their decisions are valid.

Assuming that support for model validation is to occur at the individual decision level without significantly affecting the autonomy a user enjoys in making decisions, our objective is to determine the type of information that can support such validation. This may include information such as assumptions made by others in similar situations, potential error propagation from the inputs used if the assumptions and judgments are not valid, the clarification of concepts that have multiple meanings, etc. The underlying notion behind such support is to reduce the distance between an individual's mental model of a decision situation

Table 1 Dependency categories and information support to manage reciprocal interdependency

Pooled interdependency: Each unit can perform its own activities without regard to the other units and yet makes a contribution towards the organizational objective. Each unit is, in turn, supported by the organization.

Sequential interdependency: Each unit, while maintaining pooled interdependency to achieve overall organizational objectives, may also depend on other units to complete their normal operations. The degree of one unit's dependence on other units varies significantly among units.

Reciprocal interdependency: Each unit, while remaining pooled and sequentially dependent, may be recursively dependent, i.e., the output of a unit X may become the input of another unit Y, the output of Y may be the input for unit Z, and finally, the output of Z may be the input for X. In this case each unit has to constantly adjust to other units' activities, making the dependency relatively complex and dynamic.

Information support for reciprocal interdependency:
- Is the individual unit information lexically correct, i.e., are the variables used known to the organization and do they convey the same meaning to all involved? For example, the variable 'sales' may connote 'unit sales' to production and 'dollar sales' to accounting.
- Is the individual unit information logically correct, i.e., does it make sense for a given unit to use certain information in the context of a decision making process? For example, is it appropriate to use detailed sales data when developing long term financial plans? Is the use of certain relationships, such as accounting identities, consistent with normal practice? For example, did a unit use both earnings before tax and tax to derive earnings after tax?
- Is there consistency between functional and organizational policies, i.e., if the organization has made certain decisions on goals, strategies, and priorities, is this information available to all concerned? For example, if the organization's strategy is to increase market share by cutting prices, then all units need to be informed of this in order to synchronize their activities with those of the corporation: marketing may use mass advertising rather than selective magazine advertising, production may produce more items for stock rather than tighten inventories, and accounting may plan for an increased cash outflow in the short run.
- Is there an internal dependency between various organizational units? Is the system capable of providing the dependency information when the dependency is not direct, i.e., when the output of unit X is needed by unit Z via the input/output of unit Y?
- Is there a reciprocal dependency between units, i.e., is there a sharing of both inputs and outputs, with or without any intervening units?
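For illustration, the lexical check described in Table 1 can be sketched as a lookup against a corporate data dictionary. The sketch below is ours, not the paper's system; the dictionary entries and unit names are illustrative assumptions built around the 'sales' example above.

```python
# Hypothetical corporate data dictionary: each variable name maps to the
# meaning it carries for each organizational unit.
DATA_DICTIONARY = {
    "sales": {"production": "unit sales", "accounting": "dollar sales"},
    "inventory": {"production": "units on hand", "accounting": "inventory value"},
}

def lexical_check(variable, unit):
    """Return (ok, message). A variable is lexically suspect if it is unknown
    to the organization, has no agreed meaning for the unit, or carries
    different meanings across units."""
    if variable not in DATA_DICTIONARY:
        return False, f"'{variable}' is not defined in the corporate dictionary"
    meanings = DATA_DICTIONARY[variable]
    if unit not in meanings:
        return False, f"'{variable}' has no agreed meaning for unit '{unit}'"
    if len(set(meanings.values())) > 1:
        return True, (f"warning: '{variable}' means '{meanings[unit]}' to {unit}, "
                      f"but other units interpret it differently")
    return True, "ok"
```

A model using 'sales' would thus pass the check but trigger a warning, since production and accounting attach different meanings to the name.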


and the real-world phenomenon being analyzed, and to do so in an unobtrusive manner, by playing the role of an advisor to the decision maker.

The information described above can be provided upon request, but that assumes an individual user knows explicitly what information to seek at all times. Often users may not be aware of the information that is available, how such information might impact their decision process, and what underlying assumptions are made in generating that information. While the support provided to end-users has often been limited to the information they need to make a decision effectively, we cannot ignore the fact that this information is communicated to others who, in turn, treat it as their input. Thus, validation of models used at a local level has to consider decision interdependencies.

Thompson (1967), in his study of organizations, describes three basic levels of interdependency (pooled, sequential and reciprocal) that may exist among various components of an organization and discusses how these dependencies require different types of information. Table 1 briefly describes each of these dependencies and discusses the information support needed in the case of reciprocal interdependency, as it subsumes the features of the other two. Table 2 provides a summary of the generic information support needed for decision guidance or model validation under conditions of reciprocal interdependency.
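Thompson's reciprocal case (output of X feeds Y, Y feeds Z, Z feeds back to X) can be recognized mechanically as a cycle in a directed graph of which unit's output feeds which unit's input. A minimal sketch, with unit names assumed for illustration:

```python
def is_reciprocal(flows):
    """flows: dict mapping a unit to the set of units consuming its output.
    Reciprocal interdependency exists when the flow graph contains a cycle,
    e.g. X -> Y -> Z -> X; a pure chain is only sequentially dependent."""
    visited, on_stack = set(), set()

    def dfs(unit):
        # Depth-first search; a unit revisited while still on the
        # recursion stack closes a cycle.
        visited.add(unit)
        on_stack.add(unit)
        for consumer in flows.get(unit, ()):
            if consumer in on_stack:
                return True
            if consumer not in visited and dfs(consumer):
                return True
        on_stack.discard(unit)
        return False

    return any(dfs(unit) for unit in flows if unit not in visited)
```

Under this encoding, `{"X": {"Y"}, "Y": {"Z"}, "Z": {"X"}}` is reciprocal, while the chain `{"X": {"Y"}, "Y": {"Z"}}` is merely sequential.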

Typically a large portion of this information is not easily available to an end user as s/he embarks on building a model. While it would be desirable to have a generalized model management system (Applegate et al., 1986; Konsynski, 1981) that can coordinate the various activities of end users and provide all the needed support, the current state of the art in model management makes this a normative rather than a practical goal. The difficulty in accomplishing this objective is similar to that faced by organizations when a corporate-wide data model has to be designed. The problem in both cases is one of insufficient information on the organization's requirements for building models or accessing data. Within the context of building a global data model, a reasonable approach is to build an enterprise model incrementally through view integration as individual applications are developed. Similarly, a generalized model management system may be developed over time as modelling knowledge is incrementally added to meet different user needs. In the next section we classify the model building activity of end-users into five distinct scenarios and relate the support needs described above to these scenarios.

Table 2
Generic information needs for model validation

Type of support: Description
Assumption: Information about tools and their underlying assumptions
Syntax: Lexically correct use of inputs and outputs (are the variables used by individuals the same as defined by the corporation?)
Semantic: Logically correct use of inputs and outputs (is the use of variables appropriate for the decision context? e.g., can we use detailed sales data by product when planning in aggregate?)
Identity: Consistent use of identities based on normal practice
Organizational context: Consistent use of policies, as prescribed by the organization or some functional unit
Dependency information: Degree of dependency that exists between those that use this model's output and vice versa
Dependency execution: Extent of dependency: sequentially or reciprocally dependent, and how the dependencies manifest themselves in operations
Problem processing: Definition of models based on the problem articulated or query posed by the user
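The identity support in Table 2 amounts to verifying a model's derived figures against the identities the organization prescribes, such as the earnings-after-tax identity mentioned in Table 1. A minimal sketch of such a check; the variable names and tolerance are our illustrative assumptions:

```python
def check_earnings_identity(model_values, tolerance=1e-6):
    """Verify the accounting identity:
        earnings after tax = earnings before tax - tax.
    model_values is a dict of figures reported by the user's model;
    a small tolerance absorbs rounding in the user's computations."""
    ebt = model_values["earnings_before_tax"]
    tax = model_values["tax"]
    eat = model_values["earnings_after_tax"]
    return abs(eat - (ebt - tax)) <= tolerance
```

A validation system would hold one such check per corporate identity and flag any model whose figures violate them.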

3. Validation support for alternate end-user computing environments

In general, support at an individual decision level can manifest itself in a range of services, extending from providing simple access to predefined data and models/tools to constructing a sequence of models to satisfy a user need. Several scenarios are presented below to illustrate the different types of model building activities users may engage in and the support/guidance a system can provide in each of these cases.

End user model construction typically includes the definition of one or more of the following components: input variables whose values come from external sources (I), input variables whose values are internally defined and under the control of the user (endogenous/parametric input (P)), model formulation (M) that relates all the input variables to output variables (O), and specialized algorithmic procedures or tools (T) that may be needed to execute the user's model and generate outputs. An end user may need information on any or all of these components depending on the task being performed. Validation complexity is directly related to the extent of control being exercised by a user over the five components. For example, validation is relatively easy if the system constructs or extracts model(s) based on a user query, as it can use known, established and validated models that are stored

Table 3
Examples describing five distinct cases of model use

Case (a): Sales of bobsleds by Toyco in 1983?

Case (b): Sales of bobsleds by Toyco in 1983? Price is $12.00. Change in price? Credit ratio (credit to cash sales) is 0.5. Change?

Case (c): Sales of bobsleds by Toyco in 1983? Price is $12.00. Change in price? Credit ratio (credit to cash sales) is 0.5. Change? CGS = forecast(sales). Choose the appropriate option for forecast from: REGRESSion; EXPonential SMOOTHing.

Case (d): Profit model
REM input (system provided) definition
Sales (1987) = derived by MMS
Interest (1987) = derived by MMS
REM parameter input definition
cs factor = 0.45
adv factor = 0.12
REM model definition
CGS = cs factor * sales
Operating expense = REGRESS (sales)
Promotion expense = adv factor * sales
Profits = sales - CGS - promotion expense - interest - operating expense
PS ratio = profits / sales

Case (e): Profit model
Sales (1987) = data from data base
Interest (1987) = data from data base
...
Operating expense = REGRESS (sales)
...
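The Case (d) profit model in Table 3 can be read as a small computational script. The sketch below mirrors those statements in Python; the sales and interest figures, and the stand-in for the REGRESS tool (a fixed linear estimate), are assumptions for illustration, since in the paper these would be supplied by the model management system and a real regression tool.

```python
# Inputs the system would derive (values assumed here for illustration)
sales = 1000.0        # Sales (1987), derived by MMS
interest = 50.0       # Interest (1987), derived by MMS

# Parametric inputs, under the user's control
cs_factor = 0.45
adv_factor = 0.12

def regress(sales):
    # Hypothetical stand-in for the REGRESS tool; the real tool would
    # fit operating expense to historical sales data.
    return 0.10 * sales

# Model definition, statement for statement as in Table 3, Case (d)
cgs = cs_factor * sales
operating_expense = regress(sales)
promotion_expense = adv_factor * sales
profits = sales - cgs - promotion_expense - interest - operating_expense
ps_ratio = profits / sales
```

With the assumed figures, profits come to 280.0 and the PS ratio to 0.28; the point is only that each spreadsheet statement is a discrete computational step a validation system would have to reason about.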


in the corporate knowledge base. However, as the user takes control of a larger number of these components, validation becomes more challenging, since the system has limited knowledge about the user's intent. Five different model building environments are presented below in order to understand the breadth of support requirements for validation.

Case 1. The user provides either the name of a predefined model or defines a query. The system validates the user's query (or output desired) and defines the models and data needed to generate the desired output (see Table 3, Case (a)). Output query validation has been the focus of much research, as it dominates the rest of the processing. Query by Example (Zloof, 1975), natural language interfaces (Vassiliou et al., 1983), and semantic nets (Elam et al., 1980) are some methods used to reduce the mismatch between the user's real-world data requirements and the formulation of the query. Model definition based on a validated user query is also the subject of much research. Defining and sequencing models iteratively based on a query, until such a process results in data from a data base, has been pursued using semantic nets (Bonczek et al., 1981; Elam et al., 1980), predicate calculus (Dutta and Basu, 1984; Bonczek et al., 1984), frames (Dolk and Konsynski, 1984), and relational data bases (Blanning, 1987).

Case 2. This is similar to Case 1, except that the user is allowed to download the model data and perform sensitivity analysis on certain parametric data associated with that model. Validation here is concerned with ensuring that the parametric data input by the user for sensitivity analysis is within bounds (see Table 3, Case (b)).
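Validation in Case 2 thus reduces to a bounds check on user-supplied parametric data. A minimal sketch; the parameter names echo Table 3, Case (b), but the bounds themselves are illustrative assumptions about what a corporate knowledge base might record:

```python
# Hypothetical bounds the knowledge base might hold for parameters
# a user may vary during sensitivity analysis.
PARAMETER_BOUNDS = {
    "price": (5.00, 25.00),        # dollars
    "credit_ratio": (0.0, 1.0),    # credit to cash sales
}

def validate_parameter(name, value):
    """Accept a user-supplied value for sensitivity analysis only if it
    falls within the bounds recorded for that parameter."""
    low, high = PARAMETER_BOUNDS[name]
    return low <= value <= high
```

A price of $12.00 or a credit ratio of 0.5 would pass; a credit ratio of 1.5 would be rejected before the sensitivity run.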

Case 3. In this case, the user has the option to change the tools used to execute the model if the assumptions inherent in the model change over time, making the use of existing tools inappropriate. Here the scope of validation extends to the tool identified for incorporation, specifically, the assumptions inherent in the use of this tool and its relevance to the purpose underlying model construction (Table 3, Case (c)).

Steps controlled by the system versus the user, for the components Inputs (I), Models (M), Tools (T), Parameters (P), and Outputs (O); in the original figure (*) marks validation and (@) marks definition by the Model Validation System (MVS):

Case 1: User defines O; MVS validates O and defines P, T, M, I.
Case 2: User defines O and suggests values for P; MVS validates O and P, and defines T, M, I.
Case 3: User defines O and suggests values for P and T; MVS validates O, P, and T, and defines M, I.
Case 4: User defines M and T; MVS validates M and T and defines I (this may require definition of some pre-defined models).
Case 5: User defines I, M, and T; MVS validates I, M, and T.

Note: Once the user defines M, the validation of O and P will not become an issue, as these will be a function of the user defined model.

Fig. 1. User/system responsibilities in different case scenarios.


Case 4. Here both the model and the tools needed are defined by the user, and support is limited to providing the needed input from the corporate data base or from model definitions (if these data are not available from storage and have to be computed). Validation here requires testing the user's model and the tools used (Table 3, Case (d)); there is a paucity of research that suggests how this may be accomplished without compromising the decision maker's autonomy.

Case 5. Here the user defines the model, its input and output, and the tool(s) to be used to complete the task. The support here is limited to general information on models of this type and any information on the assumptions made about the data when the particular tool is used. See Table 3, Case (e).

These five cases are summarized in Fig. 1. The environment represented by Cases 4 and 5 (one that is becoming more pervasive with the proliferation of personal computers and high-end modelling software such as spreadsheets) necessitates a higher level of model validation support, as the user takes more control of constructing the model. On the other hand, in Cases 1 and 2, the system does most of the model construction to satisfy a user's query, and very little validation is needed, except for the reasonableness of the output request or query.

Based upon the nature of the model building activity as discussed above, the support needs for decision guidance vary across the five cases. A mapping between the generic support needs identified in section 2 and the five cases is provided in Fig. 2. The focus of the research here is to provide validation support for Cases 4 and 5 through a knowledge-based support system. The development of such a system, which includes assumption, syntactic, semantic, identity and organizational context support, is described next.

4. A knowledge-based architecture for providing validation support

The knowledge base (KB) needed to support model activities in case scenarios 4 and 5 is discussed first, and the implementation scenario is discussed later. This knowledge base may be accessed either directly or indirectly by the user to ensure that the decision model is valid.

[Fig. 2 layout] System support mapped to case scenarios:

  Model Definition - Problem Processing ($)                Cases 1 and 2
  Model Definition - Dependency Information (+)            Case 3
  Model Validation - Assumption                            Cases 4 and 5
  Model Validation - Syntax/Semantic                       Cases 4 and 5
  Model Validation - Identity/Organizational Context (*)   Cases 4 and 5

(*) Provide information on other models that are 'similar' in nature so the user can relate his model with those of others (this requires the storage of each user developed model and the definition of criteria for computing a similarity index so that only relevant models are retrieved).

(+) Provide information on other models that need to be executed before the user's model if the input used has to reflect a real time operating environment (this can be used in conjunction with group decision making, where the link has to bring in other users for active participation, or in a static execution of models that are predefined and stored in the model base).

($) Provide support in defining the model itself that needs to be executed to answer a user query (this requires a comprehensive definition of model inputs and outputs and model execution in accordance with some a priori defined sequence).

Fig. 2. Model validation support in each case scenario.

Page 9: Knowledge-based model validation support for end-user computing environments

R. Agarwal et al. / Decision Support Systems 15 (1995) 1-18 9

4.1. The knowledge base

Assumption support for (T): It is incumbent upon the user to find out if the assumptions underlying the use of certain algorithmic tools are satisfied within the problem that is being modeled. For example, while some spreadsheet software allows users to call a tool/algorithm such as regress (for regression) or optimize (for running an optimization routine), it does not necessarily check that the data being acted upon meet the criteria necessary for these tools to apply. The following knowledge about tools is stored for user access: assumptions, applicable situations, and related tools (i.e., sub- or super-categories of tools with similar characteristics).

[Fig. 3 layout] CORPORATE KNOWLEDGE BASE, comprising three object classes:

TOOL (object)
  attributes: assumptions; applicable_situations; related_tools

VARIABLE (object)
  attributes: long_names; synonyms; related_information; computations/guidelines; (uses_me), (generates_me) (dynamically instantiated)
  methods: used_where; generated_by_whom

MODEL (object)
  attributes: input_variables; output_variables; plan_type; functional_area; time_horizon; contact_person
  methods: needed_by_whom; output_similar_to; needs_who; input_similar_to; in_out_similar_to

Fig. 3. Knowledge base architecture.
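As a concrete sketch, this tool knowledge maps naturally onto simple records keyed by tool name. The Python below is illustrative only (the `Tool` record, the `TOOL_KB` dictionary, and the `advise` helper are hypothetical names, not part of the paper's ADS/PC implementation); the regression entry paraphrases the assumptions shown in Appendix A:

```python
from dataclasses import dataclass, field

@dataclass
class Tool:
    """Illustrative record mirroring the TOOL object class."""
    name: str
    assumptions: list = field(default_factory=list)
    applicable_situations: str = ""
    related_tools: list = field(default_factory=list)

# Hypothetical knowledge base of tools, keyed by tool name.
TOOL_KB = {
    "regression": Tool(
        name="regression",
        assumptions=[
            "linear relationship between the independent and dependent variables",
            "the past is a good indicator of the future",
        ],
        applicable_situations="forecasting sales from price, economic and demographic factors",
        related_tools=["time series", "exponential smoothing"],
    ),
}

def advise(tool_name):
    """Surface the stored assumptions for a tool, or None if the tool is unknown.

    The check itself is left to the user: the system only advises."""
    tool = TOOL_KB.get(tool_name)
    return tool.assumptions if tool else None
```

A call such as `advise("regression")` would simply list the stored assumptions for the user to verify against the problem being modeled, consistent with the advisory role described above.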

Syntax and Semantic support of (I,P,O): It is important that the user be made aware of the syntactic and semantic meaning associated with the model variables within the organizational context. For example, 'cost of sales' and 'cost of goods sold' may have different meanings associated with them depending on the context within which they are used, and the same applies when the terms 'standard costs' and 'direct costs' are used. The knowledge stored about variables includes long names, synonyms, and any other related information.
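A minimal sketch of this synonym support, assuming variable records of the kind shown in Fig. 3 (the `VARIABLE_KB` contents and the `resolve` helper are illustrative; the 'eat' entry follows Appendix A):

```python
# Hypothetical variable records: long name, synonyms, and related information.
VARIABLE_KB = {
    "cgs": {
        "long_name": "cost of goods sold",
        "synonyms": ["cost of sales"],
        "related_information": "meaning may depend on the costing context in use",
    },
    "eat": {
        "long_name": "earnings after tax",
        "synonyms": ["profits after tax", "net income after tax"],
        "related_information": "added to retained earnings on the balance sheet",
    },
}

def resolve(term):
    """Map a user-entered name or synonym to its canonical variable code."""
    term = term.lower().strip()
    for code, record in VARIABLE_KB.items():
        if term == code or term == record["long_name"] or term in record["synonyms"]:
            return code
    return None  # unknown term: the user must define it locally
```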

Identity Support (with regard to M): Computational information associated with certain variables (primarily accounting) should be defined consistently. For example, 'earnings after tax' is computed using 'earnings before tax' and 'tax'. Similarly, if a user is estimating financial ratios such as ROI, then questions related to how one computes ROI (are we to use income before tax or income after tax?) or how one arrives at funds flow are relevant. Thus, knowledge about such computational relationships or guidelines should be made available for users' verification before they use them in their models.
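One way such identity support might be sketched: store each corporate identity as a formula over variable codes, and report the stored version whenever a user's local definition disagrees, leaving the final choice to the user. The `IDENTITIES` table condenses a few identities from Table 5; `check_definition` is a hypothetical helper:

```python
# Corporate computational identities, as (operand, operator, operand) triples.
IDENTITIES = {
    "EAT": ("EBT", "-", "TE"),  # earnings after tax = earnings before tax - tax expense
    "ROI": ("EBT", "/", "A"),   # return on investment uses income BEFORE tax here
    "DR":  ("LD", "/", "A"),    # debt ratio = long-term debt / assets
}

def check_definition(var, user_formula):
    """Compare a user's definition of `var` with the corporate identity.

    Returns a status plus the stored identity so the user can verify it
    (e.g., whether ROI should use income before or after tax)."""
    corporate = IDENTITIES.get(var)
    if corporate is None:
        return ("unknown", None)
    status = "consistent" if tuple(user_formula) == corporate else "inconsistent"
    return (status, corporate)
```

The advisory character of the system is preserved: an "inconsistent" result only surfaces the corporate guideline, it does not overwrite the user's model.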

Context Support (with regard to M): In addition, if a user defines a model that has 'profit' as the output, then the system should list all those models that have the same output but different inputs, along with any other information. Also, if a user wishes to estimate values for 'advertising expense', it is useful to know information about other models that are involved in estimating this variable, as well as information about other variables that are defined within this model and any assumptions made in their estimation by these models. For example, a user who is estimating 'advertising expense' may benefit from models that are used to establish corporate marketing strategy, which may emphasize 'mass marketing'. Such related information may help the user stay consistent with corporate strategies. Knowledge stored about models includes input variables, output variables, and contextual information such as planning horizon, functional area, contact person, etc.

The knowledge base, thus, includes three object classes: TOOLS, VARIABLES, and MODELS. Each of these objects has associated with it certain attributes and methods. See Fig. 3 for a schematic illustration of the KB architecture.

The VARIABLE class has associated with it two additional attributes, uses_me and generates_me, along with others such as long_names, synonyms and related_information. The values for these two attributes are dynamically generated by the system by invoking two methods: used_where and generated_by_whom. These methods send messages to the MODEL class and determine the list of models that will use this variable as their input, and estimate its value as a part of their output. This localizes any changes in the model definition to the MODEL class and ensures that the information accessed is always current.
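The effect of this message passing can be sketched as derived attributes that are recomputed from the current model definitions on every access, so nothing about model membership is cached on the variable itself. The model registry below is invented for illustration; in the paper these are the used_where and generated_by_whom methods of the VARIABLE class:

```python
# Hypothetical MODEL-class registry: each model declares its input/output sets.
MODELS = {
    "sales budget": {"inputs": {"sc"}, "outputs": {"sr", "cgs"}},
    "funds budget": {"inputs": {"eat"}, "outputs": {"div"}},
    "long term corporate planning": {"inputs": {"sr"}, "outputs": {"eat"}},
}

def used_where(var):
    """Derive the uses_me attribute: models taking `var` as input."""
    return sorted(name for name, m in MODELS.items() if var in m["inputs"])

def generated_by_whom(var):
    """Derive the generates_me attribute: models producing `var` as output."""
    return sorted(name for name, m in MODELS.items() if var in m["outputs"])
```

Because both lists are derived at call time, editing a model's definition immediately changes what the variable reports, which is the localization property described above.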

In the knowledge base, the TOOL class can be further subdivided into sub-classes such as forecasting, optimization, etc., and these, in turn, into more specialized sub-classes (e.g., forecasting into time series, regression, etc., and optimization into linear, non-linear, integer, etc.). Since the focus here is on contextual information related to a user's definition of input and output variables, the TOOLS class is kept relatively simple. Future extensions to the implementation will enhance the TOOL class.

Whenever a user identifies input/output variables that are to be used in his/her model, the KB is accessed to list a set of models that have certain contextual similarity. This similarity is established by answering the following questions:
(1) how does this model's output affect other models/users in the system? (method: needed_by_whom)
(2) what other models generate values for the same output set? (method: output_similar_to)
(3) which models generate the input requested by the user? (method: needs_who)
(4) what other models use the same input set? (method: input_similar_to)
(5) what models use similar input and output sets? (method: in_out_similar_to)
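Under the set-intersection reading suggested by the text, the five questions can be sketched as follows (the model contents and all names other than the five method names are invented for illustration):

```python
# Hypothetical stored models with their input/output variable sets.
MODELS = {
    "sales budget": {"inputs": {"sc"}, "outputs": {"sr", "ar", "ade"}},
    "funds budget": {"inputs": {"eat", "sr"}, "outputs": {"div"}},
    "production budget": {"inputs": {"sr"}, "outputs": {"sc"}},
}

def related_models(user_inputs, user_outputs):
    """Answer the five contextual-similarity questions by intersecting the
    user's input/output sets with those of each stored model."""
    ui, uo = set(user_inputs), set(user_outputs)
    return {
        "needed_by_whom":    [n for n, m in MODELS.items() if uo & m["inputs"]],
        "output_similar_to": [n for n, m in MODELS.items() if uo & m["outputs"]],
        "needs_who":         [n for n, m in MODELS.items() if ui & m["outputs"]],
        "input_similar_to":  [n for n, m in MODELS.items() if ui & m["inputs"]],
        "in_out_similar_to": [n for n, m in MODELS.items()
                              if ui & m["inputs"] and uo & m["outputs"]],
    }
```

For the running example (SC as input; SR and EAT as outputs), `related_models({"sc"}, {"sr", "eat"})` would report, among other things, the sales budget as a model with similar input.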

The objective is to let the user seek information about these models and assess their relevance to the decision situation at hand. No attempt is made here to select a single, most closely matched model to that defined by the user, even though such a matching is feasible (e.g., by finding models that have both input and output similarity to the user's model). The intersection of the input/output variables of the user's model with those that are stored in the knowledge base is used to answer these questions, and these are generated dynamically by invoking appropriate methods in the MODEL class. The attributes of the individual models can be extracted directly from the knowledge base (as will be seen in the next section). In addition to obtaining related models this way (through user and knowledge base model intersection), one can directly ask for a list of all models in the knowledge base and select those that appear to be appropriate.

Table 4
Balance sheet and income statement variables

Assets:
  A    assets
  CA   current assets
  C    cash
  MS   marketable securities
  AR   accounts receivable
  PPE  prepaid expense
  INV  inventory
  RI   raw material inventory
  SI   supplies inventory
  WP   work in process inventory
  FI   finished goods inventory
  FA   fixed assets
  PE   plant and equipment
  CDE  cumulative depreciation

Liabilities and equity:
  LE   liabilities and equity
  L    liabilities
  CL   current liabilities
  AP   accounts payable
  TP   tax payable
  IP   interest payable
  STD  short-term debt
  LD   long-term debt
  E    equity
  CE   common equity
  RE   retained earnings

Income statement:
  EAT  earnings after tax
  EBT  earnings before tax
  OPI  operating income
  SR   sales revenue
  CGS  cost of goods sold
  DE   depreciation expense
  OR   other revenue
  IE   interest expense
  OFE  office expense
  ADE  administrative expense
  TE   tax expense
  DIV  dividends

4.2. System implementation scenario

There are two possible ways a user's model can be validated indirectly. The user can define a model in a modelling environment such as LOTUS 1-2-3 and, before its execution, the validation module can be called to ascertain model validity by answering some of the questions posed earlier. On the other hand, a user can enter the validation module directly and look for appropriate information on tools and variables before entering the modelling environment to define his/her own models. While the former scenario is most desirable due to its unobtrusive nature and minimal interference with the modelling environment of the user, the current implementation operates under the second scenario. A user enters the 'knowledge base environment' to evaluate the relevance of his/her model definition, and the support system uses the object-oriented environment of the ADS/PC software package. The following section illustrates the use of the knowledge base for a financial budgeting scenario.

Table 5
Some computational identities/relationships

  GI   gross income          = sales revenue (SR) - cost of goods sold (CGS)
  EBT  earnings before tax   = gross income (GI) - administrative expense (ADE) - interest expense (IE) - depreciation expense (DE)
  EAT  earnings after tax    = earnings before tax (EBT) - tax expense (TE)
  ROI  return on investment  = earnings before tax (EBT) / assets (A)
  DR   debt ratio            = long-term debt (LD) / assets (A)

[Table 6: models stored in the knowledge base]

5. Validation support in financial budgeting - An illustration

Financial budgeting requires the iterative and coordinated application of varying functional expertise to arrive at a single budget. Any system that supports an individual decision maker in this situation has to consider the compatibility of assumptions made by him/her against those that are either pre-established by the corporation or made by other models of a similar type defined in the past. Such support has to minimally impact the autonomy of the individual decision maker. To facilitate this support, the knowledge base used here contains the definition of all balance sheet and income statement accounts (see Table 4), many financial identities and performance measures (see Table 5), and a variety of models that have been stored for appropriate reference (see Table 6).

A user enters the model validation system (i.e., the validation component of the model management system). A typical interaction with the system is described in Appendix A. The system provides four options: variables, tools, model kb (models in the knowledge base) and model user (which a user defines interactively for retrieving context sensitive information). Assume that a user wants to define a model that uses variables such as 'standard cost' (SC) as its input and estimates 'sales revenue' (SR) and 'earnings after tax' (EAT). The user may select EAT for additional information on how it is defined and computed. The system provides this information using its own attribute values and other data from the MODEL class, by invoking appropriate methods in this class.

The user wants to use a forecasting algorithm to estimate sales and needs information on the underlying assumptions associated with such tools before calling upon any of these in model construction. The scenario displays information on a particular forecasting algorithm: 'simple regression'.

There are many variables a user may include in his/her model locally, such as advertising expense, office expense, other expense, etc. Before any decision is made on the relevance of some of these variables, the user may want to inquire about models that have been defined in the knowledge base that have certain similarity to the model being constructed. The related model information for the user's model (with SC as input, and SR and EAT as outputs) is shown in the scenario described. The contact person is provided in case more detailed information is needed for analysis but is not accessible through the knowledge base (since some of the corporate knowledge may be sensitive and cannot be provided for general access).

Note that the information provided is primarily advisory in nature, and a user may alter his/her model based on this information before entering the modelling environment for constructing the model. This user defined model and any variables locally defined by the user can be added to the knowledge base after they undergo certain organizational validation and verification checks. This is similar to the practice used by data base administrators when revising corporate data base definitions. Such iterative development of a model base can eventually lead to the generation of a corporate model base.

6. Discussion

The validation proposed in this paper is advisory in nature, and ultimately it is the user's responsibility to ensure that the models selected


for display are examined carefully for their relevance. Some prior research in model and associated data validation has characterized this task as a part of model integration, thus supporting Cases 1-3 discussed in section 3. However, as we discuss in this section, some of the information used for unobtrusive validation can easily be used for integration, if the role of the system changes from one of an 'advisor' to that of a 'director'.

Prior research has addressed data validity within a model management environment in order to address incompatibilities in variable names (e.g., unit of sales, sales units, unit sales), dimensionality (e.g., singular, vector), units of measure (e.g., dollars, thousands of dollars), data types (e.g., numeric, alphameric), and granularity (e.g., quarterly, monthly) (Batini et al., 1988; Bradley and Clemence, 1988; Bhargava et al., 1991). In addition, validity of data from relative (does the data comply with user needs?) and absolute (does it reflect reality?) perspectives has also been proposed (Agmon and Ahituv, 1987). Such validation is critical when a system is intended to integrate models automatically to support a user inquiry, but its value is limited in the framework proposed here, as only the variable names are utilized for retrieving appropriate models. The other data properties can be stored as attributes of the variable class, and a user can peruse such information in establishing their relevance to his/her decision making process. Some attributes, such as granularity, are used in the selection process, but at the model as opposed to the variable level.

It has been suggested that appropriate models may be selected based on a match between input and output variables. Liang (1988) proposed a graph theoretic approach to select models using input and output variables, which are internally networked based on the reasoning embedded in the model. Since the objective here is to retrieve appropriate models as opposed to integrating them, such reasoning is appropriately represented here either as a model attribute or as a sub-classification (e.g., different types of sales forecasting models can be represented as sub-classes under the sales model).

Courtney et al. (1987) propose a system to store user defined input/output relationships as causation trees in a semantic network and use the network to check for internal consistency between models defined by the same user over time. The framework proposed here does not preclude one from incorporating such internal consistency within models defined by the same user, as the retrieval now focuses on previous models defined by the same user. However, the consistency check has to be made by the user as opposed to the system.

Liang (1988) provides a knowledge based MMS framework for selecting and integrating models; the validity property stored in the knowledge base captures the model performance over time when appropriate integrity constraints (e.g., certain input and output characteristics) are satisfied. Further, it has been suggested that a self-evolving DSS (Liang and Jones, 1987) contain information about user profiles (to understand their modelling needs), default action rules (to execute a model under normal circumstances) and control mechanisms (to evaluate the model performance). These types of modelling properties can be stored as attributes of the model class and made available to the user during the 'advisory' session.

7. Conclusions and extensions

This paper discussed how increasing user control over model definition complicates the validation process, i.e., the correct use of corporate data and relationships. Given that this type of user control dominates EUC environments, five distinct cases of model definition and use were presented, and the two cases which offer the most challenge for validation were used for discussing potential validation support. The literature on decision process interdependencies helped identify the nature of 'guidance' needed under these cases. A knowledge-based support system was used to capture and make available such knowledge to the user prior to model construction. The features of such a support mechanism were illustrated using a financial budgeting scenario. The object-oriented architecture of the knowledge base provides for modular extensibility and also


the benefits typically associated with encapsulation, such as easier maintenance and reusability of knowledge.

The current implementation requires a user to enter the model validation system prior to embarking on the model construction phase, so as to ensure that all the definitions of variables/relationships are consistent with the corporate-wide knowledge. A natural extension is to move directly from the validation phase to the construction phase, with the model variables input during the validation phase forming the initial template for model construction. A more desirable extension is to enter the model construction environment (such as LOTUS) and, upon definition of the variables (row identifiers), access the validation support system for a variable and model check. This will reduce the need to enter two different environments for model construction. Linkages between Level 5 Object and EXCEL are under investigation for providing this type of 'transparent' validation support.

While the extension discussed raises challenges related to linking knowledge-based and model definition environments for dynamic validation of models in a supportive role, a more interesting research question is the impact of such validation on the effectiveness of the decisions made. A comparative evaluation of decisions under two different environments (with and without validation support) on measures such as accuracy in model definitions and the integration capability of local models for corporate-wide planning would be of significant interest. Future research could, thus, address the impact of such an 'unobtrusive' model validation support on individual as well as group decisions in applications such as financial planning and budgeting, and resource allocation.

Appendix A

System Interaction

System: Select one of the options: Variable, Tool, Model kb, Model user, Exit.

User: Variable

System: Enter one of the variables listed below for more information. Enter 'none' to quit.
  ade: advertising expense
  ap: accounts payable
  ...
  eat: earnings after tax
  sc: standard cost
  sr: sales revenue

User: eat

System: The variable is: eat
  The long name is: earnings after tax
  Synonyms: profits after tax, net income after tax
  Related information: this information is added to retained earnings of the balance sheet.
  Computations/guidelines: earnings after tax = earnings before tax - tax expense. The tax expense = earnings before tax * tax rate. The tax rate that is applicable is 48%.
  uses_me:
  generated_by_whom: long term corporate planning

System: Select one of the options: Variable, Tool, Model kb, Model user, Exit.

User: Tool

System: Enter one of the tools for more information. Enter 'none' to quit.
  regression: forecasting tool
  time series: forecasting tool
  exponential smoothing: forecasting tool
  linear programming: optimization tool

User: regression

System: The tool name is: regression
  Assumptions: One of the major assumptions is that there is a linear relationship between a set of independent variables and the dependent variable. It also assumes that the past is a good indicator of the future. Refer to manual ** for more information on when and how to apply this tool to a problem.
  Applicable situations: This is widely used to forecast sales based on price and other economic and demographic factors.
  Related tools: Time series tools are used when you are simply extrapolating past data to the future on a time dimension; simple regression is appropriate when you have a single independent variable, while multiple regression is used when you have many independent variables that have a cumulative effect on the dependent variable.

System: Select one of the options: Variable, Tool, Model kb, Model user, Exit.

User: Model kb

System: Enter one of the models listed below for more information. Enter 'none' to quit.
  1 purchase budget
  2 production budget
  3 sales budget
  ...
  11 advertising planning
  12 long term corporate planning

User: 3

System: Model name: sales budget
  Input variables: standard cost
  Output variables: sales revenue, accounts receivable, cash, advertising expense, finished goods inventory, cost of goods sold, unit sales, price
  Planning horizon: tactical planning
  Functional area: marketing
  Time horizon: monthly
  Contact person: John Smith, tel. no. 315-443-6703, fax: 315-443-5389

System: Select one of the options: Variable, Tool, Model kb, Model user, Exit.

User: Model user

System: Select the input variables in your model. After selecting all the variables, press enter again to proceed. If you do not select any variable and press enter, it is assumed that you have no input variables to select.
  ade: advertising expense
  ap: accounts payable
  ...
  eat: earnings after tax
  sc: standard cost
  sr: sales revenue

User: sc

System: Models that need your output: funds budget; corporate top down; tax record planning; long term corporate planning
  Models that are similar to your output: sales budget; long term corporate planning
  Models that you may need:
  Models with similar input: sales budget
  Models with both input and output similarity:
  You may select any of these models from the knowledge base to obtain more information.

System: Select one of the options: Variable, Tool, Model kb, Model user, Exit.

References

Agmon, N. and N. Ahituv, Assessing Data Reliability in an IS, Journal of Management Information Systems, Vol. 4, No. 2 (Fall 1987).

Alavi, M., R.R. Nelson, and I. Weiss, Strategies for End-User Computing: An Integrative Framework, Journal of Man- agement Information Systems, Vol. 4, No. 3 (Winter 1987- 88) pp. 28-49.

Applegate, L.M., G. Klein, B.R. Konsynski and J.F. Nunamaker, Model Management Systems: Design for Decision Support, Decision Support Systems, Vol. 2, No. 1 (1986) pp. 81-91.

Batini, C., M. Lenzerini and S. Navathe, A Comparative Analysis of Methodologies for Data Base Schema Integration, ACM Computing Surveys, Vol. 18, No. 4 (1988) pp. 232-364.

Bhargava, H.K., S. Kimbrough and R. Krishnan, Unique Names Violations: A Problem for Model Integration or You Say Tomato, I Say Tomahto, ORSA Journal on Computing, Vol. 3, No. 2 (1991) pp. 107-120.

Blanning, R.W., A Relational Framework for Assertion Management, Decision Support Systems, Vol. 1, No. 2 (April 1985) pp. 67-72.

Blanning, R.W., A Relational Theory of Model Management, in Decision Support Systems: Theory and Applications, eds. Clyde Holsapple and Andrew Whinston, Springer- Verlag (1987) pp. 19-53.

Bonczek, R.H., C.W. Holsapple and A.W. Whinston, Founda- tions of Decision Support Systems, Academic Press, New York, NY (1981).

Bonczek, R.H., C.W. Holsapple and A.W. Whinston, A Generalized Decision Support System Using Predicate Calculus and Network Database Management, Operations Research, Vol. 29, No. 2 (September 1984) pp. 263-281.

Bradley, G. and R. Clemence, Model Integration with a Typed Executable Modelling Language, Proceedings of the 21st HICSS Conference (1988).

Brown, C.V. and R.P. Bostrom, Effective Management of End-User Computing: A Total Organization Perspective, Journal of Management Information Systems, Vol. 6, No. 2 (Fall 1989) pp. 77-92.

Courtney, J.F. Jr., D.B. Paradice, and N.H. Mohammed, A Knowledge-Based DSS for Managerial Problem Diagnosis, Decision Sciences, Vol. 18, No. 3 (1987) pp. 373-399.

Dolk, D.R. and B.R. Konsynski, Knowledge Representation for Model Management Systems, IEEE Trans. on Software Engineering, Vol. SE-10, No. 6 (Nov. 1984).

Dutta, A. and A. Basu, An Artificial Intelligence Approach to Model Management in Decision Support Systems, IEEE Computer, Vol. 17, No. 9 (Sept. 1984) pp. 89-97.

Elam, J.J., J.C. Henderson and L.W. Miller, Model Manage- ment Systems: An Approach to Decision Support in Com- plex Organizations, Proc. First Intl. Conf. Information Systems (Dec. 1980) pp. 98-110.

Gray, P. and J. Nunamaker, Group Decision Support Systems, in Decision Support Systems: Putting Theory into Practice, ed. R. Sprague and H. Watson, Prentice Hall (1989) pp. 272-287.

Galletta, D. and E. Hufnagel, A Model of End-user Comput- ing Policy, Information and Management (1992) pp. 1-18.

Ghosh, D. and Agarwal, R., Model Selection and Sequencing in Decision Support Systems, OMEGA: The International Journal of Management Science, Vol. 19, No. 2/3 (1991) pp. 157-167.

Hong, I.B. and D.R. Vogel, Data and Model Management in a Generalized MCDM-DSS, Decision Sciences, Vol. 22, No. 1 (Winter 1991) pp. 1-25.

Klein, G., B.R. Konsynski, and P.O. Beck, A Linear Representation for Model Management in a Decision Support System, Journal of Management Information Systems, Vol. 2, No. 2 (1982) pp. 40-54.

Konsynski, B.R., On the Structure of Generalized Model Management Systems, Proceedings of the 14th Hawaii International Conference on System Sciences, Vol. 1 (Jan. 1981) pp. 630-638.

Konsynski, B. and D. Dolk, Knowledge Abstractions in Model Management, DSS-82 Transactions (1982).

Krishnan, R., A Logic Modelling Language for Automated Model Construction, Decision Support Systems, Vol. 6, No. 2 (1990) pp. 123-152.

Krishnan, R., PDM: A Knowledge Based Tool for Model Construction, Decision Support Systems, Vol. 7, No. 4 (1991) pp. 301-314.

Liang, T.P., Development of a Knowledge-Based Model Man- agement System, Operations Research (Nov.-Dec. 1988) Vol. 36, No. 6.

Liang, T.P. and C.V. Jones, Design of a Self-evolving DSS, Journal of Management Information Systems (Summer 1987) Vol 4., No. 1.

Munro, M.C., S.L. Huff, and G.C. Moore, Expansion and Control of End-User Computing, Journal of Management Information Systems (Winter 1987-88) pp. 5-27.

Murphy, F.H., E.A. Stohr, and P-C. Ma, Composition Rules for Building Linear Programming Models from Compo- nent Models, Management Science, Vol. 38, No. 7 (July 1992) pp. 948-963.

Rockart, J.F. and L.S. Flannery, The Management of End- User Computing, Communications of the ACM, Vol. 26, No. 10 (October 1983) pp. 776-784.

Senge, P.M. and J.D. Sterman, Systems Thinking and Organizational Learning: Acting Locally and Thinking Globally in the Organization of the Future, European Journal of Operations Research, Vol. 59, No. 1 (1992) pp. 137-150.

Silver, M., Decisional Guidance for Computer-Based Deci- sion Support, MIS Quarterly (March 1991) pp. 105-122.

Sprague, R.H. and E.D. Carlson, Building Decision Support Systems, Englewood Cliffs, NJ: Prentice-Hall (1982).

Thompson, J.D., Organizations in Action, NY: McGraw-Hill (1967).

Vassiliou, Y., M. Jarke, E.A. Stohr, J.A. Turner and N.H. White, Natural Language for Database Queries: A Labo- ratory Study, MIS Quarterly, Vol. 7, No. 4 (Dec. 1983).

White, C.E. and D.P. Christy, The Information Centre Concept: A Normative Model and a Study of Six Installations, MIS Quarterly, Vol. 11, No. 4 (December 1987) pp. 451-458.

Zloof, M.M., Query by Example, Proceedings of National Computer Conference, Montvale, NJ: AFIPS Press (1975) pp. 431-437.

Ritu Agarwal is at the University of Dayton, where she is an Associate Professor of MIS. She received her Ph.D. in MIS and M.S. in Computer Science from Syracuse University in 1988. Professor Agarwal's publications have appeared in Journal of Management Information Systems, Information and Management, OMEGA, Decision Support Systems, Knowledge-Based Systems, International Journal of Man-Machine Studies, Knowledge Acquisition, and elsewhere, and she has presented papers at several national and international meetings. She

serves as an Associate Editor for the International Journal of Human-Computer Studies. Her current research focuses on knowledge-based systems, decision support systems, and diffu- sion of new technologies.

Mohan R. Tanniru is an Associate Professor in MIS in the School of Management of Syracuse University, Syracuse, New York. He received his Ph.D. in MIS from Northwestern University. His current research interests are in the area of decision support and expert systems, structured systems development methodologies, and information systems planning/technology management. He has published in JMIS, DSS, Information and Management, ISR, Knowledge Based Systems, Expert Systems with Applications, Decision Sciences, Intl. Journal of Man-Machine Studies, and has presented at various national and international conferences. He has consulted with Carrier-UTC, Bristol-Myers Squibb, P&G Pharmaceuticals and TCS-India, among others, on expert systems and systems methodology projects.

Prof. Yimin Zhang is the Dean of the Commerce College at East China University of Technology in Shanghai, China. He received his doctoral degree in engineering and has extensive experience in product design. His current interests are in the use of expert/knowledge based technologies to address business problems. He served as a research associate at Syracuse University during 1991-93.