
Toward Intelligent Decision Support Systems: An Artificially Intelligent Statistician


Source: MIS Quarterly, Vol. 10, No. 4 (Dec., 1986), pp. 403-418. Published by: Management Information Systems Research Center, University of Minnesota. Stable URL: http://www.jstor.org/stable/249197



By: William E. Remus and Jeffrey E. Kottemann, Department of Decision Sciences, University of Hawaii, 2404 Maile Way, Honolulu, Hawaii

Abstract

There are three important considerations in DSS development. (1) Decision making involves both primary and secondary processes, where secondary processes concern selecting appropriate decision making tools, approaches, and information. (2) In making decisions, humans are subject to numerous cognitive limitations. (3) In order for end users to develop DSSs, sophisticated, problem-oriented DSS generators must replace technically demanding DSS tools. These three considerations can be effectively addressed by including expert system components in DSSs. An expert DSS for statistical analysis is proposed and used as an illustration. Decision making scenarios are used to indicate the potential of such a system. In particular, it appears that an expert DSS can provide support for both primary and secondary decision making and help ameliorate human cognitive limitations.

Keywords: Expert systems, artificial intelligence, decision support systems, cognitive biases

ACM Categories: H.4.2, I.2.1


Introduction

Designing decision support systems (DSS) requires more than just knowledge about hardware and software design. To design the DSS well, the designer must understand decision makers and their limitations. The designer must also know how decision makers determine what methods to use to reach a decision. Only with all of the above can a DSS be constructed which is useful in making a particular decision.

In this article, we describe a hypothetical intelligent DSS for statistical decision making. But first we address the motivations for such a DSS. In the following section, we review the literature on human decision making and point out the limitations that decision makers have in dealing with data and actually making decisions. The literature reported in that section deals not only with decision making but also with deciding how to decide. The former are primary decisions; the latter have been termed secondary decisions [81], metadecisions [50], and predecisions [55]. The research suggests that both primary and secondary decisions may be biased [70]. An intelligent DSS for statistical analysis must both help choose appropriate statistical tools (a secondary decision) and provide output that will not mislead the user when making a primary decision. The use of artificial intelligence techniques may make active expert support for both kinds of decisions possible.
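To make the notion of secondary decision support concrete, the sketch below shows one way a rule-based component might map stated characteristics of the data and the question onto a candidate statistical tool. It is a minimal illustration of ours in Python, not a description of any existing system; the rule set, fact vocabulary, and function names are hypothetical.

```python
# Minimal, hypothetical sketch of secondary decision support: a rule-based
# "consultant" picks a statistical tool from facts the user states about
# the data. Rules and vocabulary are illustrative only.

RULES = [
    # (condition over the stated facts, recommended tool)
    (lambda f: f["goal"] == "compare_groups" and f["groups"] == 2
               and f["scale"] == "interval" and f["normal"],
     "two-sample t-test"),
    (lambda f: f["goal"] == "compare_groups" and f["groups"] == 2
               and not f["normal"],
     "Mann-Whitney U test"),
    (lambda f: f["goal"] == "compare_groups" and f["groups"] > 2
               and f["scale"] == "interval",
     "one-way analysis of variance"),
    (lambda f: f["goal"] == "relate_variables" and f["scale"] == "interval",
     "linear regression"),
    (lambda f: f["goal"] == "relate_variables" and f["scale"] == "ordinal",
     "Spearman rank correlation"),
]

def recommend(facts):
    """Return the first tool whose rule matches the stated facts."""
    for condition, tool in RULES:
        if condition(facts):
            return tool
    return "no rule matched; consult a statistician"

# Example consultation: two groups, interval-scaled, roughly normal data.
facts = {"goal": "compare_groups", "groups": 2,
         "scale": "interval", "normal": True}
print(recommend(facts))  # -> two-sample t-test
```

A system of the kind proposed in this article would presumably go further, explaining its choice and warning the user about assumptions; the point here is only that the secondary decision itself can be represented as rules.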

Managerial Decision Making Biases

Making a good decision starts with having or gathering the right information upon which to base a decision. In most cases the necessary information is obtained through the manager's visual and auditory senses. The processing of information through these senses is fairly well understood and is briefly described in Sidebar 1. The problem is that the processing that occurs in the sensory input channels can distort the information.

Consider the impact of the processing described in Sidebar 1. First, humans do not deal directly with holistic images; they deal with images as coded by the feature extractors (that is, images composed of linked lines, arcs, corners, edges, and Fourier signatures). The feature extractors fit lines to any image. Therefore, it is not surprising that managers can see patterns (i.e., linked line segments in some plausible order) in random plots of data points: the brain reconstructs a pattern from the pieces found by the feature extractors. Table 1 summarizes other human input errors that bias interpretations of data.
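The tendency to see structure in noise has a simple statistical counterpart: fit small samples of pure random noise, and a noticeable fraction will show a seemingly meaningful correlation. The sketch below is our own illustration of that point, not material from the article; it estimates how often ten independent random points show |r| > 0.5.

```python
# Our illustration: small random scatters often look "patterned".
# Estimate how often 10 pure-noise points show |r| > 0.5.
import math
import random

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

random.seed(1)
trials, hits = 10_000, 0
for _ in range(trials):
    xs = [random.random() for _ in range(10)]
    ys = [random.random() for _ in range(10)]  # independent of xs
    if abs(correlation(xs, ys)) > 0.5:
        hits += 1

print(f"{hits / trials:.1%} of pure-noise samples had |r| > 0.5")
```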


Sidebar 1. Processing Visual Information

Visual processing can be viewed as the following sequence of actions [37]:

(1) The light from an object strikes the eye.
(2) The light is changed into neural coding.
(3) The image of the object and the object's location are separated and processed independently.
(4) The image is reduced to geometric features (e.g., arcs, lines, corners, edges) and Fourier representations by simple feature extractors.
(5) The small geometric features are reconstructed into a global rendering of the object.
(6) The global rendering of the object is referred to associative memory for identification.
(7) The identity of the object is found using a "fuzzy key" to access memory.
(8) The object's identity allows associative memory to look up the relationships the object has to other objects.

At this point, the information is ready to be passed on for higher levels of processing.

From Table 1, one gets the impression that humans are prone to numerous errors which they could easily avoid. Yet even training often fails to eliminate these errors [24]. The reason these errors persist is that most of them result from the neurophysiological limitations of the human brain, and training can do little to overcome such limitations.

The brain's wiring affects not only the input processing but also the decisions made. Note the many human decision making biases shown in Table 2. Again, one gets the impression that these obvious biases should be easy to correct through proper training. Unfortunately, these errors persist because they are also a function of the brain's organization. To illustrate this relationship, consider the neural underpinning of the task of grasping an object as described in Sidebar 2.

Sidebar 2. Grasping An Object

Consider the way in which the brain processes a command to grasp an object [37]:

(1) The task is fractionated and passed to several parallel processors. The fractionation is conservative (that is, the sum of the smaller tasks is less than the overall task) because overshooting the goal can injure the hand but undershooting will not.
(2) Each separate fractionator repeats the conservative fractionation and passes the task on to sub-processors and onward to muscle groups.
(3) The impact of the conservative fractionation is movement toward the goal (but not totally reaching the goal).
(4) Dynamic feedback allows fractionation of the remaining distance until the hand is correctly positioned.

The way in which the future is forecast parallels the process in Sidebar 2. Some point is anchored on and then adjusted to try to reach a goal. Human decision makers underadjust just as the conservative fractionation processes underadjust. The motor system can correct using dynamic feedback, but decision makers don't have dynamic feedback on their decisions. Feedback is so potent that it may have more effect on performance than other biases [38].
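The underadjustment parallel can be made concrete with a toy simulation, using entirely hypothetical numbers (the anchor, goal, and step fraction below are arbitrary). Each conservative adjustment covers only part of the remaining distance: with repeated feedback the motor system converges on the goal, while a single anchored judgment stops well short.

```python
# Toy model (ours): conservative adjustment covers only a fraction of the
# remaining gap. Feedback repeats the step and converges; an anchored
# judgment gets one conservative step and stops short of the goal.

GOAL = 100.0
ANCHOR = 40.0
STEP_FRACTION = 0.6  # each adjustment covers 60% of the remaining gap

def adjust(position, goal):
    """One conservative adjustment toward the goal (deliberate undershoot)."""
    return position + STEP_FRACTION * (goal - position)

# Motor control: dynamic feedback allows repeated corrections.
position = ANCHOR
for step in range(1, 6):
    position = adjust(position, GOAL)
    print(f"feedback step {step}: {position:.1f}")

# Judgment: one anchored, underadjusted estimate with no feedback loop.
estimate = adjust(ANCHOR, GOAL)
print(f"single anchored estimate: {estimate:.1f} (goal is {GOAL:.0f})")
```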

Another source of decision making behavior results more from the balance between cognitive hardware (i.e., the right and left hemispheres of the brain) than from its limitations. This theory (called cognitive style) suggests ...

S.d...e.b.a......r. P s i IThe brain's wiring affects not only the input Sidebar 1. Processing Visual Information processing but also the decisions made. Note Visual processing can be viewed as the fol- the many human decision making biases lowing sequence of actions [37]: shown in Table 2. Again, one gets the impres-

sion that these obvious biases should be (1)The light from an object strikes the sss,s (1)The

ltght from an obJect strikes

thee easy to correct through proper training. Unfor- tunately, these errors persist because they

(2)The light Is changed into neural coding. are also a function of the brain's organiza- (3) The Image of the object and the object's tion. To illustrate this relationship consider

location are separated and processed the neural underpinning of the task of grasp- Independently. ing an object as described in Sidebar 2.

(4) The image is reduced to geometric fea- The way in which the future is forecast paral- tures (e.g., arcs, lines, corners, edges) lels that in Sidebar 2. Some point is anchored and Fourier representations by simple on and then adjusted to try to reach a goal. feature extractors. Human decision makers underadjust just as

(5) The small geometric features are recon- the conservative fractionation processes structured Into a global rendering of the underadjust. The motor system can correct object. using dynamic feedback - but decision

(6) The global rendering of the object is re- makers don't have dynamic feedback on their ferred to associative memory for identi- decisions. Feedback is so potent that it may fication. have more effect on performance than other

(7) The identity of the object is found using biases [38] a "fuzzy key" to access memory. Another source of decision making behavior

(8) The object's identity allows associative results more from the balance between cogni- memory to look-up the relationships the tive hardware (i.e., the right and left hemi- object has to other objects. sphere of the brain) than from its limitations.

At this pt, ts ry to be This theory (called cognitive style) suggests At this point, the information is ready to be passed on for higher levels of processing.

Sidebar 2. Grasping An Object

deal directly with holistic images; they deal Consider the way in which the brain pro- with images as coded by the feature extrac- cesses a command to grasp an object [37]: tors (that is, images composed on linked (1)The task Is fractlonated and passed to lines, arcs, corners, edges, and Fourier signa- several parallel processors. The frac- tures). The feature extractors fit lines to any tionation is conservative (that is the image. Therefore, it is not surprising that sum of th smller tasks Is less than managers can see patterns (i.e., linked line the overall task) because overshooting segments in some plausible order) in random the goal can injure the hand but under- plots of data points (the brain reconstructs a shooting will not. pattern from the pieces found by the feature ( E extractors). Table 1 summarizes other human2) cserate fracoaor epeats he input errors that bias interpretations of data. conservative fractionat on and passes

that task on to sub-processors and on- From Table 1, one gets the impression that ward to muscle groups. humans are prone to numerous errors which (3)The impact of the conservative frac- they could easily avoid. Yet even training often tionatlon Is movement toward the goal fails to eliminate these errors [24]. The reason (but not totally reaching the goal). that these errors persist is that most of the er (4) Dynamic feedback allows the fractona- rors result from the neurophysiological limita- ton of th remaning distance untl the tions of the human brain. And training can't hand s correctly positoned. do much to overcome such limitations.

S.d...e.b.a......r. P s i IThe brain's wiring affects not only the input Sidebar 1. Processing Visual Information processing but also the decisions made. Note Visual processing can be viewed as the fol- the many human decision making biases lowing sequence of actions [37]: shown in Table 2. Again, one gets the impres-

sion that these obvious biases should be (1)The light from an object strikes the sss,s (1)The

ltght from an obJect strikes

thee easy to correct through proper training. Unfor- tunately, these errors persist because they

(2)The light Is changed into neural coding. are also a function of the brain's organiza- (3) The Image of the object and the object's tion. To illustrate this relationship consider

location are separated and processed the neural underpinning of the task of grasp- Independently. ing an object as described in Sidebar 2.

(4) The image is reduced to geometric fea- The way in which the future is forecast paral- tures (e.g., arcs, lines, corners, edges) lels that in Sidebar 2. Some point is anchored and Fourier representations by simple on and then adjusted to try to reach a goal. feature extractors. Human decision makers underadjust just as

(5) The small geometric features are recon- the conservative fractionation processes structured Into a global rendering of the underadjust. The motor system can correct object. using dynamic feedback - but decision

(6) The global rendering of the object is re- makers don't have dynamic feedback on their ferred to associative memory for identi- decisions. Feedback is so potent that it may fication. have more effect on performance than other

(7) The identity of the object is found using biases [38] a "fuzzy key" to access memory. Another source of decision making behavior

(8) The object's identity allows associative results more from the balance between cogni- memory to look-up the relationships the tive hardware (i.e., the right and left hemi- object has to other objects. sphere of the brain) than from its limitations.

At this pt, ts ry to be This theory (called cognitive style) suggests At this point, the information is ready to be passed on for higher levels of processing.

Sidebar 2. Grasping An Object

deal directly with holistic images; they deal Consider the way in which the brain pro- with images as coded by the feature extrac- cesses a command to grasp an object [37]: tors (that is, images composed on linked (1)The task Is fractlonated and passed to lines, arcs, corners, edges, and Fourier signa- several parallel processors. The frac- tures). The feature extractors fit lines to any tionation is conservative (that is the image. Therefore, it is not surprising that sum of th smller tasks Is less than managers can see patterns (i.e., linked line the overall task) because overshooting segments in some plausible order) in random the goal can injure the hand but under- plots of data points (the brain reconstructs a shooting will not. pattern from the pieces found by the feature ( E extractors). Table 1 summarizes other human2) cserate fracoaor epeats he input errors that bias interpretations of data. conservative fractionat on and passes

that task on to sub-processors and on- From Table 1, one gets the impression that ward to muscle groups. humans are prone to numerous errors which (3)The impact of the conservative frac- they could easily avoid. Yet even training often tionatlon Is movement toward the goal fails to eliminate these errors [24]. The reason (but not totally reaching the goal). that these errors persist is that most of the er (4) Dynamic feedback allows the fractona- rors result from the neurophysiological limita- ton of th remaning distance untl the tions of the human brain. And training can't hand s correctly positoned. do much to overcome such limitations.

S.d...e.b.a......r. P s i IThe brain's wiring affects not only the input Sidebar 1. Processing Visual Information processing but also the decisions made. Note Visual processing can be viewed as the fol- the many human decision making biases lowing sequence of actions [37]: shown in Table 2. Again, one gets the impres-

sion that these obvious biases should be (1)The light from an object strikes the sss,s (1)The

ltght from an obJect strikes

thee easy to correct through proper training. Unfor- tunately, these errors persist because they

(2)The light Is changed into neural coding. are also a function of the brain's organiza- (3) The Image of the object and the object's tion. To illustrate this relationship consider

location are separated and processed the neural underpinning of the task of grasp- Independently. ing an object as described in Sidebar 2.

(4) The image is reduced to geometric fea- The way in which the future is forecast paral- tures (e.g., arcs, lines, corners, edges) lels that in Sidebar 2. Some point is anchored and Fourier representations by simple on and then adjusted to try to reach a goal. feature extractors. Human decision makers underadjust just as

(5) The small geometric features are recon- the conservative fractionation processes structured Into a global rendering of the underadjust. The motor system can correct object. using dynamic feedback - but decision

(6) The global rendering of the object is re- makers don't have dynamic feedback on their ferred to associative memory for identi- decisions. Feedback is so potent that it may fication. have more effect on performance than other

(7) The identity of the object is found using biases [38] a "fuzzy key" to access memory. Another source of decision making behavior

(8) The object's identity allows associative results more from the balance between cogni- memory to look-up the relationships the tive hardware (i.e., the right and left hemi- object has to other objects. sphere of the brain) than from its limitations.

At this pt, ts ry to be This theory (called cognitive style) suggests At this point, the information is ready to be passed on for higher levels of processing.

Sidebar 2. Grasping An Object

deal directly with holistic images; they deal Consider the way in which the brain pro- with images as coded by the feature extrac- cesses a command to grasp an object [37]: tors (that is, images composed on linked (1)The task Is fractlonated and passed to lines, arcs, corners, edges, and Fourier signa- several parallel processors. The frac- tures). The feature extractors fit lines to any tionation is conservative (that is the image. Therefore, it is not surprising that sum of th smller tasks Is less than managers can see patterns (i.e., linked line the overall task) because overshooting segments in some plausible order) in random the goal can injure the hand but under- plots of data points (the brain reconstructs a shooting will not. pattern from the pieces found by the feature ( E extractors). Table 1 summarizes other human2) cserate fracoaor epeats he input errors that bias interpretations of data. conservative fractionat on and passes

that task on to sub-processors and on- From Table 1, one gets the impression that ward to muscle groups. humans are prone to numerous errors which (3)The impact of the conservative frac- they could easily avoid. Yet even training often tionatlon Is movement toward the goal fails to eliminate these errors [24]. The reason (but not totally reaching the goal). that these errors persist is that most of the er (4) Dynamic feedback allows the fractona- rors result from the neurophysiological limita- ton of th remaning distance untl the tions of the human brain. And training can't hand s correctly positoned. do much to overcome such limitations.

S.d...e.b.a......r. P s i IThe brain's wiring affects not only the input Sidebar 1. Processing Visual Information processing but also the decisions made. Note Visual processing can be viewed as the fol- the many human decision making biases lowing sequence of actions [37]: shown in Table 2. Again, one gets the impres-

sion that these obvious biases should be (1)The light from an object strikes the sss,s (1)The

ltght from an obJect strikes

thee easy to correct through proper training. Unfor- tunately, these errors persist because they

(2)The light Is changed into neural coding. are also a function of the brain's organiza- (3) The Image of the object and the object's tion. To illustrate this relationship consider

location are separated and processed the neural underpinning of the task of grasp- Independently. ing an object as described in Sidebar 2.

(4) The image is reduced to geometric fea- The way in which the future is forecast paral- tures (e.g., arcs, lines, corners, edges) lels that in Sidebar 2. Some point is anchored and Fourier representations by simple on and then adjusted to try to reach a goal. feature extractors. Human decision makers underadjust just as

(5) The small geometric features are recon- the conservative fractionation processes structured Into a global rendering of the underadjust. The motor system can correct object. using dynamic feedback - but decision

(6) The global rendering of the object is re- makers don't have dynamic feedback on their ferred to associative memory for identi- decisions. Feedback is so potent that it may fication. have more effect on performance than other

(7) The identity of the object is found using biases [38] a "fuzzy key" to access memory. Another source of decision making behavior

(8) The object's identity allows associative results more from the balance between cogni- memory to look-up the relationships the tive hardware (i.e., the right and left hemi- object has to other objects. sphere of the brain) than from its limitations.

At this pt, ts ry to be This theory (called cognitive style) suggests At this point, the information is ready to be passed on for higher levels of processing.

Sidebar 2. Grasping An Object

deal directly with holistic images; they deal Consider the way in which the brain pro- with images as coded by the feature extrac- cesses a command to grasp an object [37]: tors (that is, images composed on linked (1)The task Is fractlonated and passed to lines, arcs, corners, edges, and Fourier signa- several parallel processors. The frac- tures). The feature extractors fit lines to any tionation is conservative (that is the image. Therefore, it is not surprising that sum of th smller tasks Is less than managers can see patterns (i.e., linked line the overall task) because overshooting segments in some plausible order) in random the goal can injure the hand but under- plots of data points (the brain reconstructs a shooting will not. pattern from the pieces found by the feature ( E extractors). Table 1 summarizes other human2) cserate fracoaor epeats he input errors that bias interpretations of data. conservative fractionat on and passes

that task on to sub-processors and on- From Table 1, one gets the impression that ward to muscle groups. humans are prone to numerous errors which (3)The impact of the conservative frac- they could easily avoid. Yet even training often tionatlon Is movement toward the goal fails to eliminate these errors [24]. The reason (but not totally reaching the goal). that these errors persist is that most of the er (4) Dynamic feedback allows the fractona- rors result from the neurophysiological limita- ton of th remaning distance untl the tions of the human brain. And training can't hand s correctly positoned. do much to overcome such limitations.

S.d...e.b.a......r. P s i IThe brain's wiring affects not only the input Sidebar 1. Processing Visual Information processing but also the decisions made. Note Visual processing can be viewed as the fol- the many human decision making biases lowing sequence of actions [37]: shown in Table 2. Again, one gets the impres-

sion that these obvious biases should be (1)The light from an object strikes the sss,s (1)The

ltght from an obJect strikes

thee easy to correct through proper training. Unfor- tunately, these errors persist because they

(2)The light Is changed into neural coding. are also a function of the brain's organiza- (3) The Image of the object and the object's tion. To illustrate this relationship consider

location are separated and processed the neural underpinning of the task of grasp- Independently. ing an object as described in Sidebar 2.

(4) The image is reduced to geometric fea- The way in which the future is forecast paral- tures (e.g., arcs, lines, corners, edges) lels that in Sidebar 2. Some point is anchored and Fourier representations by simple on and then adjusted to try to reach a goal. feature extractors. Human decision makers underadjust just as

(5) The small geometric features are recon- the conservative fractionation processes structured Into a global rendering of the underadjust. The motor system can correct object. using dynamic feedback - but decision

(6) The global rendering of the object is re- makers don't have dynamic feedback on their ferred to associative memory for identi- decisions. Feedback is so potent that it may fication. have more effect on performance than other

(7) The identity of the object is found using biases [38] a "fuzzy key" to access memory. Another source of decision making behavior

(8) The object's identity allows associative results more from the balance between cogni- memory to look-up the relationships the tive hardware (i.e., the right and left hemi- object has to other objects. sphere of the brain) than from its limitations.

At this pt, ts ry to be This theory (called cognitive style) suggests At this point, the information is ready to be passed on for higher levels of processing.

Sidebar 2. Grasping An Object

deal directly with holistic images; they deal Consider the way in which the brain pro- with images as coded by the feature extrac- cesses a command to grasp an object [37]: tors (that is, images composed on linked (1)The task Is fractlonated and passed to lines, arcs, corners, edges, and Fourier signa- several parallel processors. The frac- tures). The feature extractors fit lines to any tionation is conservative (that is the image. Therefore, it is not surprising that sum of th smller tasks Is less than managers can see patterns (i.e., linked line the overall task) because overshooting segments in some plausible order) in random the goal can injure the hand but under- plots of data points (the brain reconstructs a shooting will not. pattern from the pieces found by the feature ( E extractors). Table 1 summarizes other human2) cserate fracoaor epeats he input errors that bias interpretations of data. conservative fractionat on and passes

that task on to sub-processors and on- From Table 1, one gets the impression that ward to muscle groups. humans are prone to numerous errors which (3)The impact of the conservative frac- they could easily avoid. Yet even training often tionatlon Is movement toward the goal fails to eliminate these errors [24]. The reason (but not totally reaching the goal). that these errors persist is that most of the er (4) Dynamic feedback allows the fractona- rors result from the neurophysiological limita- ton of th remaning distance untl the tions of the human brain. And training can't hand s correctly positoned. do much to overcome such limitations.

S.d...e.b.a......r. P s i IThe brain's wiring affects not only the input Sidebar 1. Processing Visual Information processing but also the decisions made. Note Visual processing can be viewed as the fol- the many human decision making biases lowing sequence of actions [37]: shown in Table 2. Again, one gets the impres-

sion that these obvious biases should be (1)The light from an object strikes the sss,s (1)The

ltght from an obJect strikes

thee easy to correct through proper training. Unfor- tunately, these errors persist because they

(2)The light Is changed into neural coding. are also a function of the brain's organiza- (3) The Image of the object and the object's tion. To illustrate this relationship consider

location are separated and processed the neural underpinning of the task of grasp- Independently. ing an object as described in Sidebar 2.

(4) The image is reduced to geometric fea- The way in which the future is forecast paral- tures (e.g., arcs, lines, corners, edges) lels that in Sidebar 2. Some point is anchored and Fourier representations by simple on and then adjusted to try to reach a goal. feature extractors. Human decision makers underadjust just as

(5) The small geometric features are recon- the conservative fractionation processes structured Into a global rendering of the underadjust. The motor system can correct object. using dynamic feedback - but decision

Sidebar 1. Processing Visual Information

Visual processing can be viewed as the following sequence of actions [37]:

(1) The light from an object strikes the eye.

(2) The light is changed into neural coding.

(3) The image of the object and the object's location are separated and processed independently.

(4) The image is reduced to geometric features (e.g., arcs, lines, corners, edges) and Fourier representations by simple feature extractors.

(5) The small geometric features are reconstructed into a global rendering of the object.

(6) The global rendering of the object is referred to associative memory for identification.

(7) The identity of the object is found using a "fuzzy key" to access memory.

(8) The object's identity allows associative memory to look up the relationships the object has to other objects.

At this point, the information is ready to be passed on for higher levels of processing.

The higher levels of processing do not deal directly with holistic images; they deal with images as coded by the feature extractors (that is, images composed of linked lines, arcs, corners, edges, and Fourier signatures). The feature extractors fit lines to any image. Therefore, it is not surprising that managers can see patterns (i.e., linked line segments in some plausible order) in random plots of data points; the brain reconstructs a pattern from the pieces found by the feature extractors. Table 1 summarizes other human input errors that bias interpretations of data.

From Table 1, one gets the impression that humans are prone to numerous errors which they could easily avoid. Yet even training often fails to eliminate these errors [24]. These errors persist because most of them result from the neurophysiological limitations of the human brain, and training can do little to overcome such limitations.

The brain's wiring affects not only input processing but also the decisions made. Note the many human decision making biases shown in Table 2. Again, one gets the impression that these obvious biases should be easy to correct through proper training. Unfortunately, these errors persist because they are also a function of the brain's organization. To illustrate this relationship, consider the neural underpinning of the task of grasping an object as described in Sidebar 2.

Sidebar 2. Grasping An Object

Consider the way in which the brain processes a command to grasp an object [37]:

(1) The task is fractionated and passed to several parallel processors. The fractionation is conservative (that is, the sum of the smaller tasks is less than the overall task) because overshooting the goal can injure the hand but undershooting will not.

(2) Each separate fractionator repeats the conservative fractionation and passes its task on to sub-processors and onward to muscle groups.

(3) The impact of the conservative fractionation is movement toward the goal (but not totally reaching the goal).

(4) Dynamic feedback allows the fractionation of the remaining distance until the hand is correctly positioned.

The way in which the future is forecast parallels the process in Sidebar 2. Some point is anchored on and then adjusted to try to reach a goal. Human decision makers underadjust just as the conservative fractionation processes underadjust. The motor system can correct using dynamic feedback, but decision makers don't have dynamic feedback on their decisions. Feedback is so potent that it may have more effect on performance than other biases [38].
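To make the underadjustment parallel concrete, the following sketch (ours, not the authors'; the 60 percent adjustment fraction is an assumed parameter) contrasts a motor-style process, which re-fractionates using dynamic feedback until it converges, with a one-shot anchored forecast, which stops short of the goal:

    TARGET = 100.0    # the goal (e.g., the true future value)
    ANCHOR = 60.0     # the point the decision maker anchors on
    FRACTION = 0.6    # assumed: each adjustment closes 60% of the gap

    def motor_grasp(anchor, target, steps):
        """Conservative fractionation WITH dynamic feedback: each cycle
        re-measures the remaining distance and closes part of it."""
        position = anchor
        for _ in range(steps):
            position += FRACTION * (target - position)
        return position

    def one_shot_forecast(anchor, target):
        """Anchoring and adjustment WITHOUT feedback: one conservative
        adjustment, after which the process stops."""
        return anchor + FRACTION * (target - anchor)

    print(round(motor_grasp(ANCHOR, TARGET, 6), 2))     # 99.84: converges
    print(round(one_shot_forecast(ANCHOR, TARGET), 2))  # 84.0: falls short

With feedback, the same conservative rule converges on the goal; without it, the shortfall persists.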

Another source of decision making behavior results more from the balance between cognitive hardware (i.e., the right and left hemispheres of the brain) than from its limitations. This theory (called cognitive style) suggests ...


Table 1. Biases Associated With Presenting Data to a Decision Maker

Irrelevant Information
Irrelevant information can influence decision makers and may reduce the quality of their decisions [18, 65].

Data Presentation Biases
(a) The type of information - Data acquired through human interaction has more impact on the decision maker than just the data itself [10].
(b) The format of the data display - The display format of the data crucially affects the decision made. Generally speaking, summarized data presentations (e.g., statistics, tables, graphs) are preferred to raw data [14]. The choice of whether to use graphical summaries or tabular summaries depends on the level of environmental stability [58].
(c) Logical data displays - When a set of alternatives is presented which seems to capture all the possibilities, the decision maker may not detect other alternatives that may exist [25].
(d) Order effects - The order in which data are presented can affect data retention [66, 52]. In particular, the first piece of data may assume undue importance (primacy effect) or the last piece of data may become overvalued (recency effect).
(e) Context - When decision makers assess the variability in a series of data points, their assessment will be affected by the absolute (i.e., mean) value of the data points and the sequence in which they are presented [42].

Selective Perception
Decision makers selectively perceive and remember only certain portions of the data [20].
(a) People filter data in ways that reflect their experience. When data from a problem is presented to decision makers, they will particularly notice the data that relates to areas in which they have expertise [16, 20]. This leads to differing and often limited problem formulations.
(b) People's expectations can bias perceptions. When decision makers are reviewing data which is contrary to their expectations, they may remember incongruent pieces of data inaccurately [13, 45].
(c) People seek information consistent with their own views. If decision makers bring to a problem some prejudices about the problem, they will seek data confirming their prejudices [5, 12, 78].
(d) People downplay data which contradicts their views. If decision makers have an expectation about a problem, or if they think they have arrived at the solution, they will downplay or ignore conflicting evidence [2, 64]. Their expectations will often persist even in light of continued conflicting evidence [78].

Frequency
(a) Ease of recall - The ease with which certain data points can be recalled will affect not only the use of this data but also the decision maker's perception of the likelihood of similar events occurring [40, 44, 72, 73, 74].
(b) Base rate error - Often the decision maker determines the likelihood of two events by comparing the number of times each of the two events has occurred. However, the base rate is the crucial measure and is often ignored [10, 34, 35]. This is a particular problem when the decision makers have concrete experience with one problem but only statistical information about another. They will generally be biased toward thinking the concrete problem to be more troublesome. Decision makers particularly overestimate the frequency of catastrophic events [44].
(c) Frequency to imply strength of relationship - The more pairs of co-varying data points that decision makers have, the stronger they think the relationship between the variables is [22, 33, 77].
(d) Illusory correlation - Decision makers may erroneously believe certain variables to be correlated. From plots of data they can often "recognize" patterns even when the points displayed have no true correlation [67, 73, 77].
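As an editorial illustration of the illusory correlation entry above (a sketch we add here, not part of the original article; the sample size and threshold are assumed), one can draw many small samples of two genuinely independent variables and count how often the sample correlation looks "strong":

    import random
    import statistics

    random.seed(1)

    def sample_corr(xs, ys):
        """Pearson correlation of two equal-length samples."""
        mx, my = statistics.mean(xs), statistics.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    trials, n, strong = 10_000, 8, 0
    for _ in range(trials):
        xs = [random.gauss(0, 1) for _ in range(n)]  # independent of ys
        ys = [random.gauss(0, 1) for _ in range(n)]
        if abs(sample_corr(xs, ys)) > 0.5:
            strong += 1

    print(f"{strong / trials:.1%} of 8-point plots show |r| > 0.5")

Roughly one such plot in five shows |r| > 0.5 even though the true correlation is zero, ample raw material for the feature extractors to "recognize" a pattern.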

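The base rate error also has a simple arithmetic core. In the sketch below (our own numbers, chosen for illustration: a 1 percent base rate and a fairly accurate indicator), Bayes' theorem shows how misleading the indicator's accuracy is once the base rate is taken into account:

    base_rate = 0.01         # P(event): the figure decision makers ignore
    hit_rate = 0.90          # P(positive indicator | event)
    false_alarm_rate = 0.10  # P(positive indicator | no event)

    # Bayes' theorem: P(event | positive indicator)
    p_positive = base_rate * hit_rate + (1 - base_rate) * false_alarm_rate
    posterior = base_rate * hit_rate / p_positive

    print(f"P(event | positive indicator) = {posterior:.2f}")  # about 0.08

Judged only by the indicator's 90 percent hit rate, the event seems nearly certain; with the base rate included, its probability is still only about 8 percent.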


Table 2. Biases in Information Processing

Heuristics
As Ackoff [1] pointed out, the decision maker's major problem is often not the lack of information upon which to base a decision but too much information. Generally the decision maker reduces the problem to one involving only three or four crucial factors. The decision is made using a heuristic based on those factors. Unfortunately, these heuristics often have built-in biases [74]. The following are some of the problems with heuristics.

(a) Structuring problems based on experiences. Decision makers may try to find the best fit between the new problems they have and old problems they have had. When a match is found, the decision maker then alters the old decision slightly to reflect the new circumstances [34, 35, 73, 74].

(b) Rules of thumb. If the decision maker has prior experience in solving a problem, he may again use the same rule of thumb since it proved satisfactory last time [39].

(c) Anchoring and adjustment. Predictions are often made by selecting an "anchor" value for a set of data (e.g., its mean) and then adjusting that value for the circumstances of the present case. Generally, the adjustments are insufficient [66, 74].

(d) Inconsistency in the use of the heuristic. Bowman [11] theorized that the major economic consequences of decisions result not from a manager's use of poor heuristics but from the manager's inconsistent use of those heuristics. Numerous studies have shown that regression-based heuristics outperform the decision maker himself [29, 51, 59], and the performance difference, as theorized, has been shown to result from inconsistency [57].

Misunderstanding of the Statistical Properties of Data

(a) Mistaking random variation as a persisting change. Data available to a decision maker often have statistical properties; that is, they have a mean value around which the data points are randomly distributed. When the next observation is a high or low value data point, the decision maker may believe an upward or downward trend is occurring. In fact, it is just a random variation and not a persisting change. Often decision makers infer causes for these random variations [41, 67, 72, 73, 77].

(b) Inferring from small samples. The characteristics of small samples are often believed to be representative of the population from which they are drawn. Thus, in data inference too much weight is given to small sample results [72, 74].

(c) Gambler's fallacy. In a probabilistic process where each event is independent of the ones preceding it (i.e., it is random), decision makers may erroneously make inferences about future events based on the occurrence of past events [34, 41].

(d) Ignoring uncertainty. When a decision maker is faced with several sources of uncertainty simultaneously, he may simplify the problem by ignoring or discounting some of the uncertainty. The decision maker may then choose the most likely alternative for his decision [28].

Table I Continued

(c) Frequency to Imply strength of relationship - The more pairs of co-varying data points that decision makers have, the stronger they think the relationship between variables is [22, 33, 77].

(d) Illusory correlation - Decision makers may erroneously believe certain variables to be correlated. From plots of data they can often "recognize" patterns, even when the points displayed here have no true correlation [67, 73, 77].

Table 2. Biases in Information Processing

Heuristics As Ackoff [1] pointed out, the decision maker's major problem is often not the lack of informa- tion upon which to base a decision but too much information. Generally the decision maker reduces the problem to a problem involving only 3 or 4 crucial factors. The decision is made using a heuristic based on those factors. Unfortunately, these heuristics often have built-in biases [74]. The following are some of the problems with heuristics.

(a) Structuring problems based on experiences. Decision makers may try to find the best fit between the new problems they have and old problems they have had. When a match is found, the decision maker then alters the old decision slightly to reflect the new circumstances [34, 35, 73, 74].

(b) Rules of thumb. If the decision maker has prior experience in solving a problem, he may again use the same rule of thumb since it proved satisfactory last time [39].

(c) Anchoring and adjustment. Prediction is often made by selecting an "anchoring" (e.g., on a mean) value for a set of data and then adjusting the value for the circumstances in the present case. Generally, the adjustments are insufficient [66, 74].

(d) Inconsistency in the use of the heuristic. Bowman [11] theorized that the major economic consequences of decisions did not result from a manager's use of poor heuristics but from the manager's inconsistent use of the heuristics. Numerous studies have shown that heuristics based on regression outperform the decision maker himself [29, 51, 59] and the performance difference, as theorized, has been shown to result from inconsistency [57].

Misunderstanding of the Statistical Properties of Data

(a) Mistaking random variation as a persisting change. Data available to a decision maker often have statistical properties; that is, they have a mean value around which the data points are randomly distributed. When the next observation is a high or low value data point, the decision maker may believe an upward or downward trend is oc- curring. In fact, it is just a random variation and not a persisting change. Often deci- sion makers infer causes for these random variations [41, 67, 72, 73, 77].

(b) Inferring from small samples. The characteristics of small samples are often believed to be representative of the population from which they are drawn. Thus, in data in- ference too much weight is given to small sample results [72, 74].

(c) Gambler's fallacy. In a probabilistic process where each event is independent of the ones preceding it (i.e., it is random), decision makers may erroneously make in- ferences about future events based on the occurrence of past events [34, 41].

(d) Ignoring uncertainty. When a decision maker is faced with several sources of uncer- tainty simultaneously, he may simplify the problem by ignoring or discounting some of the uncertainty. The decision maker may then choose the most likely alternative for his decision [28].

Table I Continued

(c) Frequency to Imply strength of relationship - The more pairs of co-varying data points that decision makers have, the stronger they think the relationship between variables is [22, 33, 77].

(d) Illusory correlation - Decision makers may erroneously believe certain variables to be correlated. From plots of data they can often "recognize" patterns, even when the points displayed here have no true correlation [67, 73, 77].

Table 2. Biases in Information Processing

Heuristics As Ackoff [1] pointed out, the decision maker's major problem is often not the lack of informa- tion upon which to base a decision but too much information. Generally the decision maker reduces the problem to a problem involving only 3 or 4 crucial factors. The decision is made using a heuristic based on those factors. Unfortunately, these heuristics often have built-in biases [74]. The following are some of the problems with heuristics.

(a) Structuring problems based on experiences. Decision makers may try to find the best fit between the new problems they have and old problems they have had. When a match is found, the decision maker then alters the old decision slightly to reflect the new circumstances [34, 35, 73, 74].

(b) Rules of thumb. If the decision maker has prior experience in solving a problem, he may again use the same rule of thumb since it proved satisfactory last time [39].

(c) Anchoring and adjustment. Prediction is often made by selecting an "anchoring" (e.g., on a mean) value for a set of data and then adjusting the value for the circumstances in the present case. Generally, the adjustments are insufficient [66, 74].

(d) Inconsistency in the use of the heuristic. Bowman [11] theorized that the major economic consequences of decisions did not result from a manager's use of poor heuristics but from the manager's inconsistent use of the heuristics. Numerous studies have shown that heuristics based on regression outperform the decision maker himself [29, 51, 59] and the performance difference, as theorized, has been shown to result from inconsistency [57].

Misunderstanding of the Statistical Properties of Data

(a) Mistaking random variation as a persisting change. Data available to a decision maker often have statistical properties; that is, they have a mean value around which the data points are randomly distributed. When the next observation is a high or low value data point, the decision maker may believe an upward or downward trend is oc- curring. In fact, it is just a random variation and not a persisting change. Often deci- sion makers infer causes for these random variations [41, 67, 72, 73, 77].

(b) Inferring from small samples. The characteristics of small samples are often believed to be representative of the population from which they are drawn. Thus, in data in- ference too much weight is given to small sample results [72, 74].

(c) Gambler's fallacy. In a probabilistic process where each event is independent of the ones preceding it (i.e., it is random), decision makers may erroneously make in- ferences about future events based on the occurrence of past events [34, 41].

(d) Ignoring uncertainty. When a decision maker is faced with several sources of uncer- tainty simultaneously, he may simplify the problem by ignoring or discounting some of the uncertainty. The decision maker may then choose the most likely alternative for his decision [28].

Table I Continued

(c) Frequency to Imply strength of relationship - The more pairs of co-varying data points that decision makers have, the stronger they think the relationship between variables is [22, 33, 77].

(d) Illusory correlation - Decision makers may erroneously believe certain variables to be correlated. From plots of data they can often "recognize" patterns, even when the points displayed here have no true correlation [67, 73, 77].

Table 2. Biases in Information Processing

Heuristics As Ackoff [1] pointed out, the decision maker's major problem is often not the lack of informa- tion upon which to base a decision but too much information. Generally the decision maker reduces the problem to a problem involving only 3 or 4 crucial factors. The decision is made using a heuristic based on those factors. Unfortunately, these heuristics often have built-in biases [74]. The following are some of the problems with heuristics.

(a) Structuring problems based on experiences. Decision makers may try to find the best fit between the new problems they have and old problems they have had. When a match is found, the decision maker then alters the old decision slightly to reflect the new circumstances [34, 35, 73, 74].

(b) Rules of thumb. If the decision maker has prior experience in solving a problem, he may again use the same rule of thumb since it proved satisfactory last time [39].

(c) Anchoring and adjustment. Prediction is often made by selecting an "anchoring" (e.g., on a mean) value for a set of data and then adjusting the value for the circumstances in the present case. Generally, the adjustments are insufficient [66, 74].

(d) Inconsistency in the use of the heuristic. Bowman [11] theorized that the major economic consequences of decisions did not result from a manager's use of poor heuristics but from the manager's inconsistent use of the heuristics. Numerous studies have shown that heuristics based on regression outperform the decision maker himself [29, 51, 59] and the performance difference, as theorized, has been shown to result from inconsistency [57].

Misunderstanding of the Statistical Properties of Data

(a) Mistaking random variation as a persisting change. Data available to a decision maker often have statistical properties; that is, they have a mean value around which the data points are randomly distributed. When the next observation is a high or low value data point, the decision maker may believe an upward or downward trend is oc- curring. In fact, it is just a random variation and not a persisting change. Often deci- sion makers infer causes for these random variations [41, 67, 72, 73, 77].

(b) Inferring from small samples. The characteristics of small samples are often believed to be representative of the population from which they are drawn. Thus, in data in- ference too much weight is given to small sample results [72, 74].

(c) Gambler's fallacy. In a probabilistic process where each event is independent of the ones preceding it (i.e., it is random), decision makers may erroneously make in- ferences about future events based on the occurrence of past events [34, 41].

(d) Ignoring uncertainty. When a decision maker is faced with several sources of uncer- tainty simultaneously, he may simplify the problem by ignoring or discounting some of the uncertainty. The decision maker may then choose the most likely alternative for his decision [28].

Table I Continued

(c) Frequency to Imply strength of relationship - The more pairs of co-varying data points that decision makers have, the stronger they think the relationship between variables is [22, 33, 77].

(d) Illusory correlation - Decision makers may erroneously believe certain variables to be correlated. From plots of data they can often "recognize" patterns, even when the points displayed here have no true correlation [67, 73, 77].

Table 2. Biases in Information Processing

Heuristics As Ackoff [1] pointed out, the decision maker's major problem is often not the lack of informa- tion upon which to base a decision but too much information. Generally the decision maker reduces the problem to a problem involving only 3 or 4 crucial factors. The decision is made using a heuristic based on those factors. Unfortunately, these heuristics often have built-in biases [74]. The following are some of the problems with heuristics.

(a) Structuring problems based on experiences. Decision makers may try to find the best fit between the new problems they have and old problems they have had. When a match is found, the decision maker then alters the old decision slightly to reflect the new circumstances [34, 35, 73, 74].

(b) Rules of thumb. If the decision maker has prior experience in solving a problem, he may again use the same rule of thumb since it proved satisfactory last time [39].

(c) Anchoring and adjustment. Prediction is often made by selecting an "anchoring" (e.g., on a mean) value for a set of data and then adjusting the value for the circumstances in the present case. Generally, the adjustments are insufficient [66, 74].

(d) Inconsistency in the use of the heuristic. Bowman [11] theorized that the major economic consequences of decisions did not result from a manager's use of poor heuristics but from the manager's inconsistent use of the heuristics. Numerous studies have shown that heuristics based on regression outperform the decision maker himself [29, 51, 59] and the performance difference, as theorized, has been shown to result from inconsistency [57].

Misunderstanding of the Statistical Properties of Data

(a) Mistaking random variation as a persisting change. Data available to a decision maker often have statistical properties; that is, they have a mean value around which the data points are randomly distributed. When the next observation is a high or low value data point, the decision maker may believe an upward or downward trend is oc- curring. In fact, it is just a random variation and not a persisting change. Often deci- sion makers infer causes for these random variations [41, 67, 72, 73, 77].

(b) Inferring from small samples. The characteristics of small samples are often believed to be representative of the population from which they are drawn. Thus, in data in- ference too much weight is given to small sample results [72, 74].

(c) Gambler's fallacy. In a probabilistic process where each event is independent of the ones preceding it (i.e., it is random), decision makers may erroneously make in- ferences about future events based on the occurrence of past events [34, 41].

(d) Ignoring uncertainty. When a decision maker is faced with several sources of uncer- tainty simultaneously, he may simplify the problem by ignoring or discounting some of the uncertainty. The decision maker may then choose the most likely alternative for his decision [28].

Table I Continued

(c) Frequency to Imply strength of relationship - The more pairs of co-varying data points that decision makers have, the stronger they think the relationship between variables is [22, 33, 77].

(d) Illusory correlation - Decision makers may erroneously believe certain variables to be correlated. From plots of data they can often "recognize" patterns, even when the points displayed here have no true correlation [67, 73, 77].

Table 2. Biases in Information Processing

Heuristics As Ackoff [1] pointed out, the decision maker's major problem is often not the lack of informa- tion upon which to base a decision but too much information. Generally the decision maker reduces the problem to a problem involving only 3 or 4 crucial factors. The decision is made using a heuristic based on those factors. Unfortunately, these heuristics often have built-in biases [74]. The following are some of the problems with heuristics.

(a) Structuring problems based on experiences. Decision makers may try to find the best fit between the new problems they have and old problems they have had. When a match is found, the decision maker then alters the old decision slightly to reflect the new circumstances [34, 35, 73, 74].

(b) Rules of thumb. If the decision maker has prior experience in solving a problem, he may again use the same rule of thumb since it proved satisfactory last time [39].

(c) Anchoring and adjustment. Prediction is often made by selecting an "anchoring" (e.g., on a mean) value for a set of data and then adjusting the value for the circumstances in the present case. Generally, the adjustments are insufficient [66, 74].

(d) Inconsistency in the use of the heuristic. Bowman [11] theorized that the major economic consequences of decisions did not result from a manager's use of poor heuristics but from the manager's inconsistent use of the heuristics. Numerous studies have shown that heuristics based on regression outperform the decision maker himself [29, 51, 59] and the performance difference, as theorized, has been shown to result from inconsistency [57].

Misunderstanding of the Statistical Properties of Data

(a) Mistaking random variation as a persisting change. Data available to a decision maker often have statistical properties; that is, they have a mean value around which the data points are randomly distributed. When the next observation is a high or low value data point, the decision maker may believe an upward or downward trend is oc- curring. In fact, it is just a random variation and not a persisting change. Often deci- sion makers infer causes for these random variations [41, 67, 72, 73, 77].

(b) Inferring from small samples. The characteristics of small samples are often believed to be representative of the population from which they are drawn. Thus, in data in- ference too much weight is given to small sample results [72, 74].

(c) Gambler's fallacy. In a probabilistic process where each event is independent of the ones preceding it (i.e., it is random), decision makers may erroneously make in- ferences about future events based on the occurrence of past events [34, 41].

(d) Ignoring uncertainty. When a decision maker is faced with several sources of uncer- tainty simultaneously, he may simplify the problem by ignoring or discounting some of the uncertainty. The decision maker may then choose the most likely alternative for his decision [28].

Table I Continued

(c) Frequency to Imply strength of relationship - The more pairs of co-varying data points that decision makers have, the stronger they think the relationship between variables is [22, 33, 77].

(d) Illusory correlation - Decision makers may erroneously believe certain variables to be correlated. From plots of data they can often "recognize" patterns, even when the points displayed here have no true correlation [67, 73, 77].

Table 2. Biases in Information Processing

Heuristics As Ackoff [1] pointed out, the decision maker's major problem is often not the lack of informa- tion upon which to base a decision but too much information. Generally the decision maker reduces the problem to a problem involving only 3 or 4 crucial factors. The decision is made using a heuristic based on those factors. Unfortunately, these heuristics often have built-in biases [74]. The following are some of the problems with heuristics.

(a) Structuring problems based on experiences. Decision makers may try to find the best fit between the new problems they have and old problems they have had. When a match is found, the decision maker then alters the old decision slightly to reflect the new circumstances [34, 35, 73, 74].

(b) Rules of thumb. If the decision maker has prior experience in solving a problem, he may again use the same rule of thumb since it proved satisfactory last time [39].

(c) Anchoring and adjustment. Prediction is often made by selecting an "anchoring" (e.g., on a mean) value for a set of data and then adjusting the value for the circumstances in the present case. Generally, the adjustments are insufficient [66, 74].

(d) Inconsistency in the use of the heuristic. Bowman [11] theorized that the major economic consequences of decisions did not result from a manager's use of poor heuristics but from the manager's inconsistent use of the heuristics. Numerous studies have shown that heuristics based on regression outperform the decision maker himself [29, 51, 59] and the performance difference, as theorized, has been shown to result from inconsistency [57].

Misunderstanding of the Statistical Properties of Data

(a) Mistaking random variation as a persisting change. Data available to a decision maker often have statistical properties; that is, they have a mean value around which the data points are randomly distributed. When the next observation is a high or low value data point, the decision maker may believe an upward or downward trend is oc- curring. In fact, it is just a random variation and not a persisting change. Often deci- sion makers infer causes for these random variations [41, 67, 72, 73, 77].

(b) Inferring from small samples. The characteristics of small samples are often believed to be representative of the population from which they are drawn. Thus, in data in- ference too much weight is given to small sample results [72, 74].

(c) Gambler's fallacy. In a probabilistic process where each event is independent of the ones preceding it (i.e., it is random), decision makers may erroneously make in- ferences about future events based on the occurrence of past events [34, 41].

(d) Ignoring uncertainty. When a decision maker is faced with several sources of uncer- tainty simultaneously, he may simplify the problem by ignoring or discounting some of the uncertainty. The decision maker may then choose the most likely alternative for his decision [28].

Table I Continued

(c) Frequency to Imply strength of relationship - The more pairs of co-varying data points that decision makers have, the stronger they think the relationship between variables is [22, 33, 77].

(d) Illusory correlation - Decision makers may erroneously believe certain variables to be correlated. From plots of data they can often "recognize" patterns, even when the points displayed here have no true correlation [67, 73, 77].

Table 2. Biases in Information Processing

Heuristics As Ackoff [1] pointed out, the decision maker's major problem is often not the lack of informa- tion upon which to base a decision but too much information. Generally the decision maker reduces the problem to a problem involving only 3 or 4 crucial factors. The decision is made using a heuristic based on those factors. Unfortunately, these heuristics often have built-in biases [74]. The following are some of the problems with heuristics.

(a) Structuring problems based on experiences. Decision makers may try to find the best fit between the new problems they have and old problems they have had. When a match is found, the decision maker then alters the old decision slightly to reflect the new circumstances [34, 35, 73, 74].

(b) Rules of thumb. If the decision maker has prior experience in solving a problem, he may again use the same rule of thumb since it proved satisfactory last time [39].

(c) Anchoring and adjustment. Prediction is often made by selecting an "anchoring" (e.g., on a mean) value for a set of data and then adjusting the value for the circumstances in the present case. Generally, the adjustments are insufficient [66, 74].

(d) Inconsistency in the use of the heuristic. Bowman [11] theorized that the major economic consequences of decisions did not result from a manager's use of poor heuristics but from the manager's inconsistent use of the heuristics. Numerous studies have shown that heuristics based on regression outperform the decision maker himself [29, 51, 59] and the performance difference, as theorized, has been shown to result from inconsistency [57].

Misunderstanding of the Statistical Properties of Data

(a) Mistaking random variation as a persisting change. Data available to a decision maker often have statistical properties; that is, they have a mean value around which the data points are randomly distributed. When the next observation is a high or low value data point, the decision maker may believe an upward or downward trend is oc- curring. In fact, it is just a random variation and not a persisting change. Often deci- sion makers infer causes for these random variations [41, 67, 72, 73, 77].

(b) Inferring from small samples. The characteristics of small samples are often believed to be representative of the population from which they are drawn. Thus, in data inference, too much weight is given to small sample results [72, 74].

(c) Gambler's fallacy. In a probabilistic process where each event is independent of the ones preceding it (i.e., it is random), decision makers may erroneously make inferences about future events based on the occurrence of past events [34, 41].

(d) Ignoring uncertainty. When a decision maker is faced with several sources of uncertainty simultaneously, he may simplify the problem by ignoring or discounting some of the uncertainty. The decision maker may then choose the most likely alternative for his decision [28].
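Point (a) is easy to demonstrate. The sketch below (Python; the mean, spread, and series length are arbitrary) generates a series that is nothing but random variation around a fixed mean and counts how often it produces runs that look like trends:

```python
import numpy as np

rng = np.random.default_rng(0)

# A stationary series: a fixed mean of 100 with purely random variation.
series = 100 + rng.normal(scale=5, size=24)  # e.g., 24 monthly readings

# Count how often three consecutive increases occur by chance alone;
# a decision maker may read any such run as an "upward trend."
rises = np.diff(series) > 0
runs = sum(all(rises[i:i + 3]) for i in range(len(rises) - 2))
print(f"runs of three consecutive rises in pure noise: {runs}")
```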


Table 2 Continued

Limited Search Strategies

Decision makers have to decide when to stop gathering data and begin the analysis. They often use truncated search strategies which prematurely exclude potentially relevant information [61]. The use of truncated search strategies increases as task complexity increases [54].

Conservatism in Decision Making

When new information arrives, the decision maker will tend to revise his probability estimates in the direction prescribed by Bayes' theorem, but usually the revision is too small [52]; a worked example follows this table. This conservatism increases with message informativeness [66] and is subject to the primacy effect [47].

Inability to Extrapolate Growth Processes

When exponential growth is occurring, the decision maker may underestimate the outcomes of the growth process [71, 75]. This underestimation is not improved by presenting more data points [76].

Note: Major reviews of cognitive bias research are given in [6, 21, 56, 60, 65].
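The conservatism finding is clearest against the normative benchmark. The worked example below (Python; the urn proportions echo the classic bookbag-and-poker-chip experiments, but the exact numbers are illustrative) computes the revision Bayes' theorem prescribes after a single observation:

```python
from fractions import Fraction

# Two hypotheses, initially equally likely: urn A (70% red chips)
# or urn B (30% red chips) is the one being sampled.
prior_a = Fraction(1, 2)
p_red_given_a = Fraction(7, 10)
p_red_given_b = Fraction(3, 10)

# One red chip is drawn. Bayes' theorem gives P(A | red).
posterior_a = (p_red_given_a * prior_a) / (
    p_red_given_a * prior_a + p_red_given_b * (1 - prior_a)
)
print(posterior_a)  # 7/10: the prescribed revision, from 0.50 to 0.70
```

Experimental subjects typically revise in the right direction but stop well short of 0.70; that shortfall is the conservatism described above.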


that decision makers can be classified in terms of their right hemisphere orientation (intuitive, sensory, feeling) versus their left hemisphere orientation (analytic and systematic). The ideal manager could function in either mode as required by the situation. In many cases the ideal would be a balance between the hemispheres. Although the cognitive style concept has been expanded to incorporate dimensions other than hemispheric lateralization, the concept still suggests that non-optimal decision making can be traced to neurophysiological factors other than brain wiring limitations.

The neural underpinning of cognitive style is not clear; the balance between left and right may be hard-wired, or it may be under conscious control. In any case, the theory suggests the relative balance will affect the decisions actually made. The inconsistencies in cognitive style research, however, confound application of the theory [27, 32].

At this point, it is important also to question the paradigms, methods, and results of cognitive bias research:

1. Are biases an artifact of the way in which information is presented to decision makers? Several recent studies indicate that cognitive bias can be ameliorated by presenting information in a form that is congruent with the type of cognitive processing required by a task and with the manner in which humans seem to store and process information [15, 31, 48, 56].

2. Are the normative models chosen to serve as the baseline in judging decisions non-optimal truly appropriate? Wright and Murphy [83] have shown that, while decision makers may not judge correlations well when compared to judgments dictated by normative models such as Pearson's r, they may judge correlations in noisy data (i.e., with outliers) much as robust estimates of correlation (e.g., weighted local linear least squares) would measure them; a brief illustration follows this list.

3. Are biases indeed rational in the context of organizational cultures? James March and colleagues [23, 46] have eloquently argued that biased, seemingly irrational decision maker behavior is sensible, even rational, given the nature of organizational life.
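Wright and Murphy's point about noisy data can be illustrated with a toy comparison. In the sketch below (Python with NumPy and SciPy; Spearman's rank correlation stands in for the robust estimator they used, and the data are fabricated), a single outlier moves Pearson's r substantially while the rank-based estimate changes little:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# A clean, moderately strong linear relationship.
x = rng.normal(size=30)
y = 0.8 * x + rng.normal(scale=0.6, size=30)

# The same data with one gross outlier added.
x_noisy = np.append(x, 6.0)
y_noisy = np.append(y, -6.0)

print(f"Pearson r, clean: {stats.pearsonr(x, y)[0]:+.2f}")
print(f"Pearson r, noisy: {stats.pearsonr(x_noisy, y_noisy)[0]:+.2f}")
print(f"Spearman rho, clean: {stats.spearmanr(x, y)[0]:+.2f}")
print(f"Spearman rho, noisy: {stats.spearmanr(x_noisy, y_noisy)[0]:+.2f}")
```

A decision maker whose judgments track the rank-based figures is behaving reasonably with respect to a robust standard even though he appears biased when judged against Pearson's r.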

While the above three questions may have a fundamental impact on behavioral decision making research, they do not particularly confound the exploration of intelligent DSS - in this case, of an artificially intelligent statistician (AIS). First, it is a primary goal of an AIS to present information in a way congruent with human cognitive processes; further, there is a fundamental difference between constraining the information made available to a decision maker, as is done in a laboratory study, and making various information alternatives available, as an AIS would do. Second, the number of statistical tools available to



Sidebar 3. What Is an Expert System?

An expert system incorporates the knowledge of expert human decision makers and attempts to induce or deduce new knowledge using the system's a priori knowledge, inferred knowledge, and real world data. One key in expert system research, then, is to devise a scheme for representing knowledge that is appropriate for eliciting and encoding human knowledge and that is also expedient for use in automated induction and deduction.

The production rule is one popular approach among the many proposed knowledge representation schemes. In its simplest form, a production rule is "IF condition(s) THEN conclusion(s)." A group of production rules are related in that the conclusion(s) of one rule share variables with the condition(s) of another rule - for example: (1) IF runny nose THEN cold virus; (2) IF cold virus THEN bed rest. Given that rules so overlap, the set of rules, termed a knowledge base (KB), can be treated as a tree. Taken as a whole, a knowledge base is a contingency model. Given the KB, an inference engine traverses the logical rule tree, asking the user for values of the conditions or conclusions when needed to choose a path through the tree.

An expert system may try to prove a hypothesis and move backwards in the tree - e.g., "I will try to prove you need bed rest." It may also try to surmise something devoid of an initial hypothesis - e.g., "Do you have a runny nose?" It has proven effective for expert systems to interweave the goal-driven (as in the former example) and data-driven (as in the latter example) approaches (also referred to as backward and forward chaining). These "dual-driven" expert systems better emulate expert decision makers, who have initial hunches, ask directed questions, and try to develop new hunches. In essence, an inference engine is a contingency model analyzer. If the inference engine keeps track of where it has gone in the rule tree, then it can backtrack to explain its train of thought.
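The sidebar's example can be made concrete in a few lines. The sketch below (written in Python rather than LISP or PROLOG purely for brevity; it implements only the core traversal, not any particular commercial engine) forward chains from known facts and backward chains from a goal over the sidebar's two rules:

```python
# Each production rule maps a set of conditions to one conclusion.
RULES = [
    ({"runny nose"}, "cold virus"),
    ({"cold virus"}, "bed rest"),
]

def forward_chain(facts):
    """Data-driven inference: fire rules until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Goal-driven inference: prove a goal by recursing on rule conditions."""
    if goal in facts:
        return True
    return any(
        conclusion == goal and all(backward_chain(c, facts) for c in conditions)
        for conditions, conclusion in RULES
    )

print(forward_chain({"runny nose"}))               # adds cold virus, bed rest
print(backward_chain("bed rest", {"runny nose"}))  # True
```

A production-quality inference engine would add conflict resolution, uncertainty handling, and the record keeping needed to explain its train of thought; the sketch shows only the tree traversal the sidebar describes.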

If the rules in the KB are logically treated as data, then it is possible to devise an inference engine that can analyze an arbitrary set of rules. Such an expert system generator is, in principle, no different than database-oriented application generators. An application generator allows users to define data models and then supports database manipulation based upon the definitions. An expert system generator allows users to define IF ... THEN ... rules and then supports contingency analysis based upon the rule definitions.

Although experimental expert systems are typically implemented in LISP or PROLOG (which are well suited for prototype development), an expert system can be implemented in any general purpose programming language. Indeed, for efficiency reasons, commercial systems are often implemented in languages such as Fortran or C.

Business applications include expert systems to support production plant managers, loan officers, budget and portfolio managers, information resource managers, auditors, and collection agents, as well as many day-to-day tasks such as scheduling meetings. The Proceedings of the International Joint Conference on Artificial Intelligence is perhaps the best source for information on recent AI research and applications (albeit there is much to wade through in these two-volume sets, which are published in odd-numbered years). Winston [82] gives a readable, if a bit outdated, treatment of artificial intelligence.

Sidebar 3. What Is an Expert System?

An expert system Incorporates the knowledge of expert human decision makers and at- tempts to induce or deduce new knowledge using the system's a priorl knowledge, Inferred knowledge and real world data. One key In expert system research, then, Is to devise a scheme for representing knowledge that is appropriate for eliciting and encoding human knowledge and that Is also expedient for use in automated Induction and deduction.

The production rule is one popular approach among the many proposed knowledge represen- tation schemes. In its simplest form, a production rule is "IF condition(s) THEN conclu- sion(s)." A group of production rules are related in that the conclusion(s) of one rule shares variables with the conditlon(s) of another rule - for example: (1) IF runny nose THEN cold virus, (2) IF cold virus THEN bed rest. Given that rules so overlap, the set of rules, termed a knowledge base (KB), can be treated as a tree. Taken as a whole, a knowledge base is a con- tingency model. Given the KB, an inference engine traverses the logical rule tree asking a user for values of the conditions or conclusions when needed to choose a path through the tree.

An expert system may try to prove a hypothesis and move backwards in the tree - e.g., "I will try to prove you need bed rest." It may also try to surmise something devoid of an Initial hypo- thesis - e.g., "Do you have a runny nose?" It has proven effective for expert systems to inter- weave the goal-driven (as In the former example) and data-driven (as In the latter example) ap- proaches (also referred to as backward and forward chaining). These "dual-driven" expert systems better emulate expert decision makers who have initial hunches, ask directed ques- tions, and try to develop new hunches. In essence, an inference engine is a contingency model analyzer. If the inference engine keeps track of where it has gone in the rule tree, then it can backtrack to explain its train of thought.

If the rules In the KB are logically treated as data, then it is possible to devise an inference engine that can analyze an arbitrary set of rules. Such an expert system generator is, in prin- ciple, no different than database-oriented application generators. An application generator allows users to define data models and then supports database manipulation based upon the definitions. An expert system generator allows users to define IF... THEN ... rules and then supports contingency analysis based upon the rule definitions.

Although experimental expert systems are typically implemented in LISP or PROLOG (which are well suited for prototype development) an expert system can be implemented In- any general purpose programming language. Indeed, for efficiency reasons, commercial systems are often implemented in languages such as Fortran or C.

Business applications include expert systems to support production plant managers, loan of- ficers, budget and portfolio managers, information resource managers, auditors and collec- tion agents, as well as many day-to-day tasks such as scheduling meetings. The Proceedings of the International Joint Conference on Artifical Intelligence Is perhaps the best source for Information on recent Al research and applications (albeit there is much to wade through in these two volume sets, which are published in odd numbered years). Winston [82] gives a readable, if a bit outdated, treatment of artificial intelligence.

Sidebar 3. What Is an Expert System?

An expert system Incorporates the knowledge of expert human decision makers and at- tempts to induce or deduce new knowledge using the system's a priorl knowledge, Inferred knowledge and real world data. One key In expert system research, then, Is to devise a scheme for representing knowledge that is appropriate for eliciting and encoding human knowledge and that Is also expedient for use in automated Induction and deduction.

The production rule is one popular approach among the many proposed knowledge represen- tation schemes. In its simplest form, a production rule is "IF condition(s) THEN conclu- sion(s)." A group of production rules are related in that the conclusion(s) of one rule shares variables with the conditlon(s) of another rule - for example: (1) IF runny nose THEN cold virus, (2) IF cold virus THEN bed rest. Given that rules so overlap, the set of rules, termed a knowledge base (KB), can be treated as a tree. Taken as a whole, a knowledge base is a con- tingency model. Given the KB, an inference engine traverses the logical rule tree asking a user for values of the conditions or conclusions when needed to choose a path through the tree.

An expert system may try to prove a hypothesis and move backwards in the tree - e.g., "I will try to prove you need bed rest." It may also try to surmise something devoid of an Initial hypo- thesis - e.g., "Do you have a runny nose?" It has proven effective for expert systems to inter- weave the goal-driven (as In the former example) and data-driven (as In the latter example) ap- proaches (also referred to as backward and forward chaining). These "dual-driven" expert systems better emulate expert decision makers who have initial hunches, ask directed ques- tions, and try to develop new hunches. In essence, an inference engine is a contingency model analyzer. If the inference engine keeps track of where it has gone in the rule tree, then it can backtrack to explain its train of thought.

If the rules In the KB are logically treated as data, then it is possible to devise an inference engine that can analyze an arbitrary set of rules. Such an expert system generator is, in prin- ciple, no different than database-oriented application generators. An application generator allows users to define data models and then supports database manipulation based upon the definitions. An expert system generator allows users to define IF... THEN ... rules and then supports contingency analysis based upon the rule definitions.

Although experimental expert systems are typically implemented in LISP or PROLOG (which are well suited for prototype development) an expert system can be implemented In- any general purpose programming language. Indeed, for efficiency reasons, commercial systems are often implemented in languages such as Fortran or C.

Business applications include expert systems to support production plant managers, loan of- ficers, budget and portfolio managers, information resource managers, auditors and collec- tion agents, as well as many day-to-day tasks such as scheduling meetings. The Proceedings of the International Joint Conference on Artifical Intelligence Is perhaps the best source for Information on recent Al research and applications (albeit there is much to wade through in these two volume sets, which are published in odd numbered years). Winston [82] gives a readable, if a bit outdated, treatment of artificial intelligence.

Sidebar 3. What Is an Expert System?

An expert system Incorporates the knowledge of expert human decision makers and at- tempts to induce or deduce new knowledge using the system's a priorl knowledge, Inferred knowledge and real world data. One key In expert system research, then, Is to devise a scheme for representing knowledge that is appropriate for eliciting and encoding human knowledge and that Is also expedient for use in automated Induction and deduction.

The production rule is one popular approach among the many proposed knowledge represen- tation schemes. In its simplest form, a production rule is "IF condition(s) THEN conclu- sion(s)." A group of production rules are related in that the conclusion(s) of one rule shares variables with the conditlon(s) of another rule - for example: (1) IF runny nose THEN cold virus, (2) IF cold virus THEN bed rest. Given that rules so overlap, the set of rules, termed a knowledge base (KB), can be treated as a tree. Taken as a whole, a knowledge base is a con- tingency model. Given the KB, an inference engine traverses the logical rule tree asking a user for values of the conditions or conclusions when needed to choose a path through the tree.

An expert system may try to prove a hypothesis and move backwards in the tree - e.g., "I will try to prove you need bed rest." It may also try to surmise something devoid of an Initial hypo- thesis - e.g., "Do you have a runny nose?" It has proven effective for expert systems to inter- weave the goal-driven (as In the former example) and data-driven (as In the latter example) ap- proaches (also referred to as backward and forward chaining). These "dual-driven" expert systems better emulate expert decision makers who have initial hunches, ask directed ques- tions, and try to develop new hunches. In essence, an inference engine is a contingency model analyzer. If the inference engine keeps track of where it has gone in the rule tree, then it can backtrack to explain its train of thought.

If the rules In the KB are logically treated as data, then it is possible to devise an inference engine that can analyze an arbitrary set of rules. Such an expert system generator is, in prin- ciple, no different than database-oriented application generators. An application generator allows users to define data models and then supports database manipulation based upon the definitions. An expert system generator allows users to define IF... THEN ... rules and then supports contingency analysis based upon the rule definitions.

Although experimental expert systems are typically implemented in LISP or PROLOG (which are well suited for prototype development) an expert system can be implemented In- any general purpose programming language. Indeed, for efficiency reasons, commercial systems are often implemented in languages such as Fortran or C.

Business applications include expert systems to support production plant managers, loan of- ficers, budget and portfolio managers, information resource managers, auditors and collec- tion agents, as well as many day-to-day tasks such as scheduling meetings. The Proceedings of the International Joint Conference on Artifical Intelligence Is perhaps the best source for Information on recent Al research and applications (albeit there is much to wade through in these two volume sets, which are published in odd numbered years). Winston [82] gives a readable, if a bit outdated, treatment of artificial intelligence.

Sidebar 3. What Is an Expert System?

An expert system Incorporates the knowledge of expert human decision makers and at- tempts to induce or deduce new knowledge using the system's a priorl knowledge, Inferred knowledge and real world data. One key In expert system research, then, Is to devise a scheme for representing knowledge that is appropriate for eliciting and encoding human knowledge and that Is also expedient for use in automated Induction and deduction.

The production rule is one popular approach among the many proposed knowledge represen- tation schemes. In its simplest form, a production rule is "IF condition(s) THEN conclu- sion(s)." A group of production rules are related in that the conclusion(s) of one rule shares variables with the conditlon(s) of another rule - for example: (1) IF runny nose THEN cold virus, (2) IF cold virus THEN bed rest. Given that rules so overlap, the set of rules, termed a knowledge base (KB), can be treated as a tree. Taken as a whole, a knowledge base is a con- tingency model. Given the KB, an inference engine traverses the logical rule tree asking a user for values of the conditions or conclusions when needed to choose a path through the tree.

An expert system may try to prove a hypothesis and move backwards in the tree - e.g., "I will try to prove you need bed rest." It may also try to surmise something devoid of an Initial hypo- thesis - e.g., "Do you have a runny nose?" It has proven effective for expert systems to inter- weave the goal-driven (as In the former example) and data-driven (as In the latter example) ap- proaches (also referred to as backward and forward chaining). These "dual-driven" expert systems better emulate expert decision makers who have initial hunches, ask directed ques- tions, and try to develop new hunches. In essence, an inference engine is a contingency model analyzer. If the inference engine keeps track of where it has gone in the rule tree, then it can backtrack to explain its train of thought.

If the rules In the KB are logically treated as data, then it is possible to devise an inference engine that can analyze an arbitrary set of rules. Such an expert system generator is, in prin- ciple, no different than database-oriented application generators. An application generator allows users to define data models and then supports database manipulation based upon the definitions. An expert system generator allows users to define IF... THEN ... rules and then supports contingency analysis based upon the rule definitions.

Although experimental expert systems are typically implemented in LISP or PROLOG (which are well suited for prototype development) an expert system can be implemented In- any general purpose programming language. Indeed, for efficiency reasons, commercial systems are often implemented in languages such as Fortran or C.

Business applications include expert systems to support production plant managers, loan of- ficers, budget and portfolio managers, information resource managers, auditors and collec- tion agents, as well as many day-to-day tasks such as scheduling meetings. The Proceedings of the International Joint Conference on Artifical Intelligence Is perhaps the best source for Information on recent Al research and applications (albeit there is much to wade through in these two volume sets, which are published in odd numbered years). Winston [82] gives a readable, if a bit outdated, treatment of artificial intelligence.

Sidebar 3. What Is an Expert System?

An expert system Incorporates the knowledge of expert human decision makers and at- tempts to induce or deduce new knowledge using the system's a priorl knowledge, Inferred knowledge and real world data. One key In expert system research, then, Is to devise a scheme for representing knowledge that is appropriate for eliciting and encoding human knowledge and that Is also expedient for use in automated Induction and deduction.

The production rule is one popular approach among the many proposed knowledge represen- tation schemes. In its simplest form, a production rule is "IF condition(s) THEN conclu- sion(s)." A group of production rules are related in that the conclusion(s) of one rule shares variables with the conditlon(s) of another rule - for example: (1) IF runny nose THEN cold virus, (2) IF cold virus THEN bed rest. Given that rules so overlap, the set of rules, termed a knowledge base (KB), can be treated as a tree. Taken as a whole, a knowledge base is a con- tingency model. Given the KB, an inference engine traverses the logical rule tree asking a user for values of the conditions or conclusions when needed to choose a path through the tree.

An expert system may try to prove a hypothesis and move backwards in the tree - e.g., "I will try to prove you need bed rest." It may also try to surmise something devoid of an Initial hypo- thesis - e.g., "Do you have a runny nose?" It has proven effective for expert systems to inter- weave the goal-driven (as In the former example) and data-driven (as In the latter example) ap- proaches (also referred to as backward and forward chaining). These "dual-driven" expert systems better emulate expert decision makers who have initial hunches, ask directed ques- tions, and try to develop new hunches. In essence, an inference engine is a contingency model analyzer. If the inference engine keeps track of where it has gone in the rule tree, then it can backtrack to explain its train of thought.

If the rules In the KB are logically treated as data, then it is possible to devise an inference engine that can analyze an arbitrary set of rules. Such an expert system generator is, in prin- ciple, no different than database-oriented application generators. An application generator allows users to define data models and then supports database manipulation based upon the definitions. An expert system generator allows users to define IF... THEN ... rules and then supports contingency analysis based upon the rule definitions.

Although experimental expert systems are typically implemented in LISP or PROLOG (which are well suited for prototype development) an expert system can be implemented In- any general purpose programming language. Indeed, for efficiency reasons, commercial systems are often implemented in languages such as Fortran or C.

Business applications include expert systems to support production plant managers, loan of- ficers, budget and portfolio managers, information resource managers, auditors and collec- tion agents, as well as many day-to-day tasks such as scheduling meetings. The Proceedings of the International Joint Conference on Artifical Intelligence Is perhaps the best source for Information on recent Al research and applications (albeit there is much to wade through in these two volume sets, which are published in odd numbered years). Winston [82] gives a readable, if a bit outdated, treatment of artificial intelligence.

Sidebar 3. What Is an Expert System?

An expert system Incorporates the knowledge of expert human decision makers and at- tempts to induce or deduce new knowledge using the system's a priorl knowledge, Inferred knowledge and real world data. One key In expert system research, then, Is to devise a scheme for representing knowledge that is appropriate for eliciting and encoding human knowledge and that Is also expedient for use in automated Induction and deduction.

The production rule is one popular approach among the many proposed knowledge represen- tation schemes. In its simplest form, a production rule is "IF condition(s) THEN conclu- sion(s)." A group of production rules are related in that the conclusion(s) of one rule shares variables with the conditlon(s) of another rule - for example: (1) IF runny nose THEN cold virus, (2) IF cold virus THEN bed rest. Given that rules so overlap, the set of rules, termed a knowledge base (KB), can be treated as a tree. Taken as a whole, a knowledge base is a con- tingency model. Given the KB, an inference engine traverses the logical rule tree asking a user for values of the conditions or conclusions when needed to choose a path through the tree.

An expert system may try to prove a hypothesis and move backwards in the tree - e.g., "I will try to prove you need bed rest." It may also try to surmise something devoid of an Initial hypo- thesis - e.g., "Do you have a runny nose?" It has proven effective for expert systems to inter- weave the goal-driven (as In the former example) and data-driven (as In the latter example) ap- proaches (also referred to as backward and forward chaining). These "dual-driven" expert systems better emulate expert decision makers who have initial hunches, ask directed ques- tions, and try to develop new hunches. In essence, an inference engine is a contingency model analyzer. If the inference engine keeps track of where it has gone in the rule tree, then it can backtrack to explain its train of thought.

If the rules In the KB are logically treated as data, then it is possible to devise an inference engine that can analyze an arbitrary set of rules. Such an expert system generator is, in prin- ciple, no different than database-oriented application generators. An application generator allows users to define data models and then supports database manipulation based upon the definitions. An expert system generator allows users to define IF... THEN ... rules and then supports contingency analysis based upon the rule definitions.

Although experimental expert systems are typically implemented in LISP or PROLOG (which are well suited for prototype development) an expert system can be implemented In- any general purpose programming language. Indeed, for efficiency reasons, commercial systems are often implemented in languages such as Fortran or C.

Business applications include expert systems to support production plant managers, loan of- ficers, budget and portfolio managers, information resource managers, auditors and collec- tion agents, as well as many day-to-day tasks such as scheduling meetings. The Proceedings of the International Joint Conference on Artifical Intelligence Is perhaps the best source for Information on recent Al research and applications (albeit there is much to wade through in these two volume sets, which are published in odd numbered years). Winston [82] gives a readable, if a bit outdated, treatment of artificial intelligence.

Sidebar 3. What Is an Expert System?

An expert system Incorporates the knowledge of expert human decision makers and at- tempts to induce or deduce new knowledge using the system's a priorl knowledge, Inferred knowledge and real world data. One key In expert system research, then, Is to devise a scheme for representing knowledge that is appropriate for eliciting and encoding human knowledge and that Is also expedient for use in automated Induction and deduction.

The production rule is one popular approach among the many proposed knowledge representation schemes. In its simplest form, a production rule is "IF condition(s) THEN conclusion(s)." Production rules are related in that the conclusion(s) of one rule share variables with the condition(s) of another rule - for example: (1) IF runny nose THEN cold virus, (2) IF cold virus THEN bed rest. Given that rules so overlap, the set of rules, termed a knowledge base (KB), can be treated as a tree. Taken as a whole, a knowledge base is a contingency model. Given the KB, an inference engine traverses the logical rule tree, asking the user for values of conditions or conclusions when they are needed to choose a path through the tree.

An expert system may try to prove a hypothesis and move backwards through the tree - e.g., "I will try to prove you need bed rest." It may also try to surmise something without an initial hypothesis - e.g., "Do you have a runny nose?" It has proven effective for expert systems to interweave the goal-driven (as in the former example) and data-driven (as in the latter example) approaches, also referred to as backward and forward chaining. These "dual-driven" expert systems better emulate expert decision makers, who have initial hunches, ask directed questions, and try to develop new hunches. In essence, an inference engine is a contingency model analyzer. If the inference engine keeps track of where it has gone in the rule tree, then it can backtrack to explain its train of thought.
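
As an editor's illustration, the following present-day sketch (in Python, which of course postdates the article; the toy rule set is the sidebar's cold-virus example) shows rules stored as plain data, a data-driven pass, a goal-driven pass, and a recorded trace the engine can replay to explain its train of thought. Because the rules are ordinary data, the same engine runs over any rule set a user defines, which is the expert system generator idea discussed below.

# Rules are plain data: (set of conditions, conclusion).
RULES = [
    ({"runny nose"}, "cold virus"),
    ({"cold virus"}, "bed rest"),
]

def forward_chain(facts):
    """Data-driven: derive every conclusion supported by the known facts."""
    derived, trace = set(facts), []
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                trace.append((conditions, conclusion))  # remember the path taken
                changed = True
    return derived, trace

def backward_chain(goal, facts):
    """Goal-driven: try to prove the goal, asking the user only when needed."""
    if goal in facts:
        return True
    for conditions, conclusion in RULES:
        if conclusion == goal and all(backward_chain(c, facts) for c in conditions):
            return True
    # No rule concludes the goal, so ask the user directly.
    return input(f"Is '{goal}' true? (y/n) ").strip().lower() == "y"

facts, trace = forward_chain({"runny nose"})
print(facts)                            # {'runny nose', 'cold virus', 'bed rest'}
for conditions, conclusion in trace:    # backtrack to explain the reasoning
    print("IF", ", ".join(conditions), "THEN", conclusion)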

If the rules in the KB are logically treated as data, then it is possible to devise an inference engine that can analyze an arbitrary set of rules. Such an expert system generator is, in principle, no different from a database-oriented application generator. An application generator allows users to define data models and then supports database manipulation based upon those definitions. An expert system generator allows users to define IF ... THEN ... rules and then supports contingency analysis based upon the rule definitions.

Although experimental expert systems are typically implemented in LISP or PROLOG (which are well suited for prototype development), an expert system can be implemented in any general-purpose programming language. Indeed, for efficiency reasons, commercial systems are often implemented in languages such as Fortran or C.

Business applications include expert systems to support production plant managers, loan officers, budget and portfolio managers, information resource managers, auditors, and collection agents, as well as many day-to-day tasks such as scheduling meetings. The Proceedings of the International Joint Conference on Artificial Intelligence is perhaps the best source for information on recent AI research and applications (albeit there is much to wade through in these two-volume sets, which are published in odd-numbered years). Winston [82] gives a readable, if somewhat dated, treatment of artificial intelligence.

decision makers is large, and many of these tools are not well understood. An AIS will simply help a decision maker choose appropriate tools while also helping to avoid the most common data manipulation and interpretation errors. Third, the AIS we outline is not intended to address the use of information in an organizational, cultural sense. In short, development of an AIS can draw upon the stable aspects of the cognitive bias literature and statistical theory. It embodies the basic philosophy of DSS by offering support and consultation in statistical analysis.

Moving to an Intelligent DSS for Statistical Analysis

There are numerous good statistical packages available (e.g., SAS, SPSS, TSP). In the current state of the art, the decision maker (1) provides the data, (2) chooses an appropriate statistical technique and gives the analysis commands, and (3) interprets the results. Steps (2) and (3) are repeated until the decision maker is satisfied that he understands the data insofar as is possible using the available tools. In the analysis the human statistician makes judgements about which tests to use and how to conduct those tests properly. Rules for these judgements are not difficult to formalize (see Andrews et al. [3]).

Techniques in artificial intelligence, particularly expert systems, are readily applicable here. (Sidebar 3 is a brief overview of expert systems.) This section illustrates the incorporation of expert system techniques in a statistical DSS using decision making scenarios that contrast unintelligent and intelligent support. Following the cases, the next section gives a general design for a proposed AIS, sketches a hypothetical session using the AIS, and outlines stumbling blocks to commercial implementation.

An AIS will provide direct support for each of the three steps given above. Steps 1 and 2 involve secondary decisions, that is, decisions about which data and analysis tools are appropriate. Step 3 relates to the primary decision, in which the results of the analysis tools are interpreted and used to determine a real-world action.

Following are four scenarios in which only passive support is offered by the DSS. After that, the four scenarios are revisited to illustrate the potential of an AIS. Finally, a fifth scenario is used to discuss the treatment of ill-structured problems.

Case 1 - In assessing and predicting sales performance, the DSS presents the decision maker with a graph of time series data. However, the decision maker may anchor on past data and adjust to predict the future, may see patterns in data which are due to randomness, and may see more variability in the data than really exists.

Case 2 - The decision maker wants to compare the performance of salespeople in Chicago and Boston. He has the DSS create histograms of these data from both locations; he concludes that the salespeople in Chicago are performing better.

Case 3 - The manager wants to determine what is causing the sales problems in Boston. To do this, he has the DSS extract data on many predictive factors from the database and display them. After mentally sorting through these factors, he concludes that it "all boils down to the sales commission structure in use in Boston."

Case 4 - The decision maker asks the DSS to predict sales using any of 10 environmental factors. The resulting regression has three independent variables; one of them is the log of the number of personal computers sold nationwide.

In each of the four cases, the DSS has correctly done what was requested, but the decision maker could be misled by the results presented. In case 1, the presentation form is technically correct, but the decision maker could misinterpret the resulting display. In case 2, the DSS histograms were correct, but no support was given for comparing the two groups. In case 3, the DSS gives technically correct support in displaying the factors but does not extend help to modeling the sales problem; this leads the manager to use a simplistic heuristic. In case 4, a technically correct stepwise regression has been performed, but no support is provided to help the decision maker understand the logic of the process.

In all four cases, the DSS can be expanded to provide more intelligent support. The additional requirements are as follows:

Case 1 - In a graphics display of time series data, a good design will make the default display start at zero on the Y axis. This helps avoid the distortion in "perceived data variability" (see Table 1) that results if the Y axis is shortened to zoom in on the data. Trend line projection tools should be automatically invoked to get a better prediction than the conservative "anchor and adjust" heuristic. Automatic tests of the statistical significance of the trend and any seasonality should also be displayed. This might reduce the decision maker's tendency to see patterns in random data.

Implementing the display defaults can easily be done with an AIS or even a "dumb" system like SPSS. (Interestingly, SPSS defaults to the bias-creating zoomed display.) Invoking a trend line analysis presents more of a problem, since not every graphic display of data is appropriate for trend analysis (even if seasonality is built in). Normally, decision makers invoke a trend line if the data "appears right" for projecting a trend. In an AIS, a heuristic can be implemented to invoke the trend line. With trend lines, or any pattern in time series data, the AIS must automatically calculate, present, and interpret statistical tests of the significance of the fit. This can help decision makers avoid overvaluing patterns which are not statistically significant.
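
As an editor's illustration, a present-day sketch of these two defaults might look like the following (Python with matplotlib and scipy, which are of course anachronistic here; the sales series, the 0.05 cutoff, and the plotting choices are illustrative assumptions): the Y axis is anchored at zero, and a trend line is drawn only when the fitted slope is statistically significant.

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import linregress

t = np.arange(24)                                             # months
sales = 100 + 0.8 * t + np.random.normal(0, 5, size=t.size)   # toy series

fit = linregress(t, sales)             # returns slope, intercept, pvalue, ...

fig, ax = plt.subplots()
ax.plot(t, sales, marker="o")
ax.set_ylim(bottom=0)                  # default display starts at zero
if fit.pvalue < 0.05:                  # invoke the trend line only if significant
    ax.plot(t, fit.intercept + fit.slope * t,
            label=f"trend (p = {fit.pvalue:.3f})")
    ax.legend()
else:
    ax.set_title("No statistically significant trend")
plt.show()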

Case 2 - In this case the decision maker does not have an AIS to automatically test the statistical significance of the differences between the two cities. Thus, he makes an intuitive decision. If the decision maker had a natural language interface to an AIS, many of these tasks would become simplified. Consider the query, "Compare the sales commissions earned in Boston and Chicago." The data would be retrieved and displayed, but, additionally, statistical comparisons would be made automatically in ways that avoid the decision maker's biases. Using an AIS, the proper test would be chosen, variances pooled, outliers handled, etc. These choices are not all determinate: the choice between tests on the mean and the median may be subjective, as is the choice of when to pool or not to pool.

Handling outliers is also judgemental. Such heuristics can be built into an AIS. As a result, the manager will receive a statement such as "There is a greater than 95% chance that the median commissions differ between Boston and Chicago" (and related data) without having called for the test (and without knowing the heuristics). Since a natural language interface will allow more general inquiries than non-natural-language AISs or "dumb" packages like SPSS, SAS, and IFPS/MLR, a natural language AIS can be more aggressive in choosing how to answer a general inquiry.
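
As an editor's illustration, the test-selection heuristics might be sketched as follows (Python with scipy; the diagnostic checks, the 0.05 thresholds, the fallback order, and the sample figures are illustrative assumptions, not the article's rules): check rough normality, then decide whether to pool variances or to fall back to a test on medians.

from scipy import stats

def compare_groups(a, b, alpha=0.05):
    # Roughly normal samples? (Shapiro-Wilk on each group.)
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if not normal:
        # Compare medians with a nonparametric test instead of means.
        res = stats.mannwhitneyu(a, b, alternative="two-sided")
        return "Mann-Whitney U test (medians)", res.pvalue
    # Equal variances? Pool only if Levene's test does not reject.
    pool = stats.levene(a, b).pvalue > alpha
    res = stats.ttest_ind(a, b, equal_var=pool)
    return ("pooled t-test" if pool else "Welch t-test"), res.pvalue

boston  = [410, 380, 395, 420, 405, 388, 399]   # hypothetical commissions
chicago = [450, 470, 430, 465, 455, 442, 460]
test, p = compare_groups(boston, chicago)
print(f"{test}: p = {p:.4f}")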

Case 3 - In this situation the decision maker is dealing with multivariate data (in a sequential, univariate way); thus, he resorts to a simplistic heuristic. What is needed is a multivariate model to support this decision. When a historical database is available, regression can be used to develop a heuristic policy for this problem. Heuristics of this sort consistently outperform the decision makers themselves [29, 51, 59]. At minimum, these heuristics will reduce erratic decision making. (Note, however, that this may not always be desirable [30].)
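A sketch of such a bootstrapped "model of the manager," assuming a file of past decisions and the cue values that accompanied them; the cue names and all figures are hypothetical.

```python
# Fit a linear policy to the manager's own past decisions, then apply it
# consistently to a new case. Cues and decisions are fabricated.
import numpy as np

# Cue columns: inventory level, order backlog, forecast demand.
cues = np.array([[120, 30, 200],
                 [ 80, 55, 240],
                 [150, 10, 180],
                 [ 95, 40, 220],
                 [110, 25, 210]], dtype=float)
past_decisions = np.array([210, 265, 170, 235, 215], dtype=float)

X = np.column_stack([np.ones(len(cues)), cues])     # intercept + cues
beta, *_ = np.linalg.lstsq(X, past_decisions, rcond=None)

new_case = np.array([1.0, 100, 35, 230])            # intercept + today's cues
print(f"bootstrapped policy recommends {new_case @ beta:.0f} units")
```

Because the fitted policy applies the same weights every time, it removes the random inconsistency in the manager's judgments, which is the main reason such models outperform their source.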

The keys to successful use of the regression bootstrapping model in an AIS are (1) embedding a user-transparent regression, (2) keeping the details of its operation out of the user's way, (3) automatically determining when its use might be appropriate, and (4) promoting its use. Of particular note here is the need to perform the regression competently without involving the user in the details. This calls for an expert system to automatically take care of outliers, perform transformations, test for heteroscedasticity, and perform other related tasks. Many of the choices would be based on expert heuristics embedded in the AIS. Also, the system for case 3 must have the features described next in case 4.
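A minimal sketch of that housekeeping, assuming ordinary least squares underneath; the 3-sigma residual screen and the |residual|-versus-fit correlation used as a heteroscedasticity flag are our stand-ins for the expert heuristics the authors intend.

```python
# OLS with a crude outlier screen and heteroscedasticity check; all of it
# would run behind the scenes, invisible to the decision maker.
import numpy as np

def robust_fit(X, y, z_cut=3.0):
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    keep = np.abs(resid) <= z_cut * resid.std(ddof=X1.shape[1])
    if not keep.all():                    # drop flagged outliers, refit once
        X1, y = X1[keep], y[keep]
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
    # Crude heteroscedasticity flag: |residuals| growing with the fit.
    het = abs(np.corrcoef(np.abs(resid), X1 @ beta)[0, 1]) > 0.5
    return beta, ("suggest a log transform of y" if het else "no action")

rng = np.random.default_rng(0)
x = rng.uniform(1, 10, 40)
y = 3 + 2 * x + rng.normal(0, 0.5, 40)
y[5] += 25                                # plant one gross outlier
beta, advice = robust_fit(x.reshape(-1, 1), y)
print(beta.round(2), "|", advice)
```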

Case 4 - Here the AIS may develop a forecasting model based on decision maker input, axiomatic regression procedures, and expert heuristics for regression. Also, however, the decision maker must be able to understand the logic behind the AIS's choice of the three-variable model as the "best" model.

The ubiquitous feature of expert systems, backtracking, is used to show the logic of model development. In case 4, the decision maker would want to know how the three variables were chosen and why the log transformation was necessary. In case 3, the logic of modeling multivariate heuristics using regression would also be provided.
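A minimal sketch of such a trace, assuming the system logs each modeling decision with its reason as it works; the recorded decisions here are invented to show the mechanism.

```python
# An explanation trace: the AIS records each modeling step and the rule
# that fired, so the manager can later ask "How did you get this model?"
class ExplanationTrace:
    def __init__(self):
        self.steps = []

    def note(self, decision, because):
        self.steps.append((decision, because))

    def why(self):
        for i, (decision, because) in enumerate(self.steps, 1):
            print(f"{i}. {decision} -- because {because}")

trace = ExplanationTrace()
trace.note("kept variables: price, promotion, season",
           "each raised adjusted R^2 noticeably; other candidates did not")
trace.note("log-transformed sales",
           "residuals fanned out as the fitted values grew")
trace.why()
```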

With an AIS-like system, it is also possible for a decision maker to interrupt the logic sequence and "what if" a step in the procedure. Applied to case 4, the decision maker could get an expert's consultation on how to proceed with the analysis through a step-by-step summary of the logic and calculations as the model is developed. The decision maker could also inspect the logic (and results) of several different models (e.g., regression and discriminant analysis); this would be equivalent to consulting several experts. Capabilities such as these are currently available in many expert systems [63].

In some cases (e.g., case 1), the AIS must compensate for the biases of the human decision maker in performing primary decision tasks. In other cases (e.g., cases 2 and 3), the system must support secondary decision tasks and augment the analytical skills of the manager. Given the sophistication of these models, the system (as in case 4) must also backtrack and "forward track" to show the logic of its analysis and also allow managerial override. In short, the AIS must provide consultation for both the primary and secondary aspects of decision making. While an AIS has no answer to a truly "unstructured" problem, in many problems a structure can be developed given adequate data.

Case 5 - A manager states that "There is something wrong at our Chicago plant." If the DSS provides expert consultation (beyond that described in the prior section), it might be able to help identify the problem and its causes. For example, the system might respond by asking whether it was a financial, sales, or manufacturing problem. If the manager replied that it was a financial problem, the system might ask what the manager thought the cause might be. If the causal relationship is testable, the system will determine and perform the appropriate test. The capabilities needed are (1) a natural language interface, (2) the ability to develop testable hypotheses, (3) the ability to test the hypotheses using real data, (4) access to meta-data, i.e., information on, and the relationships among, data items and processes, and (5) the ability to backtrack on the system's logic.
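A hypothetical fragment of such a consultation, assuming the manager has already narrowed the problem to finance and suspects rising material costs; the figures and the "peer plants" baseline are invented.

```python
# The system turns the suspected cause into a testable hypothesis and
# dispatches the test itself (here, Welch's t-test via SciPy).
from scipy import stats

material_cost = {
    "Chicago":     [14.1, 14.8, 15.6, 16.2, 16.9, 17.5],   # trending up
    "peer plants": [14.0, 14.2, 13.9, 14.3, 14.1, 14.4],
}

def test_suspected_cause(data):
    _, p = stats.ttest_ind(data["Chicago"], data["peer plants"],
                           equal_var=False)
    if p < 0.05:
        return f"Chicago's material costs do differ from peers (p = {p:.3f})."
    return "The suspected cause is not supported by the data."

print(test_suspected_cause(material_cost))
```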

In MYCIN, an expert system for diagnosing infectious diseases, the doctor is coached on what tests would be appropriate to make a clear diagnosis [63]. Similarly, an expert business system could coach on, and perform, tests of managerial hypotheses about problems [49]. For example, analysis of two similar plants (one with and one without the problem) can suggest a cause of the problem. This is an important capability, since selecting relevant variables can be more important than procedural sophistication in making good decisions [38]. To do this well, the database must be augmented with expert information about the processes happening in each plant and with data beyond that normally required for data processing. This requires information which conceptualizes organizational tasks and task structure in addition to transactional data. Often the former is more crucial in decision making and is better handled with expert systems [38].
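One concrete reading of the two-plant comparison, ranking shared metrics by how far the problem plant deviates from a comparable healthy one; the metric names and figures are fabricated.

```python
# Screen candidate causes by relative deviation from a healthy twin plant.
chicago = {"scrap_rate": 0.081, "overtime_hrs": 420, "on_time_pct": 0.71}
atlanta = {"scrap_rate": 0.032, "overtime_hrs": 180, "on_time_pct": 0.94}

def screen(problem, healthy):
    shared = problem.keys() & healthy.keys()
    diffs = {k: abs(problem[k] - healthy[k]) / abs(healthy[k]) for k in shared}
    return sorted(diffs.items(), key=lambda kv: kv[1], reverse=True)

for metric, rel_diff in screen(chicago, atlanta):
    print(f"{metric}: {rel_diff:.0%} relative deviation")
```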

In Blum's RX system [8], testable hypotheses are derived in a different way. In this system for performing medical research, the system asks users what they wish to predict (for example, the causes of heart attacks). A subprogram of RX finds the highest correlations between the criterion variable (heart attacks) and other potential predictor variables; the Spearman non-parametric correlation coefficient is used for this. The results of this step are then filtered through a path analysis program to suggest a causal model.
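A sketch of that screening step, assuming SciPy's Spearman routine; the path-analysis filtering stage is not shown, and the data are fabricated business analogues of RX's medical variables.

```python
# Rank candidate predictors by Spearman correlation with the criterion.
from scipy.stats import spearmanr

criterion = [0, 1, 0, 1, 1, 0, 1, 0]        # did the problem occur?
candidates = {
    "late_shipments": [2, 9, 3, 8, 7, 1, 9, 2],
    "head_count":     [50, 52, 49, 51, 50, 52, 51, 49],
}

ranked = []
for name, values in candidates.items():
    rho, _ = spearmanr(values, criterion)
    ranked.append((name, rho))
ranked.sort(key=lambda kv: abs(kv[1]), reverse=True)

for name, rho in ranked:
    print(f"{name}: rho = {rho:+.2f}")
```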

The Design and Operation of an AIS

The design and operation of an AIS for managerial decision making given in this section draw heavily upon the RX system for medical research [8]. RX supports medical research in much the same way as we are suggesting the AIS support managerial decision making and problem solving. RX uses statistical and medical knowledge in an attempt to discover causal relationships, such as "cholesterol causes heart attacks."

The basic functions that the AIS must perform are:

1. Parse the query,
2. Determine the data appropriate to answering the query,
3. Determine correspondingly appropriate statistical technique(s),
4. Determine how to handle issues such as outliers, multicollinearity, and transformations,
5. Produce output to the decision maker which minimizes possible biasing effects.

Note that steps (2) through (4) support secondary decisions; only (5) provides direct primary decision support.
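A skeleton of the five functions as a single dispatch loop; every component here (the toy parser, the skew screen that swaps the mean for the median, the bias-aware formatter) is a stub we invented to make the control flow concrete.

```python
import statistics

def parse(query):                                  # 1. parse the query
    return [w.strip("?.,").lower() for w in query.split()]

def select_data(tokens, database):                 # 2. find the relevant data
    return {t: database[t] for t in tokens if t in database}

def is_skewed(values):                             # crude skew/outlier flag
    return abs(statistics.mean(values) - statistics.median(values)) \
           > 0.25 * statistics.stdev(values)

def choose_technique(data):                        # 3. + 4. pick and adapt
    if any(is_skewed(v) for v in data.values()):
        return statistics.median                   # robust to outliers
    return statistics.mean

def render(stat, data):                            # 5. bias-aware output
    name = "median" if stat is statistics.median else "mean"
    return {city: f"{name} = {stat(vals):.0f}" for city, vals in data.items()}

db = {"boston": [310, 295, 900, 305], "chicago": [250, 270, 245, 260]}
data = select_data(parse("Compare commissions in Boston and Chicago?"), db)
print(render(choose_technique(data), data))
```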

The basic structure of an AIS is given in Figure 1. Note that its structure mirrors the DSS framework proposed by Sprague [69], with the addition of the inference engine and knowledge base (KB) components. The inference engine functions as the executive module of the AIS and is shown as the hub of the system. The inference engine receives a request (in parsed form) from the user-interface component. It references the knowledge base and database/data dictionary to choose an appropriate statistical technique which is then in-

problem solving. RX uses statistical and medical knowledge in an attempt to discover causal relationships, such as "cholesterol causes heart attacts."

The basic functions that the AIS must per- form are:

1. Parse the query, 2. Determine data that is appropriate to

answering the query, 3. Determine correspondingly appropriate

statistical technique(s), 4. Determine how to handle issues such

as outliers, multicollinearity, and trans- formations,

5. Produce output to the decision maker which minimizes possible biasing ef- fects.

Note that steps (2) through (4) support secon- dary decisions; only (5) provides direct pri- mary decision support.

Figure 1. Simple Block Diagram of an Artificially Intelligent Statistician

The basic structure of an AIS is given in Figure 1. Note that its structure mirrors the DSS framework proposed by Sprague [69], with the addition of the inference engine and knowledge base (KB) components. The inference engine functions as the executive module of the AIS and is shown as the hub of the system. The inference engine receives a request (in parsed form) from the user-interface component. It references the knowledge base and database/data dictionary to choose an appropriate statistical technique, which is then invoked. The actual references to the various components would be interleaved, as evidenced by the following example.

1. The decision maker says "Compare the sales commissions in Boston and Chicago."

2. The natural language interface parses the request to: (a) get Chicago data on sales commissions, (b) get Boston data on sales commissions, and (c) statistically test the two samples.

3. The inference engine references the KB to find a list of tests which compare two independent samples and a set of rules to choose between them. [The rules will require analyses of both arrays of data to determine whether the arrays are nominal, ordinal, or cardinal.]

4. The two data arrays must be called up and passed to statistical subroutines for analysis.

5. The subroutines return to the inference engine lists of statistical arguments [to determine whether the data is nominal, ordinal, or cardinal].

6. The AIS inference engine uses those statistical arguments and the step 3 rules to decide whether the data is nominal, ordinal, or cardinal. Note that the inclusion of this metadata in the data dictionary will simplify steps 4 through 6.

7. Knowing the data type, the inference engine now finds a short list of tests and adjustment procedures.

8. Given the data type, the next AIS rule causes the data arrays to be sent to an appropriate module in the model base to handle any outliers (e.g., for cardinal data this heuristic subroutine might be based on Winsorizing). The subroutine returns the adjusted data arrays.

9. Given the data type and the handling of outliers, the AIS rule calls for the adjusted data to be passed to an appropriate statistical subroutine [e.g., t-test]. There are two sets of t-tests depending on whether the data is paired or not. If not paired, then there is a choice to be made between t-tests with pooled and non-pooled variances; the latter may be established by an F-test on variances within the subroutine (see the sketch after this list). The subroutine returns a select list of statistical measures for the comparison.

10. The AIS inference engine uses rules to interpret the results of the test based on the select list of statistical measures. [This may require reference to t tables, which can be stored in the data, knowledge, or model base.]

11. The AIS must pass arguments to the natural language interface for encoding in English. The select list of statistical measures may then also be stored in the database or knowledge base for future reference.
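To make steps 6 through 9 concrete, the following Python sketch is our illustration, not the authors' implementation: it assumes the data type has already been established (step 6), Winsorizes cardinal data at arbitrary 5th/95th percentile cutoffs (step 8), and uses a two-sided F-test on the variances to choose between the pooled and separate-variance t-tests (step 9). All names, thresholds, and data are hypothetical.

# Hypothetical sketch of the AIS rules in steps 6-9; illustrative only.
import numpy as np
from scipy import stats

def winsorize(x, pct=0.05):
    # Step 8: pull extreme observations in to the 5th/95th percentiles.
    lo, hi = np.percentile(x, [100 * pct, 100 * (1 - pct)])
    return np.clip(x, lo, hi)

def compare_two_samples(a, b, data_type, paired=False):
    # Steps 7 and 9: the data type (known from step 6) selects the test.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    if data_type == "ordinal":
        return stats.mannwhitneyu(a, b)      # rank-based two-sample test
    if data_type != "cardinal":
        raise ValueError("nominal data would be routed to a chi-square module")
    a, b = winsorize(a), winsorize(b)        # step 8: adjust outliers
    if paired:
        return stats.ttest_rel(a, b)
    # Step 9: two-sided F-test on variances chooses pooled vs. separate variances.
    f = np.var(a, ddof=1) / np.var(b, ddof=1)
    p_var = 2 * min(stats.f.sf(f, a.size - 1, b.size - 1),
                    stats.f.cdf(f, a.size - 1, b.size - 1))
    return stats.ttest_ind(a, b, equal_var=(p_var > 0.05))

boston = [410, 395, 430, 405, 650, 415]      # invented commission figures
chicago = [385, 400, 390, 410, 395, 405]
print(compare_two_samples(boston, chicago, "cardinal"))

In the AIS this branching would live in the knowledge base as rules rather than hard-wired control flow, so new tests and adjustment procedures could be added without reprogramming the inference engine.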

The AIS as illustrated here can be implemented using existing software components - DBMS and data dictionaries, statistical software such as SPSS, and generalized inference engines (expert system generators) such as EMYCIN. "Patching" such diverse components together, however, proves clumsy and results in execution inefficiencies. For example, in the system initially developed by Blum [8], the data retrieved from the DBMS had to be reformatted before being passed to SPSS; the output from SPSS was reformatted into a form suitable for pattern matching with rules in the knowledge base. The overall system was several batch-oriented tasks rather than a real-time DSS.
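A toy illustration (ours, with invented data) of the kind of format shuffling such patching entails: rows retrieved from a DBMS must be pivoted into the per-group column arrays a statistics package expects before every call, and the package's output must be flattened again for rule matching.

# Illustrative only: pivot DBMS-style rows into per-group arrays.
rows = [("boston", 410.0), ("boston", 395.0), ("chicago", 385.0)]

columns = {}
for city, value in rows:
    columns.setdefault(city, []).append(value)

print(columns)   # {'boston': [410.0, 395.0], 'chicago': [385.0]}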


A more complete list of difficulties in implementing an integrated, industrial-strength AIS includes the following:

1. A robust parser must be developed. This requires the definition of a formal syntax for business problem solving. Natural language parsers for database query transform a natural query into a formal syntax such as SEQUEL. A formal syntax for managerial problem solving must be orders of magnitude more robust than such query formalisms.

2. The data storage representation must be equivalent in the various AIS components such that information can be passed between the components without the need for storage format conversion.

3. The AIS inference engine must be able to access large amounts of historical data and metadata. This requires a time-oriented database with powerful, extensible access functions. The functional data model [62] holds promise here. Further, the functional data model attempts to address (2) by treating data and metadata equivalently [4] (see the sketch after this list).

4. Analogous to (3), an AIS that attempts to support a broad range of managerial decision making requires the construction and maintenance of a substantive knowledge base. Issues here include the development of encyclopedic KBs [43], which must allow for change over time, storage of possibly conflicting expert knowledge, and concurrent access by many users [36]. In short, there is a need for knowledge base management systems and knowledge base administrators.

5. The computational complexity of such a large-scale system is tremendous and requires efficient implementation and hardware.

Despite the monumental effort implied by these difficulties, a number of organizations have committed to undertake such large-scale projects - see, for example, Lenat et al. [43] and Fox [26]. Also, research in model management promises to provide frameworks for such systems [7, 9, 17].
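As a hypothetical illustration of difficulties (2) and (3), the following sketch, in the spirit of the functional data model, stores a variable's time-stamped observations and its metadata behind one access function, so that components need no storage-format conversion between them. The class, field names, and example series are invented for this example.

# Illustrative only: data and metadata accessed through one interface.
from dataclasses import dataclass, field

@dataclass
class Variable:
    name: str
    data_type: str                                    # "nominal" | "ordinal" | "cardinal"
    observations: dict = field(default_factory=dict)  # time stamp -> value

    def get(self, attribute):
        # The inference engine asks for "data_type" (metadata) or
        # "observations" (data) in exactly the same way.
        return getattr(self, attribute)

commissions = Variable("boston_commissions", "cardinal",
                       {"1986-01": 410.0, "1986-02": 395.0})
print(commissions.get("data_type"), commissions.get("observations"))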


Summary

Perhaps the most popular framework for DSS is that outlined by Sprague [69]; it is an architectural framework. The framework embodied by the AIS illustrated in this paper extends Sprague's architectural view while also adding a complementary decision process view. The architectural extension is conceptually simple yet fundamental: it involves incorporating an "executable" knowledge-based DSS component [9]. The addition of this component leads to a vertical view of the decision making process. Decision making implies the existence of secondary processes which involve "deciding how to decide." The knowledge-based component, then, is used to support both primary and secondary decision making.

In an AIS, extra-statistical knowledge is used to guide the selection of decision making tools - in this case, the secondary task of choosing among data and statistical analysis techniques. Knowledge of human cognitive properties is used to display information in a form effective for primary decision making. This basic, leveled approach is applicable to a wide range of decision making situations; see, for example, the discussion of a DSS for selecting general decision making strategies [80].

The distinction between DSS generators and DSS tools affords a useful summary of systems like the AIS. A DSS generator is a user-friendly system with which end users can develop specific DSSs. The use of DSS tools for data analysis, such as statistical analysis packages and database management systems, requires substantive technical training [79]. The inclusion of an expert component, typified by the AIS, integrates assorted DSS tools into a fairly robust DSS generator. Further, since the AIS contains primary and secondary decision making knowledge, it should be able to generate specific DSS models from little more than users' problem statements.

projects - see, for example, Lenat et. al., [43] and Fox [26]. Also, research in model manage- ment promises to provide frameworks for such systems [7, 9, 17].

Summary Perhaps the most popular framework for DSS is that outlined by Sprague [69]; it is an archi- tectural framework. The framework embodied by the AIS illustrated in this paper extends Sprague's architectural view while also ad- ding a complimentary decision process view. The architectural extension is conceptually simple yet fundamental and involves incorpo- rating an "executable" knowledge-based DSS component [9]. The addition of this component leads to a vertical view of the decision making process. Decision making implies the exis- tence of secondary processes which involve "deciding how to decide." The knowledge- based component, then, is used to support both primary and secondary decision making.

In an AIS, extra-statistical knowledge is used to guide the selection of decision making tools - in this case, the secondary task of choosing among data and statistical analysis techniques. Knowledge of human cognitive properties is used to display information in a form effective for primary decision making. This basic, leveled approach is applicable to a wide range of decision making situations. See, for example, the discussion of a DSS for selecting general decision making strategies [80].

The distinction between DSS generators and DSS tools affords a useful summary of sys- tems like AIS. A DSS generator is a user- friendly system with which end users can develop specific DSSs. The use of DSS tools for data analysis, such as statistical analy- sis packages and database management sys- tems, requires substantive technical training [79]. The inclusion of an expert component, typified by AIS, integrates assorted DSS tools into a fairly robust DSS generator. Further, since the AIS contains primary and secondary decision making knowledge, it should be able to generate specific DSS models from little more than users' problem statements.

projects - see, for example, Lenat et. al., [43] and Fox [26]. Also, research in model manage- ment promises to provide frameworks for such systems [7, 9, 17].

Summary Perhaps the most popular framework for DSS is that outlined by Sprague [69]; it is an archi- tectural framework. The framework embodied by the AIS illustrated in this paper extends Sprague's architectural view while also ad- ding a complimentary decision process view. The architectural extension is conceptually simple yet fundamental and involves incorpo- rating an "executable" knowledge-based DSS component [9]. The addition of this component leads to a vertical view of the decision making process. Decision making implies the exis- tence of secondary processes which involve "deciding how to decide." The knowledge- based component, then, is used to support both primary and secondary decision making.

In an AIS, extra-statistical knowledge is used to guide the selection of decision making tools - in this case, the secondary task of choosing among data and statistical analysis techniques. Knowledge of human cognitive properties is used to display information in a form effective for primary decision making. This basic, leveled approach is applicable to a wide range of decision making situations. See, for example, the discussion of a DSS for selecting general decision making strategies [80].

The distinction between DSS generators and DSS tools affords a useful summary of sys- tems like AIS. A DSS generator is a user- friendly system with which end users can develop specific DSSs. The use of DSS tools for data analysis, such as statistical analy- sis packages and database management sys- tems, requires substantive technical training [79]. The inclusion of an expert component, typified by AIS, integrates assorted DSS tools into a fairly robust DSS generator. Further, since the AIS contains primary and secondary decision making knowledge, it should be able to generate specific DSS models from little more than users' problem statements.

projects - see, for example, Lenat et. al., [43] and Fox [26]. Also, research in model manage- ment promises to provide frameworks for such systems [7, 9, 17].

Summary Perhaps the most popular framework for DSS is that outlined by Sprague [69]; it is an archi- tectural framework. The framework embodied by the AIS illustrated in this paper extends Sprague's architectural view while also ad- ding a complimentary decision process view. The architectural extension is conceptually simple yet fundamental and involves incorpo- rating an "executable" knowledge-based DSS component [9]. The addition of this component leads to a vertical view of the decision making process. Decision making implies the exis- tence of secondary processes which involve "deciding how to decide." The knowledge- based component, then, is used to support both primary and secondary decision making.

In an AIS, extra-statistical knowledge is used to guide the selection of decision making tools - in this case, the secondary task of choosing among data and statistical analysis techniques. Knowledge of human cognitive properties is used to display information in a form effective for primary decision making. This basic, leveled approach is applicable to a wide range of decision making situations. See, for example, the discussion of a DSS for selecting general decision making strategies [80].

The distinction between DSS generators and DSS tools affords a useful summary of sys- tems like AIS. A DSS generator is a user- friendly system with which end users can develop specific DSSs. The use of DSS tools for data analysis, such as statistical analy- sis packages and database management sys- tems, requires substantive technical training [79]. The inclusion of an expert component, typified by AIS, integrates assorted DSS tools into a fairly robust DSS generator. Further, since the AIS contains primary and secondary decision making knowledge, it should be able to generate specific DSS models from little more than users' problem statements.

projects - see, for example, Lenat et. al., [43] and Fox [26]. Also, research in model manage- ment promises to provide frameworks for such systems [7, 9, 17].

Despite the promise of large-scale, knowledge-based DSS, a number of major challenges remain. These include development of robust, integrated user interface, data management, model management, and knowledge management subsystems. Moreover, the fundamental challenge in developing any knowledge-based system remains the effective identification and encoding of appropriate knowledge.

References

[1] Ackoff, R.L. "Management Misinformation Systems," Management Science, Volume 14, Number 4, December 1967, pp. B147-B156.

[2] Anderson, N.H. and Jacobson, A. "Effect of Stimulus Inconsistency and Discounting Instructions in Personality Impression Formation," Journal of Personality and Social Psychology, Volume 2, Number 4, April 1965, pp. 531-539.

[3] Andrews, F.M., Klem, L., Davidson, T.N., O'Malley, P.M. and Rodgers, W.L. A Guide to Selecting Statistical Techniques for Analyzing Social Science Data, Ann Arbor, Michigan, 1976.

[4] Atkinson, M.P. and Kulkarni, K.G. "Experimenting with the Functional Data Model," in Databases - Role and Structure, P.M. Stocker, P.M.D. Gray and M.P. Atkinson (eds.), Cambridge University Press, New York, New York, 1984, pp. 311-338.

[5] Batson, C.D. "Rational Processing or Rationalization?: The Effect of Disconfirming Information on Stated Religious Belief," Journal of Personality and Social Psychology, Volume 32, Number 1, January 1975, pp. 176-184.

[6] Benbasat, I. and Taylor, R. "Behavioral Aspects of Information Processing for the Design of Management Information Systems," IEEE Transactions on Systems, Man, and Cybernetics, Volume SMC-12, Number 4, July/August 1982, pp. 439-450.

[7] Blanning, R.W. "Conversing with Management Information Systems in Natural Language," Communications of the ACM, Volume 27, Number 3, March 1984, pp. 201-207.

[8] Blum, R.L. Discovery and Representation of Causal Relationships from a Large Time-Oriented Clinical Database: The RX Project, Springer-Verlag Lecture Notes in Medical Informatics, Berlin, Germany, 1982.

[9] Bonczek, R.H., Holsapple, C.W. and Whinston, A.B. "The Evolving Roles of Models in Decision Support Systems," Decision Sciences, Volume 11, Number 2, April 1980, pp. 337-356.

[10] Borgida, E. and Nisbett, R.E. "The Differential Impact of Abstract vs. Concrete Information on Decisions," Journal of Applied Social Psychology, Volume 7, Number 3, 1977, pp. 258-271.

[11] Bowman, E.H. "Consistency and Optimality in Management Decision Making," Management Science, Volume 9, Number 2, January 1963, pp. 310-321.

[12] Bruner, J.S., Goodnow, J.J. and Austin, G.A. A Study of Thinking, John Wiley & Sons, New York, New York, 1956.

[13] Bruner, J.S. and Postman, L.J. "On the Perception of Incongruity," Journal of Personality, Volume 18, Number 2, December 1949, pp. 206-223.

[14] Chervany, N.L. and Dickson, G.W. "An Experimental Evaluation of Information Overload in a Production Environment," Management Science, Volume 20, Number 10, June 1974, pp. 1335-1344.

[15] Christensen-Szalanski, J.J. and Beach, L.R. "Experience and the Base-Rate Fallacy," Organizational Behavior and Human Performance, Volume 29, Number 2, April 1982, pp. 270-278.

[16] Dearborn, D.C. and Simon, H.A. "Selective Perception: A Note on the Departmental Identification of Executives," Sociometry, Volume 21, Number 2, 1958, pp. 140-144.

[17] Dolk, D. and Konsynski, B.R. "Knowledge Representation for Model Management Systems," IEEE Transactions on Software Engineering, Volume 10, Number 6, November 1984, pp. 619-628.

[18] Ebert, R.J. "Environmental Structure and Programmed Decision Effectiveness," Management Science, Volume 19, Number 4, December 1972, pp. 435-445.

[19] Edgell, S.E. and Hennessey, J.E. "Irrelevant Information and Utilization of Event Base Rates in Nonmetric Multiple Cue Probability Learning," Organizational Behavior and Human Performance, Volume 26, Number 1, August 1980, pp. 1-6.

[20] Egeth, H. "Selective Attention," Psychological Bulletin, Volume 67, Number 1, January 1967, pp. 41-57.

[21] Einhorn, H. and Hogarth, R. "Behavioral Decision Theory: Processes of Judgment and Choice," Annual Review of Psychology, Volume 32, 1981, pp. 53-88.

[22] Estes, W.K. "The Cognitive Side of Probability Learning," Psychological Review, Volume 83, Number 1, January 1976, pp. 37-64.

[23] Feldman, M.S. and March, J.G. "Information in Organizations as Signal and Symbol," Administrative Science Quarterly, Volume 26, Number 2, June 1981, pp. 171-186.

[24] Fischhoff, B. "Debiasing," in Judgment Under Uncertainty: Heuristics and Biases, D. Kahneman, P. Slovic and A. Tversky (eds.), Cambridge University Press, New York, New York, 1981.

[25] Fischhoff, B., Slovic, P. and Lichtenstein, S. "Fault Trees: Sensitivity of Estimated Failure Probabilities to Problem Presentation," Journal of Experimental Psychology: Human Perception and Performance, Volume 4, Number 2, 1978, pp. 330-344.

[26] Fox, M.S. "The Intelligent Management System: An Overview," Working Paper, Intelligent Systems Laboratory, The Robotics Institute, Carnegie-Mellon University, August 11, 1981.

[27] Ganster, D.C. "Individual Differences and Task Design: A Laboratory Experiment," Organizational Behavior and Human Performance, Volume 26, Number 1, August 1980, pp. 131-148.

[28] Gettys, C.F., Kelly, III, C.W. and Petterson, C.R. "The Best Guess Hypothesis in Multistage Inference," Organizational Behavior and Human Performance, Volume 10, Number 3, 1973, pp. 364-373.

[29] Goldberg, L.R. "Man Versus Model of Man: A Rationale, Plus Some Evidence for a Method of Improving on Clinical Inferences," Psychological Bulletin, Volume 73, Number 6, June 1970, pp. 422-432.

[30] Hogarth, R.M. and Makridakis, S. "The Value of Decision Making in a Complex Environment: An Experimental Approach," Management Science, Volume 27, Number 1, January 1981, pp. 93-107.

[31] Howell, W.C. and Kerkar, S.P. "A Test of Task Influences in Uncertainty Measurement," Organizational Behavior and Human Performance, Volume 30, Number 3, December 1982, pp. 365-390.

[32] Huber, G.P. "Cognitive Style as a Basis for MIS and DSS Designs: Much Ado About Nothing?" Management Science, Volume 29, Number 5, May 1983, pp. 567-579.

[33] Jenkins, H.M. and Ward, W.C. "Judgment of Contingency Between Responses and Outcomes," Psychological Monographs: General and Applied, Volume 79, Number 1, Whole No. 594, 1965, pp. 1-17.

[34] Kahneman, D. and Tversky, A. "Subjective Probability: A Judgment of Representativeness," Cognitive Psychology, Volume 3, Number 3, July 1972, pp. 430-454.

[35] Kahneman, D. and Tversky, A. "On the Psychology of Prediction," Psychological Review, Volume 80, Number 4, July 1973, pp. 237-251.

[36] Kehler, T.P., Friedland, P., Pople, H., Roboh, R. and Rosenberg, S. "Industrial Strength Knowledge Bases: Issues and Experiences," Proceedings of the Eighth International Joint Conference on Artificial Intelligence, Karlsruhe, West Germany, August 8-12, 1983, pp. 108-109.

[37] Kent, G. The Brains of Men and Machines, Bytes Books, New York, New York, 1982.

[38] Kleinmuntz, D. "Cognitive Heuristics and Feedback in a Dynamic Decision Environment," Working Paper 83184-4-23, University of Texas at Austin, Austin, Texas, 1984.

[39] Knafl, K. and Burkett, G. "Professional Socialization in a Surgical Speciality: Acquiring Medical Judgment," Social Science and Medicine, Volume 9, Number 7, July 1975, pp. 397-404.

[40] Kunreuther, H. "Limited Knowledge and Insurance Protection," Public Policy, Volume 24, Number 2, Spring 1976, pp. 227-261.

[41] Langer, E.J. "The Psychology of Chance," Journal for the Theory of Social Behavior, Volume 7, Number 2, October 1977, pp. 185-207.


[42] Lathrop, R.G. "Perceived Variability," Journal of Experimental Psychology, Volume 73, Number 4, April 1967, pp. 498-502.

[43] Lenat, D.B., Borning, A., McDonald, D., Taylor, C. and Weyer, S. "Knoesphere: Building Expert Systems with Encyclopedic Knowledge," Proceedings of the Eighth International Joint Conference on Artificial Intelligence, Karlsruhe, West Germany, August 8-12, 1983, pp. 167-169.

[44] Lichtenstein, S.C., Slovic, P., Fischhoff, B., Layman, M. and Combs, B. "Judged Frequency of Lethal Events," Journal of Experimental Psychology: Human Learning and Memory, Volume 4, Number 6, November 1978, pp. 551-578.

[45] Makridakis, S. and Hibon, M. "Accuracy of Forecasting: An Empirical Investigation," Journal of the Royal Statistical Society A, Volume 142, Part II, 1979, pp. 97-145.

[46] March, J.G. and Shapira, Z. "Behavioral Decision Theory and Organizational Decision Theory," in Decision Making: An Interdisciplinary Inquiry, G.R. Ungson and D.N. Braunstein (eds.), Kent Publishing, Boston, Massachusetts, 1982, pp. 92-115.

[47] Mason, R.O. and Moskowitz, H. "Conservatism in Information Processing: Implications for Management Information Systems," Decision Sciences, Volume 3, Number 4, October 1972, pp. 35-55.

[48] McIntyre, S.H. and Ryans, A.B. "Task Effects on Decision Quality in Traveling Salesperson Problems," Organizational Behavior and Human Performance, Volume 32, Number 3, December 1983, pp. 344-369.

[49] Michaelsen, R. and Michie, D. "Expert Systems in Business," Datamation, Volume 29, Number 11, November 1983, pp. 240-246.

[50] Mintzberg, H., Raisinghani, D. and Theoret, A. "The Structure of 'Unstructured' Decision Processes," Administrative Science Quarterly, Volume 21, Number 2, June 1976, pp. 246-275.

[51] Moskowitz, H. and Miller, J. "Information and Decision Systems for Production Planning," Management Science, Volume 22, Number 3, November 1975, pp. 359-370.

[52] Moskowitz, H., Schaefer, R.E. and Borcherding, K. "Irrationality of Managerial Judgments: Implications for Information Systems," Omega, Volume 4, Number 2, June 1976, pp. 125-140.


[53] Norman, D.A. "Stages and Levels in Human-Machine Interaction," International Journal of Man-Machine Studies, Volume 21, Number 4, October 1984, pp. 365-375.

[54] Payne, J.W. "Task Complexity and Contingent Processing in Decision Making: An Information Search and Protocol Analysis," Organizational Behavior and Human Performance, Volume 16, Number 2, August 1976, pp. 366-387.

[55] Pitz, G.F. "Decision Making and Cognition," in Decision Making and Change in Human Affairs, H. Jungermann and G. De Zeeuw (eds.), D. Reidel Publishing, Dordrecht, Holland, 1975, pp. 403-424.

[56] Pitz, G.F. and Sachs, N.J. "Judgment and Decision: Theory and Application," Annual Review of Psychology, Volume 35, 1984, pp. 139-163.

[57] Remus, W.E. "Bias and Variance in Bowman's Managerial Coefficient Theory," Omega, Volume 5, Number 3, September 1977, pp. 349-351.

[58] Remus, W.E. "An Empirical Investigation of the Impact of Graphical and Tabular Data Presentations on Decision Making," Management Science, Volume 30, Number 5, May 1984, pp. 533-542.

[59] Remus, W.E., Carter, P. and Jenicke, L. "Regression Models of Decision Rules in Unstable Environments," Journal of Business Research, Volume 7, Number 2, 1979, pp. 187-196.

[60] Sage, A.P. "Behavioral and Organizational Considerations in the Design of Information Systems and Processes for Planning and Decision Support," IEEE Transactions on Systems, Man, and Cybernetics, Volume SMC-11, Number 9, September 1981, pp. 640-678.

[61] Shaklee, H. and Fischhoff, B. "Strategies of Information Search in Causal Analysis," Decision Research Report 79-1, University of Oregon, Eugene, Oregon, 1979.

[62] Shipman, D.W. "The Functional Data Model and the Data Language DAPLEX," ACM Transactions on Database Systems, Volume 6, Number 1, March 1981, pp. 140-173.


[63] Shortliffe, E. "Medical Consultation Systems: Designing for Doctors," in Designing for Human-Computer Communication, M. Sime and M. Coombs (eds.), Academic Press, New York, New York, 1983, pp. 209-238.

[64] Slovic, P. "Value as a Determiner of Subjective Probability," IEEE Transactions on Human Factors in Electronics, Volume HFE-7, Number 1, 1966, pp. 22-28.

[65] Slovic, P., Fischhoff, B. and Lichtenstein, S. "Behavioral Decision Theory," Annual Review of Psychology, Volume 28, 1977, pp. 363-396.

[66] Slovic, P. and Lichtenstein, S. "Comparison of Bayesian and Regression Approaches to the Study of Information Processing in Judgment," Organizational Behavior and Human Performance, Volume 6, Number 6, November 1971, pp. 649-744.

[67] Smedslund, J. "The Concept of Correlation in Adults," Scandinavian Journal of Psychology, Volume 4, Number 3, 1963, pp. 165-173.

[68] Sprague, R.H. and Carlson, E.D. Building Effective Decision Support Systems, Prentice-Hall, Englewood Cliffs, New Jersey, 1982.

[69] Sprague, R.H. "A Framework for the Development of Decision Support Systems," MIS Quarterly, Volume 4, Number 4, December 1980, pp. 1-26.

[70] Stabell, C.B. "Integrative Complexity of Information Environment Perception and Information Use," Organizational Behavior and Human Performance, Volume 22, Number 1, August 1978, pp. 116-142.

[71] Timmers, H. and Wagenaar, W.A. "Inverse Statistics and Misperception of Exponential Growth," Perception and Psychophysics, Volume 21, Number 6, June 1977, pp. 558-562.

[72] Tversky, A. and Kahneman, D. "The Belief in the 'Law of Small Numbers'," Psychological Bulletin, Volume 76, Number 2, August 1971, pp. 105-110.

[73] Tversky, A. and Kahneman, D. "Availability: A Heuristic for Judging Frequency and Probability," Cognitive Psychology, Volume 5, Number 2, September 1973, pp. 207-232.

[74] Tversky, A. and Kahneman, D. "Judgment Under Uncertainty: Heuristics and Biases," Science, Volume 185, September 27, 1974, pp. 1124-1131.


[74] Tversky, A. "Judgment Under Uncertainty: Heuristics and Biases," Science, Volume 185, September 27, 1974, pp. 1124-1131.

MIS Quarterly/December 1986 417 MIS Quarterly/December 1986 417 MIS Quarterly/December 1986 417 MIS Quarterly/December 1986 417 MIS Quarterly/December 1986 417 MIS Quarterly/December 1986 417 MIS Quarterly/December 1986 417 MIS Quarterly/December 1986 417

This content downloaded from 193.142.30.234 on Sat, 28 Jun 2014 12:59:32 PMAll use subject to JSTOR Terms and Conditions

Page 17: Toward Intelligent Decision Support Systems: An Artificially Intelligent Statistician

Future Directions Future Directions Future Directions Future Directions Future Directions Future Directions Future Directions Future Directions

[75] Wagenaar, W.A. and Sagaria, S.D. "Mis- perception of Exponential Growth," Per- ception and Psychophysics, Volume 18, Number 6, December 1975, pp. 416-422.

[76] Wagenaar, W.A. and Timmers, H. "Extra- polation of Exponential Time Series is Not Enhanced by Having More Data Points," Perception and Psychophysics, Volume 24, Number 2, August 1978, pp. 182-184.

[77] Ward, W.C. and Jenkins, H.M. "The Dis- play of Information and the Judgment of Contingency," Canadian Journal of Psy- chology, Volume 19, Number 3, Septem- ber 1965, pp. 231-241.

[78] Wason, P.C. and Johnson-Laird, P.N. Psy- chology of Reasoning: Structure and Con- tent, Harvard University Press, Boston, Massachusetts, 1972.

[79] Mann, R.I. and Watson, H.J. "A Contin- gency Model for User Involvement in DSS Development," MIS Quarterly, Volume 8, Number 1, March 1984, pp. 27-38.

[80] Wedley, W.C. and Field, R.H.G. "A Predeci- sion Support System," Academy of Man- agement Review Volume 9, Number 4, Oc- tober 1984, pp. 696-703.

[81] White, D.J. "The Nature of Decision Theory," in Theories of Decision in Prac- tice, D.J. White and K.C. Bowen (eds.), Hodder and Soughton, London, England, 1975, pp. 3-16.

[82] Winston, P.H. Artificial Intelligence, Addi- son-Wesley, Reading Massachusetts, 1977.

[83] Wright, J.C. and Murphy, G.L. "The Utility of Theories in Intuitive Statistics: The Ro- bustness of Theory-based Judgments," Journal of Experimental Psychology: General, Volume 13, Number 2, June 1984, pp. 301-322.

[75] Wagenaar, W.A. and Sagaria, S.D. "Mis- perception of Exponential Growth," Per- ception and Psychophysics, Volume 18, Number 6, December 1975, pp. 416-422.

[76] Wagenaar, W.A. and Timmers, H. "Extra- polation of Exponential Time Series is Not Enhanced by Having More Data Points," Perception and Psychophysics, Volume 24, Number 2, August 1978, pp. 182-184.

[77] Ward, W.C. and Jenkins, H.M. "The Dis- play of Information and the Judgment of Contingency," Canadian Journal of Psy- chology, Volume 19, Number 3, Septem- ber 1965, pp. 231-241.

[78] Wason, P.C. and Johnson-Laird, P.N. Psy- chology of Reasoning: Structure and Con- tent, Harvard University Press, Boston, Massachusetts, 1972.

[79] Mann, R.I. and Watson, H.J. "A Contin- gency Model for User Involvement in DSS Development," MIS Quarterly, Volume 8, Number 1, March 1984, pp. 27-38.

[80] Wedley, W.C. and Field, R.H.G. "A Predeci- sion Support System," Academy of Man- agement Review Volume 9, Number 4, Oc- tober 1984, pp. 696-703.

[81] White, D.J. "The Nature of Decision Theory," in Theories of Decision in Prac- tice, D.J. White and K.C. Bowen (eds.), Hodder and Soughton, London, England, 1975, pp. 3-16.

[82] Winston, P.H. Artificial Intelligence, Addi- son-Wesley, Reading Massachusetts, 1977.

[83] Wright, J.C. and Murphy, G.L. "The Utility of Theories in Intuitive Statistics: The Ro- bustness of Theory-based Judgments," Journal of Experimental Psychology: General, Volume 13, Number 2, June 1984, pp. 301-322.

[75] Wagenaar, W.A. and Sagaria, S.D. "Mis- perception of Exponential Growth," Per- ception and Psychophysics, Volume 18, Number 6, December 1975, pp. 416-422.

[76] Wagenaar, W.A. and Timmers, H. "Extra- polation of Exponential Time Series is Not Enhanced by Having More Data Points," Perception and Psychophysics, Volume 24, Number 2, August 1978, pp. 182-184.

[77] Ward, W.C. and Jenkins, H.M. "The Dis- play of Information and the Judgment of Contingency," Canadian Journal of Psy- chology, Volume 19, Number 3, Septem- ber 1965, pp. 231-241.

[78] Wason, P.C. and Johnson-Laird, P.N. Psy- chology of Reasoning: Structure and Con- tent, Harvard University Press, Boston, Massachusetts, 1972.

[79] Mann, R.I. and Watson, H.J. "A Contin- gency Model for User Involvement in DSS Development," MIS Quarterly, Volume 8, Number 1, March 1984, pp. 27-38.

[80] Wedley, W.C. and Field, R.H.G. "A Predeci- sion Support System," Academy of Man- agement Review Volume 9, Number 4, Oc- tober 1984, pp. 696-703.

[81] White, D.J. "The Nature of Decision Theory," in Theories of Decision in Prac- tice, D.J. White and K.C. Bowen (eds.), Hodder and Soughton, London, England, 1975, pp. 3-16.

[82] Winston, P.H. Artificial Intelligence, Addi- son-Wesley, Reading Massachusetts, 1977.

[83] Wright, J.C. and Murphy, G.L. "The Utility of Theories in Intuitive Statistics: The Ro- bustness of Theory-based Judgments," Journal of Experimental Psychology: General, Volume 13, Number 2, June 1984, pp. 301-322.

[75] Wagenaar, W.A. and Sagaria, S.D. "Mis- perception of Exponential Growth," Per- ception and Psychophysics, Volume 18, Number 6, December 1975, pp. 416-422.

[76] Wagenaar, W.A. and Timmers, H. "Extra- polation of Exponential Time Series is Not Enhanced by Having More Data Points," Perception and Psychophysics, Volume 24, Number 2, August 1978, pp. 182-184.

[77] Ward, W.C. and Jenkins, H.M. "The Dis- play of Information and the Judgment of Contingency," Canadian Journal of Psy- chology, Volume 19, Number 3, Septem- ber 1965, pp. 231-241.

[78] Wason, P.C. and Johnson-Laird, P.N. Psy- chology of Reasoning: Structure and Con- tent, Harvard University Press, Boston, Massachusetts, 1972.

[79] Mann, R.I. and Watson, H.J. "A Contin- gency Model for User Involvement in DSS Development," MIS Quarterly, Volume 8, Number 1, March 1984, pp. 27-38.

[80] Wedley, W.C. and Field, R.H.G. "A Predeci- sion Support System," Academy of Man- agement Review Volume 9, Number 4, Oc- tober 1984, pp. 696-703.

[81] White, D.J. "The Nature of Decision Theory," in Theories of Decision in Prac- tice, D.J. White and K.C. Bowen (eds.), Hodder and Soughton, London, England, 1975, pp. 3-16.

[82] Winston, P.H. Artificial Intelligence, Addi- son-Wesley, Reading Massachusetts, 1977.

[83] Wright, J.C. and Murphy, G.L. "The Utility of Theories in Intuitive Statistics: The Ro- bustness of Theory-based Judgments," Journal of Experimental Psychology: General, Volume 13, Number 2, June 1984, pp. 301-322.

[75] Wagenaar, W.A. and Sagaria, S.D. "Mis- perception of Exponential Growth," Per- ception and Psychophysics, Volume 18, Number 6, December 1975, pp. 416-422.

[76] Wagenaar, W.A. and Timmers, H. "Extra- polation of Exponential Time Series is Not Enhanced by Having More Data Points," Perception and Psychophysics, Volume 24, Number 2, August 1978, pp. 182-184.

[77] Ward, W.C. and Jenkins, H.M. "The Dis- play of Information and the Judgment of Contingency," Canadian Journal of Psy- chology, Volume 19, Number 3, Septem- ber 1965, pp. 231-241.

[78] Wason, P.C. and Johnson-Laird, P.N. Psy- chology of Reasoning: Structure and Con- tent, Harvard University Press, Boston, Massachusetts, 1972.

[79] Mann, R.I. and Watson, H.J. "A Contin- gency Model for User Involvement in DSS Development," MIS Quarterly, Volume 8, Number 1, March 1984, pp. 27-38.

[80] Wedley, W.C. and Field, R.H.G. "A Predeci- sion Support System," Academy of Man- agement Review Volume 9, Number 4, Oc- tober 1984, pp. 696-703.

[81] White, D.J. "The Nature of Decision Theory," in Theories of Decision in Prac- tice, D.J. White and K.C. Bowen (eds.), Hodder and Soughton, London, England, 1975, pp. 3-16.

[82] Winston, P.H. Artificial Intelligence, Addi- son-Wesley, Reading Massachusetts, 1977.

[83] Wright, J.C. and Murphy, G.L. "The Utility of Theories in Intuitive Statistics: The Ro- bustness of Theory-based Judgments," Journal of Experimental Psychology: General, Volume 13, Number 2, June 1984, pp. 301-322.

[75] Wagenaar, W.A. and Sagaria, S.D. "Mis- perception of Exponential Growth," Per- ception and Psychophysics, Volume 18, Number 6, December 1975, pp. 416-422.

[76] Wagenaar, W.A. and Timmers, H. "Extra- polation of Exponential Time Series is Not Enhanced by Having More Data Points," Perception and Psychophysics, Volume 24, Number 2, August 1978, pp. 182-184.

[77] Ward, W.C. and Jenkins, H.M. "The Dis- play of Information and the Judgment of Contingency," Canadian Journal of Psy- chology, Volume 19, Number 3, Septem- ber 1965, pp. 231-241.

[78] Wason, P.C. and Johnson-Laird, P.N. Psy- chology of Reasoning: Structure and Con- tent, Harvard University Press, Boston, Massachusetts, 1972.

[79] Mann, R.I. and Watson, H.J. "A Contin- gency Model for User Involvement in DSS Development," MIS Quarterly, Volume 8, Number 1, March 1984, pp. 27-38.

[80] Wedley, W.C. and Field, R.H.G. "A Predeci- sion Support System," Academy of Man- agement Review Volume 9, Number 4, Oc- tober 1984, pp. 696-703.

[81] White, D.J. "The Nature of Decision Theory," in Theories of Decision in Prac- tice, D.J. White and K.C. Bowen (eds.), Hodder and Soughton, London, England, 1975, pp. 3-16.

[82] Winston, P.H. Artificial Intelligence, Addi- son-Wesley, Reading Massachusetts, 1977.

[83] Wright, J.C. and Murphy, G.L. "The Utility of Theories in Intuitive Statistics: The Ro- bustness of Theory-based Judgments," Journal of Experimental Psychology: General, Volume 13, Number 2, June 1984, pp. 301-322.

[75] Wagenaar, W.A. and Sagaria, S.D. "Mis- perception of Exponential Growth," Per- ception and Psychophysics, Volume 18, Number 6, December 1975, pp. 416-422.

[76] Wagenaar, W.A. and Timmers, H. "Extra- polation of Exponential Time Series is Not Enhanced by Having More Data Points," Perception and Psychophysics, Volume 24, Number 2, August 1978, pp. 182-184.

[77] Ward, W.C. and Jenkins, H.M. "The Dis- play of Information and the Judgment of Contingency," Canadian Journal of Psy- chology, Volume 19, Number 3, Septem- ber 1965, pp. 231-241.

[78] Wason, P.C. and Johnson-Laird, P.N. Psy- chology of Reasoning: Structure and Con- tent, Harvard University Press, Boston, Massachusetts, 1972.

[79] Mann, R.I. and Watson, H.J. "A Contin- gency Model for User Involvement in DSS Development," MIS Quarterly, Volume 8, Number 1, March 1984, pp. 27-38.

[80] Wedley, W.C. and Field, R.H.G. "A Predeci- sion Support System," Academy of Man- agement Review Volume 9, Number 4, Oc- tober 1984, pp. 696-703.

[81] White, D.J. "The Nature of Decision Theory," in Theories of Decision in Prac- tice, D.J. White and K.C. Bowen (eds.), Hodder and Soughton, London, England, 1975, pp. 3-16.

[82] Winston, P.H. Artificial Intelligence, Addi- son-Wesley, Reading Massachusetts, 1977.

[83] Wright, J.C. and Murphy, G.L. "The Utility of Theories in Intuitive Statistics: The Ro- bustness of Theory-based Judgments," Journal of Experimental Psychology: General, Volume 13, Number 2, June 1984, pp. 301-322.

[75] Wagenaar, W.A. and Sagaria, S.D. "Mis- perception of Exponential Growth," Per- ception and Psychophysics, Volume 18, Number 6, December 1975, pp. 416-422.

[76] Wagenaar, W.A. and Timmers, H. "Extra- polation of Exponential Time Series is Not Enhanced by Having More Data Points," Perception and Psychophysics, Volume 24, Number 2, August 1978, pp. 182-184.

[77] Ward, W.C. and Jenkins, H.M. "The Dis- play of Information and the Judgment of Contingency," Canadian Journal of Psy- chology, Volume 19, Number 3, Septem- ber 1965, pp. 231-241.

[78] Wason, P.C. and Johnson-Laird, P.N. Psy- chology of Reasoning: Structure and Con- tent, Harvard University Press, Boston, Massachusetts, 1972.

[79] Mann, R.I. and Watson, H.J. "A Contin- gency Model for User Involvement in DSS Development," MIS Quarterly, Volume 8, Number 1, March 1984, pp. 27-38.

[80] Wedley, W.C. and Field, R.H.G. "A Predeci- sion Support System," Academy of Man- agement Review Volume 9, Number 4, Oc- tober 1984, pp. 696-703.

[81] White, D.J. "The Nature of Decision Theory," in Theories of Decision in Prac- tice, D.J. White and K.C. Bowen (eds.), Hodder and Soughton, London, England, 1975, pp. 3-16.

[82] Winston, P.H. Artificial Intelligence, Addi- son-Wesley, Reading Massachusetts, 1977.

[83] Wright, J.C. and Murphy, G.L. "The Utility of Theories in Intuitive Statistics: The Ro- bustness of Theory-based Judgments," Journal of Experimental Psychology: General, Volume 13, Number 2, June 1984, pp. 301-322.

About the Authors

William E. Remus is Professor of Decision Sciences at the University of Hawaii. During the last decade he has published over two dozen articles in journals such as Management Science, the International Journal of Management Science (OMEGA), and Journal of Business Research. He has been funded by the National Science Foundation for research into behavioral decision making and was a Fulbright scholar at the National University of Malaysia in 1980. His current areas of research include man-machine interfaces and applications of artificial intelligence.

Jeffrey E. Kottemann is Assistant Professor of Decision Sciences at the University of Hawaii. He received his Ph.D. in Management Information Systems and Quantitative Methods from the University of Arizona in 1984. He has published articles in the Journal of MIS and in Information Systems. His current research interests include information system development environments, construction of software to integrate data, document, and knowledge base management, and empirical research into the impacts of DSS on decision making behavior.
