

International Journal of Forecasting 8 (1992) 251-267

North-Holland 251

Bridging the gap between theory and practice in forecasting

Essam Mahmoud

American Graduate School of International Management, Glendale, AZ 85306, USA

Richard DeRoeck

General Motors Corporation, Detroit, MI 48202, USA

Robert Brown

Materials Management Systems, Inc., P.O. Box 239, Thetford Center, VT 05075-0239, USA

Gillian Rice

American Graduate School of International Management, Glendale, AZ 85306, USA

Abstract: The successful implementation of forecasting in many organizations is hampered by gaps in communication and understanding between forecast preparers and forecast users. Different individuals in organizations may have varying political agendas which impact the forecasting process. Advances in forecasting can be limited because of the gap between forecasting theorists and practitioners. This paper discusses these issues. In particular, it reports on a series of roundtable discussions on the gap between theory and practice which took place at the 1991 International Symposium on Forecasting. The paper includes suggestions on ways to bridge the gap.

Keywords: Forecast practice, Implementing forecasting, Forecasting process.

Correspondence to: E. Mahmoud, American Graduate School of International Management, Glendale, AZ 85306, USA.

We wish to express our thanks to all those who participated in the roundtables and shared their experiences and views. We are also particularly grateful to roundtable participants and others who offered suggestions which enabled us to improve an earlier draft of the paper: Wally Albers, Scott Armstrong, Stuart Bretschneider, Chris Chatfield, Bob Clemen, Estela Bee Dagum, Xiang Ying Du, Robert Fildes, Benito Flores, Wassily Leontief, Russ Maddox, Lars-Erik Oller, Keith Ord, Robert Raeside, Debra Schramm, Janet Sniezek, Ralph Snyder, Herman Stekler, Arthur Taylor, Tom Yokum, Peg Young, Lilian Wu, Shawa Zhang and an anonymous reviewer.

1. Introduction

Some observers believe that there is a gap between forecasting theory and practice; see, for example, DeRoeck's (1991) editorial in the International Journal of Forecasting. Several years ago, a study by Wheelwright and Clarke (1976) revealed just such a discrepancy between users and preparers of forecasts. This was reflected in the technical emphasis of the preparers and the managerial emphasis of the users. Wheelwright and Clarke make the interesting finding that, although technical competence was attributed to the preparers by the users, users lacked confidence in the ability of the preparers to choose the best techniques or to provide cost-effective forecasts. Further support can be found in De Gooijer (1990), who argued that if we acknowledge the fact that the discipline of forecasting derives its reason for existence from its practical relevance, then we should try to close the gap between theory and practice.

This paper reports on a series of roundtable discussions at the 1991 International Symposium on Forecasting (ISF) which addressed the problem of the gap between forecasting theory and practice. Roundtable participants also suggested ways of bridging the gaps that exist between academic forecasters and practitioners and between forecast preparers and forecast users. The longer term goal of this paper is to encourage others to come forward with their ideas and experiences so that the International Institute of Forecasters (IIF) can better serve its members and society in general.

The paper is organized as follows. First is a brief review of the experience of the IIF with respect to the building of relationships between academics and practitioners. Section 3 reports on the roundtable discussions. Section 4 suggests some solutions and ways to bridge the gap. Section 5 presents some concluding remarks.

Before publication, an attempt was made to have this paper reviewed by all participants in the roundtable discussions. IIF members were also encouraged to contribute their own commentary. At the end of the paper is a series of short commentaries in which other authors share their views and experiences regarding the gap between theory and practice.

2. IIF experience

Many recognize the need to improve relationships between practitioners who forecast for business and governmental purposes and academics who develop and research forecasting methods. The IIF, from its inception in 1981, has been active in fostering exchanges and building links between forecasters in academia and practicing forecasters in government and the corporate world. Practitioner members of the IIF are becoming more numerous and now represent more than 40% of IIF membership. The Institute places much emphasis on inviting practitioners to participate in its symposia and publish in its journals. Practitioner members have always served on the editorial boards of the IIF's journals.

In a survey, Armstrong (1988) asked practitioner members of the IIF if they actually used the research published in the International Journal of Forecasting (IJF). Thirty-six per cent reported applications of research in the journal and about half of this number reported successful applications. While these results appear impressive, there is much yet to be done. Gardner and Makridakis (1988) believed that there was a substantial amount of confusion experienced by practitioners interested in using the research literature to select and apply forecasting methods. Before 1980, research focused on the theoretical development of quantitative methodologies in forecasting. Research since 1980 has emphasized empirical studies which challenge the theoretical expectations of methods developed earlier. In the literature, advocates of the various methods still disagree about the most appropriate method for any situation and about how particular methods should be applied and evaluated.

According to Dagum (1989), if forecasting is to become recognized as a profession, there must be unity between theorists and practitioners. The challenge before the Institute in the 1990s is to establish this unity [Gardner (1991)]. In 1985, the IIF introduced the Consultants' Clearing House to help match forecast users to forecast providers. This has had limited success, however, despite considerable publicity in the IIF newsletter and at conferences. At the International Symposium on Forecasting (ISF) in 1991, the IIF presented the first practitioner's award, as another step towards joining theory and practice.

At the tenth ISF in Delphi, Greece, in 1990, a small group of participants, upon the suggestion of DeRoeck, addressed the need to bridge the gap between academics and practitioners in a more formal and structured way. DeRoeck and Mahmoud agreed to take the initiative in developing the approach. As a result, at the 1991 ISF, three workshop sessions dealt with 'Bridging the Gap between Theory and Practice in Forecasting'. These roundtables reflected the theme of the entire symposium: 'Practice and Potential'. All roundtable participants are listed in Appendix A.

3. The three gaps between theory and practice

Discussions at the three roundtables centered on the question of whether a gap between theory and practice exists, and, if so, what exactly its nature is and what can be done about it. Discussion was purposely unstructured, yet seemed to fall naturally into three areas: understanding of forecasting, data requirements, and organizational politics. There also appeared to be three, and not just two, groups of people at variance with each other: academics, practitioners and forecast users. These discrepancies arise because of the differences in behavior and perceptions between the members of these groups. The 'gaps' are listed in Exhibit 1 and are elaborated upon below.

3.1. The understanding gap

According to Fildes (1990): 'The various surveys of (market forecasting) practice all have one thing in common: the conclusion that subjective forecasting techniques based on expert, usually executive, opinion are more widely used than any of the quantitative approaches to forecasting recommended in the management science literature.' As Ralph Snyder (ISF 91) explained, managers often possess ambivalent attitudes towards management science in general and forecasting in particular. When confronted with management science, as a time and resource intensive process for problem-solving and as an iterative scheme of successive model refinement, the inevitable failures experienced often provide the convenient excuse required for jettisoning such an approach. Unless a solution can be bought 'off the shelf' and provide instantaneous success, the chances of engaging managers in a scientific approach are, whether we like it or not, rather remote.

Exhibit 1. The three gaps.

Understanding gap: managerial understanding vs. the technical detail of forecasts.
Data sharing gap: industry need for confidentiality vs. researchers' need for data to build and test useful models.
Political gap: hidden agenda of management vs. the analyst's viewpoint of forecasting honestly.

Wally Albers (ISF 91) explained the understanding gap in terms of workplace culture. There is, he observed, an accepted division of labor between the academics who develop new knowledge and the practitioners who apply it. The academics live in a different culture, receiving different rewards. Academics receive recognition and advancement for research that might have limited applicability in a business environment. Academics traditionally have had to be less concerned with the immediate practical orientation of their work. A forecast may be technically brilliant, but understood by few (Russ Maddox, ISF 91). There has to be a translation of the forecast by the forecaster so that it can be used by others. Forecasters need imagination to perform this task successfully. In today's business world, learning how to deliver research products may be more important than the production of the products (Shawa Zhang, ISF 91). In the words of Martin Morman (ISF 91): 'If you have to sell a forecast, then someone doesn't understand it. That's a communication problem.'

The way in which forecast performance measures are used is an example of the understanding gap. Mean Absolute Deviation, for example, is important to a statistician, but the measure that should be used, perhaps, is the extent of the financial impact upon business performance (Richard D'Angelo, ISF 91). In another example, Leitch and Tanner (1991) investigated profit measures of interest rate forecasts. They argued that the main reason that firms pay for professional forecasts, which do not seem to work much better than naive forecasts, is that the former enhance profits and seem to be linked to turning point information. In contrast, most academic studies of forecast accuracy argue for the importance of using conventional statistical error measures, in the face of some limited evidence as to their inadequacy. From a forecast user perspective, it may be better to communicate the forecast results in terms of customer service rates, profits, etc. (Richard D'Angelo, ISF 91). Some researchers, such as Price and Sharp (1986) and Mahmoud and Pegels (1990), have illustrated the use of a dollar-based forecast accuracy measure, but more work is needed here.
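To make the distinction concrete, the following sketch contrasts a conventional statistical score with a dollar-based one. The data, the `unit_cost` figure and the function names are invented for illustration, and the dollar measure is a deliberate simplification of the kind of dollar-based accuracy measures the literature discusses, not a reproduction of any published formula.

```python
# Hypothetical illustration: two ways of scoring the same forecasts.
# MAD treats every unit of error alike; the cost-weighted measure
# expresses the same errors in currency. All numbers are invented.

def mad(actuals, forecasts):
    """Mean Absolute Deviation: the statistician's conventional score."""
    errors = [abs(a - f) for a, f in zip(actuals, forecasts)]
    return sum(errors) / len(errors)

def dollar_impact(actuals, forecasts, unit_cost):
    """Total absolute error expressed in dollars: a simplified
    stand-in for a dollar-based accuracy measure."""
    return sum(abs(a - f) * unit_cost for a, f in zip(actuals, forecasts))

actuals   = [100, 120, 90, 110]
forecasts = [105, 115, 100, 100]

print(mad(actuals, forecasts))                  # average units of error
print(dollar_impact(actuals, forecasts, 25.0))  # the same errors in dollars
```

The point of the pair is that both scores are computed from identical errors; only the second speaks the forecast user's language of financial impact.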

3.2. The data sharing gap

Wassily Leontief (ISF 91) emphasized the importance of disseminating information among researchers and making data available to them. Managers often consider data to be sensitive and confidential. Yet the sharing of data with academics may be a good way for organizations to learn more about forecasting theory and about how to analyze data more successfully using appropriate techniques.

The data sharing gap reflects more than a mere unwillingness to provide data for reasons of confidentiality. There may also be an attitude on the part of some academics that data have to be in a particular form before they will use them. Views on data quality and the acceptability of data for model-building differ between academics and practitioners. Academics prefer to have 'clean' data; practitioners must work with what managers give them or with what is available from various internal or external sources. Contaminated data are a fact of life for the practitioner; the art of purifying data to get at targeted elements is often critical to the development of a valid and reliable forecast (Russ Maddox, ISF 91). If the purpose of the forecast is to predict the future behavior of the market, available data may need to be adjusted for non-market-related events. In addition, adjustments may need to be made to account for prior (non-recurring) market-related events which do not have a continuing impact. The resultant series provides a sounder basis for forecasting. See Rice and Mahmoud (1984) for a discussion of issues concerning data quality for forecasting.
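As a toy illustration of this kind of data purification: a single known non-recurring event (say, a strike in one period) is replaced by an interpolated value before the series is used for forecasting. The series, the event and the interpolation rule are all invented assumptions, far simpler than the adjustments practitioners describe.

```python
# Hypothetical sketch: adjust a sales series for one non-recurring event
# so that the resultant series gives a sounder basis for forecasting.

def adjust_nonrecurring(series, event_index):
    """Replace a single known non-recurring observation with the
    average of its neighbours; a deliberately simple adjustment."""
    adjusted = list(series)
    adjusted[event_index] = (series[event_index - 1] + series[event_index + 1]) / 2
    return adjusted

sales = [100, 104, 108, 111, 40, 118, 122]  # period hit by an invented strike
clean = adjust_nonrecurring(sales, 4)
print(clean[4])  # 114.5 -- interpolated replacement for the contaminated point
```

Real purification work is, of course, judgmental: deciding which observations are contaminated is usually harder than replacing them.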

3.3. The political gap

The political gap exists when some of the participants in an organization's forecasting process have an agenda in support of decision-making that is often hidden from other participants, and which may be in conflict with statistical accuracy. This can result in biased forecasts.

Depending on their role in an organization, people may have different perceptions of the rewards of over- or under-forecasting. The following examples are typical, although not universal.

- Sales people want to underestimate the forecast so that they will be sure of surpassing it and gaining bonuses.
- Production prefers to overestimate the forecast to ensure that there will be no backlogs or delays in the production schedule.
- Top management seeks a forecast that is oriented towards satisfying board members as to corporate profitability.
- Marketing likes to overestimate the forecast in order to gain a larger budget for marketing expenditures.

Because of the inherent conflict of interest between different managerial functions, forecasts are negotiated. Forecasting is more and more being viewed as a process where people start with a figure and elicit discussion on why it should be changed (Keith Ord, ISF 91). In business, the making of adjustments to statistically determined forecasts is becoming more acceptable, as is anything that appears to increase forecasting accuracy (Russ Maddox, ISF 91). Note, however, that adjustments may be undertaken as part of the negotiation process, despite the fact that the accuracy of the forecast may be adversely affected.

If forecasters are of low status in the organization, their forecasts can readily be altered. Forecasters are perceived as providing input to decision-makers; forecasters do not make operational decisions (Russ Maddox, ISF 91). They are, therefore, deemed to be support staff and have limited opportunities to progress to upper management levels. As a result, forecasting, as a career, may not be viewed positively (Chaman Jain, ISF 91). This uncertainty about the status of forecasters could provide a research agenda for personnel psychologists or human resource management specialists (Janet Sniezek, ISF 91).

E. Mahmoud et al. I Bridging the gap between theory and practice 255

4. Solutions: an agenda for bridging the gap

4.1. Bridging the understanding gap

An important issue is how to motivate academics and people in industry to reach out to one another to improve understanding and communication. As Tom Cook (ISF 91) observed, there is an enormous amount of talent in academia going largely untapped. If it could be rechannelled to solve some of the problems faced by industry, real progress could be made. Roundtable sessions themselves may be useful in bridging the gap. In 1991, the IIF formed a Corporate Membership Committee to examine ways of attracting forecasters from business to join the IIF.

The Institute of Management Science (TIMS) and the Operations Research Society of America (ORSA) have recognized the need to develop a more aggressive approach to improving academic-practitioner relationships [Cook (1990)]. These organizations established the TIMS/ORSA Academic/Practitioner Interface Committee. Among the ideas the committee is considering are a consultants' clearing house, an award for the best applied dissertation, internships (student and faculty) and increased practitioner input into the curriculum. The committee is also seeking to improve relationships by promoting the National Science Foundation (NSF) Private Sector Research Opportunities Program. This program encourages industry and academia to conduct joint research.

Cook (ISF 91) emphasized that if academics and universities are serious about working with non-academics, they need to build relationships before asking for funds for research or education. This is analogous to a business putting its resources into new business development. Academics need to change their attitudes and reach out to their constituents. Bruce Abramson (ISF 91) observed difficulties in introducing, at a university, an 'Industry Associates Program' for formalizing collaborative research projects. Most academics do not have industry-specific knowledge (Ralph Snyder, ISF 91). Likewise, as Wally Albers (ISF 91) observed, industrial research and development people are not familiar with the problems and environment of the academic.

Bridging the gap requires initiative on both sides. Albers nevertheless described the concept of joint programs as a sound one, noting the success of other programs, such as the 'Industry Affiliate Program' at the Massachusetts Institute of Technology and the California Institute of Technology. At the ISF, an annual award could be given for the best application paper co-authored by an academic and a practitioner. This might encourage collaborative projects. Practitioners could be invited to discuss, as well as to present, papers at future symposia.

IIF members who are professors could occasionally invite forecasters or forecast users from the corporate world to their classes to provide practitioner and managerial perspectives. Managers could share experiences, success stories and problems. Leontief, Mahmoud and Brown (ISF 91) all urged those carrying out academic research to examine and understand the sociological process of forecasting and not just the development of forecasting techniques. Academics and practitioners would benefit from a shift away from model-building in isolation to an approach that involves thoroughly learning about a company's environment. Research emphasis should be placed on how forecasting actually gets carried out in organizations [Gardner (1991)]. Such research can validate existing theory, lead to the development of new theory and help to unify theory and practice. Because forecasting is, after all, a human activity, academics should ask questions about the way in which an organization's environment, culture and process interact with the techniques for generating a forecast [Bretschneider and Gorr (1989b)].

The forecasting process should be clearly understood by the different managers involved (sales, inventory, production, etc.) and also by the forecaster. Forecasting assumptions should be clearly stated and information on these shared and openly discussed. A first issue to consider may be: why exactly is a forecast desired? (Russ Maddox, ISF 91). One needs to consider the purpose of a particular forecast, to determine the appropriate level of technical content and to apply imagination in both its development and delivery in order to ensure that the final result is understood. A key point raised by Keith Ord (ISF 91) is the distinction between forecasts and plans. For example, a salesperson has a 'goal', but calls it a 'forecast'. Makridakis (1988) points out that forecast users need to know specifically what can and cannot be predicted, since there are important consequences if the future turns out to be different from that for which plans have been made.

Forecasters must therefore spend time learning to understand the forecast users and their culture (Chaman Jain, ISF 91). In the International Forecasting Symposia, more emphasis has recently been placed on the implementation of forecasts, the role of forecasting in decision-making and the forecasting process. Academics could spend sabbaticals in organizations in order to help the organization with a specific issue and also to understand more about the practical implementation of forecasts (Benito Flores, ISF 91). This could be arranged through the IIF Consultants' Clearing House. Companies would agree to pay expenses and to share data. A case study would be written that could be used in teaching, with the identity of the company disguised if need be.

Education of managers is also important: education should focus on the principles underlying a forecasting procedure, rather than the statistical details, so that a manager can learn to distinguish between good and bad forecasts. Gardner and Makridakis (1988) proposed that the IIF should have a committee for the purpose of developing standards in forecasting practice, although this idea has not yet been taken further. The IIF and other professional societies in statistics and management science can disseminate knowledge and understanding of forecasting to forecast users. Collaborative training ventures to educate participants about forecasting techniques could be structured to result simultaneously in the sharing of ideas and information. These enterprises could become a regular feature of the ISF. For example, immediately prior to ISF 91, Gardner ran a training workshop.

4.2. Bridging the data gap

Organizations that are willing to share information with IIF members for the purpose of analysis could be invited to do so. This could help solve the problem of data-sharing. In addition, if the organizations identified issues to be addressed by IIF members, these members could then help the data providers to learn about alternatives that could be useful in solving managerial problems. The definition of a problem is often limited, however, by the availability of data. Database development is clearly a key factor. The IIF could establish databases of collected time series from different studies and different organizations. These could then be used by both researchers and practitioners for the purposes of testing and comparative analysis.

Encouraging IIF practitioners and IIF academics to work together and share information could also help to close the data gap. For example, Spyros Makridakis of INSEAD and Richard D'Angelo of Bristol-Myers Squibb are working together on the M2-Competition, utilizing some time series provided by D'Angelo. The M2-Competition data base and others, such as an inventory data base and the M-Competition and M3-Competition data bases, are available to forecasting researchers [Gardner and Makridakis (1988)].

4.3. Neutralizing the political gap

Political skills may be just as important to the forecaster as analytical skills. A forecaster who is an effective politician can achieve consensus on a forecast. A manager is more likely to use a forecast if the forecaster shows how the manager's intuition can be utilized (Allen Gutheim, ISF 91). Lilian Wu (ISF 91) and Anne Koehler (ISF 91) agreed that forecasters should market their forecasts to an opinion leader within the organization. For example, Lilian Wu explained that she has found a couple of key people in her organization who appreciate the way that quantitative methods and systematic thinking can help them. There are differences in managerial philosophies in different companies, however. Forecasters are beginning to admit that they can only provide the tools for forecasting and that managerial judgment is significant (John Hanke, ISF 91). There is a large body of literature in psychology that demonstrates how people can justify deviations from a model, or abandonment of a model, because of their allegedly superior intuitions. Management is a murky, fuzzy business and managers have an intuitive way of operating in this complexity. Russ Maddox (ISF 91) related an example illustrating the importance of intuition: 'I remember the best forecaster I ever supervised. He could never explain precisely how he arrived at the forecast, but his long-term record of accuracy was unequalled. It is now apparent to me that he had superior intuition, which enabled him to blend statistical analysis with subjective information from various sources to create a "feel" for the future direction of growth in his territory.'

In pursuing the two objectives of 'forecasting accuracy' and 'feel for the future', it is essential to distinguish between forecasting a 'trend' (that is, a future not structurally different from the past) and forecasting a 'shock' (that is, a future radically different from the past). For example, compare the forecasting of next year's growth of the U.S. economy with the forecasting of the fall of communism in Eastern Europe. DeRoeck (1991) argues that the same criteria of accuracy should not be applied to both kinds of forecasting problems. His view is that '... a shock cannot be forecast unless one is prepared to accept that some people are in fact able to predict the future. But that [he adds] belongs to the area of parapsychology, a field which, to my knowledge, has never been mentioned at an ISF conference. Perhaps it should.' Direct empirical tests, however, invariably favor models over intuition, at least in the absence of shocks. Furthermore, the empirical results on managerial adjustments to forecasts are mixed, suggesting caution in adjusting forecasts [Mahmoud (1989)].
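The trend-versus-shock distinction can be put in numbers with a small, invented illustration: a naive trend extrapolation does well while the future resembles the past, and has no way to anticipate a structural break. The data and the one-step method are assumptions made for the example, not a model anyone at the roundtables proposed.

```python
# Invented illustration of the trend/shock distinction.

def linear_trend_forecast(history):
    """Estimate the slope as the average first difference and
    extrapolate one step ahead; a deliberately naive method."""
    diffs = [history[i + 1] - history[i] for i in range(len(history) - 1)]
    slope = sum(diffs) / len(diffs)
    return history[-1] + slope

steady = [100, 102, 104, 106, 108]
print(linear_trend_forecast(steady))  # 110.0: the 'trend' case is easy

# A shock: the same method cannot anticipate a collapse to 60.
actual_after_shock = 60
error = abs(linear_trend_forecast(steady) - actual_after_shock)
print(error)  # 50.0 units of error at the structural break
```

Judging both forecasts by the same accuracy criterion, as DeRoeck notes, misstates what the method could reasonably have been expected to do.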

Ideally, the forecaster and user must be partners in the forecasting process. Research studies concerning the implementation of forecasting, and of models in general, consistently suggest that user involvement is the key. Lawrence (1983), for example, states that 'without the active and enthusiastic support of [users] a forecasting system cannot succeed'. The forecasting system must be viewed by users as supporting them in their forecasting work.

New forecasting people at Warner-Lambert experience sales training in order to learn about pharmaceuticals. These forecasters can then challenge the reality of product performance and understand the reasons behind it, rather than just dealing with statistics (Richard D'Angelo, ISF 91). Robert Brown (ISF 91) gave the following example. At Upjohn, each brand manager has responsibility for the entire brand and can recruit an expert who knows all about computers and statistics. The brand manager and the technical expert learn a great deal about each other's skills without necessarily aiming to, because they work so closely together. At Upjohn, five years were needed to gain acceptance for this concept, but only six months after implementation it became effective.

There could also be a mechanism within the IIF for reaching out to other members. For example, people could identify problems they had experienced and a dialogue could then be initiated between the problem owners and others who may have had similar experiences or problems and resolved them. In his text about strategic alliances, Badaracco (1991) presents the viewpoint of companies that no longer have a vested interest in building up barriers that protect them from the outside world. Instead, their strengths lie in their openness to ideas from outside. Knowledge, for them, is the key to success, and knowledge resides in social relationships as well as in patents and formulae.

5. Conclusion

The building of good relationships between academics and practitioners is something that occurs gradually. Recall the findings of the study by Wheelwright and Clarke (1976) and note the comments made fifteen years later in IJF editorials, as well as in the roundtable sessions. Trust has to be established between the forecast preparers and the forecast users. It undoubtedly will take much time to set up mechanisms for bridging the gap between academics and forecasting practitioners. Because of the nature of its membership, the IIF is an ideal forum in which such much-needed developments may occur. There are ample opportunities for leadership in both action and research in this context. Having regular dialogues enables all those involved to learn more about one another and to contribute to the development of the field of forecasting. Once bridged, the gaps can still widen; continuing effort at the local level is needed to prevent this from happening.

Appendix A. Participants in the Roundtables, ISF 91, 9-12 June 1991, New York

Bruce Abramson, University of Southern California
Wally Albers, General Motors
Robert Brown, Materials Management Systems Inc.
Tom Cook, American Airlines Decision Technologies
Richard D'Angelo, Bristol-Myers Squibb
Xiang Ying Du, Research Institute of Machinery Science & Technology, China
Benito Flores, Texas A&M University
Allen Gutheim, DRI/McGraw Hill
John Hanke, Eastern Washington University
Hilary Harmon, DSI
Chaman Jain, St. John's University
Anne Koehler, Miami University
Panos Kontzalis, Sandoz Pharma AG
Wassily Leontief, New York University
Russell Maddox, Southwestern Bell Telephone
Essam Mahmoud, AGSIM
Stuart Mattson, AGSIM
Martin Morman, Morehouse College
Lars-Erik Oller, University of Helsinki
Keith Ord, Pennsylvania State University
Richard Preismeyer, St. Mary's University
Gillian Rice, AGSIM
Michael Ryall, Decision Strategies International
Debra Schramm, Warner Lambert
Janet Sniezek, University of Illinois
Ralph Snyder, Monash University
Herman Stekler, ICAF - National Defense University
Henry Townsend, Bureau of Economic Analysis
Raza Ullah, Edtel
Lilian Shiao-Yen Wu, IBM
John Tobin, Applied Planning Associates
Tom Yokum, Angelo State University
Shawa Zhang, Ohio State University

Comments

Lilian Shiao-Yen Wu, IBM Research Division, T.J. Watson Center, Yorktown Heights, NY 10598, USA

When I first started to work in forecasting and planning at IBM almost ten years ago, I had a rude awakening as to the discrepancy in ideas between the practitioner, who was a business executive, and myself, the researcher. When the executive heard that I was going to show him 'some interesting results based on my analysis of IBM product sales', he turned to his assistant and asked 'Why is she here?' I was stunned. I felt that I had found some previously unknown 'truths' which would lead to what I thought would be better planning, and not only was he uninterested in hearing about them, but he seemed to imply that any theoretical work would be useless. He could not imagine what possible use he would have for anything worked out by a person who was not intimately knowledgeable about the product, its market and competitors. His concerns were with the results of his marketing program and whether the functions of the operating system for the machine were sufficient; how could I possibly tell him anything of even the remotest use? This experience was by no means unique. It has happened over and over again with different variations, the practitioner having basically the same attitude in each case.

I believe that a 'gap' definitely exists in forecasting between the practitioner and researcher. From my side (or the researcher's side) this arises from my belief that the development of a systematic forecasting framework is very important. We need to use history and data on external factors to find out which is the best way to predict the future and, equally importantly, to quantify how much of the 'future' we cannot forecast just by looking at the past. Stated another way, we need to quantify the uncertainty in our predictions. Then, based on the size of the uncertainties, we need to find systematic ways to manage our risks. All the details in a particular situation are there, of course, but many details cannot be precisely specified or quantified and, therefore, cannot be included and studied within a mathematical framework.

E. Mahmoud et al. / Bridging the gap between theory and practice 259

From the practitioner's side, the 'gap' arises from his view of the future. He believes that it is the particular details that will determine the future outcome. For example, it could be his marketing program, whether his suppliers fix the quality problem, or his customers' cash positions that will determine future sales. Therefore, it is highly unlikely that a mathematical model, even if it is based on past sales, will tell him much of the future. This attitude also shows up in the way in which some practitioners deal with forecast errors. They would rather hear explanations of what caused these errors, even if the explanations are not verifiable, than be given an interval forecast that quantifies past forecast errors.
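Wu's point about quantifying uncertainty can be made concrete. The sketch below (with invented sales figures; nothing here comes from IBM or the paper) turns a history of one-step-ahead forecast errors into an empirical interval around a naive point forecast:

```python
# Minimal sketch: build an empirical interval forecast from past
# one-step-ahead errors of the naive (last-value) forecast.
# All data are invented for illustration.

def interval_forecast(history, coverage=0.8):
    """Naive point forecast (last value) plus an interval taken from
    the empirical quantiles of past one-step-ahead errors."""
    # One-step-ahead errors of the naive forecast over the history
    errors = sorted(history[i] - history[i - 1] for i in range(1, len(history)))
    lo_idx = int((1 - coverage) / 2 * (len(errors) - 1))
    hi_idx = int((1 + coverage) / 2 * (len(errors) - 1))
    point = history[-1]
    return point + errors[lo_idx], point, point + errors[hi_idx]

sales = [100, 104, 98, 107, 111, 105, 115, 118, 112, 120]
low, point, high = interval_forecast(sales)
print(low, point, high)  # prints: 114 120 129
```

The interval widths come straight from the realized errors, which is exactly the kind of 'interval forecast that quantifies past forecast errors' the practitioners in Wu's account resisted.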

Working in forecasting and business planning over the last ten years has brought my thinking and working practices closer to those areas that are useful to practitioners. I have come to understand the details in a particular forecasting situation, and IBM practitioners have grown to appreciate the need for my more systematic and quantifiable approach.

Barbara Cable Nienstedt, World Research Group, 5839 N. 44th Street, Phoenix, AZ 85018, USA

My experience as a forecaster has put me on both sides of the practitioner/academician fence. I have taught forecasting to graduate classes at Arizona State University and I have also produced forecasts as an employee of an organization. In addition, as methodologist for the Office of the Auditor General of Arizona, I have been asked to evaluate the forecasts of others in public policy organizations under review. These experiences have provided me with a fuller and more personal understanding of the gap between practitioner and academician.

In the academic setting, we are introduced to all manner of forecasting possibilities, usually requiring rigorous statistical training. The variety of forecasting techniques being used by practitioners, while not always of the same degree of rigor, is, nonetheless, impressive, enterprising and resourceful. They range from the ever-popular 'Ruler Approach' (add a constant to some base number), through the 'Rube Goldberg Invention Approach' (the more convolutions the better) to the occasional, more traditional, academic method.

One of the factors that intrinsically separates the practitioner from the academic is what they see to be the actual goal of the forecast. Academicians typically view accuracy as the goal of a good forecast. Contests are even held by professional organizations to foster competition among methods and to strive for greater accuracy in forecasting methods. Academics often have little investment in what the output of these forecasts indicates for an organization.

Outside of academia, however, accuracy may be only a desired, but not necessarily imperative, by-product. The real goals of the forecast are often driven by a political agenda. One goal may be to obtain a larger share of the resource pie; another may be to expand a director's territory; yet another may be to buy credibility and respectability for the organization by hiring a well-recognized firm/university to produce the forecasts.

Several years ago, I was involved in an evaluation of a clean air program, run by an environmental agency. This evaluation included the use of a time series impact assessment and forecasts which were generated by an ARIMA modeling process. The findings of the analysis showed that there was no effect on ambient air quality due to this program. Moreover, the forecasts indicated that it was unlikely that the geographical areas in question would be in compliance with federal standards at any time in the next five years. The response of the agency was immediate and drastic. It was a case of 'killing the messenger who delivers bad news'. The forecasts were denounced as worthless, as was the entire evaluation, not to mention the forecasters themselves. The agency produced its own forecasts, which showed that compliance with air quality standards was just around the corner. What became quickly apparent was that our forecasts cast doubts not only on the program, but also on the agency, its direction and its staff. They were all fighting for their political lives through their forecasts, so the importance of the accuracy of the results was incidental.

While time proved our forecasts correct, the experience was not a pleasant one to live through. Recognition and acceptance of these political factors in a repeat evaluation five years later resulted in our taking a different approach. I assembled a task force of practitioners and academics and included representatives from the agency in question to discuss the impending evaluation. The entire process of evaluation and forecasting was a collaborative effort. There were no surprises for the agency. Moreover, the agency acknowledged that our earlier forecasts had indeed proved accurate and that the program had not achieved its desired effect. However, their political agenda on the second evaluation was directed toward improvement of the program rather than discrediting the results. The experience resulted in a sensitivity on each side towards the needs of the other and, consequently, a much more productive evaluation, with no sacrifice of integrity.

The IIF can play its own role in the promotion of these cooperative ventures. It could devote future issues of the journal to examples in which there has been coordination or cooperation between practitioners and academicians. Conversely, it could also feature 'worst-case scenarios', in which there was no cooperation and undesirable consequences resulted.

Joint research provides another opportunity to promote cooperation. It is not uncommon for practitioners to hire academics to help with certain technical projects. It is more unusual for academics to seek the help of practitioners. I have always found it immensely helpful to actively seek the advice of those who are in the field, working with the data. I feel it has made my projects more relevant to the end users.

The IIF could also help by pursuing an aggressive 'sales pitch' towards organizations which might need the services of its forecasting consultants, both academic and practitioner. Small teams (2-4) of consultants might be useful to businesses or governments. The affiliation and imprimatur of an international forecasting organization would, no doubt, endow the teams with some prestige.

Keith Ord, The Pennsylvania State University, 303 Beam Business Administration Building, University Park, PA 16802, USA

'The one thing that we know about the statistical models we develop for forecasting is that they are incorrect.' This adage, or some version of it, is often trotted out as a criticism of statistical models by those who seek to demonstrate that everything is too complex to be captured by a quantitative model. Some of the research published by the International Journal of Forecasting over the past few years has shown, however, that simple methods are often surprisingly effective. Of even greater importance, perhaps, is the fact that they are usually an order of magnitude cheaper to produce than consensus forecasts generated by hours of discussion in a smoke-filled room.
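Ord's claim that simple methods are often surprisingly effective is easy to check on any series. A minimal sketch, with invented demand data, compares two very cheap rules by one-step-ahead mean absolute error:

```python
# Sketch with invented data: compare two cheap forecasting rules by
# one-step-ahead mean absolute error (MAE).

def mae_naive(series):
    """Naive rule: forecast each value by the previous observation."""
    errors = [abs(series[i] - series[i - 1]) for i in range(1, len(series))]
    return sum(errors) / len(errors)

def mae_mean(series):
    """Mean rule: forecast each value by the mean of all prior values."""
    errors = []
    for i in range(1, len(series)):
        prior_mean = sum(series[:i]) / i
        errors.append(abs(series[i] - prior_mean))
    return sum(errors) / len(errors)

demand = [52, 55, 53, 58, 60, 57, 63, 66, 64, 70]  # a trending series
print(mae_naive(demand) < mae_mean(demand))  # prints: True
```

On a trending series the naive rule beats the in-sample mean; the point is only that such comparisons cost a few lines of code, not hours in a smoke-filled room.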

Indeed, my first statement has an important corollary: the one thing we know about forecasts developed by managerial intuition is that they are incorrect. But such sloganizing misses the point. Preparers and users can, and should, bring complementary skills and information to the forecasting process. Statistical modeling requires that we spell out our assumptions clearly; our models must be developed in conjunction with the users. Only then can we be sure of making plausible assumptions and of ensuring that we have elicited the managers' knowledge of the process (and their prejudices).

As Barbara Cable Nienstedt's superb example illustrates, we need to be politically aware of and sensitive to the needs of management. A reputable auto dealer sells vehicles on his or her reputation for quality and a guarantee of good performance; the buyer may never have much idea about what goes on under the hood. As forecasters, we need to develop the same relationship with users, but we should not shrink from developing Cadillac-style models when the occasion demands; after all, Cadillac won the Malcolm Baldrige award by listening to its customers!

Bob Bordley, Social & Economic Sciences Divi- sion, National Science Foundation, 1800 G Street NW, Washington, DC 20550, USA

The main value of many forecasters is in helping to shape various decisions. But as Barabba (1986) noted, forecasts are frequently not in a form that is particularly helpful to a decision-maker. However, an interactive decision support system* - which mediates between the forecasts and analytical models and the needs and interests of the decision-maker - can often overcome this problem. Thus, in one application, estimates of how price changes would impact sales initially attracted only a relatively small audience. But when these results were packaged as part of an interactive price war simulation that pitted different pricing managers against one another, the audience for the results was vastly magnified. And this is only one example of how the emergence of decision support systems has vastly improved the possibilities for technology transfer.

* Another alternative is a decision/risk analysis process, in which the decision-makers (the Decision Board) engage in a structured dialogue with a core team of analysts.

Decision support systems focus our attention on how robust various decisions are to forecast inaccuracy. In a surprisingly large number of decisions, there is no value in reducing a forecast's variance. If this is so, the forecast - though error-laden - is certainly as good as the decision-maker desires it to be. In other cases, knowing that forecast uncertainty is currently high can motivate certain corporate strategies - for example, flexible manufacturing - which reduce the need for more accurate forecasts. Thus, as 'interfacers', for example, decision/risk analysis teams and decision support systems designers, continue to flourish, some forecasters may choose to join the interfacers or to become their consultants.
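Bordley's robustness argument can be illustrated with a toy stocking decision (all numbers are invented for the sketch). With symmetric holding and shortage costs, the cost-minimizing order quantity is the median of the demand scenarios; a wide and a narrow forecast distribution centred on the same value therefore lead to the same decision, and reducing forecast variance would add nothing here:

```python
# Toy illustration (not from the paper): a stocking decision that is
# robust to forecast variance.

def expected_cost(order, demand_scenarios, holding=1.0, shortage=1.0):
    """Average, over equally likely demand scenarios, of holding cost on
    leftover stock plus shortage cost on unmet demand."""
    total = 0.0
    for d in demand_scenarios:
        total += holding * max(order - d, 0) + shortage * max(d - order, 0)
    return total / len(demand_scenarios)

def best_order(demand_scenarios, candidates):
    """Pick the candidate order quantity with the lowest expected cost."""
    return min(candidates, key=lambda q: expected_cost(q, demand_scenarios))

candidates = range(80, 121)
wide = [80, 90, 100, 110, 120]    # high-uncertainty forecast
narrow = [95, 98, 100, 102, 105]  # low-uncertainty forecast
print(best_order(wide, candidates), best_order(narrow, candidates))  # prints: 100 100
```

The same logic, run with asymmetric costs, would show the opposite case: decisions that do change with forecast spread, which is where reducing variance pays.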

Panos Kontzalis, Sandoz Pharma AG, CH-4002, Basle, Switzerland

In my experience the gap between forecast makers and forecast users within a corporation exists for the following reasons. Firstly, there is no central forecasting function as an independent unit. In fact, various departments (for example, corporate planning, production planning) generate different forecasts using various sources of information (for example, internal, external, headquarters, affiliates) and different techniques. The existence of forecast 'variations' leads to confusion and finally to a 'crisis of trust' and a de facto failure to recognize the forecasting discipline itself. Secondly, in the case where there is a forecasting unit within a corporation, it is mainly considered a data-providing, rather than a decision-making, function. In addition, owing to the 'political gap', managers usually anticipate 'good' figures and are not ready to accept 'bad' figures. This is where the need for education arises. Managers have to learn that modern forecasting is not the 'ruler approach' but a dynamic process which can be used effectively to reduce uncertainty about the future. Therefore, if forecasting users want to know what is most likely to happen in the future in order to turn it to their best advantage, they have first of all to know that the future may not only be brilliant, but grey and/or black as well.

J. Thomas Yokum, Angelo State University, 2601 West Avenue N, San Angelo, TX 76909, USA

The nature and extent of perceptual gaps between the occupational roles of forecasters is an important topic in forecasting today. Knowledge about these barriers may substantially advance the discipline through better implementation and usage of techniques. In addition to the various factors considered in the Mahmoud et al. article, three other points are addressed here.

First, the 'gaps' may differ depending on which occupational role groups we are considering. Yokum and Armstrong (1991) define four forecasting segments: researchers, educators, practitioners and decision-makers. The nature of discrepancies between practitioners and decision-makers may be political (division of task responsibility) and practical (different criteria used in evaluating forecasts). Discrepancies between researchers and decision-makers may involve statistical criteria versus ease-of-application issues. 'Gaps' between educators and practitioners, both concerned with successful communication, may be narrower.

Second, the nature of many analyses is to consider the groups as distinct sets. However, the groups may have occupational roles which actually overlap. In the same study, Yokum and Armstrong asked IIF respondents to allocate 100 points among the four segments, according to how they felt they were described by each role. Exhibit 2 cross-classifies the groups, with the rows representing respondents who classified themselves as primarily (>50 points) in that category. For example, the 88 IIF members who thought that they were predominantly researchers also allocated an average of 24.5 points to the role of educator, 11.4 to that of practitioner and 5.5 to that of decision-maker. The counts along the diagonal are the number in each group who allocated all 100 points to that role. The members with split roles (off-diagonal percentages) may be the best arbiters of group gaps.

Exhibit 2

                          Researcher    Educator     Practitioner  Decision-maker
Researcher (n = 88)       61.5% (7)     24.5%        11.4%         5.5%
Educator (n = 48)         16.2%         63.6% (10)   15.4%         4.8%
Practitioner (n = 111)    5.8%          9.0%         74.4% (24)    10.6%
Decision-maker (n = 28)   4.8%          6.4%         21.3%         67.9% (5)

Third, increased communication may prove more damaging than helpful. Decision-makers do not seem to enjoy working on papers and researchers often feel uncomfortable in an industrial setting. A suggestion is to consider self-interest, both individual and mutual. For example, decision-makers weigh the monetary and social rewards of the corporate culture; researchers value publication; educators and practitioners appreciate the successful transference of forecasting thought. In other words, forecasters would concentrate on using their own talents. Academicians and corporate personnel would work in the areas in which they respectively do best: theory and practice. Each group would be rewarded in the currency of its segment of the profession. Only if areas of substantial and mutual self-interest were identified would overlapping interests be pursued. The stronger the interest, the stronger would be the long-term benefits.

Dwight Thomas, AT&T Network Systems, 3330 W. Friendly Ave., Greensboro, NC 27410, USA

As a user, practitioner and/or internal consultant in the field of forecasting for seventeen years with AT&T, I sense that our profession is on the threshold of a 'golden age'.

Many diverse, sophisticated and increasingly accurate statistical methodologies have been developed. Low-cost mass data storage devices are available. There is also widespread deployment of decentralized computing power on the desktop (or even 'lap-top' or 'palm-top') computer. These technological advances are converging with a managerial process that emphasizes discipline, quality and speed. The forecasting profession can be uniquely positioned to impact significantly our business and government entities. The methods, software, hardware and climate are ready. We in the forecasting profession must now lead in the application of these tools. We must show that we can contribute through reduced costs, shorter process cycle times and a reduction in defections in the business. The ball is in our court.

This is why the subject-matter of this paper by Mahmoud et al. is so important to us. There is a gap between forecasting theory and practice, and we ought to view this as part of the natural evolution of this profession and as an opportunity for each of us. We need not be defensive or deny its existence. A 'gap' implies knowledge or understanding by one group and lack of the same knowledge or understanding by another group. We, the academics and practitioners, believe we have the ability, through forecasting, to enable our enterprises to better achieve objectives, strategies or plans. The users of our forecasts often do not agree (based on their past experiences) or have no opinion at all.

The gap between academics and practitioners can be partially closed by adding more structure to the term 'forecasting'. Even a company as large as AT&T, for example, does not have a Forecasting Vice-President. We do have a Marketing Vice-President, Vice-Presidents of Sales, Vice-Presidents of Manufacturing and even a Corporate Economist, each of whom is a forecaster. These forecasters are defined by their functional objects: market share forecast, sales (revenue) forecast, product demand forecast and macroeconomic forecasts. It is important, therefore, for the academic community to identify its focus, or its customer, more specifically. This is not to suggest that there is not a need for pure research, but if we want to close the academic/practitioner gap, we need to focus more sharply on the customer (the practitioner).

Regarding the gap between practitioners and users, the burden lies on the practitioners. The keys here are relevance and value added. The forecasters must consistently convey not only raw numbers but rationale, insight and understanding, so that the user can make plans and decisions that positively impact the business or government entity. If the forecast cannot add value, it is like a puzzle piece that does not fit. For example, a forecast for AT&T's 5ESS Switching System must be linked to a potential for a large contract with a new customer. The user is entitled to know whether the forecast includes this contract or not, so that the production planning and raw material procurement decisions can be optimized. At AT&T, we urge our product forecasters to 'start statistically and end judgmentally', to ensure relevance to the business issues.

Richard C. Wiser, Mary Kay Cosmetics, Inc., 8787 Stemmons Freeway, Dallas, TX 75247, USA

Over the last fifteen years, I have had the advantage of working on forecasting and related analysis problems with Tim Davidson (Temple, Barker and Sloane), Randall Schultz (University of Iowa), Essam Mahmoud (American Graduate School of International Management) and Oral Capps (Texas A&M). In a way somewhat similar to that described in Lilian Wu's commentary, my initial forecasting experiences were also rude awakenings, as I learned that a forecast based on solid statistics could be ignored by management due to political and communication barriers. The tendency to confuse a forecast with a goal was a major barrier. Management sometimes discounts a forecast simply because it does not conform with desired results. However, by working with managers to understand how they view the business, and by incorporating assistance from the educators and researchers previously mentioned, we were able to gain forecasting credibility and a role in planning our business based on our statistical findings.

Educators such as Schultz, Mahmoud and Capps have aided us, as practitioners, by finding new factors impacting our forecasts, suggesting methods to improve unit forecasts, providing statistical tools to improve forecasts and training the department's staff. Time and experience have shown that the gap between practitioners, educators and management can be partially bridged if there is mutual respect and an understanding of each other's roles. The department's role has tended to become one of explaining our business structure to educators, translating management's view for educators, and explaining statistical techniques and their use to management in a manner which conforms to their view of the business.

We established management credibility by designing forecast systems which conformed to management's view of the business. In addition, we emphasized the analysis part of our function as much as the forecast. For example, credibility with the head of marketing was established because our department explained to him the sales and financial impact of his marketing actions. Thus, the head of marketing could explain the business impact of his marketing actions to the firm's president, as well as project the impact of future actions. Credibility established in one area spreads to others.

Over the years, the forecasting and analysis function has evolved from being the part-time job of forecasting sales for the annual plan to requiring a department of its own, assisting management to plan and analyze the business. Department personnel forecast overall sales based on marketing plans and economic trends, forecast individual product sales, assist product marketing to forecast special promotion sales, analyze and report sales trends, and project and analyze the impact of marketing plans and events. It has taken time, a patient management group, and skilful educators and academics to reach this point. While I believe that we have partially bridged the gap at Mary Kay, we still have to work hard to stay where we are and must communicate constantly. The progress made has taken time, and we still have improvements to make.

Benchmarking also provides the opportunity for forecasters and users to learn jointly from 'world class' performers. Not only can forecasters validate their own processes, but users can learn from other users.

In conclusion, the gaps cited here are a normal stage of our profession's evolutionary journey. We need to 'seize the moment' and demonstrate our value and relevance.

Stuart Bretschneider, The Maxwell School Technology and Information Policy Program, Syracuse University, 529 Link Hall, Syracuse, NY 13244-1240, USA

First, I wish to applaud the effort the authors have made to directly confront this problem. They have, through both the roundtables and this paper, begun the process of identifying and, hopefully, closing the gap that exists between university-based forecasting research and practice in the field. My objective in presenting the following remarks is similar to that of the authors of the paper: I (we) want to encourage discussion, debate and some hard thinking about the appropriate role of academics doing research in forecasting with respect to forecasting in practice.

Let me begin by arguing that the paper puts forward some bold proposals for dealing with the 'gap'. I have no doubt that some, and possibly all, of the suggestions put forth will work in some settings, but I fear they will also face resistance and failure in other situations. My comments will focus on three areas which, in my opinion, are important though not well covered in the paper and have a direct bearing on several of the prescriptions put forth by the authors.

Problem 1: Top-level objectives versus accurate forecasting. Several years ago I wrote a short summary paper about forecasting to help teach students some basic ideas [Bretschneider (1985)]. In that paper I developed a typology of forecasts, in which one type of forecast was named a 'control' forecast. To my way of thinking, a 'control' forecast is not an extrapolation of the past, nor is it the result of a sophisticated planning model which considers the effect of various policy actions (though it could be). Generally, a control forecast is a 'target' or objective subjectively developed by top management, against which organizational activity will be evaluated. Such targets are the result of 'goal-setting' activity within the organization and thus often occur at the highest level (for example, corporate vice president or chief executive officer). For example, the company will generate 15% more in gross sales this coming year, or sales of our major product line must grow by 5%. Usually, such forecasts are incremental over last year's level, adjusted because of some subjective view of the future economy, or larger corporate goals. Given that the purpose of such 'control' forecasts is 'goal-setting', data analytic approaches will tend to be inappropriate for generating such forecasts, particularly time series extrapolative methods.

When top executives develop 'control' forecasts, and professional forecasters argue for the use of their data- and model-driven approaches, several major problems may emerge. First, a priori claims for accuracy as a criterion have little or no meaning for evaluating 'control' forecasts. Thus, the professional forecasters find that they are in unfamiliar territory. If the professional forecaster generates what later turn out to be very accurate predictions, which also happen to be distinct from the executive view, it will not mitigate the fact that, from a corporate strategic view, organizational goals were not met and corporate performance was 'bad'. It is not a case of who is 'right' and who is 'wrong', rather a case of what the purpose of the numbers is. Second, there is a real danger in having professional forecasters involved in corporate politics, especially when they are trying to adjust 'control' forecasts which, as stated above, tend to reflect strategic goals. For example, there is a problem of information asymmetry. Top executives have different information, usually more relevant to the goal-setting process, than do professional forecasters. Finally, strategic and executive level activity, almost by definition, attempts to change (or break) historical patterns. Hence, executive level 'control' numbers might reflect new strategic directions for the corporation.

The main conclusion is that professional forecasters must be clear as to whether they are arguing over forecasts of production or operational level phenomena, or over the selling of strategic goals.

Problem 2: Let me do what I know! Not what you need! A second major point is that academic researchers tend to do what they know! Consequently, they focus on questions to which they can find answers, which may not be of any interest in practice. An example is the recent emphasis placed by researchers on which method forecasts most accurately. The question is essentially unanswerable to the satisfaction of most practitioners, especially given the typical academic approach to solving the problem. Most practitioners are oriented toward specific 'idiosyncratic' situations, while researchers hope to generalize. Consequently, after studying the performance characteristics of 20 methods over a sample of 1,000 time series, researchers feel they have generated some generalizable knowledge. Practitioners, on the other hand (I think correctly), recognize that the results from these academic studies are not likely to apply to their particular circumstances. As a professional organization of forecasters, this is probably our number one problem!

How do we get researchers to focus on more relevant questions? There exists a small amount of research literature on how to encourage applied and interdisciplinary research in universities. There has also been a growing number of prototype programs between industry and universities designed to promote not simply potentially useful innovations, but also development and commercialization activities designed to completely transfer ideas across to working innovations. Some of these ideas are also potentially useful in bridging the gap between theory and practice in forecasting. They include, but are not limited to:

(a) industrial funding of applied research, where industry defines the central questions;
(b) adjustment of reward systems within universities to provide more credit for applied work and the generation of grants;
(c) development of university and industrial research consortia, joint projects and partners programs;
(d) the vesting of a share of the property rights associated with inventions and savings with the academic researcher; and
(e) personnel exchange programs between industry and universities.

In many cases, knowledge of the organizational setting for the forecasting activity is necessary for successful innovation or change in an organization. I have argued elsewhere that it is necessary for academics to begin to study forecasting as an organizational activity [Bretschneider and Gorr (1989a)]. Such research should yield knowledge that will directly enhance the ability of academics to affect practice in a meaningful way. Many of the suggestions listed above, designed to improve contact between academics and professionals, work, in fact, in two ways. Not only does industry obtain, develop and commercialize forecasting innovations and procedures; academics also build a reservoir of organizational knowledge about forecasting.

Problem 3: The problem of context: public sector forecasting is different. The final point I want to make deals with the difference between forecasting in public and in private sector organizations. Assuming that some differences do exist, the nature of the 'gap' between theory and practice in forecasting might also be different. One difference that my current research focuses on has to do with the use of extrapolative versus explanatory models in forecasting. There is a clear bias towards explanatory models in government, even when the forecast phenomena are better handled by extrapolative methods.* Why?

In most government organizations the forecast numbers are less important than the 'story' or explanation that goes along with them. Clearly, those making use of production forecasts for inventory management within business organizations could not care less why the numbers are what they are, as long as they are accurate. Why, then, is the 'story' so important in government?
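The distinction being drawn here can be made concrete with a small illustration (not from the paper; the data and function names below are hypothetical): an extrapolative forecast is built only from the series' own history and offers no explanation, while an explanatory forecast ties the series to an observed driver whose fitted slope supplies the 'story'.

```python
def extrapolative_forecast(series, alpha=0.3):
    """Simple exponential smoothing: the forecast comes only from the
    series' own past; it carries no 'story' about why demand moves."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level  # one-step-ahead forecast

def explanatory_forecast(series, driver, next_driver):
    """Least-squares fit of the series on an observed driver (e.g. income).
    The slope supplies the 'story': demand moves because the driver moves."""
    n = len(series)
    mx = sum(driver) / n
    my = sum(series) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(driver, series)) / \
           sum((x - mx) ** 2 for x in driver)
    intercept = my - beta * mx
    return intercept + beta * next_driver

demand = [100, 104, 103, 108, 110, 109]  # hypothetical monthly demand
income = [50, 52, 51, 54, 55, 54]        # hypothetical driver series
f_extrap = extrapolative_forecast(demand)            # a number, no explanation
f_explan = explanatory_forecast(demand, income, 56)  # a number plus a 'story'
```

Both calls produce a usable forecast; only the second yields something to report upward ("demand rises about two units per unit of income"), which is precisely what the public sector setting rewards.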

There are several low-level explanations that relate to what public administration research refers to as 'life in the goldfish bowl', and higher-level explanations that relate to the problems of accountability found in public sector organizations. To illustrate this idea, consider what happens if a mid-level bureaucrat generates a one million dollar error at IBM. No one outside IBM is likely to know about it. On the other hand, if some federal, state or local government employee generates a one million dollar error, it is likely to appear in the newspapers and on the six and eleven o'clock news broadcasts! Government organizations have many more levels of review and oversight (including the press) that are the result of historic and constitutional requirements for checks and balances. This difference clearly helps to explain why, in government, the 'story' is more important than the numbers when forecasting. Many others, both inside and outside of the government organizations, will ask the question: 'Why these particular forecasts?' Consequently, explanatory models are preferred to their more accurate cousins simply because they generate good 'stories' that are detailed, plausible and credible. The stage, then, is set for conflict between researchers advocating accurate approaches and public sector practitioners requiring methods that generate good 'stories' and detailed explanations.

* For a good example of this see 'Forecast Evaluation at U.S.D.A.' by Ken Holden, presented at the International Symposium on Forecasting in New York, 1991.

Conclusions. I view this paper and the accompanying commentaries as a first step towards a better understanding of the 'gap' between theory and practice in forecasting and, hence, toward mitigation of the problem. It is important to consider the objectives behind a forecast before identifying either a forecasting technology or an implementation strategy. Also, forecasts in practice cannot be divorced from their organizational setting. This suggests two lines of research by academics which might provide insight into the matching of forecasting technology to practice: we need to understand better how strategy formation, goal-setting and forecasting are linked within organizations, and we need a better understanding of forecasting as an organizational process which requires extensive intra-unit communications and coordination.

References

Armstrong, J.S., 1988, "Research needs in forecasting", International Journal of Forecasting, 4, 449-465.

Badaracco, J.L., Jr., 1991, The Knowledge Link (Harvard Business School Press, Boston).

Barabba, V.P., 1986, "Through a glass less darkly", Journal of the American Statistical Association, 3, 86.

Bretschneider, S.I., 1985, "Forecasting: Some new realities", Metropolitan Studies Program, Occasional Paper 99, December.

Bretschneider, S.I. and W.L. Gorr, 1989a, "Forecasting as a science", International Journal of Forecasting, 5, 305-306.

Bretschneider, S.I. and W.L. Gorr, 1989b, "Introduction to the special issue on public sector forecasting", International Journal of Forecasting, 5, 303-304.

Cook, T.M., 1990, "Improving the relationship between academia and industry: A challenge for the '90s", OR/MS Today, October, 4.

Dagum, E., 1989, "The future of the forecasting profession", International Journal of Forecasting, 5, 155-157.

De Gooijer, J.G., 1990, "The role of time series analysis in forecasting: A personal view", International Journal of Forecasting, 6, 449-451.

DeRoeck, R., 1991, "Is there a gap between forecasting theory and practice? A personal view", International Journal of Forecasting, 7, 1-2.

Fildes, R., 1990, "The organization and improvement of market forecasting", paper presented at The Tenth International Symposium on Forecasting, Delphi, Greece.

Gardner, E.S., Jr., 1991, "The state of the institute", Newsletter of the International Institute of Forecasters, 4, 2-5.

Gardner, E.S., Jr. and S. Makridakis, 1988, "The future of forecasting", International Journal of Forecasting, 3, 321-324.

Lawrence, M.J., 1983, "An exploration of some practical issues in the use of quantitative forecasting models", Journal of Forecasting, 2, 169-179.

Leitch, G. and J.E. Tanner, 1991, "Economic forecast evaluation: profit versus the conventional error measures", American Economic Review, 81, 580-590.

Mahmoud, E., 1989, "Combining forecasts: Some managerial issues", International Journal of Forecasting, 5, 599-600.

Mahmoud, E. and C. Pegels, 1990, "An approach to selecting time series forecasting models", International Journal of Operations & Production Management, 10, 50-60.

Makridakis, S., 1988, "Metaforecasting: Ways of improving accuracy and usefulness", International Journal of Forecasting, 4, 467-491.

Price, D.H.R. and J.A. Sharp, 1986, "A comparison of the performance of different univariate forecasting methods in a model of capacity acquisition in UK electricity supply", International Journal of Forecasting, 2, 333-348.

Rice, G. and E. Mahmoud, 1984, "Forecasting and data bases in international business", Management International Review, 24, 59-71.

Wheelwright, S.C. and D.G. Clarke, 1976, "Corporate forecasting: Promise and reality", Harvard Business Review, 53, 30-47.

Yokum, T. and J.S. Armstrong, 1991, "Barriers to the diffusion of forecasting techniques", Working paper, Angelo State University.

Biographies: Essam MAHMOUD is Professor of Management Science at the American Graduate School of International Management (Thunderbird). He is particularly interested in forecast accuracy and the implementation of forecasts in organizations. He serves on the editorial boards of the International Journal of Forecasting, Decision Sciences, Journal of the Academy of Marketing Science and Information & Management.

During most of his 25 years with General Motors, Richard DEROECK was Director of International Economic and Automotive Forecasting. In that capacity, he was responsible for the analysis and forecast of economic growth, inflation, exchange rates and vehicle demand in the world economy. Mr. DeRoeck now lives in Tucson, where he will continue to pursue his interests in economic analysis and forecasting.

Robert BROWN is President of Materials Management Systems, Inc. For more than 35 years he has specialized in statistical forecasting for inventory control. He has worked in operations research with Arthur D. Little, IBM, Curtiss-Wright, the U.S. Navy and Air Force and has been a visiting professor at Yale, Northeastern, Dartmouth, Boston and Lehigh Universities.

Gillian RICE is Associate Professor of Marketing at the American Graduate School of International Management (Thunderbird). Her research interests include the implementation of forecasting in organizations as well as political risk forecasting and other international applications of forecasting methods.