Teaching Statistics to Engineers
Author(s): Søren Bisgaard
Source: The American Statistician, Vol. 45, No. 4 (Nov., 1991), pp. 274-283
Published by: American Statistical Association
Stable URL: http://www.jstor.org/stable/2684452
Accessed: 24/11/2011 14:32

Your use of the JSTOR archive indicates your acceptance of the Terms & Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact [email protected].

American Statistical Association is collaborating with JSTOR to digitize, preserve and extend access to The American Statistician.

Teaching Statistics to Engineers





Teaching Statistics to Engineers SØREN BISGAARD*

Engineers perform experiments and analyze data as an integral part of their job regardless of whether they have learned statistics. But those that have are likely to be more effective engineers. The fact that many engineers have only recently "discovered" statistics suggests that we need to reconsider our approach to teaching this important science. I report on our experience teaching engineers using an approach that integrates statistics into engineering practice. Examples of course structure and curricula for both university and professional industrial courses are discussed. Special teaching methods are presented that help engineers understand that statistics can help them solve their problems.

KEY WORDS: Design-of-experiment demonstration; Engineering education; Statistical consulting.

1. INTRODUCTION

The title of this article is in some sense misleading. A more accurate title would be "Teaching Engineers About Solving Engineering Problems That Incidentally Require Statistical Methods." But that title is also misleading. The problem I really want to address is: "How to Get Statisticians to Teach Engineers How to Solve Engineering Problems That, Incidentally, Require Statistical Thinking and Methods."

As this revised title indicates, there is an implied criticism of the way the statistics profession has approached what has become known as Engineering Statistics. And that criticism not only concerns teaching style, but also content and organization. I dare say that in the past we have failed miserably. Some might be offended by my criticism and say, "Show me the data," but the answer is obvious. If we statisticians had been successful in teaching engineers problem-solving skills using statistical methods, statistics and experimental design would today be incorporated in all engineering education and widely practiced, and it would not have been possible for Taguchi to move in and dazzle engineers with what is essentially simple (but often messed up) experimental design.

Although many of us can agree on criticizing Taguchi's use of statistics, his enormous success among engineers ought to make us rethink what and how we teach engineers about statistical methods. A fundamental reevaluation ought not leave anything sacred. (Incidentally, it does not mean that we should copy Taguchi's teaching method, either.) Engineering statistics has traditionally been regarded as a separate topic, almost "orthogonal" to engineering. Instead, it should be an integral part of engineering. Statistical methods are not an adjunct to, but are an inseparable part of, how engineers solve engineering problems. Rather than looking for engineering problems that fit into our perception of engineering statistics, we should turn the question around and ask the engineers what they do, and see how we can help them solve their (engineering) problems.

So what do engineers do? Engineers, of course, do a variety of things. In fact, they do so many different things that they, themselves, have divided engineering into at least four major categories: mechanical, electrical, civil, and chemical, and each of these is further subdivided. When we teach engineers we need to keep in mind that there are certainly commonalities, but also many differences, among the problems engineers face. From my experience, I know most about mechanical, electrical, and the related manufacturing and industrial engineering aspects of these areas. Therefore, my discussion in this article concerns only these fields.

Broadly speaking, engineers in these areas develop new products; improve previous designs; build and test prototypes; design tools, machines, and processes to make and test products; maintain, control, troubleshoot, and improve ongoing manufacturing processes; and maintain, service, and repair products. In each of these functions engineers collect and analyze data. Whether or not engineers have learned statistics, they will do statistics. Therefore, the issue is not whether they use statistics or not, but how good they are at it. We statisticians have especially failed to convey to engineers that in fact statistics includes the art and science of collecting data, be it through designed experiments or by sampling. If an engineering student has had a statistics course, design of experiments has usually been tucked away in the back of the textbook and hence has not been covered in a one-semester course. We cannot expect a student to sit through more than a one-semester course if it does not appear useful and relevant.

Engineering educators have also not been very conscious about how much experimentation is in fact a part of an engineer's daily work. During the engineer's education it is not talked about. Perhaps it was the post-World War II push for making engineering education more "scientific" that led to an overemphasis on pure, deductive reasoning. Like everything else, too little and too much is usually bad. Many engineering students leave today's universities thinking that everything they need to

*Søren Bisgaard is Assistant Professor, Department of Industrial Engineering and the Center for Quality and Productivity Improvement, University of Wisconsin, Madison, WI 53705. This work was sponsored by National Science Foundation Grant DDM-8808138 and was presented as an invited paper at the annual meetings of the American Statistical Association, August 7, 1989 in Washington, DC. The author thanks George Box and Conrad Fung, as well as Tom Lewis of Mercury Marine, for many constructive discussions in developing the views expounded in this article and Albert Prat, Bill Hill, and Stephen Jones for many useful comments on previous drafts.

274 The American Statistician, November 1991, Vol. 45, No. 4 © 1991 American Statistical Association


know can essentially be deduced from Newton's Three Laws, Maxwell's Equations, Kirchhoff's Laws, and the Three Laws of Thermodynamics. It is not brought to their attention that every one of these laws originated from an extensive series of experiments in an iterative process of induction and deduction. Once students graduate, of course, they will have to confront reality and will have to experiment, but they are ill equipped to do so. The only type of experiments they have been involved in, if any, are often demonstration experiments of already-known phenomena. Unfortunately, these demonstrations only vaguely resemble real experiments from which new knowledge is generated.

With rapidly developing new technologies, materials, and processes, it is today even more important that engineers learn how to learn basic facts. Learning how to learn is the only invariant in today's rapidly changing world. Thus it is essential to gain a working knowledge about how to conduct valid and efficient experiments, collect data, analyze the results, and build models. In essence, we should teach scientific method and problem-solving skills related to engineering. As we do that, statistics will present itself at the forefront of the stage. Then it will be natural to shift the emphasis in teaching: rather than introducing statistics with excessive amounts of probability theory, we can reverse the order and front-load the course with the teaching of design of experiments.

2. PHILOSOPHICAL PRELIMINARIES

Before I make specific suggestions for teaching statistics to engineers, I would like to discuss some philosophical issues that I think ought to serve as guiding principles. Most important, we need to make it clear to ourselves what we mean by statistics and the role we envision statistics should play in engineering science and practice.

The (informal) definition of statistics that I like best is "statistics is the art and science of collecting and analyzing data." Like physics, it is a science distinct from mathematics. It is true that statistics, like physics, draws heavily on mathematics for developing theory and methods; I would like to emphasize that we should not underestimate the importance of mathematics for statistical theory. But as physics is not just applied differential equations, so is statistics not just applied probability. As a corollary to this, what is important for the science of statistics should not be judged by how complicated it is mathematically. Graphics and exploratory data analysis are examples of methods that require only a minimum of mathematics but are of extreme importance for engineering statistics. I tend to think it was a violation of this corollary that in the past led to the damaging overemphasis on acceptance sampling in quality control. Ishikawa's Seven Tools, on the other hand, are from a mathematical point of view trivial (even pitiful), and no fancy theorems can be proved about them, but the philosophy behind their use is very important in helping engineers improve processes and they are vital tools for solving problems as detectives. As a science, statistics is more appropriately considered the science of inductive reasoning and experimentation. What we emphasize and teach engineers should be to that end.

Box (1976) described a useful model for the role of statistics in a broader, scientific context. His model of the scientific learning process displayed an iterative process of induction and deduction, as illustrated in Figure 1. As previously mentioned, most engineering education is presented almost exclusively as a deductive development. The historical development of theories that had an empirical and inductive heritage is ignored. Even truly empirical relationships such as Taylor's tool-life equations are often disguised as if they were deduced from some natural law. For teaching large quantities of accumulated knowledge, that is perhaps the most efficient way to proceed. It gives students the false impression, however, that science and scientific thinking are synonymous with deductive reasoning only. They might even think that empirical work and inductive reasoning are somehow "dirty" and unscientific. Even worse, they might think that they can simulate everything on a computer and, if there is a difference between their results and nature, nature is wrong. It is important that we introduce statistics not as a separate topic, but as an integral component in the scientific learning process of induction and deduction. The impression of science as purely deductive is quite unproductive and can be the cause of the difficulty of introducing statistics as a valuable topic of study for engineers. The introductory statistics courses should be used to assure that students learn how to generate fundamental new knowledge through induction and experimentation.

In this context it is well worth rereading many of R. A. Fisher's more philosophical articles. Few will dispute his profound understanding of statistics and science. In particular, Fisher's strong emphasis on the role of statistics in scientific investigations is important when

[Figure 1. The Advancement of Learning (reproduced from Box 1976). The figure has two panels. Panel 1, "An Iteration Between Theory and Practice," links practice, data, and facts on one side with hypothesis, model, conjecture, theory, and idea on the other through alternating arrows of induction and deduction. Panel 2, "A Feedback Loop," shows the consequences of a hypothesis H_j deduced and compared with facts; the resulting error signal leads, by induction, to a modified hypothesis H_{j+1} that replaces H_j.]



we consider how to teach statistics to engineers. For ex- ample, Fisher wrote:

. . . [variable] phenomena come to our knowledge by observation of the real world, and it is no small part of our task to understand, design and execute the forms of observation, survey or experiments, which shall be competent to supply the knowledge needed. The observational material requires interpretation and analysis, and no progress is to be expected without constant experience in analysing and interpreting observational data of the most diverse types. Only so, as I have suggested, can a genuine and comprehensive formulation of the process of inductive reasoning come into existence. (Fisher 1948, p. 40)

It is our duty as educators to help students appreciate inductive reasoning through this "constant experience in analysing and interpreting observational data of the most diverse types." Let us challenge students with real life and, often, messy problems. We should not start out with, "Let X1, X2, ... be iid normally distributed random variables . . . ," but let them cut the problems out themselves, so that reasonable, plausible models and assumptions can be tried. Even better, we should let students be involved in arranging the experimental situations so that standard assumptions are plausible. Real statistical work involves a lot of practical considerations that I think can only be learned by doing statistics. Teaching inductive reasoning is difficult and does not seem to lend itself to the traditional theorem, proof, and example mode. The best way to learn it is by participating in all phases and details of real experimentation and through data analysis of real data sets (not simulated).

Fisher also had advice to statistics teachers (they were called mathematicians at the time) that is as relevant today as when it was first written. For example:

I want to insist on the important moral that the responsibility for the teaching of statistical methods in our universities must be entrusted, certainly to highly trained mathematicians, but only to such mathematicians as have had sufficiently prolonged experience of practical research, and of responsibility for drawing conclusions from actual data, upon which practical action is to be taken. Mathematical acuteness alone is not enough. (Fisher 1938, p. 16)

I personally consult with many engineers both from the University of Wisconsin-Madison campus and from industry. I do not see how I could teach statistics without the experience that this consulting gives me. Moreover, I know my students appreciate the relevance and realism it brings to my teaching (see Bisgaard 1989). Consulting certainly has taught me a lot about what the real problems are. In fact, I think it ought to be as inconceivable to be a statistician who never consults as it is to be a "theoretical" physician who never sees a patient.

When teaching introductory statistics courses to engineers I think it is important that we teach what I call (for lack of a better term) "statistical intuition." For example, what is the intuitive idea of the paired experiment? Why is it essential that the pairs be formed, and how does that setup differ from the unpaired experiment? Why from an intuitive point of view does the t distribution have fatter tails than the normal? And why does the t distribution tend to the normal as the sample size increases? After all, statistics is just common sense. As was pointed out in Davies (1954, p. 14), "By means of statistical methods the intuitive type of reasoning which an intelligent person might apply in drawing inferences from data is made objective and precise." Conversely, "objective and precise" statistical methods summarize a lot of common sense reasoning that is worth explaining to students in detail and in nonmathematical terms so that we elevate their level of intuition. We should remind ourselves of David Cox's comment (Cox 1981, p. 289): "Theory, while often mathematical, is not necessarily so and theory is certainly not synonymous with mathematical theory." Blocking, randomization, and other precautions that ensure that an experiment is properly conducted are part of the theory of statistics but are not entirely mathematical. Experimental strategy and tactics are also mostly nonmathematical, but are of no small importance for the successful completion of an experiment.

In teaching engineers statistics, we can learn a great deal from the physicists. They are not shy about teaching physics, starting with mostly qualitative insights, experiments, and applications. And, they use numerous well-chosen physical experiments, demonstrated right in front of the students, to explain the theory. Only later, when a certain level of qualitative understanding and intuition has been built up, do they introduce differential equations, calculus of variations arguments, and so on. We should do the same! Moreover, we should teach fun things and methods the students can use, so that they become enthusiastic. Once they are "hooked" on statistics and can see just how much fun it can be, they will have the energy to study in more detail. The best we can do in a one-semester introductory engineering statistics course is to provide an "appetizer" to statistics, to get students excited.

Another point I want to make is that our job as teachers is not to screen out the less mathematically inclined students. It is to help them become better engineers! Some of the best experimenters I have taught were not necessarily the most mathematically and academically minded students and engineers. (Sometimes a practical and even moderately risk-taking nature is required to be a good experimenter. And that is not necessarily the trademark of a student inclined to book work.) Why should we deprive otherwise good engineers of the benefits of learning experimental design by setting up artificial barriers in terms of combinatorics and probability that they really do not need? That certainly is to nobody's benefit in the long run. We must get away from letting the introductory statistics course be used to (artificially) create academic prestige (or boost our own vanity) by unduly challenging the students' mathematical skills through rigor and exercises that have only little relevance to engineering practice. Even the most intelligent students are justified in having an aversion to something that seems unnecessarily complicated and not very useful.

Universities tend to teach large quantities of facts. In statistics that tendency manifests itself in the teaching of a large number of techniques, in particular, hypothesis-testing techniques. However, more is often less. Good statistical work requires a certain craftsmanship. Cuthbert Daniel has taught us a lot about that. I think we should teach fewer things, but teach good statistical craftsmanship through careful and detailed analysis of a few examples in the style of Daniel (1976). Box's analysis and discussion of Quinlan's experiment (see Box 1988; Quinlan 1988) is a good example of this that we use in our own teaching at the University of Wisconsin (with a few more details and more graphics than are provided in the published version).

For the engineer's general education, it is important to teach critical thinking. In most (deductive) courses, there is usually only one solution, obtained in a logical and linear (deductive) fashion. The introductory statistics course may be the first occasion where a student has the experience that there is more than one solution to a problem, that "all models are wrong but some are useful" (Box 1979, p. 202), and where criticism of the experimental setup is important. We also need much more emphasis on the experimental situation and scenario. The same data might be interpreted differently depending on the experimental setup. For example, if the experiment had repeated measurements or split plotting instead of genuine replications, we need to adopt a different analysis and interpretation. To understand the physical setup of the experiment is more important than to understand the details of the computations. The qualitative and nonmathematical notions are more important than the details of the derivation of a procedure. The example on abrasion resistance of rubber sheets from Davies (1954) is a beautiful illustration of this. Fisher's discussion of Darwin's experiment on self-fertilized versus cross-fertilized plants (see Fisher 1935) is another. We can, through statistics, teach students to exercise judgment and think about what is important and what is not. If we do that, I think we can help a great deal to ease our students' transition into the real world.

George Box, Conrad Fung, and I have done some rethinking about how we teach statistics to engineers. The foregoing considerations are a result of this (although I take full responsibility for whatever might be controversial, since it is my interpretation that is presented here). The way we teach our courses has been worked out in close collaboration and after many long discussions. It is impossible to untangle who had which idea at this point, but I am indebted to them for their many ideas and insights. Moreover, the late Bill Hunter has very much influenced our thinking on the issue of teaching engineering statistics. We owe many of the ideas to him. We have also benefited from the ideas presented by Snee (1980), Kempthorne (1980), Deming (1975), and Hogg (1985). I will explain in more detail how our courses are designed.

In all of the courses we teach, we have reversed the traditional sequence of teaching probability first and experimental design last. We emphasize concepts, ideas, and the qualitative and philosophical aspects more than mathematical techniques and their derivation. Most things are explained inductively and with graphics and conceptual diagrams rather than equations. We focus on what engineers do. We always start with a problem, rather than focusing on techniques. Most important, we show a live demonstration of a product developed through experimentation using statistics.

We have experimented with these ideas for some time, and we think we have managed to impress on our students that statistics is an integral part of engineering science and practice. In the following sections I will discuss three different types of courses that I have been involved in teaching. The first is an undergraduate-level one-semester university course. The second and third are courses for practicing engineers from industry, the difference between the two being that one is a one-week short course often taught at a site remote from the engineer's company and the other is spread out over an extended period of time and taught on site.

3. CURRICULUM FOR A UNIVERSITY COURSE

The one-semester university course that I teach, in collaboration with Conrad Fung and designed jointly with George Box, is not officially an introductory undergraduate statistics course, but could be and actually is, for some students. Most students are from industrial, mechanical, and manufacturing engineering. As a key point, we do not really teach statistics in the traditional sense. Instead, we teach engineering problem solving that incidentally requires statistics. Therefore, we start out with an engineering problem and bring in appropriate statistical techniques and thinking as needed.

We start with a philosophical introduction and overview of the scientific context of modern quality improvement and the role of engineering statistics, with a discussion of informed observation (mostly Ishikawa's Seven Tools for problem solving) and directed experimentation following the general outline of Box and Bisgaard (1987). After this introduction we go directly to a problem related to the development of a new product. (No coin flipping or red and blue balls first.) To make it real, we have a small paper helicopter (our product) that we bring to the classroom. We tell the students that we are in the process of developing a better helicopter and, therefore, need to test this prototype. Specifically, we tell them that an important characteristic of the helicopter is flight time. We then climb up on a ladder and drop the helicopter four times from the ceiling, measure the time it takes to hit the floor, and, of course, get different numbers. Next we plot the data as a dot diagram, showing that the first step in any analysis is to plot the data. From the plot, we develop the ideas of location and dispersion and show how to quantify these notions in terms of the average and the standard deviation. We also show small diagrams that explain the central limit effect and the fact that an average has the same mean but a smaller variance than the original observations. Everything is introduced heuristically, inductively, and intuitively so that the students get a feel for what it means in the engineering context of the problem.
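The classroom arithmetic can be sketched in a few lines of code (the four flight times below are hypothetical stand-ins for the classroom drops, not the article's data). The simulation at the end illustrates the point about averages: an average of n observations has the same mean as a single observation but standard deviation sigma/sqrt(n).

```python
import math
import random
import statistics

# Four (hypothetical) flight times, in seconds, from dropping the prototype.
times = [2.3, 2.6, 2.1, 2.4]
xbar = statistics.mean(times)     # location: the average
s = statistics.stdev(times)       # dispersion: the standard deviation
print(f"average = {xbar:.2f} s, standard deviation = {s:.2f} s")

# An average of n = 4 observations has standard deviation sigma / sqrt(n).
random.seed(7)
sigma, n = 0.2, 4
single = [random.gauss(2.35, sigma) for _ in range(20_000)]
averages = [statistics.mean(random.gauss(2.35, sigma) for _ in range(n))
            for _ in range(20_000)]
print(statistics.stdev(single))    # close to sigma = 0.2
print(statistics.stdev(averages))  # close to sigma / sqrt(4) = 0.1
```

In class this is of course done with pictures rather than code, but the same two facts, location plus dispersion and the shrinking variance of an average, carry the whole first lecture.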

The helicopter experiment immediately raises questions about how to measure flight time, operational definitions of when the flight time begins, etcetera. Conducting the experiment for the class brings realism to statistics and stimulates valuable discussions. It also drives home the point that statistics is a natural part of engineering. After all, would not a good engineer test the prototype helicopter before it is sent to production or to the customer?

We are often asked if we could use a computer to simulate this experiment. My answer is no. The difficult part of experimental design is the physical conduct of the experiment and the problems, practical and theoretical, that arise.

After this initial experiment we present a set of data from a second helicopter experiment conducted the previous day with a different helicopter. We now ask the class to evaluate whether this (red) paper helicopter is different from the first (blue) helicopter. Technically, we are introducing comparative experiments. But this setup raises questions about the process of inductive inference, blocking, and confounding, because the second experiment was conducted the day before, when, for example, the humidity and wind flow may have been different. We explain that one of the important issues in design of experiments is to guard against criticism from a "heavyweight authority" (Fisher 1935, p. 2) who might dispute our conclusions because the experiment was "ill designed."

With this discussion as a preamble, we discuss the experimental situation that leads to the paired comparisons and explain the advantage of blocking. We also show various ways of analyzing the data. Later we introduce the problem of comparing more than two products, experimentation in a noisy environment (blocking), and the role of randomization. We bypass analysis of variance by using the intuitively much simpler reference distribution approach as explained in Box, Hunter, and Hunter (1978). We also show various kinds of graphics for analyzing data and introduce the idea of residual checking. Note that we compare more than two products, not more than two means. This is to reinforce the point that we are not doing statistics, but engineering that incidentally requires the use of statistical methods.
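The reference-distribution idea for a paired comparison can be sketched in miniature (the flight times below are invented for illustration, and the randomization variant shown here is one simple way to build a reference distribution; Box, Hunter, and Hunter 1978 discuss the approach more generally). If the two helicopters were really alike, each within-pair difference could equally well have had either sign, so the observed mean difference is referred to the distribution generated by all 2^8 sign assignments.

```python
from itertools import product
import statistics

# Paired (hypothetical) flight times: each pair comes from one drop session,
# so session-to-session conditions are blocked out; only within-pair
# differences matter.
blue = [2.3, 2.5, 2.2, 2.6, 2.4, 2.3, 2.5, 2.4]
red  = [2.5, 2.8, 2.4, 2.7, 2.6, 2.6, 2.8, 2.5]
diffs = [r - b for r, b in zip(red, blue)]
observed = statistics.mean(diffs)

# Reference distribution: enumerate every assignment of signs to the
# differences and see how often a mean as extreme as the observed one occurs.
count = total = 0
for signs in product([-1, 1], repeat=len(diffs)):
    m = statistics.mean(s * d for s, d in zip(signs, diffs))
    total += 1
    if abs(m) >= abs(observed) - 1e-12:
        count += 1
p_value = count / total
print(f"mean difference = {observed:.3f} s, randomization p = {p_value:.4f}")
```

With every invented difference favoring the red helicopter, only the two all-same-sign assignments are as extreme as what was observed, so the reference distribution delivers the verdict without any appeal to the t distribution, which is precisely why the approach is so teachable.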

Next we raise the question of how we would evaluate the effect of changing the dimensional characteristics of the helicopter, such as the body length, body width, wing length, and so on. Thus we need an experimental design technique that can accommodate many factors. This leads us to explain the idea of two-level factorials, how to compute and interpret main effects and interactions, and the advantages of factorials. Then, we show the use of normal plots for analysis and for residual checking. We also briefly explain the use of simple factorials in evolutionary operation.
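The arithmetic of main effects and interactions in a two-level factorial is simple enough to sketch in a few lines (the factors, design, and responses below are hypothetical, chosen only to illustrate the computation): each effect is the difference between the average response at the +1 level and the average at the -1 level of the corresponding contrast column.

```python
from itertools import product

# A 2^3 factorial in standard order; say A = wing length, B = body length,
# C = paper weight, coded -1 / +1. Responses are hypothetical flight times
# for the eight prototype helicopters.
runs = list(product([-1, 1], repeat=3))
y = [2.2, 2.4, 2.9, 3.0, 2.1, 2.4, 2.8, 3.1]

A = [r[2] for r in runs]   # fastest-varying column
B = [r[1] for r in runs]
C = [r[0] for r in runs]   # slowest-varying column
AB = [a * b for a, b in zip(A, B)]

def effect(contrast):
    # (sum of y at +1) minus (sum of y at -1), divided by half the runs:
    # equivalently, mean response at +1 minus mean response at -1.
    return sum(c * yy for c, yy in zip(contrast, y)) / (len(y) / 2)

print("main effect of A :", effect(A))
print("main effect of B :", effect(B))
print("interaction AB   :", effect(AB))
```

The same `effect` function applied to all seven contrast columns gives the quantities one would plot on a normal plot to judge which effects stand out from noise.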

We think that two-level fractional factorials are among the most useful things we can teach engineers. But we also acknowledge that the technical aspects are quite complicated if taught the usual way. We emphasize, therefore, the ideas of fractional factorials, aliases, and confounding, but we do not bring the students to the point where they actually can construct the more complicated designs and derive their confounding pattern themselves. Instead, by using numerous examples from engineering we explain the ideas and make sure that they can interpret the meaning of aliases, confounding, etcetera.

For the actual designs, I have developed a set of tables that contain all possible eight- and sixteen-run two-level factorials as well as fractional factorials, including all possible ways of blocking these. All the designs are written out in detail with all the pluses and minuses as preprinted worksheets (see Bisgaard 1989a). Moreover, the corresponding alias structures are provided. We believe these are the most used designs and cover most engineering situations, at least for engineers who are just learning design of experiments. Larger and more complicated designs can be obtained from computer programs. Moreover, their construction is best deferred to later, when the student has an appreciation for the power of these methods. The design tables also help make design of experiments more concrete and ready to use for the students.

The development of the theory of two-level fractional factorials and their analysis is illustrated with many examples from engineering design and process improvement. Good engineering examples are important. They should be relevant to the engineer's field of application and, in addition, should show a diversity of applications within that field. We cannot expect beginning engineering students to think abstractly enough to see that an experiment on the effects of fertilizers on mangold roots is the same as an experiment on which factors cause friction in a throttle handle or on how to get more horsepower out of a combustion engine.

We also infuse a healthy dose of philosophy of experimentation. For example, we have a lecture called the "Iterative Nature of Experimentation" and another called "The Quality Detective" (see Bisgaard 1989b). It really is unfortunate that many statisticians tend to think that theory is synonymous with mathematical equations. So many aspects of experimental design are philosophical and qualitative.

Now that we have built up the necessary theory for conducting a fractional factorial experiment we return to our paper helicopters. We consider eight design factors for the helicopters that all seem to have a potential effect on flight time. We have 16 paper helicopters made up before class according to a 2^(8-4) design. In about an hour, and with active and enthusiastic participation from the students, we conduct an experiment in the class. We carefully explain how we intend to conduct the experiment, discuss various management issues to prevent anything from going wrong, and randomize the run sequence with the students' help. We try as much as possible to make the experiment realistic. We assign various people to document and keep a laboratory notebook, take the flight times, write down the times, check that all 16 helicopters are made right, that the helicopters are taken in the right order, etcetera. With experienced engineers and students we often have an interesting discussion about all the practical issues that are just as important as the pluses and minuses.
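The classroom bookkeeping (standard order plus a randomized run sequence) can be mimicked in a few lines. The generators below (E = BCD, F = ACD, G = ABC, H = ABD) are one common choice for a 16-run 2^(8-4) design and are an assumption here; in practice one would use whatever the course's own design tables specify:

```python
import itertools
import random

# Full 2^4 base design in factors A, B, C, D.
base = list(itertools.product([-1, 1], repeat=4))

# Added factors E, F, G, H as products of base columns (assumed generators).
gens = [(1, 2, 3), (0, 2, 3), (0, 1, 2), (0, 1, 3)]

runs = []
for std_order, r in enumerate(base, start=1):
    settings = list(r) + [r[i] * r[j] * r[k] for (i, j, k) in gens]
    runs.append((std_order, settings))

random.shuffle(runs)  # randomize the run sequence before flying the helicopters
for std_order, settings in runs:
    print(std_order, settings)
```

Keeping the standard-order number with each run makes it easy for the note-takers to match flight times back to the worksheet after the randomized runs are flown.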

278 The American Statistician, November 1991, Vol. 45, No. 4


After we complete the experiment, we enter the data into a computer and perform the analysis on-line. At this point the students are usually as eager to see what happened as if they were watching the conclusion of a mystery movie. The on-line analysis also provides us an opportunity to discuss the results, not only in terms of statistics and what is statistically significant, but also in terms of the engineering implications, empirical and scientific feedback, what the engineers should do next, and what recommendations they should make to management after the experiment.
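The on-line analysis itself is elementary: each effect in a two-level design is just the average response at the plus settings minus the average at the minus settings. A sketch with invented flight times, using a tiny 2^2 design purely for illustration:

```python
def main_effects(design, y):
    """Effect of each column: mean(y at +1) minus mean(y at -1)."""
    effects = []
    for j in range(len(design[0])):
        plus = [yi for row, yi in zip(design, y) if row[j] == 1]
        minus = [yi for row, yi in zip(design, y) if row[j] == -1]
        effects.append(sum(plus) / len(plus) - sum(minus) / len(minus))
    return effects

# Invented flight times (seconds) for a 2^2 factorial.
design = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
y = [2.1, 2.5, 2.8, 3.4]
print(main_effects(design, y))  # approximately [0.5, 0.8]
```

The same contrast computation applies unchanged to the 16-run helicopter design; only the number of columns grows.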

Incidentally, as it turns out, we also "discover" that flight time, which we decided to measure prior to the experiment, is not necessarily the only important response. In fact, some of the 16 helicopters are quite unstable, so perhaps we should have used stability as another response. When we have observant note-takers, they often notice this during the experiment and sometimes we can pinpoint from their notes which factors most likely influence flight stability. Again, it is not uncommon that we discover something unexpected during an experiment that can lead to important discoveries useful in future experiments (see Box and Youle 1955, p. 320). As far as I know, however, this issue is not talked about enough in most statistics textbooks. In fact, some textbooks still promote the idea that to be "scientifically correct," all of the hypotheses must be stated prior to the experiment. That argument, of course, is out of step with how scientists and engineers work. It only serves the engineers or scientists who are looking for an excuse to say that statistics is useless in their work.

The helicopter experiment is the highlight of our course. This exercise is undoubtedly invaluable for teaching design of experiments. The students have seen a real experiment performed for them. It is no longer just an abstract idea, but something they have a mental image of when they perform the first experiment on their own. It is much like the way we learned basic physics by seeing interesting demonstrations. Statistics becomes alive and relevant to engineers. Again, I want to stress that a computer simulation experiment could never teach all the practical issues that a real, physical experiment does.

After the helicopter experiment we discuss simple linear regression and model building. As usual, we emphasize the ideas rather than the derivation of the normal equations. Next follows a gentle introduction to the ideas of response surface methods, followed by discussion and illustrations of Taguchi's ideas of robust product design and reduction of variation. As in the earlier lectures, everything is introduced with conceptual pictures and graphics and only a few equations. We emphasize Taguchi's good engineering ideas of robust products and variance reduction, but show alternative, more conventional, and simpler methods for statistical design and analysis (see Box, Bisgaard, and Fung 1988).
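Even without deriving the normal equations, the fit itself can be made concrete: for a straight line the slope is the centered cross-product sum divided by the centered sum of squares. A minimal sketch with made-up, nearly linear data:

```python
def fit_line(x, y):
    """Least-squares line y = b0 + b1*x via the centered sums."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b1 = sxy / sxx
    b0 = ybar - b1 * xbar
    return b0, b1

# Made-up data lying close to the line y = 2x.
b0, b1 = fit_line([1, 2, 3, 4], [2.0, 4.1, 5.9, 8.0])
print(b0, b1)  # intercept about 0.05, slope about 1.98
```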

Student Assignments. Since this is a university course, we assign weekly homework as well as a final project. Most assignments are developed by Conrad Fung and myself and are taken from our consulting. We often model a consulting situation where students are supposed to pretend that they have received a set of data from a client or need to design an experiment. Students are then required to return the following week with a report on what they have found out, what further questions they might have for the client, and their recommendations.

We often give students the data sheets that we originally got from the client (disguising only proprietary information). The problems are sometimes messy and without an obvious structure that would lead the students to look up a particular test procedure in a textbook. In fact, it is not always obvious what the objective is, so the students might have to figure it out themselves. Sometimes most of the work the students need to do is exploratory in nature and much like detective work. The computer work is often extensive. When necessary, we also require that they familiarize themselves with the technological aspects of the problem. We stress in these homework assignments the importance of being skeptical, not trusting the data and the measurements, and looking for outliers, curious patterns, clues, or anything unusual. Assumptions are only to be made tentatively, never trusted, and always checked. We emphasize that data analysis is iterative and adaptive. We also try to get the students to understand that there might be more than one analysis and one interpretation for a given set of data.

A key feature of our course is a final project, an idea we have adopted from Bill Hunter (see Hunter 1977). At the beginning of the semester the students are told that they are required to team up with a few other students and design, conduct, and analyze an experiment of their own choice. Each team is requested to present the results of this experiment to the rest of the class on the last day of the semester and prepare a written report. Many of our students are not used to oral presentations, but they will probably need these skills when they graduate, so we think this adds to the learning experience. These projects help students learn the practical aspects of experimental design. Only seeing the helicopter experiment conducted in class is not the same as planning and conducting an experiment oneself. It also helps them overcome the initial hesitation many people have when doing something for the first time.

Only occasionally do our students have access to laboratory facilities where they can conduct real, research-type experiments. Therefore, to inspire them to think about experiments they can perform at home or elsewhere, we give a list of 101 student experiments compiled by Bill Hunter (Hunter 1975) and an additional list of 30 experiments we have compiled. A representative sample from this list is given in Table 1. We do not want the students to repeat these experiments, but to use the list to provide inspiration for other experiments. As a rule we do not allow computer-generated and simulation experiments: that is too easy, seldom provides any "A-ha" experiences, and gives little practical experience in real (physical) experimentation. It is impressive to see the students' ingenuity.

Midway through the semester we ask the teams to hand in a proposal for the experiment they plan to do. We ask



Table 1. Titles of Some Experimental Design Projects Planned, Conducted, and Analyzed by Undergraduate Engineering Students at the University of Wisconsin-Madison, Spring of 1988

The Effect of Posture, Waiting Time, Venous Occlusion and Blood Collection Tube Type on Total Serum Cholesterol Measures: A 2^(5-1) Fractional Factorial Experiment

Rubber Band Strength Experiment: A 2^(5-1) Fractional Factorial Experiment

Rubber Band Shooting Experiment: A 2^(5-1) Fractional Factorial Experiment

Waiting Time in Ski Lift: A Practical Application of Experimental Design

How To Get the Darkest Tan: An Experimental Approach

Identifying the Effect of Four Variables on Torque and Thrust Forces During Drilling

Shooting Darts: A 2^(5-1) Fractional Factorial Experiment

The ERA Challenge: 2^(4-1) Fractional Experiment on Washing Socks

Fractional Factorial Experiment to Study the Human Memory System-A Report

Designing a Better Ping Pong Catapult

Dehairing Hog Carcasses-Experimental Process Analysis

What Affects the Foam Thickness on Pouring Beer

The Art of Bowling: An Analysis of the Various Techniques Which Make a Great Bowler

Factors Affecting the Distance of Catapulted Water Balloons

them to pretend that the proposal is to upper management, clearly stating the objective of the experiment, which factors they want to experiment with, the design, what the response(s) is (are), how they plan to measure the response(s), what resources they need, and how much time it will take. We want them to practice getting permission from management to conduct an experiment. The benefits for us are that we can check their design, perhaps give practical advice, and avoid the natural procrastination from which we all suffer. The proposal mechanism seems to improve the quality of the projects significantly.

We have taught this course for a while and have gotten an enthusiastic response from our students. Statistics usually has a bad reputation among engineering students. I think, however, that we have managed to make this an enjoyable class where students discover that statistics is very useful for engineering practice and is something about which they want to learn more. In fact, we have gotten quite a following, enrollment is up (without any advertisement from us), and we now also offer a graduate course in more advanced topics.

4. CURRICULA FOR COURSES FOR INDUSTRY

Teaching statistics to manufacturing and design engineers in industry is a challenge that requires a different approach. But it also provides different opportunities for making the teaching effort more relevant and useful. In this section I will discuss two different types of industrial courses. The first is a one-week (4½-day) short course, developed by George Box, Conrad Fung, and myself, taught primarily to engineers who come to the University of Wisconsin-Madison campus, although we have occasionally taught the course in-house for large corporations, also in a 4½-day format. The second type of course I will discuss is a type of short course that I have experimented with during the past few years. It is taught to engineers in small segments over several months internally at their respective companies. The main difference between this and the other courses is that it is taught on site and that we work on projects directly related to the company's own manufacturing and design problems.

4.1 Industrial Short Course: Concentrated Version

Teaching an industrial short course in 4½ days does put some obvious constraints on what can be covered in such a short time. We have, however, carefully considered this as a "knapsack problem" and left out everything that does not have top priority. This course follows the general outline for our one-semester university class, except we obviously do not have homework assignments and major projects. As a substitute we have, however, several workshops where the students try out various computations, designs, and analyses. Again, developing a better paper helicopter is our leading example. We have a special lecture on case studies, mostly from mechanical manufacturing, that illustrates the use of simple statistical methods (mostly Ishikawa's Seven Tools) and design of experiments for product and process development. A computer with an overhead projection pad helps illustrate computer implementation of the methods. Sometimes we also spend an evening session in a computer laboratory so students can practice designing and analyzing experiments.

This course has been very well received by engineers. At the end of the course we always hand out a questionnaire, and the feedback has been overwhelmingly positive. They seem to like the hands-on and problem-driven approach, the emphasis on engineering and product and process improvement rather than on statistics, our less mathematical approach, and the emphasis on philosophy.

4.2 Industrial Short Course: Extended Period

The format for the in-house, extended industrial statistics and experimental design course that I teach has evolved during the past several years and continues to change as I discover new ways to make it more successful both in terms of content and administration. I believe the best way to learn statistics is to do statistics. Preferably students should work on problems that are their own problems, rather than textbook examples where there is no emotional involvement. This idea has several administrative implications that I will now explain.

Usually, I start the negotiation about a teaching contract with the company's quality manager, who will also later act as the program administrator. In our initial conversation I insist that someone from upper management, in most cases the vice-president of manufacturing or engineering design, and sometimes both, be involved in planning the course. If these managers are not familiar with statistics I make a short presentation for them outlining what can be done with statistics and how I plan to structure the course.



Next I ask upper management to pick 8 to 10 problems that they consider most important for their business to be used as projects. It is essential that the projects be important. Working on unimportant problems because of timidity, unfamiliarity with statistics, or a risk-averse attitude almost guarantees later failure. Nobody has time to work on unimportant problems for long, and even if they do, the problems remain unimportant and do not have impact even if they are successfully solved.

Based on the project proposals, the managers and program administrator select engineers already associated with the problems to work on the projects. If possible, the projects should be a part of the engineer's ordinary job, and not an extra, irrelevant effort for which their interest will diminish or for which they will not be rewarded. This is important, because the course may go on for as long as six months (because of breaks in between) and the management and the engineers' priorities could change.

Usually a class includes 18-20 engineers, each assigned to projects in small teams. Often I teach a morning and afternoon session, each for about 20 engineers, to avoid pulling too many employees away from the job at the same time. The class initially meets for 3½ to 4 hours every week. The class time is divided equally between teaching prepared material and discussing projects. All groups must be present when we discuss the projects. This generates interesting discussions within and between the teams. In addition, the more enthusiastic participants have a tendency to inspire the less enthusiastic ones by showing that in fact these methods are applicable in their jobs and by someone just like themselves. Sometimes I improvise a short lecture on a relevant topic brought up by the projects. This combined teaching and consulting approach is a serious attempt to implement the idea of solving the engineer's problems (or, as some like to express it, "listening to the voice of the customer") and not teach what we think might be "good" for them to know.

Manufacturing Engineers. If the engineers are mostly from manufacturing I spend the first six to seven sessions presenting my own modified version of Ishikawa's (1976) Guide to Quality Control. I exclude the chapters on sampling inspection and add material on general quality improvement philosophy (see Box and Bisgaard 1987), exploratory data analysis, graphics, measurements, operational definitions, flow diagrams, CUSUM plots, and Deming's philosophy (Deming 1986). I stress the importance of working as detectives, finding and removing causes for quality problems. Depending on the engineers' backgrounds, I might show the videotapes "Road Map for Change: The Deming Approach" (Encyclopaedia Britannica Educational Corporation 1984) and "Right First Time" (British Productivity Council 1954) for quality management philosophy. In the remaining 1½ to 2 hours of each session I discuss projects with the groups.

In the initial lecture, I ask the teams to prepare for the next meeting a Pareto analysis and cause-and-effect diagrams for their project. Since that occurs at the same time that I teach these techniques, it helps them understand the power of these simple tools. It also helps us focus on what the important issues are and on how to get started. I also ask the engineers to bring to class their products, blueprints of the products, or other relevant material so that I can get a better understanding of what the problems are.
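Part of the power of these simple tools is that a Pareto analysis of a defect log is a one-screen computation. A sketch with a hypothetical defect tally (the cause names and counts are invented for illustration):

```python
from collections import Counter

# Hypothetical defect log, one entry per observed defect.
defects = ["scratch", "dent", "scratch", "misalign", "scratch",
           "dent", "scratch", "burr", "dent", "scratch"]

counts = Counter(defects).most_common()  # causes sorted by frequency
total = sum(c for _, c in counts)

cumulative = 0
for cause, c in counts:
    cumulative += c
    print(f"{cause:10s} {c:3d}  {100 * cumulative / total:5.1f}%")
```

The sorted tally with cumulative percentages is exactly what goes on the bars and the cumulative line of a Pareto chart, and it immediately shows which one or two causes dominate.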

At each subsequent meeting I ask the teams to tell me and the class what they have accomplished during the intervening week. Often I have them make short impromptu presentations. As the course progresses some teams, of course, report only little progress. If that is the case we become aware of this through the presentations and discussions and can take action. Here, I often benefit from the active participation of the program administrator. During lunch he or she can explain to me some of the internal company politics, as well as organizational or personal obstacles confronting the engineers. The program administrator can also, if necessary, call on the vice-president if progress on the projects is obstructed by something that requires his or her help to remove. Sometimes it is as simple as the vice-president reiterating that working on the projects has a high priority with upper management. I have in the past been fortunate to work with very competent program administrators and supportive, enlightened upper management. I think that has been a key to my success with this type of teaching.

It is not unusual that the engineers find it difficult to make sufficient progress on their projects between the weekly meetings. After all, a week is a short time, and it often takes time to get prototypes built or to acquire testing materials, equipment, or whatever else they need. Therefore, as we get through the basic material on the Seven Tools and an introduction to two-level factorials, we sometimes take a break for a few weeks. Another useful strategy is to cancel the formal class and, instead, visit the teams on their turf. This helps me better appreciate their problems, and I often find the engineers more talkative and open for discussion. On the site visits, I almost always discover something about the experimental setup or the measurement process that the engineers did not think was important enough to bring to my attention but is in fact important. I have sometimes visited up to 12 different groups at different locations in one day. The success of this approach has, therefore, depended on the competent planning and scheduling by the program administrator.

Design Engineers. If the course is primarily for design engineers, the projects are usually product development work and prototype testing. Therefore, I spend only one initial lecture on Ishikawa's Seven Tools and go immediately to design of experiments, since this is more relevant. Again, the engineers are assigned to projects as described previously. With design engineers it is usually easier to keep their attention, since they are not bothered to the same extent by the daily fire-fighting in the manufacturing environment. They often work, however, against predetermined product-release deadlines and are, therefore, also under time pressure. It helps that it



is their job to experiment, even if they have not previously used factorial designs.

At the start of the course I suggest that the responsible upper management person speak to the engineers, acknowledging management's commitment to this teaching program. This is extremely important. Employees most often work according to their perception of management's priorities. Another key feature is that the engineers on the first day of class are told that they, at the end of the course, usually 12 sessions, are to make a short presentation for upper management of their accomplishments during the course. This again serves to show the engineers that this course has upper management's attention. It also serves the important additional function that management is forced to see the often very spectacular improvements that have been accomplished by the end of the course. That helps sustain their continued support for quality improvement after the course. Some politics like this is necessary. As a consultant I also feel good about this combined teaching and consulting approach, because when the course is over, I know that the company has not only received education but also can see some tangible results.

It is clear that this kind of teaching is demanding on the instructor. It is difficult to work on so many consulting projects at the same time. In addition, a good understanding of engineering is necessary. This is, of course, not unique to engineering statistics, but common for all applications of statistics. Subject matter knowledge is necessary. With respect to working with mechanical and production engineers, several good books are available that can give the statistician a short introduction to materials and manufacturing processes. My own collection includes Doyle, Keyser, Leach, Schrader, and Singer (1985) and Niebel, Draper, and Wysk (1989).

Of course, I personally benefit from having a manufacturing engineering education. I want to emphasize, however, that consultants do not need to know everything about engineering. They mostly need (as a minimum) to know what the client does not know, but what is essential as a catalyst for solving the problems. In martial arts the combatants do not rely on their own strength only, but also use the muscles of their opponent for leverage. Similarly, in consulting we should combine our knowledge and skills with those of the client. The consultant should ask catalyzing questions like "why," "why not," "show me the data," "how do you know," and so on and, in addition, provide "statistical thinking" and problem-solving skills. As a consultant, I never single-handedly solve a problem. I act as a team player who provides leadership, provokes discussion and thinking, and helps with statistics.

There is a small kid in most of us. Engineers particularly love to play with machinery. By using design-of-experiment techniques, we can legitimately play as adults. Moreover, many people enjoy solving puzzles, and most of our projects are of that nature. I am convinced that this combined teaching and consulting approach is the most efficient and satisfactory way for engineers to learn statistics. When the participants present their projects,

they undoubtedly have enjoyed themselves. I do not hear complaints about statistics being boring or the worst class they ever had. On the contrary, we have our best times when the final projects are presented. On a longer-term basis, if I continue to work with the company, the consulting afterwards is much better. The engineers can solve many of the simpler problems themselves, and they know me better and what kind of service I can provide for more complicated problems.

5. CONCLUSION

The problems of production are not any one of the individual steps or processes that are to be performed. Taken individually they are often trivial. The real problem is to assure that all of the individual processes collectively perform well, like the wheels in a clockwork, under production and volume conditions. One of the key obstacles to this is excess variation. Therefore, statistics is an essential part of manufacturing, and it is to a large degree our (the statisticians') fault that we have not managed to secure statistics as an integral part of modern engineering design and manufacturing technology. We need to change this, and that calls for a change in the way we teach statistics to engineers.

Over the past few years several statisticians, and most notably W. Edwards Deming, have labored very hard to raise the consciousness about the United States' failing international competitiveness. Once the problem is recognized and understood the next question is what to do about it. The education of engineers in statistics and quality improvement must, in my opinion, constitute a fundamental component of any strategy for regaining competitive strength. Our educational efforts, however, must not merely constitute an increased frequency of the offering of standard engineering statistics courses as we have known them for decades. We must fundamentally rethink the way we teach statistics to engineers, keeping in mind that past efforts did not succeed in making statistics an integral part of how engineers solve problems. This is, fortunately, beginning to be recognized, as evidenced by a recent effort to incorporate statistics into engineering accreditation programs (see Penzias 1989).

Past education in engineering statistics, as much evidenced by standard textbooks, mostly focused on what can be presented as a simple, deductive science, an approach with which purely mathematically educated teachers are often most comfortable. But we need to teach statistics as part of scientific method. We should teach engineers inductive reasoning, experimentation, and problem solving using statistics. In particular, they need to work on real engineering problems, perform detective work, draw conclusions, and take action. It is important that we use relevant engineering examples. (Please, no more baseball examples! If we want to be respected as a useful profession, we need to show that we work on important problems.) If we teach statistics in the spirit outlined here, I think we can bring something unique to engineering. Statistics will then, it is hoped, be considered an indispensable tool for solving engineering problems.



Engineers can get excited about statistics. I believe that if they see that statistics is about inductive reasoning, learning from the real world, detective work, and experimental design, they will consider their statistics course one of the most interesting courses they ever had. I make this statement based on data: I have seen it and they have told me so. I believe that statistics in the hands of a large community of engineers can have a dramatic influence on the future of modern manufacturing and engineering.

[Received November 1989. Revised December 1989.]

REFERENCES

Bisgaard, S. (1989a), A Practical Aid for Experimenters, Madison: Starlight Press.

(1989b), "The Quality Detective," Philosophical Transactions of the Royal Society of London, A, 327, 499-511.

Box, G. E. P. (1976), "Science and Statistics," Journal of the American Statistical Association, 71, 791-799.

(1979), "Robustness in the Strategy of Scientific Model Building," in Robustness in Statistics, eds. R. L. Launer and G. N. Wilkinson, New York: Academic Press.

(1988), "Signal-to-Noise Ratios, Performance Criteria, and Transformations," Technometrics, 30, 1-17.

Box, G., and Bisgaard, S. (1987), "The Scientific Context of Quality Improvement," Quality Progress, XX, 54-61.

Box, G., Bisgaard, S., and Fung, C. (1988), "An Explanation and Critique of Taguchi's Contributions to Quality Engineering," Qual- ity and Reliability Engineering International, 4, 123-131.

Box, G. E. P., Hunter, W. G., and Hunter, J. S. (1978), Statistics for Experimenters, New York: John Wiley.

Box, G. E. P., and Youle, P. V. (1955), "The Exploration and Ex- ploitation of Response Surfaces: An Example of the Link Between the Fitted Surface and the Basic Mechanism of the System," Bio- metrics, 11, 287-323.

British Productivity Council (1954), "Right First Time," Videotape distributed in the United States by Productivity-Quality Systems, Inc., Dayton, Ohio.

Cox, D. R. (1981), "Theory and General Principle in Statistics," Journal of the Royal Statistical Society, Ser. A, 144, 189-197.

Daniel, C. (1976), Application of Statistics to Industrial Experimen- tation, New York: John Wiley.

Davies, O. L. (ed.) (1954), The Design and Analysis of Industrial Experiments, London: Oliver & Boyd.

Deming, W. E. (1975), "On Probability As a Basis for Action," The American Statistician, 29, 146-152.

(1986), Out of the Crisis, Cambridge, Mass.: MIT Press.

Doyle, L. E., Keyser, C. A., Leach, J. L., Schrader, G. F., and Singer, M. B. (1985), Manufacturing Processes and Materials for Engineers (3rd ed.), Englewood Cliffs, NJ: Prentice-Hall.

Encyclopaedia Britannica Educational Corporation (1984), "Road Map for Change: The Deming Approach" (videotape), Chicago: Author.

Fisher, R. A. (1935), Design of Experiments, London: Oliver & Boyd.

(1938), "Presidential Address, First Indian Statistical Conference," Sankhyā, 4, 14-17.

(1948), "Biometry," Biometrics, 4, 1-9.

Hogg, R. V., et al. (1985), "Statistical Education for Engineers: An Initial Task Force Report," The American Statistician, 39, 168-175.

Hunter, W. G. (1975), "101 Ways to Design an Experiment or Some Ideas About Teaching Design of Experiments," Technical Report 413, University of Wisconsin-Madison, Dept. of Statistics.

(1977), "Some Ideas About Teaching Design of Experiments, With 25 Examples of Experiments Conducted by Students," The American Statistician, 31, 1.

Ishikawa, K. (1976), Guide to Quality Control, Tokyo: Asian Productivity Association, UNIPUB.

Kempthorne, O. (1980), "The Teaching of Statistics: Content Versus Form," The American Statistician, 34, 17-21.

Niebel, B. W., Draper, A. B., and Wysk, R. A. (1989), Modern Manufacturing Process Engineering, New York: McGraw-Hill.

Penzias, A. (1989), "Teaching Statistics to Engineers," Science, 244, 1025.

Quinlan, J. (1988), "1985 Winner, American Supplier Institute Taguchi Application Award: Product Improvement by Application of Taguchi Methods," Target, 4, 22-29.

Snee, R. D. (Chairman, ASA Committee on Training Statisticians for Industry) (1980), "Preparing Statisticians for Careers in Industry," The American Statistician, 34, 65-80.
