Pergamon

Computers ind. Engng Vol. 28, No. 3, pp. 671-679, 1995
Copyright © 1995 Elsevier Science Ltd. Printed in Great Britain. All rights reserved
0360-8352/95 $9.50 + 0.00
0360-8352(94)00219-3

ECONOMIC DESIGN OF CONTROL CHARTS USING THE TAGUCHI LOSS FUNCTION

    SURAJ M. ALEXANDER, MATTHEW A. DILLMAN, JOHN S. USHER and BIJU DAMODARAN

    Department of Industrial Engineering, University of Louisville, Louisville, KY 40292, U.S.A.

Abstract--We embellish Duncan's cost model with Taguchi's loss function to incorporate losses that result both from inherent variability and from variability due to assignable causes. Whereas Duncan applies a penalty cost for operating out of control, he does not show how this cost can be obtained or quantified. We illustrate, analyze, and evaluate this model utilizing hypothetical cost figures and process parameters. We also suggest adjustments to control chart design parameters when there are process improvements over time.

1. BACKGROUND

The design of the Shewhart x̄ chart involves the determination of the sample size (N), the frequency or time between samples (H), and the multiplier that defines the spread of the control limits from the centerline (K).

In practice, Shewhart x̄ charts have utilized a rational subgroup size for N, normally around 4 or 5. The sampling interval is generally selected based on the production rate and familiarity with the process. For instance, in the early stages of introducing control charts to the process, samples may be taken frequently, such as once every 30 min. In the later stages, when the charts have been established and preventive measures taken against assignable causes, samples may be taken less frequently, such as once every shift. The control limits for the control charts are traditionally set at ±3σx̄.

The rational subgroup size is normally small, since larger sample sizes increase the risk of process shifts or assignable causes occurring while the sample is taken. Such an occurrence is undesirable, since it would dilute the effect of the shift on the statistic used for monitoring and also exaggerate the perceived inherent variation of the process. The reduction in power of the statistical test, resulting from the small sample size, is compensated for by taking more frequent samples. The ±3σx̄ limits have been found to provide an acceptable level of risk of false alarms in practice.
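For intuition (our own illustration, not from the paper), the power of an x̄ chart with ±Kσx̄ limits against a mean shift of a given size follows directly from the normal distribution; small subgroups detect large shifts well but small shifts poorly:

```python
from statistics import NormalDist

def xbar_power(shift_sigmas, N, K=3.0):
    """P(point outside +/- K sigma_xbar limits) on the first sample
    after the process mean shifts by `shift_sigmas` process sigmas."""
    Phi = NormalDist().cdf
    d = shift_sigmas * N ** 0.5  # shift measured in sigma_xbar units
    return 1 - Phi(K - d) + Phi(-K - d)

# A 1-sigma shift: power is ~0.16 with N = 4 but ~0.84 with N = 16,
# which is why small subgroups are paired with frequent sampling.
print(round(xbar_power(1.0, 4), 3), round(xbar_power(1.0, 16), 3))
```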

The problem with the commonly used "rational" approach to control chart design is that it is used in almost all processes as the standard procedure for implementing control charts, without regard to the cost consequences of the design. In order to overcome this shortcoming, a number of researchers have proposed economic models for the design of control charts. Ho and Case [1] provide a literature review of such models covering the period 1981-1991. Most of this research has focused on the design of x̄-charts, e.g. [2-4]. Even though these models have not been widely used, their value is obvious. One of the reasons economic models are not widely used is that the models are quite complex, and difficult to evaluate and optimize [5]. Also, these models are typically optimized for a particular size of shift, frequency of out-of-control occurrences, and cost of diagnosis. In practice, however, the mean period the process remains in control is not static, the size of the process shift is not constant, and the cost of diagnosis changes with time. In fact, with an assumption of continuous improvement, we would expect the frequency of out-of-control situations, the size of the shift and the cost of diagnosis to be reduced over time. After all, this is one of the purposes of statistical process control (SPC). In order to address some of these concerns we attempt to establish the direction of change of the control chart's design parameters when the frequency and size of process shifts and the cost of diagnosis change. With this information, the practitioner might be able to adjust the "optimized" design parameters over time.



The first concern, related to model complexity, is not easily addressed, since the presence of integral evaluations and optimization over three variables makes the process difficult to simplify. Taguchi et al. [6] have proposed an on-line control model for which they have developed a closed-form solution for the selection of optimal control parameters. The closed-form solution makes the evaluation of process control parameters much easier. However, in their model the sample size (N) is always one; the costs associated with false alarms and with searching for assignable causes are ignored; also, the probability of not detecting a process shift is ignored. These simplifications are unrealistic, especially considering the fact that the Type II error increases with smaller sample sizes. Adams and Woodall [7] provide a comparison of Taguchi's ideas and Duncan's model. We select Duncan's [8] cost function for the x̄ chart, which we find more realistic, and we embellish this cost function with the Taguchi loss function. We determine the optimal control chart design parameters using this function and suggest changes in these parameters over time.

The Taguchi loss function provides a means of explicitly considering the loss due to process variability. Whereas Duncan applies a penalty cost for operating out of control, he does not show how this cost can be obtained or quantified. In this paper we present, evaluate, optimize and analyze an economic model of the control chart. In the next section we describe our cost model. We then illustrate its application using a hypothetical example. We conclude this paper by studying the direction of control chart design parameter changes in the presence of changes in the magnitude and frequency of process shifts and the costs of discovering and correcting the causes of these shifts.

    2. EMBELLISHMENT OF DUNCAN'S COST MODEL WITH TAGUCHI'S LOSS FUNCTION

    Duncan's model assumes a single out of control state. Research has confirmed that multiple assignable cause models can be approximated by an appropriately selected single cause model [2]. Hence we assume that we monitor the process to detect the occurrence of a single assignable cause that causes a fixed shift in the process. Duncan defines the monitoring and related costs over a cycle. The elements of the cycle are as follows:

(1) The in-control state. (The process starts in this state.)
(2) The out-of-control state. (The process goes to an out-of-control state from an in-control state. This is assumed to be a Poisson process with λ occurrences per hour.)
(3) Detection of the out-of-control state.
(4) The assignable cause is detected and fixed.

    Duncan also assumes that the process is not stopped while investigating the presence of an assignable cause.

The expected cycle time (E(T)) with Duncan's assumption is:

E(T) = 1/λ + H/(1 − β) − τ + gN + D

where

H = time between samples (h)
(1 − β) = probability of detecting a shift
τ = expected elapsed time within a sampling interval when the process goes out of control
g = sampling time per unit (h)
N = sample size
D = time required to detect and fix an assignable cause.
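A minimal sketch of this cycle-time computation (the function name is ours; τ is replaced by the approximation τ ≈ H/2 − λH²/12 that the paper applies later):

```python
def expected_cycle_time(lam, H, beta, g, N, D):
    """Duncan's expected cycle length E(T), in hours.

    lam  - assignable-cause arrival rate (shifts per hour)
    H    - time between samples (h)
    beta - Type II error; 1 - beta is the detection probability
    g    - sampling time per unit (h)
    N    - sample size
    D    - time to detect and fix an assignable cause (h)
    """
    tau = H / 2 - lam * H ** 2 / 12  # approx. expected shift time in interval
    return 1 / lam + H / (1 - beta) - tau + g * N + D

# With values like the paper's example (lam = 1/4, H = 1, N = 13, beta ~ 0.13):
print(expected_cycle_time(0.25, 1.0, 0.13, 0.01, 13, 2))  # about 6.8 h
```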

The expected cost per cycle is:

(a1 + a2N) E(T)/H + a3 + a3'α exp(−λH)/(1 − exp(−λH)) + a4 [H/(1 − β) − τ + gN + D]

where

a1 = fixed cost of sampling
a2 = variable cost of sampling
a3 = cost of finding and fixing an assignable cause
a3' = cost of a false alarm
a4 = penalty cost per hour of operating in an out-of-control state
α = probability of a false alarm (Type I error).

The Taguchi loss function for a product is defined below. Consider a product with bilateral tolerances of equal value (Δ). If the cost to society for manufacturing a product out of specification is A $/unit, then the Taguchi loss function defines the expected loss to society caused by using a particular process to produce the product as:

Expected loss/unit = (A/Δ²) v²   (1)

where

v² = mean squared deviation of the process.

It can easily be shown that v² = σ² + (μ − T)², where σ² = process variance, μ = process mean and T = process target. We assume that when the process is in control its mean is centered on target and v² = v1² = σ². We also assume that when the process shifts, its mean shifts off target and v² = v2² = σ² + (μ − T)². (Since we are considering only x̄ charts, the consideration of mean shifts is sufficient.)
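Equation (1) and the two variability states can be sketched as follows (function and argument names are ours):

```python
def taguchi_loss_per_unit(A, tol, sigma2, mean_offset=0.0):
    """Expected Taguchi loss per unit, (A / Delta^2) * v^2, with
    v^2 = sigma^2 + (mu - T)^2 the mean squared deviation.

    A           - cost of a part outside specification ($/unit)
    tol         - half-width Delta of the bilateral tolerance
    sigma2      - process variance
    mean_offset - (mu - T); zero when the process is on target
    """
    v2 = sigma2 + mean_offset ** 2
    return A / tol ** 2 * v2

# In-control vs shifted loss, using the paper's example figures:
L1 = taguchi_loss_per_unit(5.0, 0.003, 0.001 ** 2)         # v1^2 = 1e-6
L2 = taguchi_loss_per_unit(5.0, 0.003, 0.001 ** 2, 0.001)  # v2^2 = 2e-6
print(L1, L2)  # roughly $0.56 and $1.11 per unit
```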

Using the definition of loss given in (1) we can easily embellish Duncan's model to consider losses owing to in-control and out-of-control variability. Noting that the expected period in control is 1/λ and the expected period out of control is

[H/(1 − β) − τ + gN + D]

and assuming that the production rate is P units/h, the cost per cycle (c) using the embellished model is shown below:

c = (a1 + a2N) E(T)/H + a3 + a3'α exp(−λH)/(1 − exp(−λH)) + (A v1²/Δ²) P (1/λ) + (A v2²/Δ²) P [H/(1 − β) − τ + gN + D].   (2)

Dividing (2) by E(T) and applying the following approximations and definitions [2]

τ ≈ H/2 − λH²/12

B = [1/(1 − β) − 1/2 + λH/12] H + D + gN

exp(−λH)/(1 − exp(−λH)) ≈ 1/(λH)

L1 = (A/Δ²) v1²

L2 = (A/Δ²) v2²

we obtain the expected cost per hour as

E(c) = (a1 + a2N)/H + [λa3 + a3'α/H + L1 P + λ L2 P B]/(1 + λB).

    The optimal values for N, H, and K can be obtained by minimizing the above cost function.
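The hourly-cost expression lends itself to direct evaluation. A sketch (a hedged transcription of E(c) with our own variable names, not the authors' program):

```python
def expected_cost_per_hour(N, H, alpha, beta,
                           a1, a2, a3, a3p, lam, g, D, L1, L2, P):
    """E(c): expected cost per hour from the embellished model.

    alpha, beta - Type I and Type II errors of the chart (set by K and N)
    L1, L2      - Taguchi loss per unit, in control and out of control
    """
    # B: expected time per cycle spent out of control
    B = (1 / (1 - beta) - 0.5 + lam * H / 12) * H + D + g * N
    return (a1 + a2 * N) / H + (
        lam * a3 + a3p * alpha / H + L1 * P + lam * L2 * P * B
    ) / (1 + lam * B)

# The paper's example process at N = 5, H = 1 (alpha, beta here illustrative):
print(expected_cost_per_hour(5, 1.0, 0.01, 0.5,
                             1.0, 0.10, 50.0, 50.0, 0.25, 0.01, 2.0,
                             5/9, 10/9, 100.0))  # about $90/h
```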

  • 674 Suraj M. Alexander et aL

3. APPLICATION OF EMBELLISHED MODEL

We apply our model to the following hypothetical example: A manufacturer produces a part that has a length specification of 2.5 in. with a tolerance of ±0.003 in. From previous runs the process standard deviation was estimated as 0.001. The process has required an average of ten adjustments during 40 h of production time; therefore, the mean time between assignable causes entering the system is estimated to be 4 h.

Based on an analysis of operator and quality control technician wages, it is determined that the fixed cost of sampling per subgroup is $1, while the variable cost is $0.10 per part, with a sample-and-interpret time of 0.01 h per part.

    The average time to investigate a false alarm or to find and eliminate an assignable cause is estimated to be two hours at a cost of $25/h. The process is assumed to continue producing parts at a rate of 100/h during investigation and elimination of out-of-control signals.

The cost to rework or scrap a part that is found to be outside the specification limits is $5, while the shift in the process average to be detected is 0.001 in. From the above information we infer the following values for our model parameters.

a1 = $1          a2 = $0.10          D = 2 h
a3 = $50         a3' = $50
P = 100 parts/h  v1² = (0.001)²
A = $5/part      Δ = 0.003
1/λ = 4 h        g = 0.01 h
δ = (μ − T) = 0.001
v2² = σ² + δ² = (0.001)² + (0.001)² = 0.000002.

Table 1 lists the results of a computer search for the optimum design parameters. For these conditions the optimal parameters are seen to be N* = 13, K* = 2.5, and H* = 1.0, at a cost of $88.48/h. The most common values used in U.S. industry are N = 5, K = 3, and H = 0.5, which result in a cost of $92.88/h, or a penalty of $4.40/h. The computer program used to search for an optimum computes the optimal control limit width K and sampling frequency H for several values of N and displays the value of the cost function with the associated alpha risk and power, as shown in Table 1. This is the same approach used by Montgomery [2] and Jaraiedi and Zhuang [9]. The program is listed in the Appendix and is easy to run on any IBM-compatible computer with BASIC. It uses a simple grid search; the range and the step size of the search on any of the parameters can be changed by changing the "FOR" statements in the program.

Table 1. Variables and parameter selection for an x̄ chart

N    K    H    Alpha  Power (1 − β)  Cost
2    1.8  0.7  0.07   0.35           93.07
3    1.9  0.7  0.06   0.43           91.64
4    2.1  0.7  0.04   0.46           90.69
5    2.1  0.8  0.04   0.55           90.02
6    2.2  0.8  0.03   0.60           89.54
7    2.2  0.8  0.03   0.67           89.20
8    2.2  0.9  0.03   0.74           88.95
9    2.3  0.9  0.02   0.76           88.76
10   2.4  0.9  0.02   0.78           88.64
11   2.4  1.0  0.02   0.82           88.55
12   2.4  1.0  0.02   0.86           88.50
13   2.5  1.0  0.01   0.87           88.48*
14   2.5  1.1  0.01   0.89           88.49
15   2.6  1.1  0.01   0.90           88.50

*Optimum. N, subgroup size; K, coefficient to determine control limits; H, sampling interval (h).
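The Appendix program's grid search can be sketched in modern terms as follows. This is our reimplementation, substituting the standard library's exact normal CDF for the BASIC code's Simpson's-rule integration, with ranges matching Table 1:

```python
from statistics import NormalDist

def optimize_chart(a1, a2, a3, a3p, P, A, v1, tol, mtbf, g, D, delta):
    """Grid search over (N, K, H) minimizing the expected hourly cost.

    mtbf - mean time the process remains in control (1/lambda, hours)
    v1   - in-control process variance; delta - mean shift to detect
    Returns (cost, N, K, H) at the grid minimum.
    """
    Phi = NormalDist().cdf
    lam = 1 / mtbf
    L1 = A / tol ** 2 * v1                 # in-control Taguchi loss/unit
    L2 = A / tol ** 2 * (v1 + delta ** 2)  # out-of-control loss/unit
    best = None
    for N in range(2, 16):
        for H in (h / 10 for h in range(1, 21)):        # 0.1 .. 2.0 h
            for K in (k / 10 for k in range(10, 41)):   # 1.0 .. 4.0
                alpha = 2 * (1 - Phi(K))                # false-alarm risk
                d = delta / v1 ** 0.5                   # shift in sigma units
                power = 1 - Phi(K - d * N ** 0.5) + Phi(-K - d * N ** 0.5)
                B = (1 / power - 0.5 + lam * H / 12) * H + D + g * N
                ec = (a1 + a2 * N) / H + (
                    lam * a3 + a3p * alpha / H + L1 * P + lam * L2 * P * B
                ) / (1 + lam * B)
                if best is None or ec < best[0]:
                    best = (ec, N, K, H)
    return best

# The paper's example inputs; the search lands near the reported optimum
# (N = 13, K = 2.5, H = 1.0, about $88.5/h), up to small numerical differences:
print(optimize_chart(1.0, 0.10, 50.0, 50.0, 100.0, 5.0,
                     1e-6, 0.003, 4.0, 0.01, 2.0, 0.001))
```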

  • Taguchi loss function 675

Fig. 1. Sample size (N) and sampling interval (H) vs 1/λ (a1 = $1.00, a2 = $0.10; separate curves for values of a3 including 50 and 100). [Figure not reproduced.]

4. SENSITIVITY ANALYSIS

We study the sensitivity to the magnitude and frequency of process shifts in order to determine the appropriate adjustment of control chart parameters in the presence of process improvement and process deterioration. The frequency of process shifts is changed in the model by adjusting the value of λ, the expected arrival rate of process shifts. The magnitude of process shift is varied by changing the value of δ = (μ − T). Note that the cost of investigating and fixing an assignable cause (a3) is not changed as a function of the magnitude of shift, since we assume that the cost of investigating small shifts plus the cost of repairing their causes is equal to the cost of investigating large shifts plus the cost of repairing their causes. However, over time we expect the average cost a3 to decrease as teams become more adept at discovering and correcting causes of process shifts. We therefore also study the change in optimal control chart parameters when a3 changes.

Figures 1, 2, and 3 indicate changes in the "optimum" control chart design parameters, i.e. the sample size (N) and the sampling interval (H), under conditions of process improvement and deterioration. The design parameter K was found to be relatively robust under the different conditions. Process improvement is denoted by a reduction in the frequency and magnitude of process shifts.

Fig. 2. Sample size (N) and sampling interval (H) vs 1/λ (a1 = $10.00, a2 = $1.00; curves for a3 = 25, 50 and 100). [Figure not reproduced.]

  • 676 Suraj M. Alexander et al.

Fig. 3. Sample size (N) and sampling interval (H) vs δ (δ from 0.0005 to 0.0050; 1/λ = 2 h; curves for values of a3 including 25). [Figure not reproduced.]

The curves in Fig. 1 indicate that when the frequency of process shifts decreases, or the mean time interval between process shifts increases, the sample size (N) increases and the sampling interval (H) decreases to a steady-state value. This at first seems counter-intuitive, i.e. when the process improves the monitoring effort seems to have increased, albeit to a steady-state value. However, this can be explained when we observe the rate of convergence to the steady-state values of the design parameters. The rate of convergence depends on the cost of searching for an assignable cause (a3), i.e. the higher this cost the slower the rate of convergence. This signifies that if there is a high cost related to investigating out-of-control signals, owing to the high cost of search and frequency of occurrence, then the control chart design parameters are set to keep this cost low. That is, when N is kept low and H is set high, (1 − β) is reduced and H/(1 − β), the time required to detect an out-of-control state, increases. Hence the number of out-of-control states detected and investigated per unit time is reduced. Figure 2 illustrates the same curves (optimal N and H vs 1/λ) as Fig. 1. In Fig. 2, however, we investigate the scenario where the sampling costs have increased by a factor of 10, i.e. a1 = $10 and a2 = $1. Under these conditions the behavior of N is unchanged, while the sampling interval H remains at a relatively high value. The latter can be explained by the high sampling costs.

Figure 3 indicates that an increase in the size of the shift from 0.5σ to 5σ warrants a decrease in the sample size and an increase in the sampling frequency. The smaller sample size, recommended for larger process shifts, results in a lower cost of sampling while keeping the probability of detecting the shift, (1 − β), at an acceptable level. The increase in sampling frequency can be explained by the objective of limiting the period of operating out of control and its associated losses.

    5. CONCLUSIONS AND RECOMMENDATIONS

In this paper we have embellished Duncan's cost model with the Taguchi loss function. This embellishment provides a framework for using the Taguchi loss function, which defines losses owing to the variability caused by both chance and assignable causes, for the economic design of control charts. We have also investigated the behavior of this embellished model through sensitivity analysis. Our analysis has indicated that the design parameters for the x̄-chart are fairly robust when the cost of finding an assignable cause and the frequency of occurrence of assignable causes are not too high. The parameters N and H do have to be adjusted based on the size of the process shift that is to be detected. Small process shifts require larger values of N and H, while for large shifts a small N and H are recommended.

REFERENCES

1. C. Ho and K. E. Case. Economic design of control charts: a literature review for 1981-1991. J. Quality Technol. 26, 1-78 (1994).
2. D. C. Montgomery. Economic design of an x̄ control chart. J. Quality Technol. 14, 40-43 (1982).
3. J. J. Pignatiello. Optimal economic design of x̄-control charts when cost model parameters are not precisely known. IIE Trans. 20, 103-110 (1988).
4. G. Tagaras. Economic x̄-charts with asymmetric control limits. J. Quality Technol. 21, 147-154 (1989).
5. E. M. Saniga. Economic statistical control chart designs with an application to x̄ and R charts. Technometrics 31, 313-320 (1989).
6. G. Taguchi, E. A. Elsayed and T. Hsiang. Quality Engineering in Production Systems. McGraw-Hill, New York (1989).
7. B. M. Adams and W. H. Woodall. An analysis of Taguchi's on-line process control procedure under a random walk model. Technometrics 31, 401-413 (1989).
8. A. J. Duncan. The economic design of x̄-charts used to maintain current control of a process. J. Am. Statist. Ass. 51, 228-242 (1956).
9. M. Jaraiedi and Z. Zhuang. Determination of optimal design parameters of x̄-charts when there is a multiplicity of assignable causes. J. Quality Technol. 23, 253-258 (1991).
10. T. J. Lorenzen and L. C. Vance. The economic design of control charts: a unified approach. Technometrics 28, 3-10 (1986).
11. T. P. McWilliams. Economic control chart designs and the in-control time distribution: a sensitivity study. J. Quality Technol. 21, 103-110 (1989).
12. D. C. Montgomery. Statistical Quality Control. Wiley, New York (1991).

APPENDIX
Economic Design of Control Charts Using the Taguchi Loss Function

10 REM PARAMETER SELECTION FOR XBAR CHARTS
20 CLS
30 INPUT "FIXED SAMPLING COST PER SUBGROUP = ";A1
40 INPUT "VARIABLE SAMPLE COST PER SAMPLE = ";A2
50 INPUT "COST OF FINDING AN ASSIGNABLE CAUSE = ";A3
60 INPUT "COST OF INVESTIGATING A FALSE ALARM = ";A3P
70 INPUT "PRODUCTION RATE (PCS/HR) = ";P
80 INPUT "COST (SCRAP OR REWORK) FOR A PART OUTSIDE SPECIFICATION LIMITS = ";A
90 INPUT "VARIANCE OF THE PRODUCT = ";V1
100 INPUT "TOLERANCE OF THE PRODUCT (+/-) = ";TOL
110 INPUT "MEAN TIME PROCESS REMAINS IN CONTROL (HOURS) = ";LAMDA
120 INPUT "TIME TO TAKE A SAMPLE AND INTERPRET RESULTS (HOURS) = ";G
130 INPUT "TIME TO FIND AN ASSIGNABLE CAUSE (HOURS) = ";D
140 INPUT "SIZE OF THE SHIFT YOU WISH TO DETECT (ABOVE/BELOW NOMINAL) = ";DELTA
150 REM LIST OF INPUTS
160 CLS:PRINT " PARAMETER SELECTION INPUTS":PRINT:PRINT
170 PRINT "1)FIXED SAMPLING COST PER SUBGROUP = ";TAB(70);A1
180 PRINT "2)VARIABLE SAMPLE COST PER SAMPLE = ";TAB(70);A2
190 PRINT "3)COST OF FINDING AN ASSIGNABLE CAUSE = ";TAB(70);A3
200 PRINT "4)COST OF INVESTIGATING A FALSE ALARM = ";TAB(70);A3P
210 PRINT "5)PRODUCTION RATE (PCS/HR) = ";TAB(70);P
220 PRINT "6)COST (SCRAP OR REWORK) FOR A PART OUTSIDE SPECIFICATION LIMITS = ";TAB(70);A
230 PRINT "7)VARIANCE OF THE PRODUCT = ";TAB(70);V1
240 PRINT "8)TOLERANCE OF THE PRODUCT (+/-) = ";TAB(70);TOL
250 PRINT "9)MEAN TIME PROCESS REMAINS IN CONTROL (HOURS) = ";TAB(70);LAMDA
260 PRINT "10)TIME TO TAKE A SAMPLE AND INTERPRET RESULTS (HOURS) = ";TAB(70);G
270 PRINT "11)TIME TO FIND AN ASSIGNABLE CAUSE (HOURS) = ";TAB(70);D
280 PRINT "12)SIZE OF THE SHIFT YOU WISH TO DETECT (ABOVE/BELOW NOMINAL) = ";TAB(70);DELTA
290 REM ROUTINE TO MAKE CHANGES
300 PRINT:PRINT:PRINT
310 INPUT "IF YOU WISH TO CHANGE A VALUE ENTER THE NUMBER OR ENTER 99 IF ALL THE VALUES ARE CORRECT";E
320 IF E = 1 GOTO 450
330 IF E = 2 GOTO 470
340 IF E = 3 GOTO 490
350 IF E = 4 GOTO 510
360 IF E = 5 GOTO 530
370 IF E = 6 GOTO 550
380 IF E = 7 GOTO 570
390 IF E = 8 GOTO 590
400 IF E = 9 GOTO 610
410 IF E = 10 GOTO 630
420 IF E = 11 GOTO 650
430 IF E = 12 GOTO 670
440 GOTO 690
450 INPUT "FIXED SAMPLING COST PER SUBGROUP = ";A1
460 GOTO 160
470 INPUT "VARIABLE SAMPLE COST PER SAMPLE = ";A2
480 GOTO 160
490 INPUT "COST OF FINDING AN ASSIGNABLE CAUSE = ";A3
500 GOTO 160
510 INPUT "COST OF INVESTIGATING A FALSE ALARM = ";A3P
520 GOTO 160
530 INPUT "PRODUCTION RATE (PCS/HR) = ";P
540 GOTO 160
550 INPUT "COST (SCRAP OR REWORK) FOR A PART OUTSIDE SPECIFICATION LIMITS = ";A
560 GOTO 160
570 INPUT "VARIANCE OF THE PRODUCT = ";V1
580 GOTO 160
590 INPUT "TOLERANCE OF THE PRODUCT (+/-) = ";TOL
600 GOTO 160
610 INPUT "MEAN TIME PROCESS REMAINS IN CONTROL = ";LAMDA
620 GOTO 160
630 INPUT "TIME TO TAKE A SAMPLE AND INTERPRET RESULTS = ";G
640 GOTO 160
650 INPUT "TIME TO FIND AN ASSIGNABLE CAUSE (HOURS) = ";D
660 GOTO 160
670 INPUT "SIZE OF THE SHIFT YOU WISH TO DETECT (ABOVE/BELOW NOMINAL) = ";DELTA
680 GOTO 160
690 LPRINT:LPRINT:LPRINT
700 LPRINT:LPRINT:LPRINT TAB(15); "VARIABLES AND PARAMETER SELECTION FOR XBAR CHART"
710 LPRINT:LPRINT
720 LPRINT TAB(14); "1)FIXED SAMPLING COST PER SUBGROUP = ";:LPRINT TAB(67) USING "###.##";A1
730 LPRINT TAB(14); "2)VARIABLE SAMPLE COST PER SAMPLE";:LPRINT TAB(67) USING "###.##";A2
740 LPRINT TAB(14); "3)COST OF FINDING AN ASSIGNABLE CAUSE";:LPRINT TAB(67) USING "#####";A3
750 LPRINT TAB(14); "4)COST OF INVESTIGATING A FALSE ALARM";:LPRINT TAB(67) USING "#####";A3P
760 LPRINT TAB(14); "5)PRODUCTION RATE (PCS/HR)";:LPRINT TAB(67) USING "######";P
770 LPRINT TAB(14); "6)COST (SCRAP/REWORK) FOR A PART OUTSIDE SPEC LIMITS = ";:LPRINT TAB(67) USING "###.##";A
780 LPRINT TAB(14); "7)VARIANCE OF THE PRODUCT";:LPRINT TAB(64) USING "########";V1
790 LPRINT TAB(14); "8)TOLERANCE OF THE PRODUCT (+/-)";:LPRINT TAB(67) USING "#####";TOL
800 LPRINT TAB(14); "9)MEAN TIME PROCESS REMAINS IN CONTROL (HOURS)";:LPRINT TAB(67) USING "#####";LAMDA
810 LPRINT TAB(14); "10)TIME TO TAKE A SAMPLE AND INTERPRET RESULTS (HRS)";:LPRINT TAB(68) USING "####";G
820 LPRINT TAB(14); "11)TIME TO FIND AN ASSIGNABLE CAUSE (HOURS)";:LPRINT TAB(69) USING "###";D
830 LPRINT TAB(14); "12)SIZE OF THE SHIFT YOU WISH TO DETECT (+/-)";:LPRINT TAB(67) USING "#####";DELTA
840 LPRINT:LPRINT
850 LPRINT TAB(19); "N -- SUBGROUP SIZE"
860 LPRINT TAB(19); "K -- COEFFICIENT TO DETERMINE CONTROL LIMITS"
870 LPRINT TAB(19); "H -- SAMPLING INTERVAL (HOURS)"
880 LPRINT:LPRINT
890 LPRINT TAB(13);"N"; TAB(24);"K"; TAB(32);"H"; TAB(42);"ALPHA"; TAB(54);"POWER"; TAB(67);"COST"
900 LPRINT TAB(13);"--"; TAB(23);"---"; TAB(31);"--"; TAB(42);"-----"; TAB(54);"-----"; TAB(67);"----"
910 FOR N = 2 TO 12
920 ECMIN = 9999999!
930 FOR H = .1 TO 2 STEP .1
940 FOR K = 1! TO 4! STEP .1
950 REM DETERMINE ALPHA BY SIMPSON'S RULE
960 X = -K
970 Y = 2*K
980 C = Y/8
990 S = C/(3*SQR(2*3.14159))*(EXP(-.5*X^2) + 4*EXP(-.5*(X+Y/8)^2) + 2*EXP(-.5*(X+Y/4)^2) + 4*EXP(-.5*(X+3*Y/8)^2) + 2*EXP(-.5*(X+Y/2)^2) + 4*EXP(-.5*(X+5*Y/8)^2) + 2*EXP(-.5*(X+6*Y/8)^2) + 4*EXP(-.5*(X+7*Y/8)^2) + EXP(-.5*(X+Y)^2))
1000 ALPHA = 1 - S
1010 IF ALPHA < 0 THEN ALPHA = 0
1020 L1 = A/TOL^2*V1
1030 V2 = V1 + DELTA^2
1040 L2 = A/TOL^2*V2
1050 REM DETERMINE (1-BETA)
1060 NDELTA = DELTA/SQR(V1)
1070 T1 = NDELTA*SQR(N) - K
1080 T2 = -NDELTA*SQR(N) - K
1090 X = -3.5
1100 Y1 = T1 - X
1110 C = Y1/8
1120 S1 = C/(3*SQR(2*3.14159))*(EXP(-.5*X^2) + 4*EXP(-.5*(X+Y1/8)^2) + 2*EXP(-.5*(X+Y1/4)^2) + 4*EXP(-.5*(X+3*Y1/8)^2) + 2*EXP(-.5*(X+Y1/2)^2) + 4*EXP(-.5*(X+5*Y1/8)^2) + 2*EXP(-.5*(X+6*Y1/8)^2) + 4*EXP(-.5*(X+7*Y1/8)^2) + EXP(-.5*(X+Y1)^2))
1130 X = -5
1140 Y2 = T2 - X
1150 C2 = Y2/8
1160 S2 = C2/(3*SQR(2*3.14159))*(EXP(-.5*X^2) + 4*EXP(-.5*(X+Y2/8)^2) + 2*EXP(-.5*(X+Y2/4)^2) + 4*EXP(-.5*(X+3*Y2/8)^2) + 2*EXP(-.5*(X+Y2/2)^2) + 4*EXP(-.5*(X+5*Y2/8)^2) + 2*EXP(-.5*(X+6*Y2/8)^2) + 4*EXP(-.5*(X+7*Y2/8)^2) + EXP(-.5*(X+Y2)^2))
1170 REM BETA HERE HOLDS (1-BETA)
1180 BETA = S1 + S2
1190 EC = (A1+A2*N)/H + (A3 + A3P*ALPHA*LAMDA/H + A*V1*P/TOL^2*LAMDA + A*V2*P/TOL^2*(H/BETA - H*(.5 - H/12/LAMDA) + G*N + D))/(LAMDA + H/BETA - H*(.5 - H/12/LAMDA) + G*N + D)
1200 IF EC > ECMIN THEN GOTO 1260
1210 ECMIN = EC
1220 HBEST = H
1230 KBEST = K
1240 ALPHAB = ALPHA
1250 BETABEST = BETA
1260 NEXT K
1270 NEXT H
1280 LPRINT TAB(14) USING "##";N;:LPRINT TAB(23) USING "#.#";KBEST;:LPRINT TAB(31) USING "#.#";HBEST;:LPRINT TAB(41) USING "#.####";ALPHAB;:LPRINT TAB(54) USING "#.####";BETABEST;:LPRINT TAB(65) USING "######.##";ECMIN
1290 NEXT N
1300 LPRINT CHR$(12)
1310 INPUT "MAKE A CHANGE AND RUN AGAIN (Y/N)";N$
1320 IF N$ = "Y" GOTO 160

