
Pergamon    Computers ind. Engng Vol. 28, No. 3, pp. 671-679, 1995

    Copyright © 1995 Elsevier Science Ltd. Printed in Great Britain. All rights reserved

    0360-8352/95 $9.50 + 0.00    0360-8352(94)00219-3

    ECONOMIC DESIGN OF CONTROL CHARTS USING THE TAGUCHI LOSS FUNCTION

    SURAJ M. ALEXANDER, MATTHEW A. DILLMAN, JOHN S. USHER and BIJU DAMODARAN

    Department of Industrial Engineering, University of Louisville, Louisville, KY 40292, U.S.A.

    Abstract--We embellish Duncan's cost model with Taguchi's loss function to incorporate losses that result both from inherent variability and from variability due to assignable causes. Whereas Duncan applies a penalty cost for operating out of control, he does not show how this cost can be obtained or quantified. We illustrate, analyze, and evaluate this model utilizing hypothetical cost figures and process parameters. We also suggest adjustments to control chart design parameters when there are process improvements over time.

    1. BACKGROUND

    The design of the Shewhart X̄ chart involves the determination of the sample size (N), the frequency or time between sampling (H), and the multiplier that defines the spread of the control limits from the centerline (K).

    In practice, Shewhart X̄ charts have utilized a rational subgroup size for N, normally around 4 or 5. The sampling interval is generally selected based on the production rate and familiarity with the process. For instance, in the early stages of introduction of control charts to the process, samples may be taken frequently, such as once every 30 min. In the later stages, when the charts have been established and preventive measures taken against assignable causes, samples may be taken less frequently, such as once every shift. The control limits for the control charts are traditionally set at ±3σx̄.

    The rational subgroup size is normally small, since larger sample sizes increase the risk of process shifts or assignable causes occurring while the sample is taken. Such an occurrence is undesirable, since it would filter the effect of the shift on the statistic used for monitoring and also exaggerate the perceived inherent variation of the process. The reduction in power of the statistical test, resulting from the small sample size, is compensated for by taking more frequent samples. The ±3σx̄ limits have been found to provide an acceptable level of risk of false alarms in practice.

    The problem with the commonly used "rational" approach to control chart design is that it is used in almost all processes as the standard procedure for implementing control charts, without regard to the cost consequences of the design. In order to overcome this shortcoming, a number of researchers have proposed economic models for the design of control charts. Ho and Case [1] provide a literature review of such models covering the period 1981-1991. Most of this research has focused on the design of X̄-charts, e.g. [2-4]. Even though these models have not been widely used, their value is obvious. One of the reasons economic models are not widely used is that the models are quite complex and difficult to evaluate and optimize [5]. Also, these models are typically optimized for a particular size of shift, frequency of out-of-control occurrences, and cost of diagnosis. In practice, however, the mean period the process remains in control is not static, the size of the process shift is not constant, and the cost of diagnosis changes with time. In fact, with an assumption of continuous improvement, we would expect the frequency of out-of-control situations, the size of the shift and the cost of diagnosis to be reduced over time. After all, this is one of the purposes of statistical process control (SPC). In order to address some of these concerns we attempt to establish the direction of change of the control chart's design parameters when the frequency and size of process shifts and the cost of diagnosis change. With this information, the practitioner might be able to adjust the "optimized" design parameters over time.



    The first concern, related to model complexity, is not easily addressed, since the presence of integral evaluations and optimization over three variables makes the process difficult to simplify. Taguchi et al. [6] have proposed an on-line control model for which they have developed a closed-form solution for the selection of optimal control parameters. The closed-form solution makes the evaluation of process control parameters much easier. However, in their model the sample size (N) is always one; the costs associated with false alarms and with searching for assignable causes are ignored; also, the probability of not detecting a process shift is ignored. These simplifications are unrealistic, especially considering the fact that the Type II error increases with smaller sample sizes. Adams and Woodall [7] provide a comparison of Taguchi's ideas and Duncan's model. We select Duncan's [8] cost function for the X̄ chart, which we find more realistic, and we embellish this cost function with the Taguchi loss function. We determine the optimal control chart design parameters using this function and suggest changes in these parameters over time.

    The Taguchi loss function provides a means of explicitly considering the loss due to process variability. Whereas Duncan applies a penalty cost for operating out of control, he does not show how this cost can be obtained or quantified. In this paper we present, evaluate, optimize and analyze an economic model of the control chart. In the next section we describe our cost model. We then illustrate its application using a hypothetical example. We conclude this paper by studying the direction of control chart design parameter changes in the presence of changes in the magnitude and frequency of process shifts and in the costs of discovering and correcting the causes of these shifts.

    2. EMBELLISHMENT OF DUNCAN'S COST MODEL WITH TAGUCHI'S LOSS FUNCTION

    Duncan's model assumes a single out of control state. Research has confirmed that multiple assignable cause models can be approximated by an appropriately selected single cause model [2]. Hence we assume that we monitor the process to detect the occurrence of a single assignable cause that causes a fixed shift in the process. Duncan defines the monitoring and related costs over a cycle. The elements of the cycle are as follows:

    (1) The in-control state. (The process starts in this state.)
    (2) The out-of-control state. (The process goes to an out-of-control state from an in-control state. This transition is assumed to follow a Poisson process with λ occurrences per hour.)
    (3) Detection of the out-of-control state.
    (4) The assignable cause is detected and fixed.

    Duncan also assumes that the process is not stopped while investigating the presence of an assignable cause.

    The expected cycle time (E(T)) with Duncan's assumption is:

    E(T) = 1/λ + H/(1 − β) − τ + gN + D

    where

    H = time between samples (h)
    (1 − β) = probability of detecting a shift
    τ = the elapsed time within a sampling interval at which the process goes out of control
    g = sampling time per unit
    N = sample size
    D = time required to detect and fix an assignable cause.

    The expected cost per cycle is:

    E(C) = (a₁ + a₂N)·E(T)/H + a₃ + a₃′·α·e^(−λH)/(1 − e^(−λH)) + a₄·[H/(1 − β) − τ + gN + D]

    where

    a₁ = fixed cost of sampling
    a₂ = variable cost of sampling
    a₃ = cost of finding and fixing an assignable cause
    a₃′ = cost of a false alarm
    a₄ = penalty cost per hour of operating in an out-of-control state
    α = probability of a false alarm (Type I error).
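    For concreteness, the cycle-time and cost-per-cycle expressions above can be evaluated directly. The following is a minimal Python sketch, not the authors' program (their search program, in BASIC, is described in Section 3); all function and argument names are ours, and "power" stands for (1 − β).

    import math

    def expected_cycle_time(lam, H, power, tau, g, N, D):
        # E(T) = 1/lambda + H/(1 - beta) - tau + g*N + D
        return 1.0 / lam + H / power - tau + g * N + D

    def expected_cost_per_cycle(a1, a2, a3, a3_prime, a4, alpha,
                                lam, H, power, tau, g, N, D):
        e_t = expected_cycle_time(lam, H, power, tau, g, N, D)
        sampling = (a1 + a2 * N) * e_t / H                 # sampling cost over the cycle
        false_alarm = (a3_prime * alpha * math.exp(-lam * H)
                       / (1.0 - math.exp(-lam * H)))       # expected false-alarm cost
        penalty = a4 * (H / power - tau + g * N + D)       # out-of-control penalty
        return sampling + a3 + false_alarm + penalty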

    The Taguchi loss function for a product is defined below. Consider a product with bilateral tolerances of equal value (Δ). If the cost to society of manufacturing a product out of specification is A $/unit, then the Taguchi loss function defines the expected loss to society caused by using a particular process to produce the product as:

    Expected loss/unit = (A/Δ²)·v²    (1)

    where

    v² = mean squared deviation of the process.

    It can easily be shown that v² = σ² + (μ − T)², where σ² = process variance, μ = process mean and T = process target. We assume that when the process is in control its mean is centered on target and v² = v₁² = σ². We also assume that when the process shifts, its mean shifts off target and v² = v₂² = σ² + (μ − T)². (Since we are considering only X̄ charts, consideration of mean shifts is sufficient.)
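    As a quick illustration of equation (1), the short sketch below (our own, not the paper's) computes the per-unit loss for an in-control process and for a process whose mean has shifted; the numeric values in the example calls are taken from the worked example in Section 3.

    # Equation (1): expected loss/unit = (A / Delta^2) * v^2, v^2 = sigma^2 + (mu - T)^2
    def expected_loss_per_unit(A, Delta, sigma, mu, target):
        v_sq = sigma ** 2 + (mu - target) ** 2      # mean squared deviation
        return A / Delta ** 2 * v_sq

    # In control (mean on target): about $0.56/unit
    print(expected_loss_per_unit(A=5.0, Delta=0.003, sigma=0.001, mu=2.500, target=2.500))
    # Mean shifted 0.001 in. off target: about $1.11/unit
    print(expected_loss_per_unit(A=5.0, Delta=0.003, sigma=0.001, mu=2.501, target=2.500))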

    Using the definition of loss given in (1) we can easily embellish Duncan's model to consider losses owing to in-control and out-of-control variability. Noting that the expected period in control is 1/λ and the expected period out of control is

    [H/(1 − β) − τ + gN + D],

    and assuming that the production rate is P units/h, the cost per cycle (c) using the embellished model is shown below:

    c = (a₁ + a₂N)·E(T)/H + a₃ + a₃′·α·e^(−λH)/(1 − e^(−λH)) + (A·v₁²·P/Δ²)·(1/λ) + (A·v₂²·P/Δ²)·[H/(1 − β) − τ + gN + D].    (2)

    Dividing (2) by E(T) and applying the following approximations and definitions [2]

    τ ≈ H/2 − λH²/12

    B = [1/(1 − β) − 1/2 + λH/12]·H + gN + D

    e^(−λH)/(1 − e^(−λH)) ≈ 1/(λH)

    L₁ = (A/Δ²)·v₁²

    L₂ = (A/Δ²)·v₂²,

    we obtain the expected cost per hour as

    E(c) = (a₁ + a₂N)/H + [λa₃ + a₃′α/H + L₁P + L₂PλB] / (1 + λB).

    The optimal values for N, H, and K can be obtained by minimizing the above cost function.
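    One possible way to carry out this minimization is a simple grid search, as in the BASIC program the authors describe in the next section. The Python sketch below is our own illustration, not that program: it evaluates E(c) as derived above, with α and the power (1 − β) computed under the assumption of a Shewhart X̄ chart with limits at ±Kσ/√N and a mean shift of δ; all names are ours.

    from math import erf, sqrt

    def phi(x):
        """Standard normal cumulative distribution function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def expected_cost_per_hour(N, K, H, lam, delta, sigma,
                               a1, a2, a3, a3p, g, D, P, A, Delta):
        alpha = 2.0 * phi(-K)                                    # false-alarm probability
        power = (phi(-K + delta * sqrt(N) / sigma)
                 + phi(-K - delta * sqrt(N) / sigma))            # 1 - beta
        L1 = A / Delta ** 2 * sigma ** 2                         # in-control loss/unit
        L2 = A / Delta ** 2 * (sigma ** 2 + delta ** 2)          # out-of-control loss/unit
        B = (1.0 / power - 0.5 + lam * H / 12.0) * H + g * N + D
        return ((a1 + a2 * N) / H
                + (lam * a3 + a3p * alpha / H + L1 * P + L2 * P * lam * B)
                / (1.0 + lam * B))

    def grid_search(params, N_values, K_values, H_values):
        """Return (cost, N, K, H) minimizing E(c) over a simple grid."""
        best = None
        for N in N_values:
            for K in K_values:
                for H in H_values:
                    cost = expected_cost_per_hour(N, K, H, **params)
                    if best is None or cost < best[0]:
                        best = (cost, N, K, H)
        return best

    The ranges and step sizes of the grid play the same role as the "FOR" statements in the authors' program: widening or refining them trades computation time for resolution of the optimum.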


    3. APPLICATION OF EMBELLISHED MODEL

    We apply our model to the following hypothetical example: A manufacturer produces a part that has a length specification of 2.5 in. with a tolerance of ±0.003 in. From previous runs the process standard deviation was estimated as 0.001. The process has required an average of ten adjustments during 40 h of production time. Therefore, the mean time between assignable causes entering the system is estimated to be 4 h.

    Based on an analysis of operator and quality control technician wages, it is determined that the fixed cost of sampling per subgroup is $1, while the variable cost is $0.10 per part, with a sampling and interpretation time of 0.01 h per part.

    The average time to investigate a false alarm or to find and eliminate an assignable cause is estimated to be two hours at a cost of $25/h. The process is assumed to continue producing parts at a rate of 100/h during investigation and elimination of out-of-control signals.

    The cost to rework or scrap a part that is found to be outside the specification limits is $5, while the shift in the process average to be detected is 0.001 in. From the above we infer the following parameter values for our model.

    a₁ = $1
    a₂ = $0.10
    D = 2 h
    a₃ = $50
    a₃′ = $50
    P = 100 parts/h
    v₁² = (0.001)²
    A = $5/part
    Δ = 0.003
    1/λ = 4 h
    g = 0.01 h
    δ = (μ − T) = 0.001
    v₂² = σ² + δ² = (0.001)² + (0.001)² = 0.000002.
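    These inferred values can be collected and passed to a search routine such as the grid-search sketch given after the E(c) expression in Section 2. The snippet below (our own, with parameter names matching that sketch) simply encodes them and checks the derived quantities v₂², L₁ and L₂.

    # Inferred parameters for the hypothetical example (names are ours).
    params = dict(
        a1=1.0, a2=0.10,        # fixed and variable sampling costs ($)
        a3=50.0, a3p=50.0,      # repair and false-alarm costs ($)
        D=2.0, g=0.01,          # repair time and sampling time per unit (h)
        P=100.0,                # production rate (parts/h)
        lam=0.25,               # shift arrival rate per hour (1/lambda = 4 h)
        sigma=0.001,            # process standard deviation (in.)
        delta=0.001,            # mean shift to be detected (in.)
        A=5.0, Delta=0.003,     # out-of-spec cost ($/part) and half-tolerance (in.)
    )
    v2_sq = params["sigma"] ** 2 + params["delta"] ** 2              # 2e-6, as above
    L1 = params["A"] / params["Delta"] ** 2 * params["sigma"] ** 2   # about $0.56/unit
    L2 = params["A"] / params["Delta"] ** 2 * v2_sq                  # about $1.11/unit
    print(v2_sq, round(L1, 3), round(L2, 3))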

    Table 1 lists the results of a computer search for the optimum design parameters. For these conditions the optimal parameters are seen to be N* = 13, K* = 2.5, and H* = 1.0, at a cost of $88.48/h. The most common values used in U.S. industry are N = 5, K = 3, and H = 0.5, which result in a cost of $92.88/h, or a penalty of $4.40/h. The computer program used to search for an optimum computes the optimal control limit width K and sampling frequency H for several values of N and displays the value of the cost function with the associated alpha risk and power, as shown in Table 1. This is the same approach used by Montgomery [2] and Jaraiedi and Zhuang [9]. The program is given in the Appendix. It is easy to run on any IBM-compatible computer with BASIC and uses a simple grid search. The range and the step size of the search on any of the parameters can be changed by changing the "FOR" statements in the program.

    Table 1. Variables and parameter selection for an X̄ chart

     N     K     H     α     Power (1 − β)    Cost
     2    1.8   0.7   0.07       0.35         93.07
     3    1.9   0.7   0.06       0.43         91.64
     4    2.1   0.7   0.04       0.46         90.69
     5    2.1   0.8   0.04       0.55         90.02
     6    2.2   0.8   0.03       0.60         89.54
     7    2.2   0.8   0.03       0.67         89.20
     8    2.2   0.9   0.03       0.74         89.95
     9    2.3   0.9   0.02       0.76         88.76
    10    2.4   0.9   0.02       0.78         88.64
    11    2.4   1.0   0.02       0.82         88.55
    12    2.4   1.0   0.02       0.86         88.50
    13    2.5   1.0   0.01       0.87         88.48
    14    2.5   1.1   0.01       0.89         88.49
    15    2.6   1.1   0.01       0.90         88.50

    N, subgroup size; K, coefficient to determine control limits; H, sampling interval (h).


    Fig. 1. Sample size (N) and sampling interval (H) vs 1/λ (curves for a₁ = $1.0, a₂ = $0.10 and several values of a₃, e.g. 50 and 100).

    4. SENSITIVITY ANALYSIS

    We study the sensitivity to the magnitude and frequency of process shifts in order to determine the appropriate adjustment of control chart parameters in the presence of process improvement and process deterioration. The frequency of process shifts is changed in the model by adjusting the value of λ, the expected arrival rate of process shifts. The magnitude of process shift is varied by changing the value of δ = (μ − T). Note that the cost of investigating and fixing an assignable cause (a₃) is not changed as a function of the magnitude of shift, since we assume that the cost of investigating small shifts plus the cost of repairing their causes is equal to the cost of investigating large shifts plus the cost of repairing the causes of these shifts. However, over time we expect the average cost a₃ to decrease as teams become more adept at discovering and correcting causes of process shifts. We therefore also study the change in optimal control chart parameters when a₃ changes.

    Figures 1, 2, and 3 indicate changes in the "optimum" control chart design parameters, i.e. the sample size (N) and the sampling interval (H), under conditions of process improvement and deterioration. The control limit coefficient (K) was found to be relatively robust under the different conditions. Process improvement is denoted by a reduction in the frequency and magnitude of process shifts.

    Fig. 2. Sample size (N) and sampling interval (H) vs 1/λ (curves for a₁ = $10.0, a₂ = $1.0 and a₃ = 25, 50, 100).


    Fig. 3. Sample size (N) and sampling interval (H) vs δ (δ ranging from 0.0005 to 0.0050; a₃ = 25, 1/λ = 2).

    The curves in Fig. 1 indicate that when the frequency of process shifts decreases, or the mean time interval between process shifts increases, the sample size (N) increases and the sampling interval (H) decreases to a steady-state value. This, at first, seems counter-intuitive, i.e. when the process improves the monitoring effort seems to have increased, albeit to a steady-state value. However, this can be explained when we observe the rate of convergence to...