
4. Statistical Procedures Formulas and Tables


Quality Management in the Bosch Group | Technical Statistics



Edition 08.1993

© 1993 Robert Bosch GmbH


Statistical Procedures, Formulas and Tables

Contents:

Foreword
Arithmetic Mean
Median
Standard Deviation
Histogram
Probability Paper
Representation of Samples with Small Sizes on Probability Paper
Test of Normality
Outlier Test
Normal Distribution, Estimation of Fractions Nonconforming
Normal Distribution, Estimation of Confidence Limits
Confidence Interval for the Mean
Confidence Interval for the Standard Deviation
Comparison of Two Variances (F-Test)
Comparison of Two Means (t-Test, Equal Variances)
Comparison of Two Means (t-Test, Unequal Variances)
Minimum Sample Size
Comparison of Several Means (Simple Analysis of Variance)
Comparison of Several Variances
Factorial Analysis of Variance
Regression Line
Correlation Coefficient
Law of Error Propagation
Use of Weibull Paper
Tables


Tables:

Table 1: Cumulative frequencies Hi(n) for the recording of the points (xi, Hi) from ranked samples on probability paper

Table 2: On the test of normality (according to Pearson)

Table 3: Limits of the outlier test (according to David-Hartley-Pearson)

Table 4: Standard normal distribution N(0,1)

Table 5: Factors k for the calculation of limiting values

Table 6: Quantiles of the t-distribution (two-sided)

Table 7: Factors for the determination of the confidence interval of a standard deviation

Table 8: Quantiles of the F-distribution (one-sided)

Table 9: Quantiles of the F-distribution (two-sided)


Foreword

The booklet at hand contains selected procedures, statistical tests and practical examples which are found in this or a similar form time and again in the individual phases of product development. They represent a cross-section of applied technical statistics through the diverse fields, starting with the calculation of statistical characteristics and continuing on to further methods within the scope of statistical experimental planning and assessment.

Most of the procedures are presented one to a page and explained on the basis of examples from Bosch practice. Usually only a few immediately available input quantities are necessary for their application, e.g. the measurement values themselves, their number, maximum and minimum, or statistical characteristics which can easily be calculated from the measurement data.

For reasons of clarity, a more in-depth treatment of the respective theoretical correlations is consciously avoided. It should be observed, however, that a normal distribution of the considered population is presupposed in the following procedures: outlier test, confidence interval for µ and σ, F-test, t-test, calculation of the minimum sample size, simple and factorial analysis of variance.

In several examples, a calculated intermediate result is rounded. Through further calculation with the rounded value, slightly different numbers can be produced in the end result depending on the degree of rounding. These small differences are insignificant, however, for the statistical information hereby gained.

Fundamentally, specific statistical statements are always coupled with a selected confidence level or level of significance. It should be clear in this connection that the allocation of the attributes significant and insignificant to the probabilities ≥ 95% or ≥ 99% was chosen arbitrarily in this booklet. While a statistical level of significance of 1% can seem completely acceptable to an SPC user, the same level of significance is presumably still much too large for a development engineer who is responsible for a safety-related component.

In connection with statistical tests, a certain basic pattern is always recognizable in the procedure. The test should enable a decision between a so-called null hypothesis and an alternative hypothesis. Depending on the choice, one is dealing with a one-sided or a two-sided question. This must, in general, be considered in the determination of the tabulated test statistic belonging to the test. In this booklet, the tables are already adjusted to the question of the respective procedure, so that no problems occur in this regard for the user.

Finally, the banal-sounding fact must be emphasized that no statistical procedure can take more information from a data set than is contained in it (usually the sample size, among other factors, limits the information content). Moreover, a direct analysis of measurement values prepared in a graphically sensible manner is usually easier to implement and more comprehensible than a statistical test.

Statistical procedures are auxiliary media which can support the user in data preparation and assessment. They cannot, however, take the place of sound judgement in decision making.


Arithmetic Mean

The arithmetic mean x̄ is a characteristic for the position of a set of values xi on the number line (x-axis).

Input quantities:

xi, n

Formula:

x̄ = (1/n) · (x1 + x2 + . . . + xn)

Notation with sum sign:

x̄ = (1/n) · Σ xi   (sum over i = 1, . . ., n)

Remark:

The arithmetic mean of a sample is frequently viewed as an estimate of the unknown mean µ of the population of all values upon which the sample is based. It is to be observed that the arithmetic mean of a data set can be greatly modified by a single outlier. In addition, in a skewed distribution, for example, many more individual values may lie below the mean than above it.


Examples:

The means of the following measurement series should be determined:

Measurement series 1:  47 45 57 44 47 46 58 45 46 45

Measurement series 2:  48 49 45 50 49 47 47 48 49 48

Measurement series 3:  53 46 51 44 50 45 45 51 50 45

Measurement series 1:  x̄ = (47 + 45 + 57 + 44 + 47 + 46 + 58 + 45 + 46 + 45)/10 = 480/10 = 48

Measurement series 2:  x̄ = (48 + 49 + 45 + 50 + 49 + 47 + 47 + 48 + 49 + 48)/10 = 480/10 = 48

Measurement series 3:  x̄ = (53 + 46 + 51 + 44 + 50 + 45 + 45 + 51 + 50 + 45)/10 = 480/10 = 48

Dot frequency diagram of the three measurement series (figure: one dot row per measurement series).

The schematic representation of the balance bar should illustrate that the arithmetic mean corresponds to the respective center of gravity of the weights which lie at the positions xi. If the balance bar is supported at position x̄, the balance is in equilibrium.

On the basis of the first measurement series it becomes clear that the arithmetic mean can be greatly influenced by extreme values (e.g. also outliers). In this example, only two values lie above the mean.

As the third measurement series shows, it is possible that no values at all lie in the proximity of the mean.
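The calculation above can be reproduced in a few lines of Python (a minimal sketch; the helper name `arithmetic_mean` is ours, not from the booklet):

```python
# Arithmetic mean x-bar = (1/n) * (x1 + ... + xn) for the three measurement series.

def arithmetic_mean(values):
    return sum(values) / len(values)

series_1 = [47, 45, 57, 44, 47, 46, 58, 45, 46, 45]
series_2 = [48, 49, 45, 50, 49, 47, 47, 48, 49, 48]
series_3 = [53, 46, 51, 44, 50, 45, 45, 51, 50, 45]

for series in (series_1, series_2, series_3):
    print(arithmetic_mean(series))  # prints 48.0 three times
```

All three series share the same mean although their dispersion differs, which is exactly the point of the dot frequency diagrams above.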


Median

The median x̃ is, like the arithmetic mean x̄, a characteristic for the position of a set of values xi on the number line (x-axis).

Input quantities:

xi , n

Procedure:

1. The values x1, x2, . . ., xn are arranged in order of magnitude:

   x(1) ≤ x(2) ≤ . . . ≤ x(n)

   x(1) is the smallest, and x(n) is the largest value.

2. Determination of the median:

   x̃ = x((n+1)/2)   if n is odd

   x̃ = ( x(n/2) + x(n/2+1) ) / 2   if n is even

The median, then, is the value which lies in the middle of the sequentially arranged group of numbers. If n is an even number (divisible by 2 without remainder), there is no single number in the middle. The median is then equal to the mean of the two middle numbers.

Remark:

On the basis of this definition, the same number of values of the data set always lies below and above the median x̃. The median is therefore preferred to the arithmetic mean with unimodal, skewed distributions.


Examples:

The operation shaft grinding is monitored within the scope of statistical process control (SPC) with the assistance of an x̃-R chart. In the last sample of 5 measurements, the following deviations of the monitored characteristic from the nominal dimension were measured:

4.8 5.1 4.9 5.2 4.5.

The median of the sample is to be recorded on the quality control chart.

Well-ordered values: 4.5 4.8 4.9 5.1 5.2

x̃ = 4.9

In the test of overpressure valves, the following opening pressures (in bar) were measured:

10.2 10.5 9.9 14.8 10.6 10.2 13.9 9.7 10.0 10.4.

Arrangement of the measurement values:

9.7 9.9 10.0 10.2 10.2 10.4 10.5 10.6 13.9 14.8

x̃ = (10.2 + 10.4)/2 = 10.3
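Both examples can be reproduced with a short Python sketch of the procedure above (Python's built-in `statistics.median` implements the same rule; the explicit helper just makes the odd/even cases visible):

```python
def median(values):
    # 1. Arrange the values in order of magnitude.
    x = sorted(values)
    n = len(x)
    # 2. Odd n: middle value x((n+1)/2); even n: mean of the two middle values.
    if n % 2 == 1:
        return x[(n + 1) // 2 - 1]           # 1-based rank converted to 0-based index
    return (x[n // 2 - 1] + x[n // 2]) / 2

shaft_sample = [4.8, 5.1, 4.9, 5.2, 4.5]                                   # n = 5 (odd)
valve_sample = [10.2, 10.5, 9.9, 14.8, 10.6, 10.2, 13.9, 9.7, 10.0, 10.4]  # n = 10 (even)

print(median(shaft_sample))            # 4.9
print(round(median(valve_sample), 1))  # 10.3
```

Note that the two large opening pressures 13.9 and 14.8 bar barely affect the median, in line with the remark about skewed distributions.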


Standard Deviation

The standard deviation s is a characteristic for the dispersion of a group of values xi with respect to the arithmetic mean x̄. The square s² of the standard deviation s is called the variance.

Input quantities:

xi, x̄, n

Formula:

s = √( (1/(n−1)) · Σ (xi − x̄)² )   (sum over i = 1, . . ., n)

Alternatively, the following formulas can also be used for the calculation of s:

s = √( (1/(n−1)) · ( Σ xi² − (1/n) · (Σ xi)² ) )

s = √( (1/(n−1)) · ( Σ xi² − n · x̄² ) )

Remark:

The manner of calculation of s is independent of the distribution from which the values xi originate and is always the same. Even a pocket calculator with statistical functions does not know anything about the distribution of the entered values; it always calculates according to one of the formulas given above.

The representation on the right side shows the probability density functions of the standard normal distribution, a triangular distribution and a rectangular distribution, which all possess the same theoretical standard deviation σ = 1. If one takes adequately large samples from these distributions and calculates the respective standard deviation s, these three values will differ only slightly from each other, even though the samples can clearly have different ranges.

The standard deviation s should, then, always be considered in connection with the distribution upon which it is based.

Page 11: 4. Statistical Procedures Formulas and Tables

- 11 -

Example:

The standard deviation s of the following data set should be calculated:

6.1 5.9 5.4 5.5 4.8 5.9 5.7 5.3.

Many pocket calculators offer the option of entering the individual values with the assistance of a data key and then calculating and displaying the standard deviation by pressing the sx key (these keys can differ depending on the calculator).

Calculation with the second given formula:

xi:    6.1    5.9    5.4    5.5    4.8    5.9    5.7    5.3
xi²:  37.21  34.81  29.16  30.25  23.04  34.81  32.49  28.09

Σ xi = 44.6    (Σ xi)² = 44.6² = 1989.16    Σ xi² = 249.86

Insertion into the second formula produces:

s = √( (1/7) · (249.86 − 1989.16/8) ) = 0.4166

Probability density functions of various distributions with the same standard deviation σ = 1 (see remark on the left side)
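The worked example can be checked with a small Python sketch of the second formula above (`statistics.stdev` would give the same number; the function name is ours):

```python
import math

def std_dev(values):
    # s = sqrt( (1/(n-1)) * ( sum(xi^2) - (1/n) * (sum(xi))^2 ) )  -- second formula
    n = len(values)
    sum_x = sum(values)
    sum_x2 = sum(x * x for x in values)
    return math.sqrt((sum_x2 - sum_x * sum_x / n) / (n - 1))

data = [6.1, 5.9, 5.4, 5.5, 4.8, 5.9, 5.7, 5.3]
print(round(std_dev(data), 4))  # 0.4166
```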


Histogram

If the number line (x-axis) is divided into individual, adjacent fields whose boundaries are given, this is referred to as a classification. Through the sorting of the values of a data set into the individual classes (grouping), a number of values which fall into a certain class is produced for each class. This number is called the absolute frequency.

If the absolute frequency nj is divided by the total number n of all values, the relative frequencies hj are obtained. A plot of these relative frequencies over the classes in the form of adjacently lined-up rectangles is called a histogram. The histogram conveys an image of the value distribution.

For the appearance of the histogram, the choice of the classification can be of decisive significance. There is, however, no uniform, rigid rule for the determination of the classification, but rather merely recommendations, which are listed in the following. Lastly, one must orient oneself to the individual peculiarities of the current problem during the creation of a histogram.

Problem:

On the basis of the n individual values xi of a data set, a histogram is to be created.

Input quantities:

xi, xmax, xmin, n

Procedure:

1. Select a suitable classification

   Determine the number k of classes

   Rule of thumb:   25 ≤ n ≤ 100:  k = √n        n > 100:  k = 5 · log(n)

   The classification should be selected with a fixed class width (if possible) so that simple numbers are produced as class limits. The first class should not be open to the left and the last should not be open to the right. Empty classes, i.e. those into which no values of the data set fall, are to be avoided.

2. Sort the values xi into the individual classes. Determine the absolute frequency nj for j = 1, 2, . . ., k.

3. Calculate the relative frequency hj = nj/n for j = 1, 2, . . ., k.

4. Plot the relative frequencies hj (y-axis) over the individual classes of the classification (x-axis).


Remark:

The recommendation of calculating the class width b according to

b = (xmax − xmin)/(k − 1)

usually results in class limits with several decimal places, which is impractical for the manual creation of a histogram and, beyond that, can lead to empty classes.

Example:

Relays were tested for series production start-up. A functionally decisive characteristic is the so-called response voltage. On 50 relays, the following values of the response voltage Uresp were measured in volts. A histogram is to be created.

6.2 6.5 6.1 6.3 5.9 6.0 6.0 6.3 6.2 6.4
6.5 5.5 5.7 6.2 5.9 6.5 6.1 6.6 6.1 6.8
6.2 6.4 5.8 5.6 6.2 6.1 5.8 5.9 6.0 6.1
6.0 5.7 6.5 6.2 5.6 6.4 6.1 6.3 6.1 6.6
6.4 6.3 6.7 5.9 6.6 6.3 6.0 6.0 5.8 6.2

Corresponding to the rule of thumb, the number of classes is k = √50 ≈ 7. Due to the resolution of the measurement values of 0.1 V here, the class limits are given with one additional decimal place.

Class              1     2     3     4     5     6     7
Lower class limit  5.45  5.65  5.85  6.05  6.25  6.45  6.65
Upper class limit  5.65  5.85  6.05  6.25  6.45  6.65  6.85

Class               1    2    3    4    5    6    7
Absolute frequency  3    5   10   14    9    7    2
Relative frequency  6%  10%  20%  28%  18%  14%   4%

(Figure: histogram; x-axis: response voltage / V)
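The grouping can be sketched in Python as follows (the function name is ours; classes are treated as closed on the right, which matches the example since the measured values never fall on a class limit):

```python
import math

def group_into_classes(values, limits):
    # Sort each value into the class (limits[j], limits[j+1]]; return absolute frequencies n_j.
    counts = [0] * (len(limits) - 1)
    for x in values:
        for j in range(len(counts)):
            if limits[j] < x <= limits[j + 1]:
                counts[j] += 1
                break
    return counts

voltages = [6.2, 6.5, 6.1, 6.3, 5.9, 6.0, 6.0, 6.3, 6.2, 6.4,
            6.5, 5.5, 5.7, 6.2, 5.9, 6.5, 6.1, 6.6, 6.1, 6.8,
            6.2, 6.4, 5.8, 5.6, 6.2, 6.1, 5.8, 5.9, 6.0, 6.1,
            6.0, 5.7, 6.5, 6.2, 5.6, 6.4, 6.1, 6.3, 6.1, 6.6,
            6.4, 6.3, 6.7, 5.9, 6.6, 6.3, 6.0, 6.0, 5.8, 6.2]

k = round(math.sqrt(len(voltages)))                        # rule of thumb: k = sqrt(50) -> 7
limits = [round(5.45 + 0.2 * j, 2) for j in range(k + 1)]  # 5.45, 5.65, ..., 6.85

abs_freq = group_into_classes(voltages, limits)
rel_freq = [c / len(voltages) for c in abs_freq]
print(abs_freq)  # [3, 5, 10, 14, 9, 7, 2]
```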


Probability Paper

The Gaussian bell-shaped curve is a representation of the probability density function of the normal distribution. The distribution function of the normal distribution is produced by the integral over the density function; its values are listed in Table 4. The graphic representation of the distribution function Φ(u) is an s-shaped curve.

If one distorts the y-axis of this representation in such a manner that the s-shaped curve becomes a straight line, a new coordinate system is produced: the probability paper. The x-axis remains unchanged. Due to this correlation, the representation of a normal distribution on probability paper is always a straight line.

This fact can be made use of in order to graphically test a given data set for normal distribution. As long as the number of given values is large enough, a histogram of these values is created and the relative frequencies of values within the classes of a classification are determined. If the relative cumulative frequencies found are plotted over the right class limits on the probability paper and a sequence of dots is thereby produced which lie approximately on a straight line, then it can be concluded that the values of the data set are approximately normally distributed.


Example:

The relay response voltage example from the previous section (Histogram) is continued:

Class                           1     2     3     4     5     6     7
Lower class limit              5.45  5.65  5.85  6.05  6.25  6.45  6.65
Upper class limit              5.65  5.85  6.05  6.25  6.45  6.65  6.85
Absolute frequency              3     5    10    14     9     7     2
Relative frequency              6%   10%   20%   28%   18%   14%    4%
Absolute cumulative frequency   3     8    18    32    41    48    50
Relative cumulative frequency   6%   16%   36%   64%   82%   96%  100%


Representation of the measurement values on probability paper:
x-axis: scaling corresponding to the class limits
y-axis: relative cumulative frequency in percent
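Numerically, the distorted y-axis of the probability paper corresponds to converting each cumulative frequency H into the standard normal quantile u = Φ⁻¹(H). A sketch using Python's `statistics.NormalDist` (the 100% point is omitted because Φ⁻¹(1) is infinite):

```python
from statistics import NormalDist

# Upper class limits and relative cumulative frequencies from the table above.
upper_limits = [5.65, 5.85, 6.05, 6.25, 6.45, 6.65]
cum_freq     = [0.06, 0.16, 0.36, 0.64, 0.82, 0.96]

# y-position on probability paper = standard normal quantile of the cumulative frequency.
points = [(x, NormalDist().inv_cdf(h)) for x, h in zip(upper_limits, cum_freq)]
for x, u in points:
    print(f"{x:4.2f}  u = {u:+.3f}")
```

If the (x, u) pairs lie approximately on a straight line, the data can be regarded as approximately normally distributed; the slope and intercept of that line estimate 1/s and −x̄/s.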


Representation of Samples with Small Sizes on Probability Paper

Problem:

Values xi of a data set are present whose number n, however, is not sufficient for the creation of a histogram. The individual values xi are to have cumulative frequencies allocated to them so that a recording on probability paper is possible.

Input quantities:

xi, n

Procedure:

1. The values x1, x2, . . ., xn are arranged in order of magnitude:

   x(1) ≤ x(2) ≤ . . . ≤ x(n).

   The smallest value x(1) has the rank 1, and the largest value x(n) has the rank n.

2. Each x(i) (i = 1, 2, . . ., n) has a cumulative frequency Hi(n) allocated to it according to Table 1:

   x(1), x(2), . . ., x(n)
   H1(n), H2(n), . . ., Hn(n).

3. Representation of the points (x(i), Hi(n)) on the probability paper.

Remark:

The cumulative frequency Hi(n) for rank number i can also be calculated with one of the approximation formulas

Hi(n) = (i − 0.5)/n   and   Hi(n) = (i − 0.3)/(n + 0.4).

The deviation from the exact table values is insignificant here.


Example:

The following sample of 10 measurement values should be graphically tested for normal distribution:

2.1 2.9 2.4 2.5 2.5 2.8 1.9 2.7 2.7 2.3.

The values are arranged in order of magnitude:

1.9 2.1 2.3 2.4 2.5 2.5 2.7 2.7 2.8 2.9.

The value 1.9 has the rank 1, the value 2.9 has the rank 10. In Table 1 in the appendix (sample size n = 10), the cumulative frequencies (in percent) can be found for each rank number i:

6.2 15.9 25.5 35.2 45.2 54.8 64.8 74.5 84.1 93.8.

Afterwards, a suitable partitioning (scaling) corresponding to the values 1.9 to 2.9 is selected for the x-axis of the probability paper, and the cumulative frequencies are recorded over the corresponding ordered sample values. In the example considered, then, the following points are marked:

(1.9, 6.2), (2.1, 15.9), (2.3, 25.5), . . . , (2.7, 74.5), (2.8, 84.1), (2.9, 93.8).

Since these points can be approximated quite well by a straight line, one can assume that the sample values are approximately normally distributed.

Representation of the measurement values on the probability paper:
x-axis: scaling corresponding to the measurement values
y-axis: relative cumulative frequency in percent
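The approximation formulas from the remark reproduce Table 1 closely; a sketch (the second formula is the well-known median-rank approximation, a name the booklet itself does not use):

```python
def h_simple(i, n):
    # H_i(n) = (i - 0.5) / n
    return (i - 0.5) / n

def h_median_rank(i, n):
    # H_i(n) = (i - 0.3) / (n + 0.4)
    return (i - 0.3) / (n + 0.4)

sample = sorted([2.1, 2.9, 2.4, 2.5, 2.5, 2.8, 1.9, 2.7, 2.7, 2.3])
n = len(sample)
for i, x in enumerate(sample, start=1):
    print(f"rank {i:2d}  x = {x:.1f}  H = {100 * h_simple(i, n):4.1f}%")
```

For n = 10 the first formula gives 5%, 15%, . . ., 95%, close to the exact table values 6.2%, 15.9%, . . ., 93.8% quoted in the example.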


Test of Normality

Problem:

It should be inspected by way of a simple test whether the n values of a sample can originate from a normal distribution.

Input quantities:

xi, s, n

Procedure:

1. Determine the smallest value xmin and the largest value xmax of the sample and the difference between these values:

   R = xmax − xmin   (range)

2. Calculate the number

   Q = R/s

3. Find the lower limit LL and the upper limit UL for Q for sample size n and the desired (chosen) level of significance α (e.g. α = 0.5%) in Table 2 and compare Q with these limits.

Test outcome:

If Q lies outside of the interval given by LL and UL, i.e. if

Q ≤ LL or Q ≥ UL,

then the sample cannot originate from a normal distribution (level of significance α).

Remark:

If Q exceeds the upper limit, this can also be caused by an outlier (xmax) (compare with the outlier test).


Example:

During a receiving inspection, a sample of 40 parts was taken from a delivery and measured. It should be inspected through a simple test whether or not the following characteristic values of the parts taken can originate from a normal distribution.

29.1 25.1 27.1 25.1 29.1 27.7 26.1 24.9
29.7 26.4 26.8 28.5 26.9 29.8 28.7 26.6
26.4 25.8 29.3 27.4 25.1 27.6 30.0 27.7
26.0 28.9 26.6 29.5 25.6 27.4 25.7 28.3
29.8 24.5 27.0 27.4 26.2 28.6 25.1 24.6

xmax = 30.0    xmin = 24.5    x̄ = 27.2    s = 1.641

R = xmax − xmin = 30.0 − 24.5 = 5.5

Q = R/s = 5.5/1.641 = 3.35    Lower limit LL = 3.41 for n = 40 and α = 0.5%

Q = 3.35 < 3.41 = LL

The range of the sample values is too small in relation to the standard deviation. With a level of significance of 0.5%, the sample cannot originate from a normally distributed population. It can be suspected from a histogram of the sample values that the delivery was 100% sorted at the supplier.

Remark:

This simple test inspects merely the ratio of range and standard deviation, which with normally distributed values lies within the interval [LL, UL] dependent on n and α. No comparison occurs, for example, between calculated relative cumulative frequencies and the values of the theoretical distribution function of the normal distribution. In particular, in the case LL < Q < UL, the conclusion cannot be drawn that normally distributed measurement data are actually present.
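The test statistic for the example can be recomputed in a few lines (Q only; the limits LL and UL still have to be read from Table 2):

```python
import math

def q_statistic(values):
    # Q = R / s with R = xmax - xmin
    n = len(values)
    mean = sum(values) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
    return (max(values) - min(values)) / s

parts = [29.1, 25.1, 27.1, 25.1, 29.1, 27.7, 26.1, 24.9,
         29.7, 26.4, 26.8, 28.5, 26.9, 29.8, 28.7, 26.6,
         26.4, 25.8, 29.3, 27.4, 25.1, 27.6, 30.0, 27.7,
         26.0, 28.9, 26.6, 29.5, 25.6, 27.4, 25.7, 28.3,
         29.8, 24.5, 27.0, 27.4, 26.2, 28.6, 25.1, 24.6]

q = q_statistic(parts)
print(round(q, 2))  # 3.35, below the table limit LL = 3.41 (n = 40, alpha = 0.5%)
```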


Outlier Test

Problem:

A measurement series of n values is present, of which it can be assumed with quite great certainty that it originates from a normal distribution. This assumption is based in a subjectively logical manner, for example, on a longer-term observation of the measurement object, or is justified through the evaluation of a histogram or a recording of the values on probability paper. It should be decided whether a value of the measurement series which is unusually large (xmax) or unusually small (xmin) may be handled as an outlier.

Input quantities:

xmax, xmin, x̄, s, n

Procedure:

1. Calculate the difference

   R = xmax − xmin

2. Calculate the number

   Q = R/s

3. Find the upper limit UL (for Q) for the size of the measurement series (sample) n and the desired level of significance α (e.g. α = 0.5%) in Table 3. Compare Q with this upper limit.

Test outcome:

If Q exceeds the upper limit UL, i.e. Q ≥ UL, the one of the two values xmax and xmin which is farthest away from the mean x̄ can be considered as an outlier.


Example:

Within the scope of a machine capability analysis, the following characteristic values were measured on 50 parts produced one after the other:

68.4 69.6 66.5 70.3 70.8 66.5 70.7 67.6 67.9 63.0
66.8 65.3 70.2 74.1 66.9 65.4 64.4 66.1 67.2 69.5
67.9 64.2 67.3 66.2 61.7 64.3 61.8 63.1 62.4 68.3
67.3 62.5 65.3 68.0 67.4 66.7 86.0 67.1 69.8 65.3
73.0 70.9 67.3 67.9 67.4 65.1 71.2 62.0 67.5 67.4

The graphical recording of these measurement values makes it recognizable that the value 86.0 is comparably far away from the mean in comparison to all other values. In the calculation of the mean (broken line), the possible outlier was not considered. A recording of the remaining values on probability paper indicates that these values are approximately normally distributed. It should be clarified whether all values can originate from the same population or whether the extreme value is a real outlier.

xmax = 86.0    xmin = 61.7    x̄ = 67.01    s = 3.894

R = xmax − xmin = 86.0 − 61.7 = 24.3

Q = R/s = 24.3/3.894 = 6.24    Upper limit UL = 5.91 for n = 50 and α = 0.5%

Q = 6.24 > 5.91 = UL

An inspection of the component belonging to the outlier produced the result that an unworked piece from another material batch had accidentally been processed during this manufacture.

Representation of the measurement values of the characteristic X in the form of a dot sequence.
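A sketch of the outlier test on the example data; note that s is computed over all 50 values, including the suspect one (the mean shown in the figure above, 67.01, was computed without it):

```python
import math

machine_data = [68.4, 69.6, 66.5, 70.3, 70.8, 66.5, 70.7, 67.6, 67.9, 63.0,
                66.8, 65.3, 70.2, 74.1, 66.9, 65.4, 64.4, 66.1, 67.2, 69.5,
                67.9, 64.2, 67.3, 66.2, 61.7, 64.3, 61.8, 63.1, 62.4, 68.3,
                67.3, 62.5, 65.3, 68.0, 67.4, 66.7, 86.0, 67.1, 69.8, 65.3,
                73.0, 70.9, 67.3, 67.9, 67.4, 65.1, 71.2, 62.0, 67.5, 67.4]

n = len(machine_data)
mean = sum(machine_data) / n
s = math.sqrt(sum((x - mean) ** 2 for x in machine_data) / (n - 1))
q = (max(machine_data) - min(machine_data)) / s

suspect = max(machine_data, key=lambda x: abs(x - mean))  # value farthest from the mean
print(round(q, 2), suspect)  # 6.24 86.0 -> Q > UL = 5.91, treat 86.0 as an outlier
```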


Normal Distribution, Estimation of Fractions Nonconforming

Problem:

A sample of n values xi of a normally distributed quantity x, e.g. a characteristic of a product, is present. A tolerance interval is given for the characteristic, which is limited by the lower specification limit LSL and the upper specification limit USL. On the basis of the sample, the portions of the characteristic values are to be estimated which lie a) below LSL, b) between LSL and USL, c) above USL.

Input quantities:

x̄ and s of the sample, LSL, USL

Procedure:

1. Calculate

   u1 = (LSL − x̄)/s   and   u2 = (USL − x̄)/s

2. Determine the values Φ(u1) and Φ(u2) from the table of the standard normal distribution (Table 4)

Result:

a) Below LSL lie Φ(u1) · 100%

b) Between LSL and USL lie (Φ(u2) − Φ(u1)) · 100%

c) Above USL lie (1 − Φ(u2)) · 100%

of all values.

Remark:

The estimated values calculated in this manner are subject to an uncertainty which becomes greater the smaller the sample size n is and the more the distribution of the characteristic deviates from the normal distribution. If fractions nonconforming are to be given in ppm (1 ppm = 1 part per million), the factor 100% is to be replaced by 1,000,000 ppm in the calculation of the outcome.


Example:    x̄ = 64.15 mm    s = 0.5 mm    LSL = 63.1 mm    USL = 65.0 mm

u1 = (63.1 mm − 64.15 mm)/0.5 mm = −2.1

u2 = (65.0 mm − 64.15 mm)/0.5 mm = 1.7


Normal Distribution, Estimation of Confidence Limits

Problem:

A sample of n values xi of a normally distributed quantity x is present, e.g. a characteristic of a product. The limits LSL and USL of a region symmetric to x̄ are to be given in which the portion p (e.g. 99%) of the population of all characteristic values upon which the sample is based lies with the probability P (e.g. 95%).

Input quantities:

x̄ and s of the sample, P, p, n

Procedure:

1. Calculate the number

   Φ = (1 + p)/2

2. Find the number u for the value Φ in the table of the standard normal distribution (Table 4).

3. Find the factor k for the number n of the values and the probability P (Table 5).

4. Calculate

   LSL = x̄ − k · u · s   and   USL = x̄ + k · u · s

Result:

The portion p of all characteristic values lies between the limits x̄ − k · u · s and x̄ + k · u · s with the probability P.


Example for the estimation of confidence limits:

The following injection volumes were measured during experiments on a test bench on 25 injection valves (mass per 1000 strokes in g):

7.60 7.64 7.66 7.71 7.66 7.52 7.70 7.56 7.66 7.60
7.60 7.64 7.63 7.65 7.59 7.59 7.55 7.62 7.67 7.69
7.62 7.70 7.60 7.64 7.71

A recording of these measurement outcomes on probability paper indicates that a normal distribution can be assumed. The development department would like to fix a tolerance interval [LSL, USL] symmetric to the mean in which p = 95% of all injection volumes of the total population lie with a probability of P = 99%.

x̄ = 7.6324    s = 0.0506    n = 25

p = 95%  ⇒  Φ = (1 + p)/2 = 1.95/2 = 0.975

u(0.975) = 1.96    k(n = 25, P = 99%) = 1.52

LSL = 7.6324 − 1.52 · 1.96 · 0.0506 ≈ 7.48

USL = 7.6324 + 1.52 · 1.96 · 0.0506 ≈ 7.78
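A sketch of the whole calculation; u is obtained from `statistics.NormalDist` instead of Table 4, while the factor k = 1.52 must still be taken from Table 5 (it is hard-coded here):

```python
import math
from statistics import NormalDist

volumes = [7.60, 7.64, 7.66, 7.71, 7.66, 7.52, 7.70, 7.56, 7.66, 7.60,
           7.60, 7.64, 7.63, 7.65, 7.59, 7.59, 7.55, 7.62, 7.67, 7.69,
           7.62, 7.70, 7.60, 7.64, 7.71]

n = len(volumes)
xbar = sum(volumes) / n
s = math.sqrt(sum((x - xbar) ** 2 for x in volumes) / (n - 1))

p = 0.95
u = NormalDist().inv_cdf((1 + p) / 2)  # u(0.975) = 1.96
k = 1.52                               # Table 5, n = 25, P = 99%

lsl = xbar - k * u * s
usl = xbar + k * u * s
print(round(lsl, 2), round(usl, 2))  # 7.48 7.78
```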


Confidence Interval for the Mean

Problem:

The characteristic quantities mean x̄ and standard deviation s calculated from n sample values represent merely estimates of the generally unknown characteristic quantities µ and σ of the population of all values upon which the sample is based. A region should be given around x̄ in which µ lies with great probability P (e.g. 99%).

Input quantities:

x̄, s, n, P

Procedure:

1. Find the quantity t for the degrees of freedom f = n − 1 and the desired probability P (e.g. 99%) in Table 6.

2. Calculate

   the lower interval limit  x̄ − t · s/√n   and the

   upper interval limit  x̄ + t · s/√n .

Result:

The unknown mean µ lies between the calculated interval limits with the probability P (e.g. 99%), i.e. the following applies:

x̄ − t · s/√n  ≤  µ  ≤  x̄ + t · s/√n .


Example:

Within the scope of a process capability analysis, the mean x̄ = 74.51 and the standard deviation s = 1.38 were determined from the n = 125 individual values of a completely filled-out quality control chart (x̄-s chart). Corresponding with the outcome of the stability test, the process possesses a stable average.

For P = 99% and f = 125 − 1 = 124, one finds the value t = 2.63 in Table 6 (this value belongs to the next smaller degree of freedom f = 100 listed in the table).

Use of both formulas produces:

lower interval limit: 74 51 2 63 138125

7418. . . .− ⋅ =

upper interval limit: 74 51 2 63 138125

74 83. . . .+ ⋅ = .

The unknown process average µ lies with a probability of 99% within the interval [74.18; 74.83].

(Figure: 95% confidence interval of the mean µ as a function of the sample size n; top: factors +t/√n, bottom: factors −t/√n.)

The representation illustrates the dependence of the confidence interval of the mean µ on the sample size n.
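The interval can be recomputed directly; t = 2.63 is still read from Table 6 (hard-coded below), and the last digit of the lower limit can differ slightly from the booklet's rounded 74.18:

```python
import math

xbar, s, n = 74.51, 1.38, 125
t = 2.63  # Table 6, P = 99%, entry for f = 100 (next smaller tabulated f)

half_width = t * s / math.sqrt(n)
lower = xbar - half_width
upper = xbar + half_width
print(f"[{lower:.2f}; {upper:.2f}]")  # approximately [74.19; 74.83]
```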


Confidence Interval for the Standard Deviation

Problem:

The variance s² calculated from n sample values is an estimate of the unknown variance σ² of the population of all values upon which the sample is based. A confidence interval should be given for the standard deviation σ; an interval, then, around s in which σ lies with great probability P (e.g. 99%).

Input quantities:

s, n, P

Procedure:

1. Find the quantities c1 and c2 for sample size n and the selected probability P (e.g. 99%) in Table 7.

2. Calculate

   the lower interval limit  c1 · s   and the

   upper interval limit  c2 · s .

Result:

The unknown standard deviation σ lies between the calculated interval limits with the probability P (e.g. 99%). Applicable, then, is:

c1 · s  ≤  σ  ≤  c2 · s .


Example:

A measuring instrument capability analysis includes the determination of the repeatability standard deviation s_W. For this, at least 25 measurements are conducted on one standard and evaluated.

Within the scope of such a test on an inductivity measurement device, the standard deviation s = s_W = 2.77 was calculated from n = 25 measurement outcomes. The measurement outcomes are approximately normally distributed.

For sample size n = 25 and probability P = 99% one finds the values c₁ = 0.72 and c₂ = 1.56 in Table 7. Insertion produces:

   lower interval limit: c₁⋅s = 0.72⋅2.77 = 1.99

   upper interval limit: c₂⋅s = 1.56⋅2.77 = 4.32 .
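The table factors c₁ and c₂ can be reproduced from the chi-squared distribution. A sketch using scipy (not part of the handbook); the mapping of the "99%" columns to the 0.5% and 99.5% chi-squared quantiles is inferred from the tabulated values 0.72 and 1.56, not stated in the text:

```python
import math
from scipy import stats

# Example data: s_W = 2.77 from n = 25 repeated measurements, P = 99%.
n, s = 25, 2.77
f = n - 1

# Assumed relation behind Table 7 (two-sided interval):
# c1 = sqrt(f / chi2(0.995, f)), c2 = sqrt(f / chi2(0.005, f)).
c1 = math.sqrt(f / stats.chi2.ppf(0.995, f))
c2 = math.sqrt(f / stats.chi2.ppf(0.005, f))

print(f"c1 = {c1:.2f}, c2 = {c2:.2f}")
print(f"interval = [{c1 * s:.2f}; {c2 * s:.2f}]")
```

The computed factors agree with the tabulated 0.72 and 1.56 to within rounding.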

[Figure: 95% confidence interval of the standard deviation σ over the sample size n; top curve: factors c₂, bottom curve: factors c₁.]

The representation illustrates the dependency of the confidence interval of the standard deviation σ on the sample size n.


Comparison of Two Variances (F-Test)

Problem:

It should be decided whether or not the variances s₁² and s₂² of two data sets of size n₁ and n₂ are significantly different.

Input quantities:

s₁², s₂², n₁, n₂

Procedure:

1. Calculate the test statistic F

      F = s₁² / s₂² .

   s₁² is the greater of the two variances and is located above the fraction line.

   n₁ is the sample size belonging to s₁².

2. Compare the test statistic F with the table values F(95%) and F(99%) to the degrees of freedom f₁ = n₁ − 1 and f₂ = n₂ − 1 (Table 9).

Test outcome:

If ...                    ... the statistical difference between both variances is

F ≥ F(99%)                highly significant

F(99%) > F ≥ F(95%)       significant

F(95%) > F                insignificant


Example for the comparison of two variances:

By using two different granular materials A and B, 10 injection molding parts were respectively manufactured and measured. Both measurement series, which give the relative shrinkage of the parts, should be compared with respect to their variances.

Measurement outcomes (relative shrinkage)
Gran. mat. A   0.16 0.30 0.26 0.24 0.33 0.28 0.24 0.18 0.35 0.30
Gran. mat. B   0.32 0.26 0.36 0.22 0.14 0.23 0.40 0.19 0.32 0.12

Evaluation       x̄       s       s²
Gran. mat. A     0.264   0.06    0.0037
Gran. mat. B     0.256   0.093   0.0087

F-Test: F = 0.0087/0.0037 = 2.35 .   Degrees of freedom: f₁ = 9, f₂ = 9

Table values: F(95%) = 4.03   F(99%) = 6.54   (Table 9)

Test decision:

Because F < F(95%), the difference between the two variances is insignificant.

[Dot frequency diagram of the measurement outcomes for granular materials A and B.]
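The F-test above can be reproduced numerically. A sketch using scipy (not part of the handbook); note that the tabulated values F(95%) = 4.03 and F(99%) = 6.54 correspond to the 97.5% and 99.5% quantiles of the F distribution (two-sided test) — this mapping is inferred from the numbers, not stated in the text:

```python
import numpy as np
from scipy import stats

a = np.array([0.16, 0.30, 0.26, 0.24, 0.33, 0.28, 0.24, 0.18, 0.35, 0.30])
b = np.array([0.32, 0.26, 0.36, 0.22, 0.14, 0.23, 0.40, 0.19, 0.32, 0.12])

va, vb = a.var(ddof=1), b.var(ddof=1)          # sample variances
F = max(va, vb) / min(va, vb)                  # greater variance in the numerator

# Assumed two-sided quantiles behind Table 9 for f1 = f2 = 9:
F95 = stats.f.ppf(0.975, 9, 9)
F99 = stats.f.ppf(0.995, 9, 9)
print(f"F = {F:.2f}, F(95%) = {F95:.2f}, F(99%) = {F99:.2f}")
```

With unrounded variances one obtains F ≈ 2.32 instead of the handbook's 2.35 (which uses the rounded variances 0.0087 and 0.0037); the conclusion F < F(95%) is unchanged.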


Comparison of Two Means (Equal Variances)

Problem:

It should be decided whether the means x̄₁ and x̄₂ of two data sets of size n₁ and n₂ are significantly different or whether both data sets originate from a common population.

Input quantities:

x̄₁, x̄₂, s₁², s₂², n₁, n₂

Prerequisite:

It was indicated by an F-test that the variances s₁² and s₂² are not significantly different.

Procedure:

1. Calculate the test statistic

      t = √( n₁⋅n₂⋅(n₁ + n₂ − 2)/(n₁ + n₂) ) ⋅ |x̄₁ − x̄₂| / √( (n₁ − 1)⋅s₁² + (n₂ − 1)⋅s₂² )   for n₁ ≠ n₂

   and

      t = √n ⋅ |x̄₁ − x̄₂| / √( s₁² + s₂² )   for n₁ = n₂ = n .

2. Compare the test statistic t with the table values t(95%) and t(99%) to the degrees of freedom f = n₁ + n₂ − 2 (Table 6).

Test outcome:

If ...                    ... the statistical difference between both means is

t ≥ t(99%)                highly significant

t(99%) > t ≥ t(95%)       significant

t(95%) > t                insignificant


Example (t-Test):

A new welding process is to be introduced in a welding procedure. The tensile strength of the weld joint was measured on 10 parts each, processed with the old process and with the new process.
On the basis of the measured values (force in kN), it should be decided whether the new procedure produces tensile strengths which are significantly better.

Measurement outcomes                                    x̄      s      s²
Old process   2.6 2.0 1.9 1.7 2.1 2.2 1.4 2.4 2.0 1.6   1.99   0.36   0.13
New process   2.1 2.9 2.4 2.5 2.5 2.8 1.9 2.7 2.7 2.3   2.48   0.31   0.1

F-test: F = 0.13/0.1 = 1.3 .   Degrees of freedom: f₁ = 9, f₂ = 9

Table values: F(95%) = 4.03   F(99%) = 6.54

The variances of the measurement outcomes are not significantly different, and the prerequisite for the t-test is thereby fulfilled.

   t = √10 ⋅ |1.99 − 2.48| / √( 0.13 + 0.1 ) = √10 ⋅ 0.49/0.48 = 3.2 .

n₁ = n₂ = 10   Degrees of freedom: f = 10 + 10 − 2 = 18

Table values: t(95%) = 2.10   t(99%) = 2.88   (Table 6)

t = 3.2 > 2.88 = t(99%)

The difference in the means of the two measurement series is highly significant.

Since the mean x̄₂ is greater than x̄₁, the tensile strength increased, then, and the new procedure should be introduced.

[Dot frequency diagram of the measured values for the old and the new process.]
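The t-test for equal sample sizes can be checked numerically; for n₁ = n₂ the handbook formula gives exactly the pooled-variance t statistic. A sketch using scipy (not part of the handbook):

```python
import numpy as np
from scipy import stats

old = np.array([2.6, 2.0, 1.9, 1.7, 2.1, 2.2, 1.4, 2.4, 2.0, 1.6])
new = np.array([2.1, 2.9, 2.4, 2.5, 2.5, 2.8, 1.9, 2.7, 2.7, 2.3])

n = len(old)
# Handbook formula for n1 = n2 = n: t = sqrt(n) * |x1 - x2| / sqrt(s1^2 + s2^2)
t_hand = np.sqrt(n) * abs(old.mean() - new.mean()) / np.sqrt(
    old.var(ddof=1) + new.var(ddof=1))

# The same statistic via scipy's pooled-variance two-sample t-test:
t_scipy, p = stats.ttest_ind(old, new, equal_var=True)

print(f"t = {t_hand:.2f}, p = {p:.4f}")
```

With unrounded variances t ≈ 3.22, matching the handbook's 3.2; the p-value below 0.01 confirms the "highly significant" verdict.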


Comparison of Two Means (Unequal Variances)

Problem:

It should be decided whether the means x̄₁ and x̄₂ of two data sets are significantly different or whether the two data sets originate from a common population.

Input quantities:

x̄₁, x̄₂, s₁², s₂², n₁, n₂

Prerequisite:

It was indicated by an F-test that the difference between s₁² and s₂² is not coincidental.

Procedure:

1. Calculate the test statistic t:

      t = |x̄₁ − x̄₂| / √( s₁²/n₁ + s₂²/n₂ ) .

2. Calculate the degrees of freedom f:

      f = ( s₁²/n₁ + s₂²/n₂ )² / ( (s₁²/n₁)²/(n₁ − 1) + (s₂²/n₂)²/(n₂ − 1) ) .

3. Compare the test statistic t with the table values t(95%) and t(99%) (Table 6) to the degrees of freedom f (round f to a whole number!).

Test outcome:

If ...                    ... the statistical difference between both means is

t ≥ t(99%)                highly significant

t(99%) > t ≥ t(95%)       significant

t(95%) > t                insignificant


Example:

An adjustment standard was measured many times with two different measurement procedures A and B.
With procedure A, n₁ = 8, and with procedure B, n₂ = 16 measurements were conducted. The following deviations from the nominal value of the adjustment standard were determined:

Measured values (procedure A)
4.4 4.1 1.0 3.8 2.3 4.4 6.3 2.9

Measured values (procedure B)
6.9 7.6 7.6 8.2 8.2 8.1 8.5 7.9
7.2 8.0 9.3 8.1 7.1 7.7 7.3 6.9

It should be decided whether the means of both data sets are significantly different or not. Significantly different means mean in this case that both measurement procedures have significantly different systematic errors.

Evaluation               nᵢ    x̄ᵢ       sᵢ²
Procedure A (Index 1)     8    3.65     2.54
Procedure B (Index 2)    16    7.7875   0.4065

F-test: F = 2.54/0.4065 = 6.25 > 4.85 = F(99%) for f₁ = 7, f₂ = 15   (Table 9)

The difference between the two variances is not coincidental.

Test statistic: t = |3.65 − 7.7875| / √( 2.54/8 + 0.4065/16 ) = 7.07 .

Degrees of freedom: f = ( 2.54/8 + 0.4065/16 )² / ( (2.54/8)²/(8 − 1) + (0.4065/16)²/(16 − 1) ) ≈ 8

Comparison with the table value: t = 7.07 > 3.355 = t(99%)   (Table 6)

Test outcome: The difference between the means is highly significant.
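The unequal-variance (Welch) procedure can be reproduced directly from the two formulas. A sketch in Python with numpy (not part of the handbook):

```python
import numpy as np

a = np.array([4.4, 4.1, 1.0, 3.8, 2.3, 4.4, 6.3, 2.9])       # procedure A
b = np.array([6.9, 7.6, 7.6, 8.2, 8.2, 8.1, 8.5, 7.9,
              7.2, 8.0, 9.3, 8.1, 7.1, 7.7, 7.3, 6.9])       # procedure B

n1, n2 = len(a), len(b)
v1, v2 = a.var(ddof=1), b.var(ddof=1)

# Test statistic and degrees of freedom as in the procedure above:
t = abs(a.mean() - b.mean()) / np.sqrt(v1 / n1 + v2 / n2)
f = (v1 / n1 + v2 / n2) ** 2 / (
    (v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))

print(f"t = {t:.2f}, f = {f:.1f}")
```

The computation reproduces t ≈ 7.07 and f ≈ 8.1, which rounds to the 8 degrees of freedom used in the example.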


Minimum Sample Size

With the assistance of a t-test it can be decided whether the means of two samples (data sets, measurement series) are significantly different. This decision becomes more certain the greater the number of available measurement values.

Since, though, expense and costs increase with the number of measurements (experiments), an experimenter should already consider during the preparation of the experiment which minimum mean difference is of interest to him and which minimum sample size n he must choose, so that this minimum difference of means will be recognizable (significant) on the basis of the evaluation of the experiment.

It is necessary here to give at least a rough estimate of the standard deviation s of the measurement values to be expected and to express the difference of means of interest as a multiple of s.

The following rule of thumb is used in practice.

Mean interval which          Necessary minimum number n of
should be recognized:        values per measurement series:

3⋅s                          n ≥ 6

2⋅s                          n ≥ 15

1⋅s                          n ≥ 30

Remark:

In the comparison of means of two measurement series and the corresponding test decision, two types of error are possible.
In the first case, both measurement series originate from the same population, i.e. there is no significant difference. If one decides on the basis of a t-test that a difference exists between the two means, one is making a type 1 error, α. It corresponds to the level of significance of the t-test (e.g. α = 1%).

If, in the second case, an actual difference of the means exists, i.e. the measurement series originate from two different populations, this is not indicated with absolute certainty by the test. The test outcome can coincidentally indicate that this difference does not exist. One refers in this case to a type 2 error, β.

Both errors can be uncomfortable for an experimenter, since he may possibly suggest expensive continued examinations or even modifications of a production procedure on the basis of the apparently significant effect of a factor (type 1 error), or because he fails to recognize the significant effect that is really present, so that the chance for possible procedure improvements escapes him (type 2 error).

The minimum sample size n which is necessary in order to recognize an actual mean difference is, corresponding to the above plausibility consideration, dependent on the interval between the two means (given in units of the standard deviation σ) and on the levels of significance α and β.


Comparison of Several Means (Simple Analysis of Variance)

Problem:

The outcome of an experimental investigation with k individual experiments, which were repeated respectively n times, is represented by k means ȳᵢ and k variances sᵢ²:

Experiment No.   Outcomes                  Mean   Variance
1                y₁₁, y₁₂, ..., y₁ₙ        ȳ₁     s₁²
2                y₂₁, y₂₂, ..., y₂ₙ        ȳ₂     s₂²
...              ...                       ...    ...
k                yₖ₁, yₖ₂, ..., yₖₙ        ȳₖ     sₖ²

It should be decided whether the means ȳᵢ of the k experiments are significantly different, or whether the differences are only coincidental. This procedure is usually used in the first evaluation of experiments on the basis of statistical experimental designs (compare with factorial analysis of variance).

Input quantities:

ȳᵢ, sᵢ², n, k

Procedure:

1. Calculate the mean variance

      s̄² = (1/k) ⋅ (s₁² + s₂² + ... + sₖ²) .

2. Calculate the variance of the means

      s_ȳ² = (1/(k − 1)) ⋅ Σᵢ (ȳᵢ − ȳ)²   with   ȳ = (1/k) ⋅ Σᵢ ȳᵢ   (total mean).

3. Compare the test statistic

      F = n ⋅ s_ȳ² / s̄²

   with the table values F(95%) and F(99%) to the degrees of freedom f₁ = k − 1 and f₂ = (n − 1)⋅k (Table 8).


Test outcome:

If ...                    ... the statistical difference between the means is

F ≥ F(99%)                highly significant

F(99%) > F ≥ F(95%)       significant

F(95%) > F                insignificant

Example for the comparison of several means:

Sheet metal with a defined hardness from three suppliers is considered. With the test results of three samples, it should be explained whether a significant difference exists between the hardness values of the sheet metals of the suppliers.

Outcomes yᵢⱼ of the Hardness Measurements                    ȳᵢ     sᵢ     sᵢ²
Supplier 1   16 21 22 26 28 31 17 24 11 20 34 23 20 12 26   22.1   6.49   42.07
Supplier 2   26 29 24 18 27 27 21 36 28 20 32 32 22 27 24   26.2   4.90   24.03
Supplier 3   24 25 25 23 26 23 27 20 21 25 22 25 24 23 26   23.9   1.94   3.78

Mean variance: s̄² = (42.07 + 24.03 + 3.78)/3 = 23.29

Variance of the three means: s_ȳ² = 4.28   (pocket calculator!)

Test statistic: F = 15⋅4.28/23.29 = 2.76

Degrees of freedom: f₁ = 3 − 1 = 2,   f₂ = (15 − 1)⋅3 = 42

Table values: F(95%) = 3.2   F(99%) = 5.1   F < F(95%)   (Table 8)

No significant difference exists between the means of the sheet metal hardness.

There is clearly a difference in the variances of the three samples (compare with the test for equality of several variances).

[Dot frequency diagram of the hardness values for suppliers 1, 2 and 3.]
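The simple analysis of variance above can be reproduced numerically; for a balanced design the handbook statistic F = n⋅s_ȳ²/s̄² coincides with the classical one-way ANOVA F. A sketch using scipy (not part of the handbook):

```python
import numpy as np
from scipy import stats

s1 = np.array([16, 21, 22, 26, 28, 31, 17, 24, 11, 20, 34, 23, 20, 12, 26])
s2 = np.array([26, 29, 24, 18, 27, 27, 21, 36, 28, 20, 32, 32, 22, 27, 24])
s3 = np.array([24, 25, 25, 23, 26, 23, 27, 20, 21, 25, 22, 25, 24, 23, 26])
groups = [s1, s2, s3]
n, k = 15, 3

mean_var = np.mean([g.var(ddof=1) for g in groups])     # mean variance
var_means = np.var([g.mean() for g in groups], ddof=1)  # variance of the means
F = n * var_means / mean_var

F_scipy, p = stats.f_oneway(s1, s2, s3)                 # same F statistic
print(f"F = {F:.2f}, p = {p:.3f}")
```

The result F ≈ 2.76 matches the example, and p > 0.05 confirms that the difference between the means is insignificant.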


Comparison of Several Variances

Problem:

The outcome of an experimental investigation with k individual experiments, which were respectively repeated n times, is represented by k means x̄ᵢ and k variances sᵢ²:

Experiment No.   Outcomes                  Mean   Variance
1                x₁₁, x₁₂, ..., x₁ₙ        x̄₁     s₁²
2                x₂₁, x₂₂, ..., x₂ₙ        x̄₂     s₂²
...              ...                       ...    ...
k                xₖ₁, xₖ₂, ..., xₖₙ        x̄ₖ     sₖ²

It is to be decided whether the variances sᵢ² of the k experiments are significantly different or whether the differences are only coincidental.

Input quantities: xᵢⱼ, x̄ᵢ, sᵢ²

Procedure:

1. Calculate by rows the absolute deviation of the measurement outcomes xᵢⱼ from the means x̄ᵢ. This corresponds to a conversion according to the rule

      yᵢⱼ = | xᵢⱼ − x̄ᵢ | .

2. Record the converted values yᵢⱼ corresponding to the above table and calculate the means ȳᵢ and the variances sᵢ².

Experiment No.   Outcomes                  Mean   Variance
1                y₁₁, y₁₂, ..., y₁ₙ        ȳ₁     s₁²
2                y₂₁, y₂₂, ..., y₂ₙ        ȳ₂     s₂²
...              ...                       ...    ...
k                yₖ₁, yₖ₂, ..., yₖₙ        ȳₖ     sₖ²

3. Conduct the procedure for the comparison of several means (p. 38) with the assistance of the means ȳᵢ and variances sᵢ² of the converted values, i.e. F-test with the test statistic

      F = n ⋅ s_ȳ² / s̄² .   Degrees of freedom: f₁ = k − 1, f₂ = (n − 1)⋅k (Table 8)


Outcome:

If F is greater than the quantile F(k − 1; (n − 1)⋅k; 0.99), for example, it is concluded with a significance level of 1% that the variances sᵢ² of the original values xᵢⱼ are different.

Remark:

Within the scope of tests on robust designs (or processes), it is frequently of interest to find settings of test parameters with which the test outcomes show a variability (variance) as small as possible.
For this purpose it makes sense to first examine, with the assistance of the test described above, whether the variances of the outcomes are significantly different in the individual experiment rows (compare with factorial analysis of variance).

Example for the Test for the Equality of Several Variances:

Assumption of the values x̄₁ = 22.07, x̄₂ = 26.2, x̄₃ = 23.93 and xᵢⱼ from p. 39

Conversion of the measurement values: yᵢⱼ = | xᵢⱼ − x̄ᵢ |

Outcomes of the Conversion
Supplier 1   6.07 1.07 0.07 3.93 5.93 8.93 5.07 1.93 11.07 2.07 11.93 0.93 2.07 10.07 3.93
Supplier 2   0.2  2.8  2.2  8.2  0.8  0.8  5.2  9.8  1.8  6.2  5.8  5.8  4.2  0.8  2.2
Supplier 3   0.07 1.07 1.07 0.93 2.07 0.93 3.07 3.93 2.93 1.07 1.93 1.07 0.07 0.93 2.07

Evaluation     ȳᵢ     sᵢ     sᵢ²
Supplier 1     5.0    3.9    15.23
Supplier 2     3.79   2.94   8.67
Supplier 3     1.55   1.1    1.22

Mean of the variances: s̄² = (s₁² + s₂² + s₃²)/3 = (15.23 + 8.67 + 1.22)/3 = 8.37

Variance of the means: s_ȳ² = 3.06   (pocket calculator!)

Test statistic: F = n ⋅ s_ȳ² / s̄² = 15⋅3.06/8.37 = 5.5 .   Degrees of freedom: f₁ = k − 1 = 2,

f₂ = (n − 1)⋅k = 14⋅3 = 42

Table value: F(2; 42; 0.99) ≈ 5.1   (Table 8)   F > F_Table

Test decision: The sheet metal hardnesses of the three suppliers have highly significantly different variances.
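The conversion yᵢⱼ = |xᵢⱼ − x̄ᵢ| followed by the comparison-of-means F-test is exactly the classical Levene test with mean centering (for a balanced design the two statistics coincide). A sketch using scipy (not part of the handbook):

```python
import numpy as np
from scipy import stats

s1 = np.array([16, 21, 22, 26, 28, 31, 17, 24, 11, 20, 34, 23, 20, 12, 26])
s2 = np.array([26, 29, 24, 18, 27, 27, 21, 36, 28, 20, 32, 32, 22, 27, 24])
s3 = np.array([24, 25, 25, 23, 26, 23, 27, 20, 21, 25, 22, 25, 24, 23, 26])
groups = [s1, s2, s3]
n = 15

# Convert to absolute deviations from the group means ...
y = [np.abs(g - g.mean()) for g in groups]
# ... and run the comparison-of-means F-test on the converted values.
F = n * np.var([g.mean() for g in y], ddof=1) / np.mean([g.var(ddof=1) for g in y])

# Identical to Levene's test with mean centering:
W, p = stats.levene(s1, s2, s3, center='mean')
print(f"F = {F:.2f}, W = {W:.2f}, p = {p:.4f}")
```

With unrounded group means one obtains F ≈ 5.5, matching the example; p < 0.01 confirms the highly significant difference in variances.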


Factorial Analysis of Variance

Problem:

On the basis of an experimental design with k rows and n experiments per row, factors were examined at two respective levels. The outcomes can be represented, in general, as they appear in the following example of an assessment matrix of a four-row plan for the columns of the factors A and B as well as the column of the interaction AB.

Experiment   Factors         Outcomes                 Means   Variances
No.          A    B    AB    yᵢⱼ                      ȳᵢ      sᵢ²
1            -    -    +     y₁₁, y₁₂, ..., y₁ₙ       ȳ₁      s₁²
2            +    -    -     y₂₁, y₂₂, ..., y₂ₙ       ȳ₂      s₂²
3            -    +    -     y₃₁, y₃₂, ..., y₃ₙ       ȳ₃      s₃²
4            +    +    +     y₄₁, y₄₂, ..., y₄ₙ       ȳ₄      s₄²

It is to be decided for each factor X whether it has a significant influence on the means ȳᵢ of the experiment outcomes.

Input quantities:

Means ȳᵢ and variances sᵢ² of the outcome of each row, number of rows k, number of experiments per row n

Procedure:

1. Calculate the average variance s̄_y²

      s̄_y² = (1/k) ⋅ Σᵢ sᵢ² .

   This quantity is a measure for the experimental noise.

2. Calculate the variance s_x² of the means of the measurement values per level of factor X (see remark on the next page). The mean of all measurement outcomes is calculated, then, in which factor X (in the corresponding column) has the level +, and the mean of all measurement outcomes in which factor X has the level −.
   The variance s_x² is calculated from these two means.

3. Calculate the test statistic

      F = s_x² ⋅ (number of measurement values per level) / s̄_y² .


4. Compare the test statistic F with the table values F(95%) and F(99%) to the degrees of freedom f₁ = (number of levels) − 1 and f₂ = (n − 1)⋅k (Table 8). In the case of a two-level design (see above), f₁ = 1.

Test outcome:

If ...                    ... the statistical influence of factor X on the measurement outcome (means) is

F ≥ F(99%)                highly significant

F(99%) > F ≥ F(95%)       significant

F(95%) > F                insignificant

Remark:

Each factor X corresponds to a column in the evaluation matrix. The procedure described is to be conducted separately for each individual column.
A detailed explanation of this assessment method is a component of the seminar on Design of Experiments (DOE).
Depending on the size of the design present and the number of experiments per row, the manual assessment can be quite laborious. Because the risk of calculation error exists as well, the use of a suitable computer program is recommended.

It is recommended to first conduct a simple analysis of variance (ANOVA) before the factorial ANOVA in order to determine whether the outcomes in the experiment rows are at all significantly different. If this is not the case, the conducting of a factorial ANOVA is, naturally, not sensible.


Example of the factorial analysis of variance:

The electrode gap and the drawing dimension C of a spark plug are to be investigated with regard to their influence on the ignition voltage.
For this, a 2² design was conducted under defined laboratory conditions in which the electrode gap (factor A) and the drawing dimension C (factor B) were varied at two respective levels:

            Level −   Level +
Factor A    0.9 mm    1.0 mm
Factor B    0.2 mm    0.4 mm

Experiment   Factors         Measurement outcomes yᵢⱼ
No.          A    B    AB    Ignition voltage in kV
1            -    -    +     10.59 11.00 9.91 11.25 11.43 9.9
2            +    -    -     13.41 12.31 12.43 10.68 12.04 11.51
3            -    +    -     16.32 17.70 15.16 14.62 14.26 14.85
4            +    +    +     17.25 18.08 17.52 16.2 17.29 16.93

Evaluation

Experiment   Factors         Means     Variances
No.          A    B    AB    ȳᵢ        sᵢ²
1            -    -    +     10.680    0.4398
2            +    -    -     12.063    0.8458
3            -    +    -     15.485    1.6722
4            +    +    +     17.212    0.3919

First, a simple analysis of variance is conducted:

Average variance: s̄_y² = (1/4)⋅(0.4398 + 0.8458 + 1.6722 + 0.3919) = 0.8374

Total mean: ȳ = (1/4)⋅(10.68 + 12.063 + 15.485 + 17.212) = 13.86

Variance of the means: s_ȳ² = (1/(4 − 1)) ⋅ Σᵢ (ȳᵢ − ȳ)² = 9.0716   (pocket calculator!)

Test statistic: F = 6 ⋅ s_ȳ² / s̄_y² = 6⋅9.0716/0.8374 = 65

f₁ = 4 − 1 = 3 and f₂ = (6 − 1)⋅4 = 20

Table value: F(99%) = 4.94   (Table 8)   F > F(99%)

The difference of the means in the four experiments is highly significant.


Assessment of factor A:

Mean of all means in which A is on the level +:   (12.063 + 17.212)/2 = 14.6375

Mean of all means in which A is on the level −:   (10.68 + 15.485)/2 = 13.0825

The variance of these two means is s_x²(A) = (14.6375 − 13.0825)²/2 = 1.209 .

The number of measured values per level of factor A is 12. The degrees of freedom for the F-test are f₁ = 2 − 1 = 1 and f₂ = (6 − 1)⋅4 = 20 (this also applies to B and AB).

F = 1.209⋅12/0.8374 = 17.32   Table value: F(99%) = 8.1   F > F(99%)

The influence of the electrode gap on the ignition voltage is, then, highly significant.

Assessment of factor B:

Mean of all means in which B is on the level +:   (15.485 + 17.212)/2 = 16.3485

Mean of all means in which B is on the level −:   (10.68 + 12.063)/2 = 11.3715

The variance of these two means is s_x²(B) = (16.3485 − 11.3715)²/2 = 12.385 .

F = 12.385⋅12/0.8374 = 177.5   Table value: F(99%) = 8.1   F > F(99%)

The influence of the drawing dimension C on the ignition voltage is, then, highly significant.

Assessment of the interaction AB:

Mean of all means in which AB is on the level +:   (10.68 + 17.212)/2 = 13.946

Mean of all means in which AB is on the level −:   (12.063 + 15.485)/2 = 13.774

The variance of these two means is s_x²(AB) = (13.774 − 13.946)²/2 = 0.0148 .

F = 0.0148⋅12/0.8374 = 0.21   f₁ = 2 − 1 = 1 and f₂ = (6 − 1)⋅4 = 20

Table value: F(95%) = 4.35   F < F(95%)

It is obvious that no significant interaction exists between the electrode gap and the drawing dimension C.

Outcome: A value as high as possible was aimed for for the target quantity ignition voltage in this experiment. If the means of all means ȳᵢ are respectively taken into consideration in which a factor was set on the lower level (−) or the upper level (+), it becomes immediately clear which factor setting is the more favorable with respect to the target quantity optimization. In this example, the upper level (+) is to be selected, then, with both factors A and B.
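The whole factorial assessment can be reproduced numerically. A sketch in Python with numpy (not part of the handbook), following the steps of the procedure for each sign column:

```python
import numpy as np

# Rows of the 2^2 design; sign columns taken from the evaluation matrix.
y = np.array([
    [10.59, 11.00, 9.91, 11.25, 11.43, 9.9],     # run 1: A-, B-, AB+
    [13.41, 12.31, 12.43, 10.68, 12.04, 11.51],  # run 2: A+, B-, AB-
    [16.32, 17.70, 15.16, 14.62, 14.26, 14.85],  # run 3: A-, B+, AB-
    [17.25, 18.08, 17.52, 16.20, 17.29, 16.93],  # run 4: A+, B+, AB+
])
signs = {"A": [-1, 1, -1, 1], "B": [-1, -1, 1, 1], "AB": [1, -1, -1, 1]}

row_means = y.mean(axis=1)
noise = y.var(axis=1, ddof=1).mean()   # average variance (experimental noise)
n_per_level = y.size // 2              # 12 measured values per level

F_values = {}
for name, col in signs.items():
    col = np.array(col)
    plus = row_means[col > 0].mean()   # mean of the row means at level +
    minus = row_means[col < 0].mean()  # mean of the row means at level -
    s2 = (plus - minus) ** 2 / 2       # variance of these two means
    F_values[name] = s2 * n_per_level / noise
    print(f"{name}: F = {F_values[name]:.2f}")
```

The computation reproduces F ≈ 17.3 for A, F ≈ 177.5 for B and F ≈ 0.21 for AB, matching the example.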


Regression Line

Problem:

n ordered pairs (xᵢ; yᵢ) are given. A recording of the points belonging to these ordered pairs in an x-y diagram (correlation diagram) indicates that it makes sense to draw an approximating line (regression line) through this cluster of points.

Slope b and intercept a of the best-fit line y = a + b⋅x are to be calculated.

Input quantities:

xᵢ, yᵢ, x̄, ȳ, s_x², n

Procedure:

1. Calculation of slope b:

      b = (1/(n − 1)) ⋅ Σᵢ (xᵢ − x̄)⋅(yᵢ − ȳ) / s_x² = s_xy / s_x²

   The expression s_xy is called the covariance and can be calculated with the formula

      s_xy = (1/(n − 1)) ⋅ ( Σᵢ xᵢ⋅yᵢ − n⋅x̄⋅ȳ )

   (compare with correlation on page 48).

2. Calculation of a:

      a = ȳ − b⋅x̄

Remark:

a and b are determined in such a manner that the sum of the squared vertical deviations of the measurement values yᵢ from the values y(xᵢ) given by the regression line is minimal; the procedure is therefore called the least-squares method.

There are situations in which it is unclear whether the first value or the second value of an ordered pair will be respectively allocated to the x-axis. Here, both a recording of the points (xᵢ; yᵢ) and a recording of the points (yᵢ; xᵢ) are possible. For these two cases, in general, two different regression lines are produced.


Example of the calculation of a regression line:

The torque M of a starter is measured in dependence on the current I. A regression line y = a + b⋅x is to be calculated for 30 points (xᵢ; yᵢ).

Current I [A] (values xᵢ)
110 122 125 132 136 148 152 157 167 173
182 187 198 209 219 220 233 245 264 273
281 295 311 321 339 350 360 375 390 413

Torque M [Nm] (values yᵢ)
1.80 1.46 1.90 1.76 2.45 2.28 2.67 2.63 2.85 3.15
3.10 3.52 3.85 3.85 4.07 4.69 4.48 4.96 5.55 5.34
5.95 6.45 6.52 6.84 7.55 7.27 7.80 7.90 8.65 8.65

Allocation of the values:

(x₁; y₁) = (110; 1.80)   (x₂; y₂) = (122; 1.46)   (x₁₁; y₁₁) = (182; 3.10)   etc.

The following is produced: x̄ = 236.233   ȳ = 4.665   s_x² = 8060.6   n = 30

s_xy = ( 110⋅1.80 + 122⋅1.46 + ... + 413⋅8.65 − 30⋅236.2⋅4.67 ) / 29 = 200.372

One then finds: b = 200.372/8060.6 = 0.025

and a = 4.665 − 0.025⋅236.233 = −1.208 .

Equation of the straight line: y = −1.208 + 0.025⋅x
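The regression calculation can be reproduced numerically with numpy (not part of the handbook), using the covariance formula of the procedure:

```python
import numpy as np

current = np.array([110, 122, 125, 132, 136, 148, 152, 157, 167, 173,
                    182, 187, 198, 209, 219, 220, 233, 245, 264, 273,
                    281, 295, 311, 321, 339, 350, 360, 375, 390, 413])
torque = np.array([1.80, 1.46, 1.90, 1.76, 2.45, 2.28, 2.67, 2.63, 2.85, 3.15,
                   3.10, 3.52, 3.85, 3.85, 4.07, 4.69, 4.48, 4.96, 5.55, 5.34,
                   5.95, 6.45, 6.52, 6.84, 7.55, 7.27, 7.80, 7.90, 8.65, 8.65])

n = len(current)
# Covariance s_xy and slope b = s_xy / s_x^2, intercept a = ybar - b*xbar:
sxy = (np.sum(current * torque) - n * current.mean() * torque.mean()) / (n - 1)
b = sxy / current.var(ddof=1)
a = torque.mean() - b * current.mean()

print(f"y = {a:.3f} + {b:.4f} * x")
```

The result agrees with the handbook line y = −1.208 + 0.025⋅x to within rounding.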


Correlation Coefficient

Problem:

n ordered pairs (xᵢ; yᵢ) of two quantities X and Y are given. A recording of the points belonging to these ordered pairs in a correlation diagram (x-y diagram) shows a cluster of points which allows the presumption that a functional association (a correlation) exists between the quantities X and Y.

The correlation coefficient r, a measure for the quality of the presumed association, is to be calculated.

Input quantities:

xᵢ, yᵢ, x̄, ȳ, s_x, s_y, n

Procedure:

Calculation of r :

      r = (1/(n − 1)) ⋅ Σᵢ (xᵢ − x̄)⋅(yᵢ − ȳ) / (s_x ⋅ s_y) = s_xy / (s_x ⋅ s_y)

Remark:

r can assume values between −1 and +1. A value in the proximity of +1 (−1), e.g. 0.9 (−0.9), corresponds to a cluster of points which can be approximated quite well with a regression line with positive (negative) slope; one then refers to a strong positive (negative) correlation.

A strong correlation does not inevitably signify that Y is directly dependent on X; it can be produced by the dependence of both quantities X and Y on a third quantity Z (illusory correlation).

The expression s_xy in the numerator is called the covariance of the sample. It can also be calculated with the assistance of the formula

      s_xy = (1/(n − 1)) ⋅ ( Σᵢ xᵢ⋅yᵢ − n⋅x̄⋅ȳ ) .

In the example represented on the right, it is clear in advance that there is a correlation between X and Y. Here, the central question is that of the degree of the correlation. It is clear that the means of both data sets are different (one should observe the scaling!).


Example of the calculation of the correlation coefficient:

The flow rate of 25 injectors was measured at two different test benches with different test media (test fluids):

x̄ = 4.5324   ȳ = 4.8992   s_x = 0.0506   s_y = 0.042   n = 25

Σᵢ xᵢ⋅yᵢ = 4.50⋅4.87 + 4.54⋅4.92 + ... + 4.61⋅4.94 = 555.1733

s_xy = ( 555.1733 − 25⋅4.5324⋅4.8992 ) / 24 = 0.00187

r = 0.00187 / ( 0.0506⋅0.042 ) = 0.88

Test bench 1, Fluid 1, measured values xᵢ
Flow rate (mass) per 1000 strokes in g
4.50 4.54 4.56 4.61 4.56 4.42 4.60 4.46 4.56 4.50
4.50 4.54 4.53 4.55 4.49 4.49 4.45 4.52 4.57 4.59
4.52 4.60 4.50 4.54 4.61

Test bench 2, Fluid 2, measured values yᵢ
Flow rate (mass) per 1000 strokes in g
4.87 4.92 4.94 4.95 4.95 4.81 4.95 4.83 4.90 4.91
4.92 4.90 4.85 4.90 4.88 4.85 4.84 4.87 4.92 4.94
4.89 4.96 4.87 4.92 4.94


Law of Error Propagation

The Gaussian law of error propagation describes how the errors of measurement of several independent measurands xᵢ influence a target quantity z which is calculated according to a functional relation z = f(x₁, x₂, ..., xₖ):

      s_z² ≈ (∂z/∂x₁)²⋅s₁² + (∂z/∂x₂)²⋅s₂² + ... + (∂z/∂xₖ)²⋅sₖ² .

z is then, in general, only an indirectly measurable quantity. For example, the area of a rectangle can be determined by measuring the side lengths and multiplying the measurement outcomes with each other.

The accuracy with which z can be given is dependent on the accuracy of the measurands xᵢ. In general, the mean x̄ᵢ and the standard deviation sᵢ of a sequence of repeated measurements are respectively given for xᵢ.

In the above expression, ∂z/∂xᵢ designates the partial derivative of the function z with respect to the variable xᵢ. The derivatives are to be respectively calculated at the positions x̄ᵢ.
The derivation of the formula for s_z contains the development of z in a Taylor series and the neglecting of terms of higher order. For this reason, the approximation symbol takes the place of the equals sign, designating approximate equality.

The use of the law of error propagation is most easily comprehensible on the basis of an example.
We are considering two resistors, on each of which a sequence of repeated measurements is conducted. As the outcome of these measurement series, the mean and standard deviation are respectively given as the measure for the average error:

R₁ = 47 ± 0.8 Ω   R₂ = 68 ± 1.1 Ω .

In the case of the parallel connection, the total resistance R is to be calculated according to

      R = R₁⋅R₂ / (R₁ + R₂):   R = 47⋅68/(47 + 68) Ω = 27.8 Ω

In the calculation of the corresponding average error, the partial derivatives of R with respect to R₁ and R₂ are first required:

      ∂R/∂R₁ = R₂² / (R₁ + R₂)² = 68² / (47 + 68)² = 0.35

      ∂R/∂R₂ = R₁² / (R₁ + R₂)² = 47² / (47 + 68)² = 0.167 .


The average error of the total resistance is calculated using the law of error propagation:

      s_R² = 0.35²⋅0.8² Ω² + 0.167²⋅1.1² Ω² = 0.11 Ω²   ⇒   s_R = 0.33 Ω .

The corresponding result is: R = 27.8 ± 0.3 Ω .
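The parallel-resistor calculation can be reproduced in a few lines of plain Python (not part of the handbook):

```python
import math

R1, s1 = 47.0, 0.8          # ohms, mean and average error
R2, s2 = 68.0, 1.1

# Parallel connection: R = R1*R2 / (R1 + R2)
R = R1 * R2 / (R1 + R2)

# Partial derivatives of R, evaluated at the measured means:
dR1 = (R2 / (R1 + R2)) ** 2
dR2 = (R1 / (R1 + R2)) ** 2

# Gaussian law of error propagation:
sR = math.sqrt((dR1 * s1) ** 2 + (dR2 * s2) ** 2)
print(f"R = {R:.1f} ohm, s_R = {sR:.2f} ohm")
```

The computation reproduces R ≈ 27.8 Ω and s_R ≈ 0.33 Ω.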

The law of error propagation can be represented very easily if the function f, which designates the association of the independent measurands xᵢ, is a sum.

The partial derivatives are, then, all equal to 1.0 and the following is produced:

      s_z² = s₁² + s₂² + ... + sₖ² .

Correspondingly, the following applies for the variances of the corresponding total population:

      σ_z² = σ₁² + σ₂² + ... + σₖ² .

This means that the variance of an indirect measurand, which is determined by addition of independent individual measurands, is equal to the sum of the variances of the individual measurands.

Hint: The above equation is identical to a general relationship between the variances of several independent random variables. It can also be derived completely independently of the law of error propagation, solely on the basis of statistical calculation rules.

Example: In the case of a series connection of the two resistors, one finds:

      s_R² = s_R₁² + s_R₂² = (0.8 Ω)² + (1.1 Ω)²   ⇒   s_R = 1.36 Ω ,

that is, R = 115 ± 1.4 Ω .


Use of Weibull Paper

Problem:

The failure behavior of products can generally be described with the Weibull distribution. Important parameters of this distribution are the characteristic service life T and the shape parameter b. Estimated values of both of these parameters should be determined by recording the measured failure times tᵢ of n products (components) on Weibull paper.

Input quantities: Times tᵢ up to the failure of the i-th product

Procedure:

1.1 Determination of the cumulative frequencies H_j with a large sample n > 50.
• Subdivide the total observation time into k classes (compare to histogram).
• Determine the absolute frequencies n_j of the failure times falling into the individual classes by counting out.
• Calculate the cumulative frequency G_j = n₁ + n₂ + ... + n_j for j = 1, 2, ..., k,
  i.e. G₁ = n₁, G₂ = n₁ + n₂, ..., G_k = n₁ + n₂ + ... + n_k.
• Calculate the relative cumulative frequency H_j = (G_j / n)⋅100% for each class.

1.2 Determination of the cumulative frequencies H_j with a small sample 6 < n ≤ 50.
• Arrange the n individual values in order of magnitude. The smallest value has the rank of 1, the largest has the rank of n.
• Allocate to each rank number j the tabulated relative cumulative frequency H_j in Table 1 for sample size n.

2. Record the determined relative cumulative frequencies H_j over the logarithmic time axis (t-axis). In case 1.1, the recording occurs via the upper class limits (the highest class is inapplicable here), and in case 1.2 it occurs via the individual values tᵢ.

3. Draw an approximating line through the points. If the points drawn in lie adequately well on the straight line, one can conclude from this that the present failure characteristic can be described quite well by the Weibull distribution. Otherwise, a determination of T and b is not reasonable.

4. Go to the point of intersection of the straight line with the horizontal 63.2% line and drop a perpendicular onto the time axis. The characteristic life T can be read there.

5. Draw a line parallel to the approximating line through the center point (1; 63.2%) of the partial circle. The shape parameter b (slope) can then be read on the partial-circle scale.


Remark:

The scaling of the logarithmic time axis (t-axis) can be adjusted by multiplying the observed failure times with a suitable scale factor. If one selects, for example, the factor 10 h (10 hours), then the number 1 on the t-axis corresponds to the value 10 hours, the number 10 on the t-axis to the value 100 hours, and the number 100 on the t-axis to the value 1000 hours. Instead of the time, the scaling can also refer, for example, to the mileage, the number of load cycles, switching operations or work cycles.

Example:

A sample of 10 electric motors was examined according to a fixed test cycle on a continuous test bench. Up to the respective failure, the test objects achieved the following cycle counts (in increasing order and in units of 10^5 cycles):

Motor No.                       1     2     3     4     5     6     7     8     9     10
Cycles until failure / 10^5     0.41  0.55  0.79  0.92  1.1   1.1   1.4   1.5   1.8   1.9
Rel. cumulative freq. in %      6.2   15.9  25.5  35.2  45.2  54.8  64.8  74.5  84.1  93.8

Characteristic life T = 1.31 · 10^5 test cycles, shape factor b = 2.27.

Hint: With the aid of the transformations x_i = ln(t_i) and y_i = ln(ln(1/(1 − H_i))), b and T can also be calculated directly:

b = s_xy / s_x²   and   T = e^(x̄ − ȳ/b).

Here, x̄ and s_x are the mean and the standard deviation of the x_i, ȳ is the mean of the y_i, and s_xy is the covariance (compare with the section on regression lines) of the x_i and y_i (with i = 1, ..., k in case 1.1 and i = 1, ..., n in case 1.2).
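The direct calculation described in the hint can be checked numerically with the example data of the electric motors; a minimal sketch:

```python
import math

# Estimate b and T by least squares on the transformed points
# x_i = ln(t_i), y_i = ln(ln(1/(1 - H_i))).
t = [0.41, 0.55, 0.79, 0.92, 1.1, 1.1, 1.4, 1.5, 1.8, 1.9]      # failure times / 10^5
H = [6.2, 15.9, 25.5, 35.2, 45.2, 54.8, 64.8, 74.5, 84.1, 93.8]  # Table 1, n = 10, in %

x = [math.log(ti) for ti in t]
y = [math.log(math.log(1.0 / (1.0 - hi / 100.0))) for hi in H]

n = len(t)
xm = sum(x) / n
ym = sum(y) / n
sxy = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / (n - 1)  # covariance s_xy
sxx = sum((xi - xm) ** 2 for xi in x) / (n - 1)                     # variance s_x^2

b = sxy / sxx                   # shape parameter
T = math.exp(xm - ym / b)       # characteristic life, in units of 10^5 cycles
print(f"b = {b:.2f}, T = {T:.2f} * 10^5 cycles")
```

The division by n − 1 cancels in the quotient s_xy / s_x², so b does not depend on that convention; the result agrees with the values read from the Weibull paper (b ≈ 2.27, T ≈ 1.31 · 10^5).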

Representation of the points on Weibull paper:
x-axis (= t-axis): number of test cycles until failure
y-axis: relative cumulative frequency in percent


Tables


Table 1

Cumulative frequencies H_i(n) (in percent) for the recording of the points (x_i, H_i) from ranked samples on probability paper

 i    n=6    n=7    n=8    n=9    n=10   n=11   n=12   n=13   n=14   n=15

 1    10.2    8.9    7.8    6.8    6.2    5.6    5.2    4.8    4.5    4.1
 2    26.1   22.4   19.8   17.6   15.9   14.5   13.1   12.3   11.3   10.6
 3    42.1   36.3   31.9   28.4   25.5   23.3   21.5   19.8   18.4   17.1
 4    57.9   50.0   44.0   39.4   35.2   32.3   29.5   27.4   25.5   23.9
 5    73.9   63.7   56.0   50.0   45.2   41.3   37.8   34.8   32.3   30.2
 6    89.8   77.6   68.1   60.6   54.8   50.0   46.0   42.5   39.4   36.7
 7           91.2   80.2   71.6   64.8   58.7   54.0   50.0   46.4   43.3
 8                  92.2   82.4   74.5   67.7   62.2   57.5   53.6   50.0
 9                         93.2   84.1   76.7   70.5   65.2   60.6   56.7
10                                93.8   85.5   78.5   72.6   67.7   63.3
11                                       94.4   86.9   80.2   74.5   69.8
12                                              94.9   87.7   81.6   76.1
13                                                     95.3   88.7   82.9
14                                                            95.5   89.4
15                                                                   95.9

The cumulative frequency H_i(n) for rank number i can also be calculated with the approximation formulas

H_i(n) = (i − 0.5)/n · 100%   and   H_i(n) = (i − 0.3)/(n + 0.4) · 100%.

The deviation from the exact table values is insignificant.

Example: n = 15, i = 12, table value: 76.1

H_12(15) = (12 − 0.5)/15 · 100% = 76.7%   or   H_12(15) = (12 − 0.3)/(15 + 0.4) · 100% = 76.0%
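Both approximation formulas are easy to evaluate; a minimal sketch (the function names are illustrative, the values are in percent):

```python
# Two approximations for the cumulative frequency H_i(n) of rank i in a sample of size n.
def h_simple(i, n):
    """H_i(n) = (i - 0.5)/n, in percent."""
    return 100.0 * (i - 0.5) / n

def h_median_rank(i, n):
    """H_i(n) = (i - 0.3)/(n + 0.4), in percent."""
    return 100.0 * (i - 0.3) / (n + 0.4)

# Example from the text: n = 15, i = 12 (table value: 76.1).
print(round(h_simple(12, 15), 1))       # → 76.7
print(round(h_median_rank(12, 15), 1))  # → 76.0
```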


Table 1 (cont.)

 i    n=16   n=17   n=18   n=19   n=20   n=21   n=22   n=23   n=24   n=25

 1     3.9    3.7    3.4    3.3    3.1    2.9    2.8    2.7    2.6    2.4
 2    10.0    9.3    8.9    8.4    7.9    7.6    7.2    6.9    6.7    6.4
 3    16.1   15.2   14.2   13.6   12.9   12.3   11.7   11.3   10.7   10.4
 4    22.4   20.9   19.8   18.7   17.9   17.1   16.4   15.6   14.9   14.2
 5    28.4   26.8   25.1   23.9   22.7   21.8   20.6   19.8   18.9   18.1
 6    34.8   32.6   30.9   29.1   27.8   26.4   25.1   24.2   23.3   22.4
 7    40.9   38.2   36.3   34.5   32.6   31.2   29.8   28.4   27.4   26.1
 8    46.8   44.0   41.7   39.7   37.8   35.9   34.1   32.6   31.6   30.2
 9    53.2   50.0   47.2   44.8   42.5   40.5   38.6   37.1   35.6   34.1
10    59.1   56.0   52.8   50.0   47.6   45.2   43.3   41.3   39.7   38.2
11    65.2   61.8   58.3   55.2   52.4   50.0   47.6   45.6   43.6   42.1
12    71.6   67.4   63.7   60.3   57.5   54.8   52.4   50.0   48.0   46.0
13    77.6   73.2   69.1   65.5   62.2   59.5   56.7   54.4   52.0   50.0
14    83.9   79.1   74.9   70.9   67.4   64.1   61.4   58.7   56.4   54.0
15    90.0   84.8   80.2   76.1   72.2   68.8   65.9   62.9   60.3   57.9
16    96.1   90.7   85.8   81.3   77.3   73.6   70.2   67.4   64.4   61.8
17           96.3   91.2   86.4   82.1   78.2   74.9   71.6   68.4   65.9
18                  96.6   91.6   87.1   82.9   79.4   75.8   72.6   69.8
19                         96.7   92.1   87.7   83.6   80.2   76.7   73.9
20                                96.9   92.4   88.3   84.4   81.1   77.6
21                                       97.1   92.8   88.7   85.1   81.9
22                                              97.2   93.1   89.3   85.8
23                                                     97.3   93.3   89.6
24                                                            97.4   93.6
25                                                                   97.6


Table 1 (cont.)

 i    n=26   n=27   n=28   n=29   n=30   n=31   n=32   n=33   n=34   n=35

 1     2.4    2.3    2.2    2.1    2.1    2.0    1.9    1.9    1.8    1.8
 2     6.2    5.9    5.7    5.5    5.3    5.1    5.0    4.8    4.7    4.6
 3     9.9    9.5    9.2    8.9    8.7    8.3    8.1    7.8    7.6    7.4
 4    13.8   13.4   12.7   12.3   11.9   11.6   11.2   10.8   10.5   10.2
 5    17.6   16.9   16.4   15.9   15.2   14.8   14.3   13.9   13.5   13.1
 6    21.5   20.6   19.8   19.2   18.7   18.0   17.4   16.9   16.4   15.9
 7    25.1   24.2   23.3   22.7   21.8   21.2   20.5   19.9   19.3   18.8
 8    29.1   28.1   27.1   26.1   25.1   24.4   23.6   22.9   22.2   21.6
 9    33.0   31.6   30.5   29.5   28.4   27.6   26.7   25.9   25.1   24.5
10    36.7   35.2   34.1   33.0   31.9   30.8   29.8   28.9   28.1   27.3
11    40.5   39.0   37.4   36.3   35.2   34.0   32.9   31.9   31.0   30.1
12    44.4   42.5   41.3   39.7   38.6   37.2   36.0   34.9   33.9   33.0
13    48.0   46.4   44.8   43.3   41.7   40.4   39.1   37.9   36.8   35.8
14    52.0   50.0   48.4   46.4   45.2   43.6   42.2   41.0   39.8   38.6
15    55.6   53.6   51.6   50.0   48.4   46.8   45.3   44.0   42.7   41.5
16    59.5   57.5   55.2   53.6   51.6   50.0   48.4   47.0   45.6   44.3
17    63.3   61.0   58.7   56.7   54.8   53.2   51.6   50.0   48.5   47.2
18    67.0   64.8   62.6   60.3   58.3   56.4   54.7   53.0   51.5   50.0
19    70.9   68.4   65.9   63.7   61.4   59.6   57.8   56.0   54.4   52.8
20    74.9   71.9   69.5   67.0   64.8   62.8   60.9   59.0   57.3   55.7
21    78.5   75.8   72.9   70.5   68.1   66.0   64.0   62.1   60.2   58.5
22    82.4   79.4   76.7   73.9   71.6   69.2   67.1   65.1   63.2   61.4
23    86.2   83.1   80.2   77.3   74.9   72.4   70.2   68.1   66.1   64.2
24    90.2   86.6   83.6   80.8   78.2   75.6   73.3   71.1   69.0   67.0
25    93.8   90.5   87.3   84.1   81.3   78.8   76.4   74.1   71.9   69.9
26    97.6   94.1   90.8   87.7   84.8   82.0   79.5   77.1   74.9   72.7
27           97.7   94.3   91.2   88.1   85.3   82.6   80.1   77.8   75.6
28                  97.8   94.5   91.3   88.5   85.7   83.1   80.7   78.4
29                         97.9   94.7   91.7   88.8   86.2   83.6   81.3
30                                97.9   94.9   91.9   89.2   86.5   84.1
31                                       98.0   95.0   92.2   89.5   86.9
32                                              98.1   95.2   92.4   89.8
33                                                     98.1   95.3   92.6
34                                                            98.2   95.5
35                                                                   98.2


Table 2 On the test of normality (according to Pearson)

               Level of significance     Level of significance
                     α = 0.5%                  α = 2.5%

Sample size     Lower      Upper          Lower      Upper
     n          limit      limit          limit      limit

     3          1.735      2.000          1.745      2.000
     4          1.83       2.447          1.93       2.439
     5          1.98       2.813          2.09       2.782
     6          2.11       3.115          2.22       3.056
     7          2.22       3.369          2.33       3.282
     8          2.31       3.585          2.43       3.471
     9          2.39       3.772          2.51       3.634
    10          2.46       3.935          2.59       3.777
    11          2.53       4.079          2.66       3.903
    12          2.59       4.208          2.72       4.02
    13          2.64       4.325          2.78       4.12
    14          2.70       4.431          2.83       4.21
    15          2.74       4.530          2.88       4.29
    16          2.79       4.62           2.93       4.37
    17          2.83       4.70           2.97       4.44
    18          2.87       4.78           3.01       4.51
    19          2.90       4.85           3.05       4.57
    20          2.94       4.91           3.09       4.63
    25          3.09       5.19           3.24       4.87
    30          3.21       5.40           3.37       5.06
    35          3.32       5.57           3.48       5.21
    40          3.41       5.71           3.57       5.34
    45          3.49       5.83           3.66       5.45
    50          3.56       5.93           3.73       5.54
    55          3.62       6.02           3.80       5.63
    60          3.68       6.10           3.86       5.70
    65          3.74       6.17           3.91       5.77
    70          3.79       6.24           3.96       5.83
    75          3.83       6.30           4.01       5.88
    80          3.88       6.35           4.05       5.93
    85          3.92       6.40           4.09       5.98
    90          3.96       6.45           4.13       6.03
    95          3.99       6.49           4.17       6.07
   100          4.03       6.53           4.21       6.11
   150          4.32       6.82           4.48       6.39
   200          4.53       7.01           4.68       6.60
   500          5.06       7.60           5.25       7.15
  1000          5.50       7.99           5.68       7.54
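Table 2 is entered with the sample size n; assuming the test statistic is the ratio of range to standard deviation, (x_max − x_min)/s, as in the normality test described earlier in this booklet, a sketch with hypothetical measurement data:

```python
import math

# Assumed test statistic for the Pearson normality test: q = (x_max - x_min) / s.
def studentized_range(values):
    n = len(values)
    mean = sum(values) / n
    s = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))  # sample std dev
    return (max(values) - min(values)) / s

data = [9.8, 10.1, 10.0, 10.3, 9.9, 10.2, 10.1, 9.7, 10.0, 10.2]  # hypothetical, n = 10
q = studentized_range(data)
# Table 2, n = 10, alpha = 2.5%: do not reject normality if 2.59 <= q <= 3.777.
print(2.59 <= q <= 3.777)
```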


Table 3

Upper limit UL for the outlier test (David-Hartley-Pearson-Test)

Level of significance α

   n      10%      5%      2.5%     1%      0.5%

   3     1.997   1.999   2.000   2.000   2.000
   4     2.409   2.429   2.439   2.445   2.447
   5     2.712   2.753   2.782   2.803   2.813
   6     2.949   3.012   3.056   3.095   3.115
   7     3.143   3.222   3.282   3.338   3.369
   8     3.308   3.399   3.471   3.543   3.585
   9     3.449   3.552   3.634   3.720   3.772
  10     3.570   3.685   3.777   3.875   3.935
  11     3.68    3.80    3.903   4.012   4.079
  12     3.78    3.91    4.01    4.134   4.208
  13     3.87    4.00    4.11    4.244   4.325
  14     3.95    4.09    4.21    4.34    4.431
  15     4.02    4.17    4.29    4.43    4.53
  16     4.09    4.24    4.37    4.51    4.62
  17     4.15    4.31    4.44    4.59    4.69
  18     4.21    4.38    4.51    4.66    4.77
  19     4.27    4.43    4.57    4.73    4.84
  20     4.32    4.49    4.63    4.79    4.91
  30     4.70    4.89    5.06    5.25    5.39
  40     4.96    5.15    5.34    5.54    5.69
  50     5.15    5.35    5.54    5.77    5.91
  60     5.29    5.50    5.70    5.93    6.09
  80     5.51    5.73    5.93    6.18    6.35
 100     5.68    5.90    6.11    6.36    6.54
 150     5.96    6.18    6.39    6.64    6.84
 200     6.15    6.38    6.59    6.85    7.03
 500     6.72    6.94    7.15    7.42    7.60
1000     7.11    7.33    7.54    7.80    7.99


Table 4 Standard normal distribution: Φ(−u) = 1 − Φ(u), D(u) = Φ(u) − Φ(−u)

u Φ (-u) Φ (u) D(u)0.01 0.496011 0.503989 0.0079790.02 0.492022 0.507978 0.0159570.03 0.488033 0.511967 0.0239330.04 0.484047 0.515953 0.0319070.05 0.480061 0.519939 0.0398780.06 0.476078 0.523922 0.0478450.07 0.472097 0.527903 0.0558060.08 0.468119 0.531881 0.0637630.09 0.464144 0.535856 0.0717130.10 0.460172 0.539828 0.0796560.11 0.456205 0.543795 0.0875910.12 0.452242 0.547758 0.0955170.13 0.448283 0.551717 0.1034340.14 0.444330 0.555670 0.1113400.15 0.440382 0.559618 0.1192350.16 0.436441 0.563559 0.1271190.17 0.432505 0.567495 0.1349900.18 0.428576 0.571424 0.1428470.19 0.424655 0.575345 0.1506910.20 0.420740 0.579260 0.1585190.21 0.416834 0.583166 0.1663320.22 0.412936 0.587064 0.1741290.23 0.409046 0.590954 0.1819080.24 0.405165 0.594835 0.1896700.25 0.401294 0.598706 0.1974130.26 0.397432 0.602568 0.2051360.27 0.393580 0.606420 0.2128400.28 0.389739 0.610261 0.2205220.29 0.385908 0.614092 0.2281840.30 0.382089 0.617911 0.2358230.31 0.378281 0.621719 0.2434390.32 0.374484 0.625516 0.2510320.33 0.370700 0.629300 0.2586000.34 0.366928 0.633072 0.2661430.35 0.363169 0.636831 0.2736610.36 0.359424 0.640576 0.2811530.37 0.355691 0.644309 0.2886170.38 0.351973 0.648027 0.2960540.39 0.348268 0.651732 0.3034630.40 0.344578 0.655422 0.3108430.41 0.340903 0.659097 0.3181940.42 0.337243 0.662757 0.3255140.43 0.333598 0.666402 0.3328040.44 0.329969 0.670031 0.3400630.45 0.326355 0.673645 0.3472900.46 0.322758 0.677242 0.3544840.47 0.319178 0.680822 0.3616450.48 0.315614 0.684386 0.3687730.49 0.312067 0.687933 0.3758660.50 0.308538 0.691462 0.382925

u Φ (-u) Φ (u) D(u)0.51 0.305026 0.694974 0.3899490.52 0.301532 0.698468 0.3969360.53 0.298056 0.701944 0.4038880.54 0.294598 0.705402 0.4108030.55 0.291160 0.708840 0.4176810.56 0.287740 0.712260 0.4245210.57 0.284339 0.715661 0.4313220.58 0.280957 0.719043 0.4380850.59 0.277595 0.722405 0.4448090.60 0.274253 0.725747 0.4514940.61 0.270931 0.729069 0.4581380.62 0.267629 0.732371 0.4647420.63 0.264347 0.735653 0.4713060.64 0.261086 0.738914 0.4778280.65 0.257846 0.742154 0.4843080.66 0.254627 0.745373 0.4907460.67 0.251429 0.748571 0.4971420.68 0.248252 0.751748 0.5034960.69 0.245097 0.754903 0.5098060.70 0.241964 0.758036 0.5160730.71 0.238852 0.761148 0.5222960.72 0.235762 0.764238 0.5284750.73 0.232695 0.767305 0.5346100.74 0.229650 0.770350 0.5407000.75 0.226627 0.773373 0.5467450.76 0.223627 0.776373 0.5527460.77 0.220650 0.779350 0.5587000.78 0.217695 0.782305 0.5646090.79 0.214764 0.785236 0.5704720.80 0.211855 0.788145 0.5762890.81 0.208970 0.791030 0.5820600.82 0.206108 0.793892 0.5877840.83 0.203269 0.796731 0.5934610.84 0.200454 0.799546 0.5990920.85 0.197662 0.802338 0.6046750.86 0.194894 0.805106 0.6102110.87 0.192150 0.807850 0.6157000.88 0.189430 0.810570 0.6211410.89 0.186733 0.813267 0.6265340.90 0.184060 0.815940 0.6318800.91 0.181411 0.818589 0.6371780.92 0.178786 0.821214 0.6424270.93 0.176186 0.823814 0.6476290.94 0.173609 0.826391 0.6527820.95 0.171056 0.828944 0.6578880.96 0.168528 0.831472 0.6629450.97 0.166023 0.833977 0.6679540.98 0.163543 0.836457 0.6729140.99 0.161087 0.838913 0.6778261.00 0.158655 0.841345 0.682689


u Φ (-u) Φ (u) D(u)1.01 0.156248 0.843752 0.6875051.02 0.153864 0.846136 0.6922721.03 0.151505 0.848495 0.6969901.04 0.149170 0.850830 0.7016601.05 0.146859 0.853141 0.7062821.06 0.144572 0.855428 0.7108551.07 0.142310 0.857690 0.7153811.08 0.140071 0.859929 0.7198581.09 0.137857 0.862143 0.7242871.10 0.135666 0.864334 0.7286681.11 0.133500 0.866500 0.7330011.12 0.131357 0.868643 0.7372861.13 0.129238 0.870762 0.7415241.14 0.127143 0.872857 0.7457141.15 0.125072 0.874928 0.7498561.16 0.123024 0.876976 0.7539511.17 0.121001 0.878999 0.7579991.18 0.119000 0.881000 0.7620001.19 0.117023 0.882977 0.7659531.20 0.115070 0.884930 0.7698611.21 0.113140 0.886860 0.7737211.22 0.111233 0.888767 0.7775351.23 0.109349 0.890651 0.7813031.24 0.107488 0.892512 0.7850241.25 0.105650 0.894350 0.7887001.26 0.103835 0.896165 0.7923311.27 0.102042 0.897958 0.7959151.28 0.100273 0.899727 0.7994551.29 0.098525 0.901475 0.8029491.30 0.096801 0.903199 0.8063991.31 0.095098 0.904902 0.8098041.32 0.093418 0.906582 0.8131651.33 0.091759 0.908241 0.8164821.34 0.090123 0.909877 0.8197551.35 0.088508 0.911492 0.8229841.36 0.086915 0.913085 0.8261701.37 0.085344 0.914656 0.8293131.38 0.083793 0.916207 0.8324131.39 0.082264 0.917736 0.8354711.40 0.080757 0.919243 0.8384871.41 0.079270 0.920730 0.8414601.42 0.077804 0.922196 0.8443921.43 0.076359 0.923641 0.8472831.44 0.074934 0.925066 0.8501331.45 0.073529 0.926471 0.8529411.46 0.072145 0.927855 0.8557101.47 0.070781 0.929219 0.8584381.48 0.069437 0.930563 0.8611271.49 0.068112 0.931888 0.8637761.50 0.066807 0.933193 0.866386

u Φ (-u) Φ (u) D(u)1.51 0.065522 0.934478 0.8689571.52 0.064256 0.935744 0.8714891.53 0.063008 0.936992 0.8739831.54 0.061780 0.938220 0.8764401.55 0.060571 0.939429 0.8788581.56 0.059380 0.940620 0.8812401.57 0.058208 0.941792 0.8835851.58 0.057053 0.942947 0.8858931.59 0.055917 0.944083 0.8881651.60 0.054799 0.945201 0.8904011.61 0.053699 0.946301 0.8926021.62 0.052616 0.947384 0.8947681.63 0.051551 0.948449 0.8968991.64 0.050503 0.949497 0.8989951.65 0.049471 0.950529 0.9010571.66 0.048457 0.951543 0.9030861.67 0.047460 0.952540 0.9050811.68 0.046479 0.953521 0.9070431.69 0.045514 0.954486 0.9089721.70 0.044565 0.955435 0.9108691.71 0.043633 0.956367 0.9127341.72 0.042716 0.957284 0.9145681.73 0.041815 0.958185 0.9163701.74 0.040929 0.959071 0.9181411.75 0.040059 0.959941 0.9198821.76 0.039204 0.960796 0.9215921.77 0.038364 0.961636 0.9232731.78 0.037538 0.962462 0.9249241.79 0.036727 0.963273 0.9265461.80 0.035930 0.964070 0.9281391.81 0.035148 0.964852 0.9297041.82 0.034379 0.965621 0.9312411.83 0.033625 0.966375 0.9327501.84 0.032884 0.967116 0.9342321.85 0.032157 0.967843 0.9356871.86 0.031443 0.968557 0.9371151.87 0.030742 0.969258 0.9385161.88 0.030054 0.969946 0.9398921.89 0.029379 0.970621 0.9412421.90 0.028716 0.971284 0.9425671.91 0.028067 0.971933 0.9438671.92 0.027429 0.972571 0.9451421.93 0.026803 0.973197 0.9463931.94 0.026190 0.973810 0.9476201.95 0.025588 0.974412 0.9488241.96 0.024998 0.975002 0.9500041.97 0.024419 0.975581 0.9511621.98 0.023852 0.976148 0.9522971.99 0.023295 0.976705 0.9534092.00 0.022750 0.977250 0.954500


u Φ (-u) Φ (u) D(u)2.01 0.022216 0.977784 0.9555692.02 0.021692 0.978308 0.9566172.03 0.021178 0.978822 0.9576442.04 0.020675 0.979325 0.9586502.05 0.020182 0.979818 0.9596362.06 0.019699 0.980301 0.9606022.07 0.019226 0.980774 0.9615482.08 0.018763 0.981237 0.9624752.09 0.018309 0.981691 0.9633822.10 0.017864 0.982136 0.9642712.11 0.017429 0.982571 0.9651422.12 0.017003 0.982997 0.9659942.13 0.016586 0.983414 0.9668292.14 0.016177 0.983823 0.9676452.15 0.015778 0.984222 0.9684452.16 0.015386 0.984614 0.9692272.17 0.015003 0.984997 0.9699932.18 0.014629 0.985371 0.9707432.19 0.014262 0.985738 0.9714762.20 0.013903 0.986097 0.9721932.21 0.013553 0.986447 0.9728952.22 0.013209 0.986791 0.9735812.23 0.012874 0.987126 0.9742532.24 0.012545 0.987455 0.9749092.25 0.012224 0.987776 0.9755512.26 0.011911 0.988089 0.9761792.27 0.011604 0.988396 0.9767922.28 0.011304 0.988696 0.9773922.29 0.011011 0.988989 0.9779792.30 0.010724 0.989276 0.9785522.31 0.010444 0.989556 0.9791122.32 0.010170 0.989830 0.9796592.33 0.009903 0.990097 0.9801942.34 0.009642 0.990358 0.9807162.35 0.009387 0.990613 0.9812272.36 0.009137 0.990863 0.9817252.37 0.008894 0.991106 0.9822122.38 0.008656 0.991344 0.9826872.39 0.008424 0.991576 0.9831522.40 0.008198 0.991802 0.9836052.41 0.007976 0.992024 0.9840472.42 0.007760 0.992240 0.9844792.43 0.007549 0.992451 0.9849012.44 0.007344 0.992656 0.9853132.45 0.007143 0.992857 0.9857142.46 0.006947 0.993053 0.9861062.47 0.006756 0.993244 0.9864892.48 0.006569 0.993431 0.9868622.49 0.006387 0.993613 0.9872262.50 0.006210 0.993790 0.987581

u Φ (-u) Φ (u) D(u)2.51 0.006037 0.993963 0.9879272.52 0.005868 0.994132 0.9882642.53 0.005703 0.994297 0.9885942.54 0.005543 0.994457 0.9889152.55 0.005386 0.994614 0.9892282.56 0.005234 0.994766 0.9895332.57 0.005085 0.994915 0.9898302.58 0.004940 0.995060 0.9901202.59 0.004799 0.995201 0.9904022.60 0.004661 0.995339 0.9906782.61 0.004527 0.995473 0.9909462.62 0.004397 0.995603 0.9912072.63 0.004269 0.995731 0.9914612.64 0.004145 0.995855 0.9917092.65 0.004025 0.995975 0.9919512.66 0.003907 0.996093 0.9921862.67 0.003793 0.996207 0.9924152.68 0.003681 0.996319 0.9926382.69 0.003573 0.996427 0.9928552.70 0.003467 0.996533 0.9930662.71 0.003364 0.996636 0.9932722.72 0.003264 0.996736 0.9934722.73 0.003167 0.996833 0.9936662.74 0.003072 0.996928 0.9938562.75 0.002980 0.997020 0.9940402.76 0.002890 0.997110 0.9942202.77 0.002803 0.997197 0.9943942.78 0.002718 0.997282 0.9945642.79 0.002635 0.997365 0.9947292.80 0.002555 0.997445 0.9948902.81 0.002477 0.997523 0.9950462.82 0.002401 0.997599 0.9951982.83 0.002327 0.997673 0.9953452.84 0.002256 0.997744 0.9954892.85 0.002186 0.997814 0.9956282.86 0.002118 0.997882 0.9957632.87 0.002052 0.997948 0.9958952.88 0.001988 0.998012 0.9960232.89 0.001926 0.998074 0.9961472.90 0.001866 0.998134 0.9962682.91 0.001807 0.998193 0.9963862.92 0.001750 0.998250 0.9965002.93 0.001695 0.998305 0.9966102.94 0.001641 0.998359 0.9967182.95 0.001589 0.998411 0.9968222.96 0.001538 0.998462 0.9969232.97 0.001489 0.998511 0.9970222.98 0.001441 0.998559 0.9971172.99 0.001395 0.998605 0.9972103.00 0.001350 0.998650 0.997300


u Φ (-u) Φ (u) D(u)3.01 0.001306 0.998694 0.9973873.02 0.001264 0.998736 0.9974723.03 0.001223 0.998777 0.9975543.04 0.001183 0.998817 0.9976343.05 0.001144 0.998856 0.9977113.06 0.001107 0.998893 0.9977863.07 0.001070 0.998930 0.9978593.08 0.001035 0.998965 0.9979303.09 0.001001 0.998999 0.9979983.10 0.000968 0.999032 0.9980653.11 0.000936 0.999064 0.9981293.12 0.000904 0.999096 0.9981913.13 0.000874 0.999126 0.9982523.14 0.000845 0.999155 0.9983103.15 0.000816 0.999184 0.9983673.16 0.000789 0.999211 0.9984223.17 0.000762 0.999238 0.9984753.18 0.000736 0.999264 0.9985273.19 0.000711 0.999289 0.9985773.20 0.000687 0.999313 0.9986263.21 0.000664 0.999336 0.9986733.22 0.000641 0.999359 0.9987183.23 0.000619 0.999381 0.9987623.24 0.000598 0.999402 0.9988053.25 0.000577 0.999423 0.9988463.26 0.000557 0.999443 0.9988863.27 0.000538 0.999462 0.9989243.28 0.000519 0.999481 0.9989623.29 0.000501 0.999499 0.9989983.30 0.000483 0.999517 0.9990333.31 0.000467 0.999533 0.9990673.32 0.000450 0.999550 0.9991003.33 0.000434 0.999566 0.9991313.34 0.000419 0.999581 0.9991623.35 0.000404 0.999596 0.9991923.36 0.000390 0.999610 0.9992203.37 0.000376 0.999624 0.9992483.38 0.000362 0.999638 0.9992753.39 0.000350 0.999650 0.9993013.40 0.000337 0.999663 0.9993263.41 0.000325 0.999675 0.9993503.42 0.000313 0.999687 0.9993743.43 0.000302 0.999698 0.9993963.44 0.000291 0.999709 0.9994183.45 0.000280 0.999720 0.9994393.46 0.000270 0.999730 0.9994603.47 0.000260 0.999740 0.9994793.48 0.000251 0.999749 0.9994983.49 0.000242 0.999758 0.9995173.50 0.000233 0.999767 0.999535

u Φ (-u) Φ (u) D(u)3.51 0.000224 0.999776 0.9995523.52 0.000216 0.999784 0.9995683.53 0.000208 0.999792 0.9995843.54 0.000200 0.999800 0.9996003.55 0.000193 0.999807 0.9996153.56 0.000185 0.999815 0.9996293.57 0.000179 0.999821 0.9996433.58 0.000172 0.999828 0.9996563.59 0.000165 0.999835 0.9996693.60 0.000159 0.999841 0.9996823.61 0.000153 0.999847 0.9996943.62 0.000147 0.999853 0.9997053.63 0.000142 0.999858 0.9997173.64 0.000136 0.999864 0.9997273.65 0.000131 0.999869 0.9997383.66 0.000126 0.999874 0.9997483.67 0.000121 0.999879 0.9997573.68 0.000117 0.999883 0.9997673.69 0.000112 0.999888 0.9997763.70 0.000108 0.999892 0.9997843.71 0.000104 0.999896 0.9997933.72 0.000100 0.999900 0.9998013.73 0.000096 0.999904 0.9998083.74 0.000092 0.999908 0.9998163.75 0.000088 0.999912 0.9998233.76 0.000085 0.999915 0.9998303.77 0.000082 0.999918 0.9998373.78 0.000078 0.999922 0.9998433.79 0.000075 0.999925 0.9998493.80 0.000072 0.999928 0.9998553.81 0.000070 0.999930 0.9998613.82 0.000067 0.999933 0.9998673.83 0.000064 0.999936 0.9998723.84 0.000062 0.999938 0.9998773.85 0.000059 0.999941 0.9998823.86 0.000057 0.999943 0.9998873.87 0.000054 0.999946 0.9998913.88 0.000052 0.999948 0.9998963.89 0.000050 0.999950 0.9999003.90 0.000048 0.999952 0.9999043.91 0.000046 0.999954 0.9999083.92 0.000044 0.999956 0.9999113.93 0.000042 0.999958 0.9999153.94 0.000041 0.999959 0.9999183.95 0.000039 0.999961 0.9999223.96 0.000037 0.999963 0.9999253.97 0.000036 0.999964 0.9999283.98 0.000034 0.999966 0.9999313.99 0.000033 0.999967 0.9999344.00 0.000032 0.999968 0.999937
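The values of Table 4 need not be looked up by hand if an error function is available; a sketch using the relation Φ(u) = ½ · (1 + erf(u/√2)), from which Φ(−u) = 1 − Φ(u) and D(u) = erf(u/√2) follow:

```python
import math

# Standard normal distribution function Phi(u) and central fraction D(u),
# reproducing the columns of Table 4 via the error function.
def phi(u):
    """Phi(u) = 0.5 * (1 + erf(u / sqrt(2)))."""
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def d(u):
    """D(u) = Phi(u) - Phi(-u) = erf(u / sqrt(2))."""
    return math.erf(u / math.sqrt(2.0))

print(round(phi(1.00), 6))  # → 0.841345  (table: Phi(1.00) = 0.841345)
print(round(d(2.00), 6))    # → 0.9545    (table: D(2.00) = 0.954500)
```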


Table 5

Factors k for the calculation of limiting values

Probability P

  n      95%     99%

  2     19.2    99
  3      5.1    11.6
  4      3.3     5.8
  5      2.6     4.0
  6      2.2     3.2
  7      2.05    2.8
  8      1.90    2.5
  9      1.80    2.3
 10      1.72    2.2
 12      1.62    2.0
 14      1.54    1.85
 16      1.48    1.75
 18      1.44    1.68
 20      1.40    1.62
 25      1.34    1.52
 30      1.30    1.45
 35      1.27    1.40
 40      1.25    1.37
 50      1.22    1.32
 60      1.19    1.28
120      1.12    1.18


Table 6

Quantiles of the t-distribution (two-sided)

Probability

  f      95%     99%     99.9%

  1     12.7    63.7    636.6
  2      4.3     9.93    31.6
  3      3.18    5.84    12.9
  4      2.78    4.60     8.61
  5      2.57    4.03     6.87
  6      2.45    3.71     5.96
  7      2.37    3.50     5.41
  8      2.31    3.36     5.04
  9      2.26    3.25     4.78
 10      2.23    3.17     4.59
 11      2.20    3.11     4.44
 12      2.18    3.06     4.32
 13      2.16    3.01     4.22
 14      2.15    2.98     4.14
 15      2.13    2.95     4.07
 16      2.12    2.92     4.02
 17      2.11    2.90     3.97
 18      2.10    2.88     3.92
 19      2.09    2.86     3.88
 20      2.09    2.85     3.85
 25      2.06    2.79     3.73
 30      2.04    2.75     3.65
 35      2.03    2.72     3.59
 40      2.02    2.70     3.55
 45      2.01    2.69     3.52
 50      2.01    2.68     3.50
100      1.98    2.63     3.39
200      1.97    2.60     3.34
300      1.97    2.59     3.32
400      1.97    2.59     3.32
500      1.97    2.59     3.31
 ∞       1.96    2.58     3.30


Table 7

Factors for the determination of the confidence interval of a standard deviation

            P = 95%             P = 99%
  n      c1       c2         c1       c2

  2     0.45     32.3       0.36     167
  3     0.52      6.29      0.43     14.1
  4     0.57      3.73      0.48      6.45
  5     0.60      2.87      0.52      4.41
  6     0.62      2.45      0.55      3.48
  7     0.64      2.20      0.57      2.98
  8     0.66      2.04      0.59      2.66
  9     0.68      1.92      0.60      2.44
 10     0.69      1.83      0.62      2.28
 15     0.73      1.58      0.67      1.86
 20     0.76      1.46      0.70      1.67
 25     0.78      1.39      0.72      1.56
 30     0.80      1.35      0.75      1.49
 40     0.82      1.28      0.77      1.40
 50     0.83      1.25      0.79      1.34
100     0.88      1.16      0.84      1.22


Table 8 Quantiles of the F-distribution (one-sided to the value 95%)

f2      f1=1    f1=2    f1=3    f1=4    f1=5    f1=6    f1=7    f1=8    f1=9

12345

6789

10

1112131415

1617181920

2224262830

3234363840

5060708090

100150200

1000

16118.510.17.716.61

5.995.595.325.124.96

4.844.754.674.604.54

4.494.454.414.384.35

4.304.264.234.204.17

4.154.134.114.104.08

4.034.003.983.963.95

3.943.903.893.85

20019.09.556.945.79

5.144.744.464.264.10

3.983.893.813.743.68

3.633.593.553.523.49

3.443.403.373.343.32

3.303.283.263.243.23

3.183.153.133.113.10

3.093.063.043.00

21619.29.286.595.41

4.764.354.073.863.71

3.593.493.413.343.29

3.243.203.163.133.10

3.053.012.982.952.92

2.902.882.872.852.84

2.792.762.742.722.71

2.702.662.652.61

22519.29.126.395.19

4.534.123.843.633.48

3.363.263.183.113.06

3.012.962.932.902.87

2.822.782.742.712.69

2.672.652.632.622.61

2.562.532.502.492.47

2.462.432.422.38

23019.39.016.265.05

4.393.973.693.483.33

3.203.113.032.962.90

2.852.812.772.742.71

2.662.622.592.562.53

2.512.492.482.462.45

2.402.372.352.332.32

2.312.272.262.22

23419.38.946.164.95

4.283.873.583.373.22

3.093.002.922.852.79

2.742.702.662.632.60

2.552.512.472.452.42

2.402.382.362.352.34

2.292.252.232.212.20

2.192.162.142.11

23719.48.896.094.88

4.213.793.503.293.14

3.012.912.832.762.71

2.662.612.582.542.51

2.462.422.392.362.33

2.312.292.282.262.25

2.202.172.142.132.11

2.102.072.062.02

23919.48.856.044.82

4.153.733.443.233.07

2.952.852.772.702.64

2.592.552.512.482.45

2.402.362.322.292.27

2.242.232.212.192.18

2.132.102.072.062.04

2.032.001.981.95

24119.48.816.004.77

4.103.683.393.183.02

2.902.802.712.652.59

2.542.492.462.422.39

2.342.302.272.242.21

2.192.172.152.142.12

2.072.042.022.001.99

1.971.941.931.89


Table 8 (cont.) Quantiles of the F-distribution (one-sided to the value 95%)

f2      f1=10   f1=15   f1=20   f1=30   f1=40   f1=50   f1=100   f1→∞

12345

6789

10

1112131415

1617181920

2224262830

3234363840

5060708090

100150200

1000

24219.48.795.964.74

4.063.643.353.142.98

2.852.752.672.602.54

2.492.452.412.382.35

2.302.252.222.192.16

2.142.122.112.092.08

2.031.991.971.951.94

1.931.891.881.84

24619.48.705.864.62

3.943.513.223.012.85

2.722.622.532.462.40

2.352.312.272.232.20

2.152.112.072.042.01

1.991.971.951.941.92

1.871.841.811.791.78

1.771.731.721.68

24819.48.665.804.56

3.873.443.152.942.77

2.652.542.462.392.33

2.282.232.192.162.12

2.072.031.991.961.93

1.911.891.871.851.84

1.781.751.721.701.69

1.681.641.621.58

25019.58.625.754.50

3.813.383.082.862.70

2.572.472.382.312.25

2.192.152.112.072.04

1.981.941.901.871.84

1.821.801.781.761.74

1.691.651.621.601.59

1.571.531.521.47

25119.58.595.724.46

3.773.343.042.832.66

2.532.432.342.272.20

2.152.102.062.031.99

1.941.891.851.821.79

1.771.751.731.711.69

1.631.591.571.541.53

1.521.481.461.41

25219.58.585.704.44

3.753.323.022.802.64

2.512.402.312.242.18

2.122.082.042.001.97

1.911.861.821.791.76

1.741.711.691.681.66

1.601.561.531.511.49

1.481.441.411.36

25319.58.555.664.41

3.713.272.972.762.59

2.462.352.262.192.12

2.072.021.981.941.91

1.851.801.761.731.70

1.671.651.621.611.59

1.521.481.451.431.41

1.391.341.321.26

25419.58.535.634.37

3.673.232.932.712.54

2.402.302.212.132.07

2.011.961.921.881.84

1.781.731.691.651.62

1.591.571.551.531.51

1.441.391.351.321.30

1.281.221.191.08


Table 8 Quantiles of the F-distribution (one-sided to the value 99%)

f2      f1=1    f1=2    f1=3    f1=4    f1=5    f1=6    f1=7    f1=8    f1=9

12345

6789

10

1112131415

1617181920

2224262830

3234363840

5060708090

100150200

1000

405298.534.121.216.3

13.712.211.310.610.0

9.659.339.078.868.68

8.538.408.298.188.10

7.957.827.727.647.56

7.507.447.407.357.31

7.177.087.016.966.93

6.906.816.766.66

499999.030.818.013.3

10.99.558.658.027.56

7.216.936.706.516.36

6.236.116.015.935.85

5.725.615.535.455.39

5.345.295.255.215.18

5.064.984.924.884.85

4.824.754.714.63

540399.229.516.712.1

9.788.457.596.996.55

6.225.955.745.565.42

5.295.185.095.014.94

4.824.724.644.574.51

4.464.424.384.344.31

4.204.134.084.044.01

3.983.923.883.80

562599.328.716.011.4

9.157.857.016.425.99

5.675.415.215.044.89

4.774.674.584.504.43

4.314.224.144.074.02

3.973.933.893.863.83

3.723.653.603.563.54

3.513.453.413.34

576499.328.215.511.0

8.757.466.636.065.64

5.325.064.864.704.56

4.444.344.254.174.10

3.993.903.823.753.70

3.653.613.573.543.51

3.413.343.293.263.23

3.213.143.113.04

585999.327.915.210.7

8.477.196.375.805.39

5.074.824.624.464.32

4.204.104.013.943.87

3.763.673.593.533.47

3.433.393.353.323.29

3.193.123.073.043.01

2.992.922.892.82

592899.427.715.010.5

8.266.996.185.615.20

4.894.644.444.284.14

4.033.933.843.773.70

3.593.503.423.363.30

3.263.223.183.153.12

3.022.952.912.872.84

2.822.762.732.66

598299.427.514.810.3

8.106.846.035.475.06

4.744.504.304.144.00

3.893.793.713.633.56

3.453.363.293.233.17

3.133.093.053.022.99

2.892.822.782.742.72

2.692.632.602.53

602299.427.314.710.2

7.986.725.915.354.94

4.634.394.194.033.89

3.783.683.603.523.46

3.353.263.183.123.07

3.022.982.952.922.89

2.792.722.672.642.61

2.592.532.502.43


Table 8 (cont.) Quantiles of the F-distribution (one-sided to the value 99%)

f2      f1=10   f1=15   f1=20   f1=30   f1=40   f1=50   f1=100   f1→∞

12345

6789

10

1112131415

1617181920

2224262830

3234363840

5060708090

100150200

1000

605699.427.214.510.1

7.876.625.815.264.85

4.544.304.103.943.80

3.693.593.513.433.37

3.263.173.093.032.98

2.932.892.862.832.80

2.702.632.592.552.52

2.502.442.412.34

615799.426.914.29.72

7.566.315.524.964.56

4.254.013.823.663.52

3.413.313.233.153.09

2.982.892.822.752.70

2.662.622.582.552.52

2.422.352.312.272.24

2.222.162.132.06

620999.426.714.09.55

7.406.165.364.814.41

4.103.863.663.513.37

3.263.163.083.002.94

2.832.742.662.602.55

2.502.462.432.402.37

2.272.202.152.122.09

2.072.001.971.90

626199.526.513.89.38

7.235.995.204.654.25

3.943.703.513.353.21

3.103.002.922.842.78

2.672.582.502.442.39

2.342.302.262.232.20

2.102.031.981.941.92

1.891.831.791.72

628799.526.413.79.29

7.145.915.124.574.17

3.863.623.433.273.13

3.022.922.842.762.69

2.582.492.422.352.30

2.252.212.172.142.11

2.011.941.891.851.82

1.801.731.691.61

630099.526.413.79.24

7.095.865.074.524.12

3.813.573.383.223.08

2.972.872.782.712.64

2.532.442.362.302.25

2.202.162.122.092.06

1.951.881.831.791.76

1.731.661.631.54

633099.526.213.69.13

6.995.754.964.424.01

3.713.473.273.112.98

2.862.762.682.602.54

2.422.332.252.192.13

2.082.042.001.971.94

1.821.751.701.661.62

1.601.521.481.38

636699.526.113.59.02

6.885.654.864.313.91

3.603.363.173.002.87

2.752.652.572.492.42

2.312.212.132.062.01

1.961.911.871.841.80

1.681.601.541.491.46

1.431.331.281.11


Table 9 Quantiles of the F-distribution (two-sided to the value 95%)

f2      f1=1    f1=2    f1=3    f1=4    f1=5    f1=6    f1=7    f1=8    f1=9

12345

6789

10

1112131415

1617181920

2224262830

4050607080

90100200500

64838.517.412.210.0

8.818.077.577.216.94

6.726.556.416.306.20

6.126.045.985.925.87

5.795.725.665.615.57

5.425.345.295.255.22

5.205.185.095.05

80039.016.010.68.43

7.266.546.065.715.46

5.255.104.974.864.77

4.694.624.564.514.46

4.384.324.274.224.18

4.053.973.933.893.86

3.843.833.763.72

86439.215.49.987.76

6.605.895.425.084.83

4.634.474.354.244.15

4.084.013.953.903.86

3.783.723.673.633.59

3.463.393.343.313.28

3.263.253.183.14

90039.215.19.607.39

6.235.525.054.724.47

4.274.124.003.893.80

3.733.663.613.563.51

3.443.383.333.293.25

3.133.053.012.972.95

2.932.922.852.81

92239.314.99.367.15

5.995.294.824.484.24

4.043.893.773.663.58

3.503.443.383.333.29

3.223.153.103.063.03

2.902.832.792.752.73

2.712.702.632.59

93739.314.79.206.98

5.825.124.654.324.07

3.883.733.603.503.41

3.343.283.223.173.13

3.052.992.942.902.87

2.742.672.632.592.57

2.552.542.472.43

94839.414.69.076.85

5.704.994.534.203.95

3.763.613.483.383.29

3.223.163.103.053.01

2.932.872.822.782.75

2.622.552.512.472.45

2.432.422.352.31

95739.414.58.986.76

5.604.904.434.103.85

3.663.513.393.293.20

3.123.063.012.962.91

2.842.782.732.692.65

2.532.462.412.382.35

2.342.322.262.22

96339.414.58.906.68

5.524.824.364.033.78

3.593.443.313.213.12

3.052.982.932.882.84

2.762.702.652.612.57

2.452.382.332.302.28

2.262.242.182.14


Table 9 (cont.) Quantiles of the F-distribution (two-sided to the value 95%)

f2      f1=10   f1=15   f1=20   f1=30   f1=40   f1=50   f1=100   f1→∞

12345

6789

10

1112131415

1617181920

2224262830

4050607080

90100200500

96939.414.48.846.62

5.464.764.303.963.72

3.523.373.253.153.06

2.992.922.872.822.77

2.702.642.592.552.51

2.392.322.272.242.21

2.192.182.112.07

98539.414.38.666.43

5.274.574.103.773.52

3.343.183.052.952.86

2.792.722.672.622.57

2.502.442.392.342.31

2.182.112.062.032.00

1.981.971.901.86

99339.414.28.566.33

5.174.474.003.673.42

3.223.072.952.842.76

2.682.622.562.512.46

2.392.332.282.232.20

2.071.991.941.911.88

1.861.851.781.74

100139.514.18.466.23

5.074.363.893.563.31

3.122.962.842.732.64

2.572.502.442.392.35

2.272.212.162.112.07

1.941.871.821.781.75

1.731.711.641.60

100639.514.08.416.18

5.014.313.843.513.26

3.062.912.782.672.58

2.512.442.382.332.29

2.212.152.092.052.01

1.881.801.741.711.68

1.661.641.561.51

100839.514.08.386.14

4.984.283.813.473.22

3.032.872.742.642.55

2.472.412.352.302.25

2.172.112.052.011.97

1.831.751.701.661.63

1.611.591.511.46

101339.514.08.326.08

4.924.213.743.403.15

2.952.802.672.562.47

2.402.332.272.222.17

2.092.021.971.921.88

1.741.661.601.561.53

1.501.481.391.34

101839.513.98.266.02

4.854.143.673.333.08

2.882.722.602.492.40

2.322.252.192.132.09

2.001.941.881.831.79

1.641.551.481.441.40

1.371.351.231.14


Table 9 Quantiles of the F-distribution (two-sided to the value 99%)

f2      f1=1    f1=2    f1=3    f1=4    f1=5    f1=6    f1=7    f1=8    f1=9
1       16200   20000   21600   22500   23100   23400   23700   23900   24100
2       198     199     199     199     199     199     199     199     199
3       55.6    49.8    47.4    46.2    45.3    44.8    44.4    44.1    43.8
4       31.3    26.3    24.3    23.2    22.5    22.0    21.6    21.4    21.1
5       22.8    18.3    16.5    15.6    14.9    14.5    14.2    14.0    13.8
6       18.6    14.5    12.9    12.0    11.5    11.1    10.8    10.6    10.4
7       16.2    12.4    10.9    10.1    9.52    9.16    8.89    8.68    8.51
8       14.7    11.0    9.60    8.80    8.30    7.95    7.69    7.50    7.34
9       13.6    10.1    8.72    7.96    7.47    7.13    6.89    6.69    6.54
10      12.8    9.43    8.08    7.34    6.87    6.54    6.30    6.12    5.97
11      12.2    8.91    7.60    6.88    6.42    6.10    5.86    5.68    5.54
12      11.8    8.51    7.23    6.52    6.07    5.76    5.52    5.35    5.20
13      11.4    8.19    6.93    6.23    5.79    5.48    5.25    5.08    4.93
14      11.1    7.92    6.68    6.00    5.56    5.26    5.03    4.86    4.72
15      10.8    7.70    6.48    5.80    5.37    5.07    4.85    4.67    4.54
16      10.6    7.51    6.30    5.64    5.21    4.91    4.69    4.52    4.38
17      10.4    7.35    6.16    5.50    5.07    4.78    4.56    4.39    4.25
18      10.2    7.21    6.03    5.37    4.96    4.66    4.44    4.28    4.14
19      10.1    7.09    5.92    5.27    4.85    4.56    4.34    4.18    4.04
20      9.94    6.99    5.82    5.17    4.76    4.47    4.26    4.09    3.96
22      9.73    6.81    5.65    5.02    4.61    4.32    4.11    3.94    3.81
24      9.55    6.66    5.52    4.89    4.49    4.20    3.99    3.83    3.69
26      9.41    6.54    5.41    4.79    4.38    4.10    3.89    3.73    3.60
28      9.28    6.44    5.32    4.70    4.30    4.02    3.81    3.65    3.52
30      9.18    6.35    5.24    4.62    4.23    3.95    3.74    3.58    3.45
40      8.83    6.07    4.98    4.37    3.99    3.71    3.51    3.35    3.22
50      8.63    5.90    4.83    4.23    3.85    3.58    3.38    3.22    3.09
60      8.49    5.80    4.73    4.14    3.76    3.49    3.29    3.13    3.01
70      8.40    5.72    4.66    4.08    3.70    3.43    3.23    3.08    2.95
80      8.33    5.67    4.61    4.03    3.65    3.39    3.19    3.03    2.91
90      8.28    5.62    4.57    3.99    3.62    3.35    3.15    3.00    2.87
100     8.24    5.59    4.54    3.96    3.59    3.33    3.13    2.97    2.85
200     8.06    5.44    4.40    3.84    3.47    3.21    3.01    2.86    2.73
500     7.95    5.36    4.33    3.76    3.40    3.14    2.94    2.79    2.66


Table 9 (cont.) Quantiles of the F-distribution (two-sided to the value 99%)

f2      f1=10   f1=15   f1=20   f1=30   f1=40   f1=50   f1=100  f1=∞
1       24200   24600   24800   25000   25100   25200   25300   25500
2       199     199     199     199     199     199     199     200
3       43.7    43.1    42.8    42.5    42.4    42.2    42.0    41.8
4       21.0    20.4    20.2    19.9    19.8    19.7    19.5    19.3
5       13.6    13.1    12.9    12.7    12.5    12.5    12.3    12.1
6       10.3    9.81    9.59    9.36    9.24    9.17    9.03    8.88
7       8.38    7.97    7.75    7.53    7.42    7.35    7.22    7.08
8       7.21    6.81    6.61    6.40    6.29    6.22    6.09    5.95
9       6.42    6.03    5.83    5.62    5.52    5.45    5.32    5.19
10      5.85    5.47    5.27    5.07    4.97    4.90    4.77    4.64
11      5.42    5.05    4.86    4.65    4.55    4.49    4.36    4.23
12      5.09    4.72    4.53    4.33    4.23    4.17    4.04    3.90
13      4.82    4.46    4.27    4.07    3.97    3.91    3.78    3.65
14      4.60    4.25    4.06    3.86    3.76    3.70    3.57    3.44
15      4.42    4.07    3.88    3.69    3.58    3.52    3.39    3.26
16      4.27    3.92    3.73    3.54    3.44    3.37    3.25    3.11
17      4.14    3.79    3.61    3.41    3.31    3.25    3.12    2.98
18      4.03    3.68    3.50    3.30    3.20    3.14    3.01    2.87
19      3.93    3.59    3.40    3.21    3.11    3.04    2.91    2.78
20      3.85    3.50    3.32    3.12    3.02    2.96    2.83    2.69
22      3.70    3.36    3.18    2.98    2.88    2.82    2.69    2.55
24      3.59    3.25    3.06    2.87    2.77    2.70    2.57    2.43
26      3.49    3.15    2.97    2.77    2.67    2.61    2.47    2.33
28      3.41    3.07    2.89    2.69    2.59    2.53    2.39    2.25
30      3.34    3.01    2.82    2.63    2.52    2.46    2.32    2.18
40      3.12    2.78    2.60    2.40    2.30    2.23    2.09    1.93
50      2.99    2.65    2.47    2.27    2.16    2.10    1.95    1.79
60      2.90    2.57    2.39    2.19    2.08    2.01    1.86    1.69
70      2.85    2.51    2.33    2.13    2.02    1.95    1.80    1.62
80      2.80    2.47    2.29    2.08    1.97    1.90    1.75    1.56
90      2.77    2.44    2.25    2.05    1.94    1.87    1.71    1.52
100     2.74    2.41    2.23    2.02    1.91    1.84    1.68    1.49
200     2.63    2.30    2.11    1.91    1.79    1.71    1.54    1.31
500     2.56    2.23    2.04    1.84    1.72    1.64    1.46    1.18
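Individual entries of the tables above can be cross-checked numerically. A two-sided 95% table lists the 0.975 quantile F(0.975; f1, f2), and a two-sided 99% table the 0.995 quantile. The sketch below (function names are illustrative, not part of this booklet) evaluates the F-distribution CDF via the regularized incomplete beta function, using the standard continued-fraction expansion, and inverts it by bisection; only the Python standard library is assumed.

```python
import math

def _betacf(a, b, x, max_iter=200, eps=1e-12):
    # Continued fraction for the regularized incomplete beta function
    # (modified Lentz's method).
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c, d = 1.0, 1.0 - qab * x / qap
    if abs(d) < 1e-300:
        d = 1e-300
    d = 1.0 / d
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        for aa in (m * (b - m) * x / ((qam + m2) * (a + m2)),
                   -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))):
            d = 1.0 + aa * d
            if abs(d) < 1e-300:
                d = 1e-300
            c = 1.0 + aa / c
            if abs(c) < 1e-300:
                c = 1e-300
            d = 1.0 / d
            delta = d * c
            h *= delta
        if abs(delta - 1.0) < eps:
            break
    return h

def _betainc(a, b, x):
    # Regularized incomplete beta function I_x(a, b).
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    front = math.exp(math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                     + a * math.log(x) + b * math.log(1.0 - x))
    if x < (a + 1.0) / (a + b + 2.0):
        return front * _betacf(a, b, x) / a
    return 1.0 - front * _betacf(b, a, 1.0 - x) / b

def f_cdf(f, f1, f2):
    # CDF of the F-distribution with (f1, f2) degrees of freedom.
    x = f1 * f / (f1 * f + f2)
    return _betainc(f1 / 2.0, f2 / 2.0, x)

def f_quantile(p, f1, f2):
    # Invert the CDF by bisection.
    lo, hi = 0.0, 1.0
    while f_cdf(hi, f1, f2) < p:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f_cdf(mid, f1, f2) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Cross-check against Table 9, f1 = f2 = 10:
print(round(f_quantile(0.975, 10, 10), 2))  # table (two-sided 95%): 3.72
print(round(f_quantile(0.995, 10, 10), 2))  # table (two-sided 99%): 5.85
```

For the three-digit precision of the printed tables, 200 bisection steps are far more than needed; the loop count is chosen generously rather than tuned.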


Robert Bosch GmbH

C/QMM

Postfach 30 02 20

D-70442 Stuttgart

Germany

Phone +49 711 811-4 47 88

Fax +49 711 811-2 31 26

www.bosch.com