This article was downloaded by: [McGill University Library]
On: 09 December 2014, At: 09:42
Publisher: Taylor & Francis
Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

International Journal of Systems Science
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/tsys20

Optimal estimation of digital stochastic sequences
A. J. MILLER a & P. MARS b
a G.E.C. Hirst Research Centre, East Lane, Wembley, England
b School of Electronic and Electrical Engineering, Robert Gordons Institute of Technology, Aberdeen, AB9 1FR, Scotland
Published online: 30 Mar 2007.

To cite this article: A. J. MILLER & P. MARS (1977) Optimal estimation of digital stochastic sequences, International Journal of Systems Science, 8:6, 683-696, DOI: 10.1080/00207727708942074
To link to this article: http://dx.doi.org/10.1080/00207727708942074

Taylor & Francis makes every effort to ensure the accuracy of all the information (the "Content") contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content. This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. Terms & Conditions of access and use can be found at http://www.tandfonline.com/page/terms-and-conditions

Optimal estimation of digital stochastic sequences



INT. J. SYSTEMS SCI., 1977, VOL. 8, No. 6, 683-696

Optimal estimation of digital stochastic sequences

A. J. MILLER† and P. MARS‡

The paper considers the problem of the optimal on-line estimation of digital stochastic sequences. Specifically, a calculus of variations approach is used to prove that for stationary stochastic sequences exponential and moving average algorithms are equally viable, both techniques representing close approximations to the theoretical optimum solution. For non-stationary sequences the moving average algorithm is shown to be the better choice, due to its symmetrical weighting function. The paper concludes with a discussion of the influence of induced correlation on the estimation process.

1. Introduction

Many industrial problems in process control and simulation require the on-line estimation of the probability of occurrence of a digital stochastic sequence. Frequently the demand for fast on-line computation necessitates the synthesis of hardware logic systems as opposed to a software approach. Although the results to be described have general applicability to the stochastic estimation problem, the main objective was a study of the output interface problem in digital stochastic computers (Miller et al. 1973, 1974 a, b, 1975). It has been demonstrated that randomly switched digital logic circuits with statistically independent inputs may be used to simulate the conventional analogue operations of summation, multiplication, inversion and integration (Gaines 1967). Recently the design and applications of a universal stochastic machine have been described (Mars 1976). Although stochastic computers are not quite as fast as analogue machines and not as accurate as digital computers, they possess a speed-size-economy combination which cannot be matched by conventional computers.

2. Output interface design

The basic estimation problem is illustrated in Fig. 1. A deterministic output is required to be produced corresponding to the probability of occurrence of an input pulse.

The output of a stochastic computing system will normally be in the form of a non-stationary Bernoulli sequence. Such a sequence can be viewed in probabilistic terms as a deterministic signal with superimposed noise. The output interface of a stochastic system must be able to reject the noise component, and give a measure of the mean value of the sequence's generating probability. It must also be simple enough to be realized using logic circuitry. There are several techniques available for examining non-stationary processes. If it is assumed that the variation rate of the deterministic signal is relatively low, then the signal

Received 3 September 1976.
† G.E.C. Hirst Research Centre, East Lane, Wembley, England.
‡ School of Electronic and Electrical Engineering, Robert Gordons Institute of Technology, Aberdeen AB9 1FR, Scotland.


can be substantially separated from the noise by low-pass filtering. The following section describes how the filtering of a stochastic sequence using digital circuitry is accomplished by computing an average.

2.1. General method of averaging

If N consecutive clock intervals of a Bernoulli sequence are examined, and the number of ON logic levels counted, then the ratio of the count to the number of clock intervals gives an estimate of the sequence's generating probability. When the sequence represents a fixed quantity, so that the corresponding Bernoulli sequence is stationary, the accuracy of the measurement can be increased by increasing the sample size, N. The price paid for the increase in accuracy is the length of time required to make the measurements. With a time-varying input signal the output interface must have the ability to track the signal continuously, or else the higher frequency components of the waveform will be lost. To do this requires the calculation of a short-time or Moving Average (Brown 1963).

Figure 1. Basic estimation problem.

To form a Moving Average of a stochastic sequence, the presence or absence of a pulse in each of N adjacent clock periods is recorded and stored, and the average pulse rate calculated. The next clock interval of the sequence is then interrogated and the average is recalculated over the N most recent clock periods. The information contained in the first interval is lost. If P_N is the estimate of the generating probability, p, over N clock intervals, and A_i, (0, 1), is the value of the logic level at the ith clock pulse, then the short-time averaging technique can be described by the equation

$$P_N = \frac{1}{N} \left( \sum_{i=1}^{N-1} A_i + A_N \right)$$

i.e.

$$P_N = P_{N-1} + \frac{A_N - A_0}{N} \qquad (1)$$

The smaller the sample size, N, the more effect the new value of A_i has on the estimate. When the average is taken over a short time interval the estimated probability, P_N, responds quickly to change and is able to accommodate signals with a high harmonic content. When the value of N is increased the accuracy improves but the bandwidth is restricted.

The disadvantage of using this technique for filtering lies in the requirement to store all the logic levels present in the previous N clock intervals. However, since only the first and last levels are used in the calculation at any one time, a serial-in/serial-out shift register of length N can be chosen as the storage medium.
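Eqn (1) maps directly onto software. A minimal sketch (the function name, stream length and generating probability are illustrative assumptions, with a Python deque standing in for the shift register):

```python
from collections import deque
import random

def moving_average_estimates(bits, N):
    """Track the generating probability of a Bernoulli bit stream with an
    N-stage serial-in/serial-out shift register, updating per eqn (1):
    P_new = P_old + (A_in - A_out) / N."""
    register = deque([0] * N, maxlen=N)  # the shift register, initially clear
    p_hat = 0.0
    estimates = []
    for bit in bits:
        oldest = register[0]             # logic level about to be shifted out
        register.append(bit)             # deque drops register[0] automatically
        p_hat += (bit - oldest) / N      # incremental update of eqn (1)
        estimates.append(p_hat)
    return estimates

random.seed(1)
bits = [1 if random.random() < 0.7 else 0 for _ in range(5000)]
est = moving_average_estimates(bits, N=100)
# The final estimate equals the direct average of the last 100 bits,
# and sits close to the generating probability 0.7.
```

Only the oldest and newest levels are touched each clock, mirroring the shift-register economy noted above.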


Using these registers a circuit was designed to implement a Moving Average, Fig. 2. For a standard error of 1%, the length of the register must be more than 2500 stages, and for a 5% error the number drops to 100 stages. Metal oxide semiconductor shift registers, although expensive, make this approach feasible for both high and low accuracy stochastic systems.

Figure 2. Circuit for moving average.

So far we have only considered one form of averaging. However, any procedure which takes N observations of data and then forms a summation of the weighted readings can be said to compute an average of the data. This is described in the equation

$$S_t = a_0 A_N + a_1 A_{N-1} + a_2 A_{N-2} + \ldots + a_{N-1} A_1$$

where S_t is the computed average at time t, i.e.

$$S_t = \sum_{i=0}^{N-1} a_i A_{N-i} \qquad (2)$$

For the average to be unbiased,

$$\sum_{i=0}^{N-1} a_i = 1 \qquad (3)$$

If the subscript i is taken as referring to clock intervals in a digital system, the weighting coefficients can be plotted on a graph against time, and the relative importance which the average assigns to consecutive readings can be readily assessed. When all the coefficients, a_i, are made equal, eqn. (2) describes a Moving Average:

$$\sum_{i=0}^{N-1} a_i = Na = 1$$

i.e.

$$a_i = \frac{1}{N} \quad \text{for all } i$$

Any combination of coefficients which obeys eqn. (3) can be used to form an average, and various examples are shown in Fig. 3. If N is taken as extending to infinity, equation (3) can be satisfied by making the coefficients a geometric sequence:

$$a_1 = \alpha a_0, \quad a_2 = \alpha^2 a_0, \quad \ldots, \quad a_{N-1} = \alpha^{N-1} a_0 \qquad (4)$$


Figure 3. Weighting coefficients that sum to unity.

The sum of a geometric sequence is given by

$$\sum_{i=1}^{N-1} a_i = a_0 \frac{1 - \alpha^N}{1 - \alpha}$$

If α < 1,

$$\lim_{N \to \infty} \sum_{i=0}^{N-1} a_i = \frac{a_0}{1 - \alpha} \qquad (5)$$

By letting $a_0 = 1 - \alpha$, eqn. (3) is satisfied, and the average, S_t, is given by

$$S_t = (1 - \alpha) A_N + \alpha (1 - \alpha) A_{N-1} + \alpha^2 (1 - \alpha) A_{N-2} + \ldots \qquad (6)$$

This method is known as exponential averaging or Exponential Smoothing, and its graph is drawn in Fig. 3 (d).

2.2. Exponential smoothing

The equation which describes Exponential Smoothing can be rewritten as

$$S_t = (1 - \alpha) A_t + \alpha \{(1 - \alpha) A_{t-1} + \alpha (1 - \alpha) A_{t-2} + \ldots\} = (1 - \alpha) A_t + \alpha S_{t-1} \qquad (7)$$

This equation states that the new estimate is equal to the previous estimate plus a correction term multiplied by (1 − α). The correction term is the difference between the latest value of the input, A_t, and the previously estimated value, i.e.

$$S_t = S_{t-1} + (1 - \alpha)(A_t - S_{t-1}) \qquad (8)$$

The effect that the correction term has on the value of S_t depends directly on the value of α. With α = 1, the estimate will be unchanged by the new information, and for α = 0, the estimate will simply take on the present value of the data. If a signal with a high noise content is to be averaged, the estimate, S_t, can be made


insensitive to random fluctuations by choosing α to be close to unity. If, on the other hand, the signal has a fairly low noise level, a lower value of α can be used, and any change in the input signal will be quickly reflected in the average output. As with the Moving Average method there is a trade-off between noise rejection and bandwidth.
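The recursion of eqn (7) is a one-line update in software. A hedged sketch (function name and the two α values are illustrative):

```python
def exponential_smooth(bits, alpha):
    """Exponential smoothing of eqn (7): S_t = (1 - alpha)*A_t + alpha*S_{t-1}.
    alpha near 1 gives heavy noise rejection; a smaller alpha tracks faster."""
    s = 0.0
    trace = []
    for a in bits:
        s = (1.0 - alpha) * a + alpha * s
        trace.append(s)
    return trace

# A constant input stream: both estimates converge on the input value,
# faster for the smaller alpha, more smoothly for the larger one.
ones = [1] * 2000
fast = exponential_smooth(ones, alpha=0.9)
slow = exponential_smooth(ones, alpha=0.99)
```

After ten clocks the α = 0.9 estimate is already well ahead of the α = 0.99 one, illustrating the noise-rejection/bandwidth trade-off.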

Figure 4. Exponential smoothing circuit.

The digital circuitry for implementing exponential smoothing is shown in Fig. 4. At any given time instant, t, the previous estimate, S_{t−1}, is stored by the counter in binary number form. The number is transformed into a digital sequence compatible with the input, A_t, and fed back to the input to act as a count-down signal. Equilibrium is established when count-up and count-down rates equalize. The value of α is governed by the length of the counter, being (1 − 1/N) for an N-state counter.

The circuit of Fig. 4 is the generalized form of the adaptive digital logic circuit (Addie) previously described (Miller et al. 1973). The feedback signal in the Addie is generated by comparing the binary number in the counter with a sequence of random numbers. The count-down signal is thus a stochastic sequence with the same randomness properties as the input. This unnecessary introduction of noise to the circuit characterized its operation and, in differentiating it from subsequent designs, it was renamed a Noise Addie.
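As a behavioural sketch only (not the authors' circuit; the counter width, seed and stream length are assumptions for illustration), the Noise Addie's random-number feedback can be simulated as an up/down counter:

```python
import random

def noise_addie(bits, n_bits, seed=0):
    """Behavioural model of a Noise Addie: an n-bit up/down counter.
    Each clock the counter contents are compared with a fresh random
    number to form the stochastic feedback (count-down) pulse; the
    counter moves up on an input pulse without feedback, and down on
    a feedback pulse without input."""
    rng = random.Random(seed)
    full = 2 ** n_bits
    count = full // 2                 # start mid-scale
    history = []
    for a in bits:
        feedback = 1 if rng.randrange(full) < count else 0
        if a == 1 and feedback == 0 and count < full - 1:
            count += 1
        elif a == 0 and feedback == 1 and count > 0:
            count -= 1
        history.append(count / full)  # the deterministic read-out
    return history

rng = random.Random(42)
bits = [1 if rng.random() < 0.8 else 0 for _ in range(20000)]
trace = noise_addie(bits, n_bits=8)
estimate = sum(trace[-5000:]) / 5000  # settle, then time-average the read-out
```

Equilibrium occurs when the up-rate p(1 − c) matches the down-rate (1 − p)c, i.e. at c = p, so the read-out settles near the generating probability.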

It has been noted that the Moving and Exponential Averages are not the only possible alternatives for dealing with the averaging problem, and indeed do not necessarily represent the optimum solution. They are of particular interest because their weighting operation can be easily duplicated using digital circuitry, and the circuitry, moreover, computes synchronously with the incoming sequence. In the following section the theoretical optimum for the weighting coefficients will be derived using the Calculus of Variations and the relationship of this solution to the Moving and Exponential averages will be established.

3. Optimization of averaging technique

There are three factors which describe the performance characteristics of an average. They are the sum of the coefficients, the average delay of the output and the variance of the output. In this context the output is the computed average of an input signal.

In order to ensure an unbiased output, the sum of the coefficients must be unity. This is evident from the case where the weights are applied to a constant-valued input. The average, given by the summation Σ a_i A_{N−i}, simplifies to A Σ a_i, which must equate with A for an unbiased answer. However, this is not critical, as a correction factor could be incorporated into the read-out procedure.


The operation of averaging implies a smoothing out of any local changes in the data. If new input data were suddenly to jump in value, the effect on the average would be a slow change, with the output only reaching the new value after the new data had dominated the averaging sample space. The output at an intermediate time, t clock periods later, is given by

$$S(t) = S(0) + p \sum_{i=0}^{t} a_i \qquad (9)$$

where p is the step change in the input value. The time-lag between input and output is a reciprocal measure of the speed of response of the particular averaging operation and is directly related to the weighting coefficients. The time-lag, L, can be defined in clock periods as

$$L = \frac{\sum_{i=0}^{\infty} i a_i}{\sum_{i=0}^{\infty} a_i} \qquad (10)$$

This formula states that the sum, S(t), computed with certain weighting coefficients, is the average of the data at a time L clock periods ago. For Exponential Smoothing, the value of L is the time constant of the curve, and for the Moving Average it is (N − 1)/2, where N is the number of clock periods averaged.
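Eqn (10) is easy to check numerically for the two averagers (the window length, α and truncation length below are illustrative):

```python
def time_lag(weights):
    """Average delay of eqn (10): L = sum(i * a_i) / sum(a_i)."""
    return sum(i * a for i, a in enumerate(weights)) / sum(weights)

# Moving Average over N clock periods: L = (N - 1)/2.
N = 101
lag_ma = time_lag([1.0 / N] * N)

# Exponential Smoothing, a_i = (1 - alpha)*alpha**i, truncated once the
# tail is negligible: L = alpha/(1 - alpha), the time constant of the curve.
alpha = 0.9
lag_exp = time_lag([(1 - alpha) * alpha**i for i in range(2000)])
```

For N = 101 the Moving Average lag comes out at 50 clock periods, and for α = 0.9 the exponential lag at 9, matching the closed forms quoted above.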

The output interface of a stochastic computer is designed to estimate the probability of occurrence of a logic '1' at any clock interval. It does this by summing the outcome of a number of independent samples, where the samples are taken from successive clock intervals. The variance of a sum of random variables is equal to the sum of the individual variances, i.e.

$$\sigma_S{}^2 = \sum_{i=0}^{N} \sigma_i{}^2 \qquad (11)$$

When each sample has its own weighting coefficient the equation becomes

$$\sigma_S{}^2 = \sum_{i=0}^{N} a_i{}^2 \sigma_i{}^2 \qquad (12)$$

As in our case, if the variances, σ_i², are all equal, eqn. (12) simplifies to

$$\sigma_S{}^2 = \sigma^2 \sum_{i=0}^{N} a_i{}^2$$

The accuracy of the output interface in estimating the input probability is directly governed by the sum of the squares of the weighting coefficients.
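The sum-of-squares factor can be verified numerically for the two weighting schemes considered (N and the truncation length are illustrative):

```python
def variance_factor(weights):
    """Variance reduction factor sigma_S^2 / sigma^2 = sum of squared
    weights, from the simplified form of eqn (12)."""
    return sum(a * a for a in weights)

N = 100
vf_ma = variance_factor([1.0 / N] * N)   # Moving Average: 1/N

# Exponential weights with alpha = 1 - 1/N, as in the counter circuit;
# closed form (1 - alpha)/(1 + alpha) = 1/(2N - 1), about half the
# Moving Average figure -- bought at the price of roughly double the lag.
alpha = 1 - 1 / N
vf_exp = variance_factor([(1 - alpha) * alpha**i for i in range(200000)])
```

This makes the trade-off of Section 3 concrete: matching the two averagers on variance forces them apart on lag, and vice versa.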

These three factors can now be expressed as equations of continuous functions without violating their relevance to the stochastic output operation:

$$K = \int_0^{\infty} a(\tau)\, d\tau \qquad (13)$$

$$L = C_1 \int_0^{\infty} \tau a(\tau)\, d\tau \qquad (14)$$


$$\sigma_S{}^2 = C_2 \int_0^{\infty} a^2(\tau)\, d\tau \qquad (15)$$

The constant C₁ is equal to 1/K, and C₂ is the variance of the input signal divided by K².

3.1. Stationary inputs

All the various forms of the weighting function, a(τ), will be able to compute an average to a certain degree of accuracy. The deciding factor of merit will be the time taken to achieve it, given similar inputs and initial conditions. The problem of finding the optimum solution is therefore one of minimizing the lag, L, given specified values for the variance, σ_S², and the constant, K. The solution involves using Lagrange Multipliers in the Calculus of Variations. A minimum for L is found by minimizing J in the equation

$$J = \int_0^{\infty} \left[ \tau a(\tau) + \lambda_1 a^2(\tau) + \lambda_2 a(\tau) \right] d\tau \qquad (16)$$

where λ₁ and λ₂ are undetermined coefficients known as Lagrange Multipliers. Replacing a(τ) by a, the object function can be written as

$$H(a, \tau) = \tau a + \lambda_1 a^2 + \lambda_2 a$$

Substituting this in the Euler Differential Equation

$$\frac{\partial H}{\partial a} - \frac{d}{d\tau} \left( \frac{\partial H}{\partial a'} \right) = 0$$

results in

$$\tau + 2\lambda_1 a(\tau) + \lambda_2 = 0 \qquad (17)$$

i.e.

$$a(\tau) = A \left( B - \frac{\tau}{T} \right), \qquad 0 \leqslant \tau \leqslant T \qquad (18)$$

Figure 5. Possible weighting functions of eqn. (18).

This is a linear equation and, by assuming only positive coefficients for a(τ), it can take either of the forms shown in Fig. 5. It is apparent that an increasing function has a greater value of L compared with a decreasing function of the same slope and duration. The optimum solution is therefore given by the decreasing form.


The values of A, B and T can be calculated from the side conditions. Let a_i = 1/N and K = 1. This results in

$$AT = \frac{2}{2B - 1}, \qquad A^2 T \left( B^2 - B + \tfrac{1}{3} \right) = \frac{1}{N} \qquad (19)$$

Substituting these values into eqn. (14) gives

$$L = \frac{4}{9} N \cdot \frac{9B^3 - 15B^2 + 9B - 2}{8B^3 - 12B^2 + 6B - 1} \qquad (20)$$

The right-hand side of the equation is solely dependent on B. A minimum value of L can be found by substituting values of B ranging from 1 to N. A value of 1 gives the function shown in Fig. 3 (a), while a value of N gives approximately the Moving Average function. It should be noted that

$$L = \tfrac{4}{9} N \quad \text{when } B = 1$$

$$L = \tfrac{1}{2} N \quad \text{when } B = N$$

with the value of L increasing to N/2 as B increases. The minimum form of a(τ) is therefore given by B = 1. Using eqn. (19) with this value of B results in

$$A = \frac{3}{2} \cdot \frac{1}{N}, \qquad T = \frac{4}{3} N$$

$$a(\tau) = \frac{3}{2N} \left( 1 - \frac{3\tau}{4N} \right) \qquad (21)$$

Comparing the various forms of weighting functions, it should be noted that the optimum solution shows only a 5.5% improvement over the Moving Average. Following the same procedure with the Exponential Smoother results in

$$L = \tfrac{1}{2} N \qquad (22)$$

This indicates that with these particular specifications the Moving and Exponential averages are identical in performance.

3.2. Non-stationary inputs

The performance criteria for the stochastic output interface have stressed the average delay incurred when trying to follow an input signal. This is important when the output tends towards a stationary value, and the interface must give an accurate reading as fast as possible. However, if the output is non-stationary,


the minimization of delay between the input signal and the output reading is a less important objective than the reduction of systematic distortion introduced by the averaging operation.

Consider the example of two waveforms, one varying at a constant speed, giving a straight-line graph of probability against time, and the other increasing exponentially, with the graph being a curve concave upwards. Both the Moving and Exponential averages will follow the first waveform, but with a certain time-lag, Fig. 6. The output is a faithful replica of the signal and in many instances the incurred phase lag is of little consequence. However, the second waveform causes problems, with the interface circuitry being unable to follow the signal successfully. This is due to the convolution of the input with the weighting coefficients, which effects a fundamental trade-off between the circuit's ability to track complex waveforms and its accuracy in measuring static inputs. These considerations, devoid of the time-lag constraint, give new criteria which can form the basis of a second optimization (Vincent 1973).

Figure 6. Response of averager to ramp input.

The output, S(t), is given by the equation

$$S(t) = \int_{-T_2}^{t - T_1} p(t - \tau)\, a(\tau)\, d\tau \qquad (23)$$

where T₁ is the time-lag between input and output, and T₂ can take on any value. The input, now assumed to have second and higher-order derivatives, can be expressed as a Taylor series:

$$p(t - \tau) = p(t) - \tau p'(t) + \tau^2 p''(t)/2! - \tau^3 p'''(t)/3! + \ldots \qquad (24)$$

It is clear that a symmetrical weighting function would have the advantage ofcancelling out all the odd terms in the Taylor series . The output reading, withits variance, would then be solely dependent on the even-order coefficients.
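The cancellation of the odd terms can be seen numerically: a one-sided average lags a ramp through the first-derivative term, while a symmetric (centred) window of the same length reproduces the ramp exactly. A sketch with an illustrative slope and window size:

```python
def causal_ma(x, N):
    """One-sided average of the N most recent samples."""
    return [sum(x[i - N + 1:i + 1]) / N for i in range(N - 1, len(x))]

def centred_ma(x, N):
    """Symmetric average over N samples centred on the current one (N odd)."""
    h = N // 2
    return [sum(x[i - h:i + h + 1]) / N for i in range(h, len(x) - h)]

slope, N = 0.01, 21
ramp = [slope * t for t in range(200)]
causal = causal_ma(ramp, N)    # lags: error = slope * (N - 1)/2 = 0.1
centred = centred_ma(ramp, N)  # the odd term cancels: no systematic error
```

The causal output trails the ramp by a constant slope·(N − 1)/2, exactly the time-lag of eqn (10), while the centred output matches the ramp sample for sample.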

The total read-out error will be the combined sum of the systematic distortion, D(t), and the random fluctuations, σ(t):

$$D(t) = p(t) - \int_{-T}^{+T} p(t - \tau)\, a(\tau)\, d\tau \Big/ \int_{-T}^{+T} a(\tau)\, d\tau \qquad (25)$$

$$\sigma_S{}^2(t) = \sigma^2(t) \int_{-T}^{+T} \{a(\tau)\}^2\, d\tau \Big/ \left\{ \int_{-T}^{+T} a(\tau)\, d\tau \right\}^2 \qquad (26)$$

With the weighting function symmetrical, the terms not cancelled in eqn. (24) are the even-order terms starting at the second. Assuming the signal, p(t), can


be adequately described by terms of below fourth order, eqn. (25) becomes

$$D(t) = \int_{-T}^{+T} \tfrac{1}{2} \tau^2 p''(\tau)\, a(\tau)\, d\tau \Big/ \int_{-T}^{+T} a(\tau)\, d\tau \qquad (27)$$

An extremum of the function a(τ) can now be sought by minimizing the systematic distortion, D(t), given fixed values for the variance, σ(t), and the integral of the function, K:

$$J = \int_{-T}^{+T} \left[ \{\tfrac{1}{2} \tau^2 p''(\tau)\, a(\tau)\}^2 + \lambda_1 \{a(\tau)\}^2 + \lambda_2 a(\tau) \right] d\tau \qquad (28)$$

To make the minimum value of J refer to the entire signal history and not just the time range of the function a(τ), average values of the signal's second derivative and variance can be substituted into eqn. (28).

Following the same procedure as before,

$$H(a, \tau) = -\tfrac{1}{2} \tau^2 \bar{p}''(t)\, a(\tau) + \lambda_1 \bar{\sigma}(t) \{a(\tau)\}^2 + \lambda_2 a(\tau) \qquad (29)$$

The Euler Differential Equation gives

$$-\tfrac{1}{2} \tau^2 \bar{p}''(t) + 2\lambda_1 \bar{\sigma}(t)\, a(\tau) + \lambda_2 = 0$$

i.e.

$$a(\tau) = k_1 + k_2 \tau^2$$

where k₁ and k₂ are constants. The function, a(τ), is either an increasing or a decreasing square-law, and with the former having a relatively high second moment, eqn. (27), the above equation can be rewritten as

$$a(\tau) = A \left( B - \frac{\tau^2}{T^2} \right)$$

Equating the variance with 1/N and the integral of a(τ) with 1 gives

$$AT = \frac{3}{2(3B - 1)} \qquad (30)$$

$$A^2 T = \frac{15}{2N} \cdot \frac{1}{15B^2 - 10B + 3} \qquad (31)$$

This results in eqn. (27) being rewritten as

$$D = \frac{9}{125} \cdot \frac{(15B^2 - 10B + 3)^2 (5B - 3)}{(3B - 1)^5} \cdot \frac{N^2}{4} \qquad (32)$$

To eliminate negative values from the weighting function, the constant B can take on any value from +1 upwards. With high values of B relative to unity, a(τ) becomes constant, and the function can then be equated with the Moving Average. The distortion factor, D, is at a minimum when B = 1. Substituting this into the three equations above results in

$$a(\tau) = \frac{5}{4} \cdot \frac{1}{N} \left( 1 - \frac{25\tau^2}{9N^2} \right) \qquad (33)$$


Calculating the systematic distortion gives

$$D = \frac{9}{125} N^2$$

This compares closely with the value 0.083 N² derived from the Moving Average function.

3.3. Specified inputs

The above calculations worked from the constraint that the weighting functions should give an output with a certain specified variance. The optimum solution, however, involves a balancing of the factors of random fluctuation and systematic distortion, the correct combination being dependent on the particular form of the weighting function. If the stochastic output interface is to be employed generally, dealing with stationary and non-stationary inputs, its first requirement will be to satisfy a specification for static accuracy. The circuitry's ability to track changing signals will then be determined. On the other hand, when the input can be specified, the interface can be designed with the aim of minimizing the total error, E, combining the systematic distortion and the variance:

$$E^2 = D^2 + \sigma_S{}^2$$

Substituting in eqns. (26) and (27),

$$E^2 = \left\{ \int_{-T}^{+T} \tfrac{1}{2} \tau^2 p''(t)\, a(\tau)\, d\tau \Big/ \int_{-T}^{+T} a(\tau)\, d\tau \right\}^2 + \sigma^2(t) \int_{-T}^{+T} \{a(\tau)\}^2\, d\tau \Big/ \left\{ \int_{-T}^{+T} a(\tau)\, d\tau \right\}^2$$

By making θ = τ/T and x(θ) = a(τ),

$$E^2 = \{p''(t)\}^2 \frac{T^4}{4} \left\{ \int_{-1}^{+1} \theta^2 x(\theta)\, d\theta \Big/ \int_{-1}^{+1} x(\theta)\, d\theta \right\}^2 + \frac{\sigma^2(t)}{T} \int_{-1}^{+1} \{x(\theta)\}^2\, d\theta \Big/ \left\{ \int_{-1}^{+1} x(\theta)\, d\theta \right\}^2 \qquad (34)$$

This error equation can be considered as a function of T. To minimize the expression, we differentiate with respect to T and equate the result to zero. This gives

$$T^5 = \frac{\sigma^2(t) \int_{-1}^{+1} \{x(\theta)\}^2\, d\theta}{\{p''(t)\}^2 \left\{ \int_{-1}^{+1} \theta^2 x(\theta)\, d\theta \right\}^2} \qquad (35)$$

For any particular weighting function, this equation gives the sample space or time range over which the average should be computed in order to achieve minimum error:

$$T^5 = \frac{9}{2} \left\{ \frac{\sigma(t)}{p''(t)} \right\}^2 \qquad (36)$$


The ratio of the two values, R_T, is then readily calculated; it compares with the value 1.2 obtained from eqn. (33), where the variances were made equal.

In the foregoing calculations integral equations have been used to solve problems analogous to the stochastic interface problem. The analysis has shown that:

(a) for fast estimation of steady-state probabilities the Exponential and Moving Averages are equally viable, both techniques being close approximations to the theoretical optimum solution;

(b) in dealing with non-stationary stochastic sequences the Moving Average is the better choice, due to its symmetrical weighting function and its similarity to the derived optimum.

3.4. Induced correlation

The series generated at the output of an averaging operation possesses statistics which are markedly different from those of the original input. As already shown in eqns. (14) and (15), the variance is decreased and a time-lag incurred. However, just as important is the loss of independence between the members of the series. In the act of smoothing the data a degree of inertia is introduced to the signal and its maximum rate of change becomes strictly limited. Neighbouring members of the series have comparable values and the data are said to be positively correlated.

The induced correlation is a function of the weighting coefficients of the averager. Taking the example of a short-term averager of sample size N and time delay M,

$$S_{t+M} = \sum_{i=1}^{N} a_i A_{t+M+i} \qquad (37)$$

It follows that

$$\text{cov}\,(S_t S_{t+k}) = E\{a_1 A_{t+1} + a_2 A_{t+2} + \ldots\}\{a_1 A_{t+k+1} + a_2 A_{t+k+2} + \ldots\} \qquad (38)$$

Assuming the A's are independent, this reduces to

$$\text{cov}\,(S_t S_{t+k}) = E[A^2] \{a_1 a_{k+1} + a_2 a_{k+2} + \ldots + a_{N-k} a_N\}$$

and the kth autocorrelation coefficient of the series is given by

$$\rho_k = \text{cov}\,(S_t S_{t+k}) \Big/ E[A^2] \sum_{i=1}^{N} a_i{}^2$$


i.e.

$$\rho_k = \frac{\sum_{i=1}^{N-k} a_i a_{i+k}}{\sum_{i=1}^{N} a_i{}^2} \qquad (39)$$

This equation shows that the averaged series possesses non-vanishing autocorrelations up to order N. Moreover, ρ_k is always positive and can be quite large for small values of k.

The Moving Average has weights all equal to 1/N. Applying this to eqn. (39) gives

$$\rho_k = 1 - \frac{k}{N}$$

For the Exponential Smoother the value of N extends to infinity and

$$a_i = (1 - \alpha) \alpha^i$$

The right-hand side of eqn. (38) becomes

$$(1 - \alpha)^2 \{\alpha \cdot \alpha^{k+1} + \alpha^2 \cdot \alpha^{k+2} + \alpha^3 \cdot \alpha^{k+3} + \ldots\}$$

This condenses to

$$(1 - \alpha)^2 \alpha^k \sum_{i=1}^{\infty} \alpha^{2i}$$

Thus eqn. (39) can be rewritten as

$$\rho_k = \alpha^k$$

When the Exponential Smoother has an n-stage counter, α equals (1 − 1/2ⁿ).

Thus both averaging techniques impart a substantial degree of serial dependency on their output signals. It is tempting to treat the correlation as a side issue of academic interest only. Unfortunately its presence has a significant effect on the behaviour of the signal, often giving it the appearance of a systematic oscillation. Although seemingly deterministic, it is the predicted result of averaging random events, and is known as the Slutzky-Yule effect (Kendal 1973).
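Eqn (39) can be evaluated directly from the weights, reproducing ρ_k = 1 − k/N for the Moving Average and ρ_k = α^k for the Exponential Smoother (the window length, lag k and α below are illustrative):

```python
def induced_autocorr(weights, k):
    """Induced autocorrelation of eqn (39):
    rho_k = sum(a_i * a_{i+k}) / sum(a_i**2)."""
    num = sum(weights[i] * weights[i + k] for i in range(len(weights) - k))
    return num / sum(a * a for a in weights)

# Moving Average, all weights 1/N: rho_k = 1 - k/N.
N = 50
rho_ma = induced_autocorr([1.0 / N] * N, k=10)

# Exponential Smoother, a_i = (1 - alpha)*alpha**i, truncated once the
# tail is negligible: rho_k = alpha**k.
alpha = 0.95
rho_exp = induced_autocorr([(1 - alpha) * alpha**i for i in range(5000)], k=10)
```

Both coefficients are large at small lags, which is exactly the serial dependency that produces the apparent oscillations of the Slutzky-Yule effect.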

4. Conclusions

The optimal estimation of digital stochastic sequences has been considered and a Calculus of Variations approach used to prove that for stationary stochastic sequences Exponential and Moving Average algorithms are equally viable. For non-stationary sequences the Moving Average technique is superior, due to its symmetrical weighting function. Although the results have been specifically derived for the stochastic computer output interface optimization, they are equally applicable to the general problem area of on-line estimation of stochastic sequences in control, simulation and forecasting.

ACKNOWLEDGMENTS

The authors wish to acknowledge the award of a U.K. S.R.C. grant in support of the work described. One of the authors (A. J. Miller) also wishes to acknowledge the support of an S.R.C. studentship.


REFERENCES

BROWN, R. G., 1963, Smoothing, Forecasting and Prediction of Discrete Time Series (Prentice-Hall).
GAINES, B. R., 1967, AFIPS, 30, SJCC, p. 149.
KENDAL, M., 1973, Time Series Analysis (Griffin).
MARS, P., 1976, I.E.E. Colloquium on Parallel Digital Computing Methods, Paper 5.
MILLER, A. J., BROWN, A. W., and MARS, P., 1973, Electron. Lett., 9, 500; 1974 a, Int. J. Electron., 37, 637; 1974 b, Electron. Lett., 10, 419; 1975, Ibid., 11, 326.
VINCENT, C. H., 1973, Random Pulse Trains, their Measurement and Statistical Properties (Peter Peregrinus).
