
STOCHASTIC INFERENCE FOR SPATIAL STATISTICS

by

Lars Smedegaard Andersen

TECHNICAL REPORT No. 120

February 1988

Department of Statistics, GN-22

University of Washington

Seattle, Washington 98195 USA


Stochastic inference for spatial statistics.

Lars Smedegaard Andersen
Afdelingen for Teoretisk Statistik

Aarhus Universitet, Denmark

Present address: Department of Statistics, GN-22,
Padelford Hall, University of Washington, Seattle, WA 98195, USA

Abstract

Statistical image analysis and analysis of lattice data on lattices of dimension two or higher have assumed increasing interest over the past few years. For image analysis especially, the emphasis has so far been focused on producing fast and reliable algorithms for restoration that gradually improve the visual quality of the resulting restored image. In this paper we develop and study algorithms with precise mathematical optimality properties that facilitate the use of classical statistical methods for hypothesis testing, estimation, classification, and assessment of uncertainty.

A foundation for statistical inference for image analysis is developed and designed to fit the most commonly used image models: Markov random fields corrupted by a local spatially independent noise process. This foundation is found to be general enough to accommodate inference requiring only that decisions could be made had the true image been observed, and as such is suitable also for highly specialized decision problems. Applications beyond the basic assumptions are explored.

American Mathematical Society 1980 subject classification.

Key words: bootstrap estimation, optimality.


1. Introduction

In recent years the field of image analysis has to some degree focused attention on the investigation of iterative procedures that gradually improve the visual quality of a given corrupted image: Besag (1986), Geman and Geman (1984), Geman and Graffigne (1986), and others. These procedures unfortunately lack the possibility of assessing the variability of the restored image, let alone drawing inference for the underlying signal. We may draw attention to a method defined by Haslett and Horgan (1986), and commented upon by Derin et al. (1984), which produces an approximately optimal image restoration for Markov mesh random fields corrupted additively with Gaussian errors by applying results from the one dimensional special case. In this paper we design a method which is entirely spatial, which avoids the limiting assumption of Gaussianity of errors, and for which the approximation is stochastic, introduced by a Monte Carlo approximation, and can thus be adjusted to any desired precision.

What the appropriate measures of success for restoration algorithms are is open to debate. We, however, believe there is a definite need for such measures to aid the statistician in choosing candidates for image restoration procedures. It is important, though, to keep in mind the purpose of restoring the images: the relevant success measure is the extent to which the restoration procedure meets this purpose, and usually there is no obvious choice. In the next section we shall introduce resampling methods that make it possible to overcome these difficulties; the section is devoted to the development of resampling techniques for image models and to the deduction from these of optimal restoration schemes under different objectives. The more general inference problems are illustrated in section 3, largely through examples. Section 4 includes concluding remarks and ideas on how the methodology developed in this paper can be generalized to more complex image models, such as the hidden Markov models and pattern synthesis models (Grenander 1981, 1984).

1.1 The basic image models.

We shall throughout this paper work in the following set-up. Let T ⊂ Z^d be a finite rectangular lattice, and let x_T be an unobserved image on T; we assume that x_T is a realization of a stationary discrete Markov random field X_T with respect to a translation invariant neighborhood system. We avoid edge effects by thinking of T as wrapped around a torus in R^{d+1}. The observation available is y_T, and we assume that

    p(y_T | x_T) = ∏_{t ∈ T} p(y_t | x_t),     (1.1)

and furthermore we assume that the sets of possible values of X_t and Y_t are A and B respectively, where A and B are finite sets, and the matrix U_t = p(Y_t | X_t) is assumed to be square and invertible.
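The observation mechanism (1.1) is straightforward to simulate. Below is a minimal numpy sketch, assuming a symmetric binary channel as the matrix U; the flip probability 0.2 and the 8 x 8 lattice are illustrative choices, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical channel matrix U with U[x, y] = p(Y_t = y | X_t = x);
# here a symmetric binary channel with illustrative flip probability 0.2.
p = 0.2
U = np.array([[1 - p, p],
              [p, 1 - p]])

def corrupt(x_T, U, rng):
    """Draw each y_t independently from the row of U selected by x_t, as in (1.1)."""
    y_T = np.empty_like(x_T)
    for t, x in np.ndenumerate(x_T):
        y_T[t] = rng.choice(U.shape[1], p=U[x])
    return y_T

x_T = rng.integers(0, 2, size=(8, 8))   # a stand-in "true" image
y_T = corrupt(x_T, U, rng)
```

With U invertible, as required above, the marginal distribution of Y_t determines that of X_t, which is what makes estimation from the observed image feasible.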

Contrary to the usually imposed assumption of independent identically distributed errors, independent of X_T, (1.1) requires only spatial independence of the error mechanism. For clarity of exposition we shall consider only translation invariant error mechanisms, U_t = U for all t. The assumptions that the error mechanism is translation invariant and square are unnecessary restrictions, as are the special assumptions on the Markov random field; they are made for simplicity of exposition only. We have so far assumed that all parameters are known; since this is of course usually not the case, we need to estimate the parameters on the basis of the observed image. Adopting an empirical Bayes viewpoint for the Markov random field, inference for x_T should thus be carried out using the posterior distribution p(x_T | y_T), and estimation of parameters for the prior (often called hyper parameters) should be based on the marginal distribution p(y_T) of the observed image. Knowing the matrix U means that the device through which we receive the signal is known. Non-parametric models for the image process can also be estimated, using similar methods for direct estimation of the local conditional distributions. When resampling from the non-parametric models we shall use the term bootstrap resampling; since the grey scales A and B are finite, choosing a non-parametric model is in reality the same as expanding the number of parameters in a parametric model.

The image models described above are usually chosen for their simplicity rather than through realistic considerations of the mechanisms generating the images. For more structured models an alternative was proposed by Grenander (1981) that allows for locally highly specialized models; these models, even though appealing, have been studied only sporadically in the literature, mainly because of the even greater analytical problems than for the Markov random field models. There is, however, no doubt that applications of far more advanced and specialized image models than the ones we study here will be introduced in the future.

2. Resampling methods for Markov random fields.

Given a Markov random field, the local conditional distributions are of the form

    p(x_t | x_{∂t}) = Z_t^{-1} exp( Σ_{c ∈ C : t ∈ c} V_c(x_t ; x_{∂t}) ),     (2.1)

where C is the set of cliques with respect to the neighborhood system, the potential functions V_c : A → R are given, and Z_t is the normalizing constant making (2.1) a probability distribution on A. The corresponding full simultaneous (Gibbs) distribution is

    p(x_T) = Z^{-1} exp( Σ_{c ∈ C} V_c(x_c) ).     (2.2)


The Markov random field X_T, or at least a Markov random field with the same local conditional distributions as given by (2.1), can be simulated by the Metropolis algorithm:

1. Choose a starting value x_T^{(0)} from the set of configurations A^T.

2. Iteratively with n, pick a pixel t at random (such that all pixels are visited infinitely often), let x_{T\{t}}^{(n)} = x_{T\{t}}^{(n-1)}, and generate x_t^{(n)} using (2.1).

It can be proved, by a Markov chain argument, that x_T^{(n)} is a recurrent Markov chain with state space A^T. The stable distribution of x_T^{(n)} (as n → ∞) exists and is a Markov random field with local conditional distributions (2.1); usually the full simultaneous distribution of X_T is given by (2.2).
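The single-site updating scheme above can be sketched as follows. The logistic form of the local conditional distribution is an illustrative stand-in for (2.1), with hypothetical parameters a and b, not a model from the text.

```python
import numpy as np

def local_conditional_one(x, i, j, a=-2.0, b=1.0):
    """p(x_t = 1 | neighbours) for an illustrative binary MRF with four
    nearest neighbours, wrapped around a torus as in the text; a and b are
    hypothetical potential parameters, not values from the paper."""
    n, m = x.shape
    n1 = (x[(i - 1) % n, j] + x[(i + 1) % n, j]
          + x[i, (j - 1) % m] + x[i, (j + 1) % m])
    e = np.exp(a + b * n1)
    return e / (1.0 + e)

def run_chain(x, rng, n_updates):
    """Step 2 of the algorithm: visit a random pixel, keep the rest of the
    image fixed, and redraw the pixel from its local conditional distribution."""
    n, m = x.shape
    for _ in range(n_updates):
        i, j = rng.integers(n), rng.integers(m)
        x[i, j] = rng.random() < local_conditional_one(x, i, j)
    return x

rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=(16, 16))   # step 1: a starting configuration
x = run_chain(x, rng, 5000)             # x is (approximately) a draw from the field
```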

2.1 Local conditional distributions.

As we argued in section 1, the relevant distribution for restoration of the observed image y_T is that of X_T | Y_T, denoted here for convenience by p_{y_T}(x_T); formally this may be written as

    p_{y_T}(x_T) = p(y_T | x_T) p(x_T) / p(y_T),     (2.3)

whereby the local conditional distributions of X_T conditional on Y_T are given by

    p_{y_T}(x_t | x_{∂t}) = p(y_t | x_t) p(x_t | x_{∂t}) / Σ_{x' ∈ A} p(y_t | x') p(x' | x_{∂t}).     (2.4)
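Replicate samples from the posterior can then be generated by running the same single-site updates with the local conditionals (2.4). Below is a sketch under an illustrative logistic binary prior and a symmetric binary channel; all parameter values are hypothetical.

```python
import numpy as np

def posterior_local_one(x, y, i, j, p=0.2, a=-2.0, b=1.0):
    """p_{y_T}(x_t = 1 | neighbours) as in (2.4): channel likelihood times the
    prior local conditional, renormalized over A = {0, 1}. The logistic prior
    and the symmetric channel with flip probability p are illustrative."""
    n, m = x.shape
    n1 = (x[(i - 1) % n, j] + x[(i + 1) % n, j]
          + x[i, (j - 1) % m] + x[i, (j + 1) % m])
    e = np.exp(a + b * n1)
    prior1 = e / (1.0 + e)
    w1 = ((1 - p) if y[i, j] == 1 else p) * prior1
    w0 = ((1 - p) if y[i, j] == 0 else p) * (1.0 - prior1)
    return w1 / (w0 + w1)

def replicate(y, rng, n_updates=4000):
    """One resampling replicate from X_T | Y_T = y, by single-site updates
    with the posterior local conditionals."""
    x = y.copy()
    n, m = x.shape
    for _ in range(n_updates):
        i, j = rng.integers(n), rng.integers(m)
        x[i, j] = rng.random() < posterior_local_one(x, y, i, j)
    return x

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=(12, 12))          # stand-in observed image
reps = [replicate(y, rng) for _ in range(5)]   # replicate samples from X_T | Y_T = y
```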

Since the local conditional distributions (2.4) are determined by the observed image alone, and not by the unobserved true image, we can generate as many replicate samples as we wish from X_T | Y_T = y_T.

2.2 Local optimality of resampling restoration schemes.

Contrary to the restoration algorithms of simulated annealing and iterated conditional modes, the resampling restoration schemes typically will not produce a global or local maximum posterior likelihood estimate. For ordinary forecasting problems these approaches are justified by reference to the Gaussian time series; for image analysis, however, the posterior distribution is remarkably flat close to its optimum, and even the most likely configuration in A^T has very small posterior probability. The optimality results we are to state and prove here will be of a definite subjective nature, such as minimum mean squared error, minimum misclassification restoration, etc. This, however, is also the key to the pixel to pixel variation measure, since we acquire more information about the distribution of X_T | Y_T than just a point estimate of the mode. By introduction of objective functions and coupling of pixels we are in the position where we can make minimum cost decisions, and for reasonably small blocks of pixels, B, say, the full distribution of X_B | Y_T can be well approximated by simple Monte Carlo.

Definition 2.1

A structure identifier, F, say, is a function from A^T ( = ×_{t ∈ T} A_t ) to M, where M is a metric space.

Examples of structure identifiers are sample averages, mean squared sample error, parameter estimators, boundary characteristics, etc. Given a structure identifier, its posterior distribution can be approximated by resampling, much as in forecasting. Below we restrict attention to a class of structure identifiers for which the term local is to be understood in a broad sense - coupling of pixels is allowed.

Definition 2.2

A local objective function is a real valued multilinear structure identifier, F, depending on x_T, such that

    F_{x_T}(x'_T) = Σ_{t ∈ T} f_{t,x_t}(x'_t)

and

    f_{t,x}(x) = 0,  f_{t,x}(y) ≥ 0,  for all x, y ∈ A and t ∈ T.

With this definition in hand we are able to state and prove theorems of optimality for the

resampling estimation schemes.

Theorem 2.1

Let x*_1, ..., x*_n be resampling replicates from (2.3), using the Metropolis algorithm as described above. Given a local objective function defined by f : A^2 → R, there exists an estimator ψ_n, computable from the replicates, such that

    E{ F_{X_T}(ψ_n) } → min_ψ E{ F_{X_T}(ψ(Y_T)) }   as n → ∞,

where the minimum is taken over all functions ψ of y_T, and the expectation is over (X_T, Y_T), that is E{ · } = E{ E{ · | Y_T } }.

Proof

Writing

    E{ F_{X_T}(ψ_n) } = Σ_{t ∈ T} E{ E{ f_{t,X_t}((ψ_n)_t) | Y_T } },

it suffices to prove the convergence pixel by pixel. The right hand side is minimized by choosing, for each t, the value ŷ minimizing

    Σ_{x ∈ A} f_{t,x}(y) p(X_t = x | Y_T).     (2.5)

Since p̂_t, the bootstrap empirical distribution function for X_t, converges to p(X_t = · | Y_T) with probability one, the value ŷ_n minimizing

    Σ_{x ∈ A} f_{t,x}(y) p̂_t(x)

converges with probability one to ŷ, whereby the theorem follows, since A and B are finite. □

To appreciate the above theorem fully we turn to a few examples. Let f_{t,x}(y) = 1{x ≠ y}; then the majority vote estimate (among the resampling replicates) converges with probability one to the minimum mean misclassification estimate, as the number of iterations per resampling replicate and the number of bootstrap samples tend to infinity. In the case where A and B are ordered grey scales, mean squared error optimal restoration requires further assumptions for f and the local conditional distributions. □

Corollary 2.1

Let f_{t,x}(y) = (x - y)^2, and assume that the minimization of (2.5) has a unique solution for each t. Then there exists an estimator ψ_n, computable from the replicates, such that

    E{ F_{X_T}(ψ_n) } → min_ψ E{ F_{X_T}(ψ(Y_T)) }   as n → ∞,


where the minimum is taken over functions ψ : A^T → A^T, and the mean is over (X_T, Y_T).

Proof

As above, only now the minimization of (2.5) is performed through differentiation. As to

conditions sufficient for the existence and uniqueness of the stable distribution for the

simulation Markov chain see Andersen (1988,b) and Revuz (1975).

□
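The two pixelwise estimators above can be sketched directly from a stack of replicates: majority vote corresponds to Theorem 2.1 with 0-1 loss, and the sample mean to Corollary 2.1 with squared error loss. The tiny replicate arrays below are illustrative only.

```python
import numpy as np

def majority_vote(replicates):
    """Pixelwise majority vote over the replicates: the estimator of
    Theorem 2.1 for the 0-1 loss f_{t,x}(y) = 1{x != y}."""
    stack = np.stack(replicates)              # shape (n_replicates, rows, cols)
    return (stack.mean(axis=0) >= 0.5).astype(int)

def posterior_mean(replicates):
    """Pixelwise sample mean over the replicates: the estimator of
    Corollary 2.1 for squared error loss."""
    return np.stack(replicates).mean(axis=0)

# Three tiny illustrative replicates (2 x 2 binary images).
reps = [np.array([[0, 1], [1, 1]]),
        np.array([[0, 1], [0, 1]]),
        np.array([[1, 1], [0, 1]])]
x_hat = majority_vote(reps)   # -> [[0, 1], [0, 1]]
```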

As a special case of corollary 2.1 we get a resampling approximation to the classical two-filter and Kalman filter for time series models that are special cases of Markov random fields on one dimensional lattices (Solo (1982) and Meinhold and Singpurwalla (1983)), along with a natural generalization of these methods to non Gaussian cases and to spatial Gaussian and non Gaussian Markov random fields, see Andersen (1988,b), Ripley (1981), Besag (1974, 1975) and Green, Jennison and Seheult (1985). For Gaussian Markov random field models the mean squared error optimal restoration produces an approximation to the maximum posterior probability estimate, i.e., the same as simulated annealing.

Note that coupling of pixels can be exploited in special cases; assume for instance that after obtaining the restored image special interest is focused on a region, B, say. By resampling again, an optimal estimate can be found for X_B, or for a structure identifier thereof, as one can resample marginally or conditionally on the restoration outside B. This can be computationally consuming for some choices of structure identifier.


The local asymptotic measure of performance of the restoration,

    lim_{n→∞} E{ f_{t,X_t}(x̂_t) | Y_T },

is consistently estimated, but in general not unbiasedly, by

    Σ_{x ∈ A} f_{t,x}(x̂_t) p̂_t(x),

where p̂_t(x) is the observed frequency of the event X_t = x among the resampling replicates.
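For 0-1 loss the estimate above reduces to one minus the observed frequency of agreement with the restoration, which is immediate to compute. The replicates and restoration below are illustrative.

```python
import numpy as np

def local_error_estimate(replicates, x_hat):
    """Estimated local misclassification rate: for 0-1 loss the sum over x of
    f_{t,x}(x_hat_t) p_hat_t(x) reduces to 1 - p_hat_t(x_hat_t), with p_hat_t
    the observed frequency among the replicates."""
    stack = np.stack(replicates)
    agree = (stack == x_hat).mean(axis=0)   # p_hat_t(x_hat_t)
    return 1.0 - agree

# Illustrative replicates and restoration for a 1 x 2 image.
reps = [np.array([[0, 1]]), np.array([[0, 0]]),
        np.array([[0, 1]]), np.array([[1, 1]])]
x_hat = np.array([[0, 1]])
rates = local_error_estimate(reps, x_hat)   # -> [[0.25, 0.25]]
```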

In practice we need to choose the number of samples from X_T | Y_T = y_T, and in doing this we also set the internal precision of the stochastic approximation by Monte Carlo. In general it is our belief that fewer samples of higher quality are preferable to more samples of lesser quality, which in practice amounts to letting the local updating procedure of the Metropolis algorithm run longer and taking fewer samples. Using these procedures for simple structure identifiers does not, however, require excessive computer time.

2.3 Examples.

Using the Metropolis algorithm we shall present two examples, both employing artificially generated data from a Markov random field. In order to study more structured images, similar procedures can be developed for the pattern synthesis models, see Grenander (1981), a problem we plan to look into in the near future.

Example 2.1

Let A = B = {0, 1}, assume that X_T is a Markov random field, and assume that the neighborhood system is isotropic with nearest and next nearest neighbors, so that

    p(x_t = 1 | x_{∂t}) = exp( α_0 + α_1 n_1(t) + α_2 n_2(t) ) / ( 1 + exp( α_0 + α_1 n_1(t) + α_2 n_2(t) ) ),


where n_1(t) and n_2(t) are the numbers of ones among the first four nearest neighbors of t and the second nearest neighbors, respectively. The corresponding local conditional distributions (2.4) are easily calculated.

We choose to set the parameters at α = (-5.2, 1.2, 1.0) and p = 0.2. Figure 2.1 a) shows the true signal, b) shows the corrupted image, corrupted through a symmetric binary noisy channel, c) shows the minimum misclassification restoration estimate of the true signal, and d) shows the minimum mean squared error restoration, both using 200 bootstrap samples and 50,000 local updates per replicate. Finally, e) and f) show the estimated local misclassification rates, and the highest 5 percent estimated local misclassification rates overlaid on the estimated image. Not surprisingly, figure 2.1 e) picks out the boundary with high success.

Insert figure no. 2.1 here

Example 2.2

This example differs from the previous one by having three unordered grey levels. For restoration purposes local smoothing algorithms are known to work very well for ordered grey levels; for categorical data, however, local smoothing becomes less useful. We have chosen the local conditional distributions such that

    p(x_t = j | x_{∂t}) ∝ exp( β_1 n_1^{(j)}(t) + β_2 n_2^{(j)}(t) ),   j = 1, 2, 3,     (2.7)

where n_1^{(j)}(t) and n_2^{(j)}(t) correspondingly are the numbers of pixels of grey level j among the nearest and second nearest neighbors of t. The error process is described by the symmetric noisy channel

    p(y_t | x_t) = 1 - p  if y_t = x_t,  and  p/2  otherwise.


Figure a), the original image, was generated from (2.7) using 5 x 10^6 local updates, with β_1 = 1.05 and β_2 = 0.9. Since T is wrapped around a torus, we see only a few major connected patches. b) is the corrupted image, p = 0.35; the minimum misclassification restoration, using 200 resampling replicates and 5 x 10^4 local updates per replicate, shows 150 misclassifications, d). Figure e) shows a local variability measure, again picking out the interior boundaries very successfully; f) shows this "estimated" interior boundary process overlaid on the restored image.

Insert figure no. 2.2 here

3. Stochastic inference.

The practical inference problems of spatial statistics are many-sided and in image analysis often highly context dependent; there are, however, basic problems in formalized hypothesis testing, since the hypotheses are not easily described in terms of the Markov random field models. A counter-example is that of parameter reducing hypotheses for the Gibbsian parameters; the estimation problem was studied in Andersen (1988,a), and we shall assume that the distortion mechanism permits consistent estimation of the Gibbsian parameters under both the full model and the null hypothesis, using the maximum quasi likelihood estimate.

3.1 Examples.

Example 3.1

Let F : A^T → R be a structure identifier. By resampling under the two estimated models, the full model and the null hypothesis, a test can be based on a comparison of the two resampling distributions of F. The test is approximate, and so is its level, since only one realization of Y_T is available; this is, however, of little practical importance.

As a natural choice of structure identifier we suggest the total quasi likelihood,

    (3.1)

where the last term on the right hand side is the pseudo likelihood function, Besag (1974, 1975). This choice of structure identifier produces a test that will be sensitive towards hypotheses corresponding to models changing the posterior probabilities for the restored image significantly. As to applications for image analysis it will often be more relevant, though, to use a structure identifier chosen specifically for the context.

□
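The pseudo likelihood term of (3.1) is a sum of log local conditional probabilities and is cheap to evaluate. Below is a sketch for an illustrative binary MRF with four nearest neighbours on a torus; the parameters a and b are hypothetical.

```python
import numpy as np

def log_pseudo_likelihood(x, a, b):
    """Log pseudo likelihood of Besag (1974, 1975): the sum over pixels of the
    log local conditional probability, here for an illustrative binary MRF
    with four nearest neighbours on a torus and hypothetical parameters a, b."""
    n, m = x.shape
    total = 0.0
    for i in range(n):
        for j in range(m):
            n1 = (x[(i - 1) % n, j] + x[(i + 1) % n, j]
                  + x[i, (j - 1) % m] + x[i, (j + 1) % m])
            p1 = 1.0 / (1.0 + np.exp(-(a + b * n1)))
            total += np.log(p1 if x[i, j] == 1 else 1.0 - p1)
    return total

x = np.array([[0, 0, 1], [0, 1, 1], [1, 1, 1]])
lpl = log_pseudo_likelihood(x, a=-1.0, b=0.8)
```

Maximizing this quantity in (a, b) gives the maximum pseudo likelihood estimate; here it is only used to evaluate a configuration.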

The classification problem and pattern recognition problems are solved more easily in principle. They require the ability to classify any given directly observed image, as well as a cost function for misclassification.

Example 3.2

Consider figure 3.1 below, a) showing the original image, generated by the same Markov random field as in example 2.2; superimposed on a), in the smaller frame, is a black object drawn by hand. Figure 3.1 b) shows the corrupted image: the original image received through a symmetric noisy channel with p = 0.35.

In c) is shown the restored image, using the minimum misclassification estimate based on the posterior distribution of X_T given Y_T. We consider three structure identifiers, F_1, F_2 and F_3, each estimating the volume of the black object: F_1 is the area of the largest connected black set in T intersecting B, F_2 is the area of the largest connected black set contained in B, and F_3 is the area of the largest connected set intersecting B.

Figures 3.1 d), e) and f) show the empirical distribution functions of F_1, F_2 and F_3, estimated by counting over the resampling replicates. By repetition of the experiment from the same original image it seems that F_1 and F_2 are biased toward smaller volumes, while F_3 seems biased toward larger volumes; the biases of course depend on the actual realization of the error process. The true volume of B is 163, and the volume of the estimated object B̂ is 174, different from E(F_1 | Y_T), E(F_2 | Y_T) and E(F_3 | Y_T).

□

Insert figure 3.1 here

3.2 Optimality of decision rules.

Having chosen the Markov random field image models as the general framework, we must accept that all configurations over T are possible; the model parameters cannot be used to confine the true image to have a specific structure. For pattern recognition and classification purposes this characteristic makes the introduction of structure identifiers almost impossible to avoid; in order to make decisions on the basis of images, using the Markov random field models, we must be able to classify any given configuration on A^T. As usual we also require the knowledge of a cost function.

It is a serious disadvantage of the Markov random field models that a given global feature cannot be imposed directly; a question such as whether the size and shape of an object meet given requirements cannot immediately be translated into a statistical hypothesis about the model parameters. For classification purposes we can resample conditionally on Y_T, provided that any configuration is classifiable in terms of a structure identifier.

Theorem

Let F : A^T → {1, ..., k} be a structure identifier, and let W be a cost function such that W is invertible, W_{i,i} = 0 and W_{i,j} ≥ 0. Then the classification rule determined by the minimum cost identifier of F(x*_i), i = 1, ..., n, with respect to p(F(X_T) = j | Y_T), j = 1, ..., k, is the minimum cost classifier of x_T into {1, ..., k}.

Proof

The proof is trivial, since we can approximate the distribution function of F(X_T) | Y_T arbitrarily well; see theorem 2.1.

□

Insert figure 3.2 here

Example 3.3

Consider figure 3.2: in a) is a realization of the Markov random field from example 2.2, where two black patches are separated by only two pixels; b) shows the corrupted image and c) the minimum misclassification restoration of a), misclassification rate: 1.96 percent. We are interested in a minimum misclassification answer to the question of whether the two black patches are connected, with 0-1 misclassification costs imposed on the structure identifier F : A^T → {0, 1} defined by

F = 1 if the largest set of connected black pixels intersecting the top frame and the

largest set of connected black pixels intersecting the lower frame of figure a) are the

same.

F = 0 otherwise.

In 135 of the 200 replicates F = 0 was observed. The procedure thus correctly classified the observed image in figure 3.2 b).
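A structure identifier of this connectivity type can be computed by breadth-first search over black components. In the sketch below the top and bottom rows stand in for the two frames of figure 3.2, and the classification rule of the theorem reduces to comparing the resampled frequencies of F = 0 and F = 1; the tiny replicates are illustrative.

```python
from collections import deque

def component_of(img, start):
    """The 4-connected black component containing `start`, by breadth-first search."""
    n, m = len(img), len(img[0])
    seen, queue = {start}, deque([start])
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < m and img[ni][nj] == 1 and (ni, nj) not in seen:
                seen.add((ni, nj))
                queue.append((ni, nj))
    return seen

def F(img):
    """Structure identifier: 1 if some black component meets both the top and the
    bottom row (stand-ins for the two frames of figure 3.2), 0 otherwise."""
    n, m = len(img), len(img[0])
    for j in range(m):
        if img[0][j] == 1:
            comp = component_of(img, (0, j))
            if any(i == n - 1 for i, _ in comp):
                return 1
    return 0

# The relative frequency of F = 1 among the replicates approximates
# p(F(X_T) = 1 | Y_T); illustrative 2 x 2 replicates.
reps = [[[1, 0], [1, 0]],   # connected top to bottom: F = 1
        [[1, 0], [0, 1]],   # not connected:           F = 0
        [[0, 1], [0, 1]]]   # connected:               F = 1
votes = [F(r) for r in reps]   # -> [1, 0, 1]
```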

Figure 3.3 shows the minimum misclassification restoration given that the black "blobs" in figure 3.3 a) are really two objects, misclassification rate: 1.95 percent. Note again that, since the Markov random field models cannot force global features for the restoration, there is no guarantee that F(X_T) = 0. In figure 3.3 b) we have displayed the pixels of disagreement between figures 3.2 c) and 3.3 a) as black pixels overlaid on figure 3.3 a); there is a total of 13 black pixels in figure 3.3 b).

□

Insert figure 3.3 here

4. Concluding remarks

The methodology suggested in this paper has been developed to suit the tasks faced in statistical image analysis under special model assumptions. However, by straightforward generalizations the results can be retailored for more diverse areas of statistical analysis of lattice data, such as Kriging, field experiments in agriculture, and statistical mechanics.

The idea of resampling has the flexibility to adapt to the Markov random field model assumptions. The context dependent nature of the method is seen most clearly in the need to choose an appropriate structure identifier: there is no context independent way to formulate, in mathematically tractable terms, how the eye and the brain extract the "important" information from an image. This is both the weakness and the strength of the method, in that the formulation of a feature characteristic is next to impossible in some cases based on all the information one can extract visually (as an example consider the problem of classifying pictures of cats and dogs: the eye and brain do this in split seconds, but describing this process mathematically is not possible outside simplified examples far too simple to be of any practical value). In the dependency of these methods on the chosen feature identifier lies also a great advantage, in that the analysis in a well defined way can be made dependent on the particular context in which the data were collected and the purpose for which the results are to be used.

One of the serious disadvantages of using the Markov random field models is that all

configurations of grey levels over T occur with positive probability. The pattern synthesis

models (Grenander (1981)) allow for restriction of the space of configurations, A^T; two alternative ways of obtaining this are:

1) Assume we are interested only in realizations for which x_a is fixed; then replicates from X_T | Y_T = y_T, X_a = x_a can be generated by leaving x_a fixed in all of the local updates when resampling.

2) If G is a set of configurations that contain some known feature, or are otherwise allowable, then a sample of replicates x*_{1,T}, ..., x*_{n,T} can be pruned to those replicates lying in G, yielding a sample from X_T | Y_T = y_T, X_T ∈ G; for applications such as tomography, where a restriction of the configuration space is needed, this can to some extent suffice.
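Alternative 1) amounts to skipping the fixed region during the local updates. A sketch, with an illustrative logistic binary prior and symmetric channel; all parameter values are hypothetical, not the paper's models.

```python
import numpy as np

def constrained_replicate(y, fixed_mask, fixed_values, rng,
                          n_updates=2000, p=0.2, a=-2.0, b=1.0):
    """One replicate from X_T | Y_T = y with X held fixed on a region, as in
    alternative 1): pixels where fixed_mask is True keep fixed_values and are
    never updated. The logistic prior and symmetric channel (parameters
    p, a, b) are illustrative stand-ins."""
    x = y.copy()
    x[fixed_mask] = fixed_values[fixed_mask]
    n, m = x.shape
    for _ in range(n_updates):
        i, j = rng.integers(n), rng.integers(m)
        if fixed_mask[i, j]:
            continue                          # leave the fixed region untouched
        n1 = (x[(i - 1) % n, j] + x[(i + 1) % n, j]
              + x[i, (j - 1) % m] + x[i, (j + 1) % m])
        e = np.exp(a + b * n1)
        prior1 = e / (1.0 + e)
        w1 = ((1 - p) if y[i, j] == 1 else p) * prior1
        w0 = ((1 - p) if y[i, j] == 0 else p) * (1.0 - prior1)
        x[i, j] = rng.random() < w1 / (w0 + w1)
    return x

rng = np.random.default_rng(3)
y = rng.integers(0, 2, size=(10, 10))
mask = np.zeros((10, 10), dtype=bool)
mask[:2, :] = True                            # freeze the top two rows
fixed = np.ones((10, 10), dtype=int)
x_rep = constrained_replicate(y, mask, fixed, rng)
```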


The assumption that A and B from (1.1) are the same, and the assumption that the error mechanism does not depend on t - which was made for reasons of exposition only - are in fact immaterial, as is the stationarity of X_T, the Markov random field part of the model.

Acknowledgements

The author is grateful to The Danish Natural Science Research Council for financial support

during his stay at the Department of Statistics, University of Washington. Inspiration and helpful discussions with students, faculty and guests at the University of Washington are also gratefully

appreciated.


Andersen, L. S. (1988,a): Consistent parameter estimation for corrupted Markov random fields. Technical report no. 117, Department of Statistics, University of Washington. (Submitted for publication.)

Andersen, L. S. (1988,b): Spatial generalizations of the two-filter. (Under preparation.)

Besag, J. (1974): Spatial interaction and the statistical analysis of lattice systems. J. Roy. Stat. Soc., B, 36, pp. 192 - 236 (with discussion).

Besag, J. (1975): Statistical analysis of non-lattice data. The Statistician, 24, pp. 179 - 195.

Besag, J. (1986): On the statistical analysis of dirty pictures. J. Roy. Stat. Soc., B, 48, pp. 259 - 302 (with discussion).

Derin, H., Elliott, H., Cristi, R. and Geman, D. (1984): Bayes smoothing algorithms for segmentation of binary images modeled by Markov random fields. IEEE Trans. Patt. Anal. Mach. Intell., 6, pp. 707 - 720.

Geman, S. and Geman, D. (1984): Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Patt. Anal. Mach. Intell., 6, pp. 721 - 741.

Geman, S. and Graffigne, C. (1986): Markov random field image models and their applications to computer vision. Proceedings of the International Congress of Mathematicians, Berkeley.


Haslett, J. and Horgan, G. (1986): Linear models in spatial discriminant analysis. (To appear.)

Haslett, J. and Horgan, G. (1987): Spatial discriminant analysis - a linear discriminant function for the black / white case. In: Pattern Recognition Theory and Applications, eds. P. Devijver and J. Kittler, NATO-ASI Series F, Springer-Verlag, pp. 47-.

Lehmann, E. L. (1959) : Testing statistical hypotheses. Wiley, New York.

Meinhold, R. J. and Singpurwalla, N. D. (1983): Understanding the Kalman filter. Amer. Statistician, 37, pp. 123 - 127.

Revuz, D. (1975): Markov chains. North-Holland, Amsterdam.

Ripley, B. D. (1981): Spatial statistics. Wiley, New York.

Ripley, B. D. (1987): Stochastic simulation. Wiley, New York.

Solo, V. (1982): Smoothing estimation of stochastic processes: two filter formulas. IEEE Trans. Aut. Contr., AC-27, pp. 473 - 476.

Figure 2.1 a), Original image; realization of the Markov random field.

Figure 2.2 e), Estimated local misclassification rates from 200 independent replicates; p̂_t = p̂(X_t = x̂_t | Y_T); the image shows p̂_t(1 - p̂_t).

Figure 2.2 f), The highest 5 percent estimated misclassification rates overlaid on the restored image.

Figure 3.1 b), Corrupted image; 35 percent misclassified pixels.

Figure 3.1 c), The minimum misclassification restoration, using the algorithm and the parameters from example 2.2; 200 bootstrap samples with 50000 local updates per sample. The superimposed object shows up clearly in the smaller frame. The number of misclassified pixels is 187.

Figure 3.1 d), Empirical distribution function of F_1.

Figure 3.1 e), Empirical distribution function of F_2; the mean value is 175.8.

Figure 3.1 f), Empirical distribution function of F_3; the mean value is 177.3.

Figure 3.3 b), Disagreements between figures 3.2 c) and 3.3 a), overlaid as black pixels on figure 3.3 a).
Page 27: STOCHASTIC INFERENCEFORSPATIAL STATISTICS · the use of classical statistical methods for hypothesis testing, estimation, classification, and assessment ofuncertainty. A foundation


Key words: Bootstrap, conditional inference, conditioning, decision theory, resampling, variability.



Figure 2.1 b), Observed image, 20 percent misclassification rate.

Figure 2.1 a), Original image, 100 x 100 pixels, realization of a Markov random field after 100 local updates.

Figure 2.1 c), Minimum misclassification restoration, on the basis of 200 replicate samples and 10000 local updates per sample. Misclassification rate 2.07 percent, estimated misclassification rate 2.7 percent.

Figure 2.1 d), Minimum mean squared error estimate. Estimated mean squared error 0.022, actual mean squared error 0.017.

"iLl'liI

Figure 2.1 The highest .5 percent estiima,ted

miscla:ssification rates overlayed on the restored image.

ratesEstimated local miscl,as£;ifi.:atioo
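The restorations and rate maps in Figures 2.1 c) - f) are all pixelwise summaries of the replicate samples. A minimal sketch of how such summaries can be computed, assuming the replicates are available as a 0/1 array (the array names and the random stand-in data below are hypothetical, not from the report):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for 200 posterior replicate samples of a binary
# 100 x 100 image (the report draws these by local Gibbs-type updates).
replicates = rng.integers(0, 2, size=(200, 100, 100))

# Estimated marginal probability that each pixel is black, P{X_t = 1 | Y}.
p_black = replicates.mean(axis=0)

# Minimum-misclassification restoration: the pixelwise marginal posterior mode.
restoration = (p_black >= 0.5).astype(int)

# Estimated local misclassification rate at the restoration:
# 1 - max(p_black, 1 - p_black), i.e. 1 - P{X_t = x_t | Y}.
local_rate = 1.0 - np.maximum(p_black, 1.0 - p_black)

# Minimum mean squared error estimate: the pixelwise posterior mean.
mmse = p_black

# Flag the highest 5 percent of estimated local rates, as overlaid
# in Figures 2.1 f) and 2.2 f).
threshold = np.quantile(local_rate, 0.95)
flagged = local_rate >= threshold
```

With Monte Carlo replicates in hand, each caption's quantity is a one-line array reduction; no further sampling is needed.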


Figure 2.2 a), Original image, 100 x 100 pixels, realization of a Markov random field after 5 x 10^6 local updates. The parameters are β₁ = 1.05, β₂ = 0.9.

Figure 2.2 c), Minimum misclassification restoration, on the basis of 200 replicate samples and 50000 local updates per sample. Misclassification rate 1.50 percent, estimated misclassification rate 2.92 percent.

Figure 2.2 b), Corrupted image, 35 percent misclassification rate, symmetric noisy channel. Actually observed misclassified pixels 36.5 percent.

Figure 2.2 d), The 150 errors overlaid on the restored image; errors are black.

Figure 2.2 f), The highest 5 percent estimated local misclassification rates overlaid on the restored image.


Figure 3.1 a), Original image on a 100 x 100 square lattice; the image consists of a realization from the Markov random field in example 2.2, with a black object superimposed in the smaller frame.

Figure 3.1 c), The minimum misclassification restoration, using the algorithm and the parameters from example 2.2; 200 bootstrap samples with 50000 local updates per sample. The superimposed object shows up clearly in the smaller frame. The number of misclassified pixels is 187.

Figure 3.1 e), Empirical distribution function of F_2; the mean value is 175.8.


Figure 3.1 b), Corrupted image, 35 percent misclassified pixels.

"I

II

II

JI"~ ~-_._-~---~~~ --_.,

- II

- -J-

-J

Figure 3.1 d), Empirical distribution function of F_1; the mean value is 175.4.


Figure 3.1 f), Empirical distribution function of F_3; the mean value is 177.3.
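The curves in Figures 3.1 d) - f) are empirical distribution functions of a statistic F evaluated over the bootstrap replicates. A sketch under stated assumptions: the Gaussian stand-in values below are hypothetical, and the report's actual statistics F_1, F_2, F_3 are defined in the text, not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-in for a statistic F evaluated on each of 200
# bootstrap replicates of the image.
f_values = rng.normal(loc=175.8, scale=5.0, size=200)

def ecdf(values):
    """Empirical distribution function of a sample."""
    xs = np.sort(np.asarray(values))
    def F_hat(x):
        # Fraction of sample values <= x.
        return np.searchsorted(xs, x, side="right") / xs.size
    return F_hat

F_hat = ecdf(f_values)

# The bootstrap mean quoted in the Figure 3.1 captions, and a one-sided
# bootstrap p-value for an observed count such as 187 misclassified pixels.
mean_F = f_values.mean()
p_value = 1.0 - F_hat(187.0)
```

Once the replicate values of F are tabulated, any tail probability or quantile of the bootstrap law is read directly off this step function.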


Figure 3.2 a), Realization of the Markov random field, as in example 2.2; the black ball in the upper half of the image was superimposed artificially.

Figure 3.2 c), Restoration using 200 resampled replicates from the Markov random field model of example 2.2.

Figure 3.2 b), 35 percent corruption of a), symmetric

noisy channel.


Figure 3.3 a), Minimum misclassification restoration conditional on F(X_T) = 0; restoration on the basis of 135 replicates.

Figure 3.3 b), Disagreements between figures 3.2 c) and 3.3 a) overlaid as black pixels on figure 3.3 a).
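The conditional restoration of Figure 3.3 a) retains only the replicates satisfying the constraint F = 0 and restores from that sub-sample. A minimal sketch, with a hypothetical window-count statistic standing in for the report's F and forced-to-comply stand-in replicates (illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical replicate samples of a binary 100 x 100 image.
replicates = rng.integers(0, 2, size=(200, 100, 100))

# Hypothetical constraint statistic: F counts black pixels in a fixed
# window (the report's actual F is defined in the text).
window = (slice(10, 20), slice(10, 20))
def F(x):
    return int(x[window].sum())

# Force some replicates to satisfy the constraint, for illustration only;
# Figure 3.3 a) retained 135 of 200 replicates in this way.
replicates[:135, 10:20, 10:20] = 0

# Condition on F(X_T) = 0: keep only replicates meeting the constraint,
# then restore by the pixelwise marginal posterior mode of the kept sample.
kept = replicates[np.array([F(x) == 0 for x in replicates])]
restoration = (kept.mean(axis=0) >= 0.5).astype(int)
```

Conditioning is thus a rejection step on the replicate stream; the same pixelwise summaries as before are then applied to the retained sub-sample.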