
Performance model for joint tracking and ATR with HRR radar

Shan Cong (a), Lang Hong (a), and Erik Blasch (b)

(a) Wright State University, Dayton, OH 45435
(b) Air Force Research Laboratory (AFRL/SNA), WPAFB, OH 45433

ABSTRACT

Joint tracking and ATR with HRR radar has been an important field of research in recent years. This paper addresses the issue of end-to-end performance modeling for an HRR radar based joint tracking and ATR system under various operating conditions. To this end, an ATR system with peak location and amplitude as features is considered. A complete set of models is developed to capture the statistics in all stages of processing, including the HRR signal, extracted features, Bayesian classifier and tracker. In particular, we demonstrate that the effect of operating conditions on a feature can be represented through a random variable with a log-normal distribution. The result is then extended to predicting the system performance under specified operating conditions.

Although this paper is developed for one type of ATR and tracking system, the result indicates the trend of performance for a general joint ATR and tracking system over operating conditions. It also provides guidance on how the empirical performance model of a general joint tracking and ATR system should be constructed.

Keywords: Performance model, ATR, Feature-aided Tracking, HRR

1. INTRODUCTION

HRR-based tracking and ATR is a relatively new development in the research community. It was introduced mostly to meet the need for moving target tracking and identification. A critical aspect of supporting such operation is the extraction of HRR features. In general, the objective of utilizing features is to establish a confusion vector for each target, so that an indication is given of which class a target i is related to:

$$a_i = [a_{i0} \; a_{i1} \; \cdots \; a_{ij} \; \cdots \; a_{in}]^T \qquad (1)$$

where n is the number of classes. Thus, by comparing the $a_{ij}$, an application can obtain information about the origin of the target in terms of its classification.

Table 1 is a summary of recent developments in this area over the past 10 years or so. It is by no means a complete list of references; rather, it provides some examples of how the features may be derived. It can be seen that the effort to model HRR features is generally divided into two classes. The first is to directly model the peak locations as features for a target. The peak locations can span the entire range profile, or only a selected part of it. The second class is to map a target HRR range profile into a lower dimensional feature space nonlinearly related to the range profile, so that classification can be achieved more easily. The first class of features carries clear physical meaning and requires less modeling and possibly less training; therefore, more applications prefer this approach. For the second class of features, no dominant approach has been established so far.

The generic structure of fusion systems for tracking and classification can be obtained by extending the ATR model of the FITE program [26] to include both ATR and tracking. Under this model, the trade space of a fusion operation is composed of four components, i.e. R, V, E, B, defined as follows:

• R: Real world;
• V: Contextual information about R;
• E: Sensing and processing, including functions A and T:
  — A: ATR;
  — T: Tracking;
• B: Fusion operation.

Algorithms for Synthetic Aperture Radar Imagery XV, edited by Edmund G. Zelnio, Frederick D. Garber, Proc. of SPIE Vol. 6970, 69700T, (2008) · 0277-786X/08/$18 · doi: 10.1117/12.777708


The main difference between this model and the FITE model is the substitution of A with E so that tracking (T) can be included alongside ATR (A). As a consequence, we also denote $B_E$ as the fuser for E.

Table 1: Summary of Recent Developments on HRR Feature Processing

| # | Feature | Processing Approach | Application | References |
|---|---------|---------------------|-------------|------------|
| 1 | Peak locations | Form a template for the HRR profile; match the template to the measured profile to obtain a score for ATR. The resulting feature is in the form $(a_k, r_k)$. Some variation exists in order to relax the requirement on pose angle. | Joint tracking & ATR, or ATR | 3, 28, 32, 36 (joint); 7, 12, 14, 15, 18, 20, 22 (ATR) |
| 2 | AR model for spectrum | Form an AR model for the power spectrum of the radar return signal. Can be viewed as a peak location feature in the frequency domain. | ATR | 4, 5 |
| 3 | 2D & 3D invariants | Identify geometrical invariants of a target and track the invariants as features. | Joint tracking & ATR | 8 |
| 4 | Local motion | Identify target motion relative to its platform and track the motion as a feature. | Joint tracking & ATR | 9 |
| 5 | Wavelet statistics | Preprocess the HRR profile with a wavelet transform and model the statistics of the wavelet coefficients as features. | Joint tracking & ATR, or ATR | 11 (joint); 27 (ATR) |
| 6 | SVM | Map the HRR profile into a feature space through a set of kernel functions. After training, classification is achieved by separating targets into different hyperplanes. | ATR | 23, 33 |
| 7 | Length | Extract target length from the profile and use length as a feature. | FAT | 21 |
| 8 | Neural network | Preprocess the HRR profile and classify with an NN. | ATR | 1 |

According to this model, the fusion performance model problem is essentially a distribution mapping problem, i.e. the fusion performance is obtained as a conditional distribution on R:

$$p(B_E \mid R) = \int_{\mathcal{E}, \mathcal{V}} p(B_E \mid A, V)\, p(A \mid V, R)\, p(V \mid R), \qquad (2)$$

where $\mathcal{E}$ and $\mathcal{V}$ are the ranges of E and V, respectively.

Thus, if the distribution of the operating condition about R is given, then the fusion performance is obtained. For example, the ATR performance as the probability of classification (identification) is obtained as

$$p_c = \int_{\mathcal{R}} p(B_A \mid R)\, p(R)\, dR \qquad (3)$$

and the tracking performance as the covariance of the track state estimate ($R_E$) is obtained as

$$R_E = \int_{\mathcal{R}} (x - x_0)(x - x_0)^T p(B_T \mid R)\, p(R)\, dR \qquad (4)$$

where $x$ and $x_0$ are the track state estimate and the true track state, respectively.
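For illustration, the OC integral of Eq. (3) can be approximated by Monte Carlo sampling. The following is a minimal sketch; the performance curve `p_correct_given_oc` and the Gaussian OC prior are hypothetical placeholders, not the calibrated models developed later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct_given_oc(snr_db):
    # Hypothetical ATR performance curve p(B_A | R): probability of correct
    # classification rises smoothly with SNR (one scalar OC for simplicity).
    return 1.0 / (1.0 + np.exp(-(snr_db - 14.0)))

# Assumed prior p(R) over the operating condition, here SNR in dB.
snr_samples = rng.normal(loc=16.0, scale=3.0, size=100_000)

# Eq. (3): p_c = integral of p(B_A | R) p(R) dR, approximated by a sample mean.
p_c = p_correct_given_oc(snr_samples).mean()
print(f"predicted probability of classification: {p_c:.3f}")
```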

Clearly, operating condition modeling is an important part of fusion performance model development. This subject has been under investigation for years [17]. There are at least three classes of operating conditions:

• OCs for Targets ($R_T$), such as rotation around the x, y, z axes of the target, distance to the target, incidence angle of the LOS to the target, type (scattering model) of the target, and obscuration of the target;

• OCs for Sensors ($R_s$), such as operating frequency, resolution in range, cross range and Doppler, scan pattern, signal to noise ratio, depression angle and sampling rates;

• OCs for Environment ($R_E$), such as types of background and types of clutter.


In this paper, we develop a general performance model for HRR feature based classification and tracking. In the model, a term $\theta_k = \{R_{Tk}, R_{sk}, R_{Ek}\}$ is introduced as the comprehensive operating condition at time k. The operating conditions tie the model to published HRR data collections, allowing the model parameters to be calibrated by real data. Results of the identified HRR statistics and the predicted classification/tracking performance are also presented to illustrate the performance model.

2. STATISTICAL MODELS FOR HRR FEATURE AND FEATURE PROCESSING

2.1 Model of HRR profiles

Given $\theta_k$ as an operating condition, a range profile is modeled as

$$T(\theta_k) = \sum_{i=1}^{N_r} a_{ik}\, \delta(t - i) \qquad (5)$$

where $N_r$ is the length of a range profile and

$$a_{ik} = a_{Tik} + a_{cik}, \qquad (6)$$

with $a_{Tik}$ and $a_{cik}$ satisfying $a_{Tik} \sim f_T(a_{Tik}, \theta_k)$ and $a_{cik} \sim f_c(a_{cik}, \theta_k)$. For HRR radar, a coherently detected range profile with corrupting clutter and noise can be modeled as

$$f_T(a_{Tik}, \theta_k) = (1 - p_{oTik})\, p_d(SNR, Th, N_i)\, N(a_{Tik};\, \bar{a}_{Tik}, \sigma_{Tik}) \qquad (7)$$

$$f_c(a_{cik}, \theta_k) = \sum_{j=1}^{M_c} p_{cj}\, p_{fa}(SNR, Th, N_i)\, N(a_{cik};\, \bar{a}^{j}_{cik}, \sigma^{j}_{cik}) \qquad (8)$$

where $p_{oTik}$ is the probability of occlusion at that location on the range profile, and $p_{cj}$ is an environmental transition probability conditioned on operating conditions for the MTUM model with $M_c$ model classes. $p_d$ and $p_{fa}$ are the probability of detection and probability of false alarm, which are functions of SNR, the detection threshold $Th$, and pulse integration ($N_i$ is the number of pulses integrated). $p_d$ and $p_{fa}$ are introduced to account for the fact that a range profile is available only after each range location is detected. Detection and false alarm probabilities can be calculated either by traditional Marcum Q-function based detection analysis or by more recent results as in [35].
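The following sketch draws one range profile from the mixture of Eqs. (5)-(8). The detection-probability expression, scatterer bin positions and amplitude statistics are illustrative assumptions only; a calibrated model would use the Marcum-Q based $p_d$ and $p_{fa}$ the text describes.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_profile(Nr=64, snr_db=18.0, p_occl=0.1, p_fa=1e-3,
                     target_bins=(10, 22, 35, 41), mean_amp=0.3, sigma=0.05):
    """Draw one range profile a_k per Eqs. (5)-(8): each bin holds a target
    return (detected with probability p_d unless occluded) or a clutter/noise
    false alarm.  All numeric values here are illustrative, not calibrated."""
    snr = 10.0 ** (snr_db / 10.0)
    p_d = 1.0 - np.exp(-snr / 10.0)   # crude stand-in for a Marcum-Q based p_d
    a = np.zeros(Nr)
    for i in range(Nr):
        if i in target_bins:          # Eq. (7): target scatterer term
            if rng.random() > p_occl and rng.random() < p_d:
                a[i] = rng.normal(mean_amp, sigma)
        elif rng.random() < p_fa:     # Eq. (8): clutter/false-alarm term
            a[i] = abs(rng.normal(0.1, sigma))
    return a

profile = simulate_profile()
```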

2.2 Model of features

We start the development with a model of the features at a given OC $\theta_k$:

$$T_F(\theta_k) = \{a_i, x_i\}, \quad i = 1, 2, \cdots, N_f \qquad (9)$$

where $N_f$ is the number of available features. Denote the observations of a feature set as $\hat{T}_F(\theta_k) = \{\hat{a}_i, \hat{x}_i\}, i = 1, 2, \cdots, N_{fd}$, where $N_{fd}$ is the number of detected features; then

$$p(\hat{a}_i, \hat{x}_i \mid \theta_k) = p(\hat{a}_i \mid x_i, \theta_k)\, p(\hat{x}_i \mid x_i, \theta_k) \qquad (10)$$

where both $p(\hat{a}_i \mid x_i, \theta_k)$ and $p(\hat{x}_i \mid x_i, \theta_k)$ are modeled as Gaussian distributions,

$$p(\hat{a}_i \mid x_i, \theta_k) = N(\hat{a}_i;\, a_i(x_i, \theta_k),\, \sigma_{ai}(x_i, \theta_k)) \qquad (11)$$

$$p(\hat{x}_i \mid x_i, \theta_k) = N(\hat{x}_i;\, x_i(\theta_k),\, \sigma_{xi}(\theta_k)) \qquad (12)$$

For each of the objects and object aspects to be detected, the number of features in its model is different. Given $N_f$ as the number of features in a model as defined in (9), $N_f$ is modeled as following a Poisson distribution:

$$p(N_f) = \frac{\lambda_T^{N_f}}{N_f!}\, e^{-\lambda_T} \qquad (13)$$

where $\lambda_T = \mathrm{mean}(N_f)$.
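A minimal sketch of a draw from this feature model, combining Eqs. (9)-(13) with the uniform model-parameter assumption introduced later in Eqs. (18)-(19); the numeric values echo the Section 2.4 statistics but are otherwise assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_feature_set(lam_T=4.8, M=20.0, a_min=0.15, A=0.3,
                       sigma_x=0.5, sigma_a=0.05):
    """Draw one model/measurement feature pair: a Poisson number of features
    (Eq. 13), uniform model locations and amplitudes (Eqs. 18-19), and
    Gaussian measurement perturbations (Eqs. 11-12)."""
    n_f = max(1, rng.poisson(lam_T))                   # Eq. (13)
    x_model = rng.uniform(0.0, M, size=n_f)            # Eq. (18)
    a_model = rng.uniform(a_min, a_min + A, size=n_f)  # Eq. (19)
    x_meas = rng.normal(x_model, sigma_x)              # Eq. (12)
    a_meas = rng.normal(a_model, sigma_a)              # Eq. (11)
    return (x_model, a_model), (x_meas, a_meas)
```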


2.3 Model of feature evaluation

A. Matching measured feature and model

Given a feature model $T_F(\theta_k)$ and a measured feature set $\hat{T}_F(\theta_k)$, a nearest neighbor assignment of features is formed as

$$\gamma^* = \arg\max_{\gamma_d \in \Gamma} \prod_{i \in \gamma_d} p(\hat{a}_i, \hat{x}_i \mid \theta_k) \qquad (14)$$

where $\gamma_d$ is an index set denoting a way that $T_F(\theta_k)$ and $\hat{T}_F(\theta_k)$ can be matched to each other. The cardinality of the set $\gamma_d$ is $N_{fd}$.
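For small feature counts, Eq. (14) can be evaluated by brute force over all assignments, reading the combination as a product of per-feature likelihoods (equivalently a sum of log-likelihoods). The sketch below uses location terms only, for brevity; amplitude terms enter identically. A practical implementation would use the Hungarian algorithm rather than enumeration.

```python
from itertools import permutations
import numpy as np

def best_assignment(x_meas, x_model, sigma_x=0.5):
    """Brute-force Eq. (14): among all pairings of the N_fd measured features
    with model features, return the assignment gamma* that maximizes the
    joint Gaussian log-likelihood.  Assumes len(x_meas) <= len(x_model) and
    numpy arrays as inputs."""
    best, best_ll = None, -np.inf
    for gamma in permutations(range(len(x_model)), len(x_meas)):
        ll = -np.sum((x_meas - x_model[list(gamma)]) ** 2) / (2 * sigma_x**2)
        if ll > best_ll:
            best, best_ll = gamma, ll
    return best, best_ll
```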

B. Bayesian decision of classification

Denote D as the classification decision; then under an observed operating condition $\hat{\theta}_k$, the probability of a decision D = m can be derived according to Bayes rule as

$$p(D = m \mid \hat{T}_F(\theta_k), \hat{\theta}_k) = \frac{p(\hat{T}_F(\theta_k) \mid D = m, \hat{\theta}_k)\, p(D = m \mid \hat{\theta}_k)}{\sum_{i=1}^{N_T} p(\hat{T}_F(\theta_k) \mid D = i, \hat{\theta}_k)\, p(D = i \mid \hat{\theta}_k)} \qquad (15)$$

In the above, note that the observed OC $\hat{\theta}_k$ can be different from the actual OC $\theta_k$.

C. Distribution of classification decision

In Eq. (15), the likelihood of a measured feature set fitting a target model i is calculated according to the feature model of Eq. (10):

$$p(\hat{T}_F(\theta_k) \mid D = i, \hat{\theta}_k) = c_m \exp\left(-\sum_{j \in \gamma^*} \frac{(\hat{x}_j - x_{ij})^2}{2\sigma_x^2} + \frac{(\hat{a}_j - a_{ij})^2}{2\sigma_a^2}\right) \qquad (16)$$

where $c_m$ is a normalizing factor for the Gaussian distribution. Substituting Eq. (16) into Eq. (15), we have

$$p(D = m \mid \hat{T}_F(\theta_k), \hat{\theta}_k) = \frac{e^{-\sum_{j \in \gamma^*}\left[\frac{(\hat{x}_j - x_{mj})^2}{2\sigma_x^2} + \frac{(\hat{a}_j - a_{mj})^2}{2\sigma_a^2}\right]}\, p(D = m \mid \hat{\theta}_k)}{e^{-\sum_{j \in \gamma^*}\left[\frac{(\hat{x}_j - x_{mj})^2}{2\sigma_x^2} + \frac{(\hat{a}_j - a_{mj})^2}{2\sigma_a^2}\right]}\, p(D = m \mid \hat{\theta}_k) + c_m \sum_{l \neq m} e^{-\sum_{j \in \gamma^*}\left[\frac{(\hat{x}_j - x_{lj})^2}{2\sigma_x^2} + \frac{(\hat{a}_j - a_{lj})^2}{2\sigma_a^2}\right]}\, p(D = l \mid \hat{\theta}_k)} \qquad (17)$$

In the above, we assume that the variations for feature location and amplitude are the same across models. This assumption reduces the model dimensionality and will not significantly change the result, because a feature model has to maintain sufficient quality across its range; if the quality of a feature is significantly lower than that of the others, it cannot be selected as a feature. Given a set of possible targets and clutter with all possible aspect angles, the model parameters $x_{ij}$ and $a_{ij}$ may be at any location within their ranges with equal probability. Thus, we can assume the model parameters are uniformly distributed:

$$p_x(x_{ij}) = 1/M, \quad x_{ij} \in [0, M] \qquad (18)$$

$$p_a(a_{ij}) = 1/A, \quad a_{ij} \in [a_{min}, a_{min} + A] \qquad (19)$$

where M is the average object size, $a_{min}$ is the minimum detection amplitude of a feature and A is the average feature variation range. This assumption will be validated in the next section.
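A minimal sketch of the Bayes classifier of Eqs. (15)-(17), under the shared-variance assumption just stated (which lets the normalizing factor $c_m$ cancel in the normalized posterior); the model list and noise levels are whatever the application supplies.

```python
import numpy as np

def class_posterior(x_meas, a_meas, models, sigma_x=0.5, sigma_a=0.05,
                    priors=None):
    """Eq. (17): posterior over N_t classes for an assigned feature set.
    `models` is a list of (x_model, a_model) array pairs, one per class;
    feature correspondence is assumed already resolved by Eq. (14)."""
    n_t = len(models)
    priors = np.full(n_t, 1.0 / n_t) if priors is None else np.asarray(priors)
    log_lik = np.array([
        -np.sum((x_meas - xm)**2) / (2 * sigma_x**2)
        - np.sum((a_meas - am)**2) / (2 * sigma_a**2)
        for xm, am in models
    ])
    w = np.exp(log_lik - log_lik.max()) * priors  # shift exponents for stability
    return w / w.sum()
```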

M, A and $a_{min}$ are a set of parameters related to operating conditions. Denoting $M_0$ as the nominal size of an average target, and $\phi_d$ and $\phi_a$ as the sensor depression angle and target aspect angle, then

$$M = M_0 \cos\phi_a \sin\phi_d. \qquad (20)$$


A is a parameter related to the availability of target types; for a fixed set of possible targets, this parameter can be a constant. $a_{min}$ is a term related to the modeling algorithm, defining the sensitivity of the algorithm toward the selection of a feature point. The variance related terms $\sigma_x$ and $\sigma_a$ are jointly determined by sensor SNR, resolution, environmental conditions and the modeling algorithm.

With this uniform assumption on the model parameters, we define four random variables:

$$\eta_{xi} = -\sum_{j}^{N_{fd}} \frac{(\hat{x}_j - x_{ij})^2}{2\sigma_x^2} \qquad (21)$$

$$\eta_{ai} = -\sum_{j}^{N_{fd}} \frac{(\hat{a}_j - a_{ij})^2}{2\sigma_a^2} \qquad (22)$$

$$\xi_i = c_m e^{\eta_{xi}} e^{\eta_{ai}} \qquad (23)$$

$$\mu = \frac{1}{N_t - 1} \sum_i \xi_i \qquad (24)$$

where $N_t$ is the number of classes under consideration.

Apparently, $\eta_{xi}$ and $\eta_{ai}$ represent the location and amplitude error terms of the feature with regard to interfering target types, $\xi_i$ is the error in the form of a likelihood, and $\mu$ is the combined likelihood of all interfering target types other than the correct target model. When $N_{fd}$ is larger than 3, $\eta_x$ and $\eta_a$ can be very well approximated with Gaussian distributions:

$$\eta_x \sim N(\eta_x;\, E_{\Sigma x}, \sigma_{\Sigma x}) \qquad (25)$$

$$\eta_a \sim N(\eta_a;\, E_{\Sigma a}, \sigma_{\Sigma a}) \qquad (26)$$

In the above, the derivations of $E_{\Sigma x}$, $\sigma_{\Sigma x}$, $E_{\Sigma a}$ and $\sigma_{\Sigma a}$ are given in the appendix.

Thus, both $\xi$ and $\mu$ can be modeled with log-normal distributions, with density functions

$$p_\xi = \frac{1}{c_\xi}\, p_{\log\text{-}N}(E_\xi, \sigma_\xi) \qquad (27)$$

$$p_\mu = p_{\log\text{-}N}(E_\mu, \sigma_\mu) \qquad (28)$$

where

$$E_\xi = E_{\Sigma x} + E_{\Sigma a} \qquad (29)$$

$$\sigma_\xi = (\sigma_{\Sigma x} + \sigma_{\Sigma a})^{1/2} \qquad (30)$$

$$c_\xi = 1/c_m \qquad (31)$$

$$\sigma_\mu^2 = \log\left[\frac{\sum_i e^{2E_{\xi i} + \sigma_{\xi i}^2}\left(e^{\sigma_{\xi i}^2} - 1\right)}{\left(\sum_i e^{E_{\xi i} + \sigma_{\xi i}^2/2}\right)^2} + 1\right] \qquad (32)$$

$$E_\mu = \log\left(\sum_{i \neq m} e^{E_{\xi i} + \sigma_{\xi i}^2/2}\right) - \frac{\sigma_\mu^2}{2}. \qquad (33)$$

Applying the model of the average likelihood of incorrect features ($\mu$), Eq. (15) is converted into:

$$p(D = m \mid \hat{T}_F(\theta_k), \hat{\theta}_k) = \frac{p(\hat{T}_F(\theta_k) \mid D = m, \hat{\theta}_k)\, p(D = m \mid \hat{\theta}_k)}{p(\hat{T}_F(\theta_k) \mid D = m, \hat{\theta}_k)\, p(D = m \mid \hat{\theta}_k) + \mu(\hat{T}_F(\theta_k))\left(1 - p(D = m \mid \hat{\theta}_k)\right)} \qquad (34)$$

If there is no prior information, then

$$p(D = m \mid \hat{T}_F(\theta_k), \hat{\theta}_k) = \frac{p(\hat{T}_F(\theta_k) \mid D = m, \hat{\theta}_k)}{p(\hat{T}_F(\theta_k) \mid D = m, \hat{\theta}_k) + (N_t - 1)\, \mu(\hat{T}_F(\theta_k))} \qquad (35)$$

This means that, given an object and its feature model, the behavior of the probability of detecting this object under an operating condition can be approximated as a function of a lognormally distributed random variable.


2.4 Extracted feature statistics

In this section, extracted feature statistics are presented. These feature statistics are important to support the building of ROC models. They are based on the published HRR data from AFRL. In the data set, there are 8 types of objects, each with 350 to 360 different view angles. As the object models from different angles are mostly uncorrelated, this is equivalent to about 2800 different objects, sufficient to draw conclusions on the feature statistics.

Four types of feature statistics are calculated, namely feature locations, feature amplitudes, number of features and object sizes. From Figure 1.a, it can be seen that the features are mostly equally distributed across the length of an object, although there is a slight trend of increasing probability from left to right. Considering that objects in real life should be almost equally likely to be viewed from different directions, the trend will be averaged out.

Figure 1.b is the distribution of feature amplitudes. Although the overall distribution of the amplitudes resembles a bell-shaped distribution such as a lognormal or $\chi^2$ distribution, over 72% of the amplitudes are distributed between 0.15 and 0.45. Within this range, the variance of the distribution is very small.

Figures 1.a and 1.b provide support for the assumptions in Eqs. (18) and (19).

Figure 1.c is the distribution of the number of features available for a range profile. This distribution closely matches a Poisson distribution with $\lambda = 4.8$.

Figure 1.d is the distribution of object sizes across the different object types and view angles. Surprisingly, the object size is mostly distributed between 15 and 29 feet (over 85%), although sizes can be observed anywhere between 11 and 39 feet. Meanwhile, apart from a peak at 25 feet, the distribution of object size is fairly close to uniform.
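As a sanity check on the Poisson model of Eq. (13), the maximum-likelihood estimate of $\lambda$ is simply the sample mean of the per-profile feature counts; the counts below are simulated stand-ins for the roughly 2800 extracted profiles.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated stand-in for the per-profile feature counts of Figure 1.c.
counts = rng.poisson(4.8, size=2800)

lam_hat = counts.mean()               # Poisson MLE of lambda in Eq. (13)
print(f"lambda_hat = {lam_hat:.2f}")  # close to 4.8, as reported for the data
```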

Figure 1: Extracted feature statistics. (a) Distribution of feature locations (density vs. normalized location); (b) distribution of feature amplitudes (percentage vs. normalized peak amplitude); (c) distribution of the number of features per profile; (d) distribution of projected object size (object length on the LOS, in feet).

3. APPLICATIONS OF THE STATISTICAL MODELS

3.1 Application 1: ROC model of classification

From (35), the probability estimate supporting the classification of an object of type m as class m under an operating condition is:

$$p_c(\theta_k) = p(D = m \mid T^m_F, \theta_k) \qquad (36)$$

and the probability estimate supporting the misclassification of type m as type j is

$$p^j_{cf}(\theta_k) = p(D = j \mid T^m_F, \theta_k) = \frac{\xi_j}{p(T^m_F(\theta_k) \mid D = m, \theta_k) + (N_t - 1)\mu} \qquad (37)$$


where $T^m_F$ denotes the feature measurement for a target of type m. Given a classification threshold $Th_c$, the probability of correct classification is obtained as

$$P_{cc}(\theta_k) = \int_{\{T^m_F, \mu \,:\, p_c > Th_c\}} p_c(\theta_k)\, p(T_F \mid \theta_k)\, p_{\log\text{-}N}(\mu; E_\mu, \sigma_\mu)\, dT_F\, d\mu \qquad (38)$$

where $\{T^m_F, \mu : p_c > Th_c\}$ is the feasible set of features and $\mu$ satisfying the threshold. Similarly, the probability of misclassifying the object is

$$P_{cf}(\theta_k) = \sum_{j \neq m} \int_{\{T^m_F, \xi, \mu \,:\, p_{cf} > Th_c\}} p_{cf}(\theta_k)\, p(T_F \mid \theta_k)\, p_{\log\text{-}N}(\xi; E_\xi, \sigma_\xi)\, p_{\log\text{-}N}(E_\mu, \sigma_\mu)\, dT_F\, d\mu\, d\xi \qquad (39)$$

Integrating over the range of OCs, we obtain an estimate of the predicted classification performance over the ROC. In this case, the probabilities of correct classification and false classification are

$$P_{cc} = \int_{\Theta} \int_{\Theta_k} P_{cc}(\theta_k)\, p(\hat{\theta}_k \mid \theta_k)\, p(\theta_k)\, d\theta_k\, d\hat{\theta}_k \qquad (40)$$

$$P_{cf} = \int_{\Theta} \int_{\Theta_k} P_{cf}(\theta_k)\, p(\hat{\theta}_k \mid \theta_k)\, p(\theta_k)\, d\theta_k\, d\hat{\theta}_k \qquad (41)$$

In the above, $\Theta$ is the range of operating conditions and $\Theta_k$ is the range of the currently predicted operating condition.
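A Monte Carlo reading of Eqs. (38)/(40): sample the correct-class likelihood and the interfering term $\mu$ from their lognormal models, form the posterior of Eq. (35), and average the posterior over the region where it clears the threshold. The lognormal parameters below are illustrative assumptions, not calibrated values.

```python
import numpy as np

rng = np.random.default_rng(4)

def predicted_pcc(th_c=0.5, n_t=8, n_mc=50_000,
                  E_xi=-1.0, s_xi=0.8, E_mu=-6.0, s_mu=1.5):
    """Monte Carlo estimate of P_cc per Eqs. (38)/(40), with the lognormal
    models of Eqs. (27)-(28) standing in for the feature-level integrals."""
    lik_m = rng.lognormal(E_xi, s_xi, n_mc)  # correct-class likelihood xi_m
    mu = rng.lognormal(E_mu, s_mu, n_mc)     # interfering-class average mu
    p_c = lik_m / (lik_m + (n_t - 1) * mu)   # Eq. (35)
    return np.mean(np.where(p_c > th_c, p_c, 0.0))  # Eq. (38) feasible region

print(f"P_cc ~= {predicted_pcc():.3f}")
```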

3.2 Application 2: feature-aided tracking performance prediction

An approach [29] was developed for predicting feature-aided tracking performance. In that paper, tracking performance is summarized in two elements, i.e. the probability of correct association ($P_{ca}$) and the covariance of the track state estimate ($R_E$). Given the feature related term $\zeta$ defined as

$$\zeta = a_{ij}/a_{ii} \qquad (42)$$

where $a_{ij}$ and $a_{ii}$ are the off-diagonal and diagonal items of a confusion matrix as defined in Eq. (1) and obtained in feature processing, these two elements are given as follows:

• Probability of correct association:

$$P_{ca} = (1 - C_m(\sigma/r)^m)^{N-1} \qquad (43)$$

where N is the number of targets, m is the dimension of the measurements, r is the radius of the measurement space and $\sigma$ is the average innovation standard deviation ($\sigma = [\det(S)]^{1/2m}$, with S the covariance of the innovation). In this equation, $C_m$ is defined as:

$$C_m = \frac{1}{\sqrt{2\pi}\, 2^{m/2}} \int_0^\infty \left(x^m + \zeta x^{m-2}\right) e^{-(x - (\zeta/x))^2/2}\, dx \qquad (44)$$

When the number of targets is assumed to be Poisson distributed with mean $\gamma$, then Eq. (43) can be revised into:

$$P_{ca}(\zeta) = \exp(-C_m \beta_m) \qquad (45)$$

where $\beta_m = \gamma(\sigma/r)^m$.

• Covariance of the track state estimate:

$$R_E(\zeta) = R + (\Upsilon_2 - \Upsilon_1)\, \beta_m \exp(C_m \beta_m) \qquad (46)$$

where R is the measurement covariance and

$$\Upsilon_1 = C_m R + C_{m2} R S^{-1} R \qquad (47)$$

$$\Upsilon_2 = (C_m - 2C_{m3})(2Q + R) + C_{m2}(2Q + R) S^{-1}(2Q + R) + C_{m4} S. \qquad (48)$$


In the above, the coefficients $C_{m2}$, $C_{m3}$ and $C_{m4}$ are defined as:

$$C_{m2} = \frac{1}{\sqrt{2\pi}\, 2^{m/2-1}} \int_0^\infty x^m e^{-(x - (\zeta/x))^2/2}\, dx \qquad (49)$$

$$C_{m3} = \frac{1}{\sqrt{2\pi}\, 2^{m/2}} \int_0^\infty \left(x^m + \zeta x^{m-2}\right) e^{-(x - (\zeta/x))^2/2}\, dx \qquad (50)$$

$$C_{m4} = \frac{1}{(m+2)\sqrt{2\pi}\, 2^{m/2+1}} \int_0^\infty \left(x^{m+2} + \zeta x^m\right) e^{-(x - (\zeta/x))^2/2}\, dx \qquad (51)$$

Thus, considering Eq. (27), we can translate the tracking performance into a function of the operating conditions:

$$P^*_{ca}(\theta) = \int_{\Omega_f} \int_{-\infty}^{\infty} P_{ca}\!\left(\frac{\zeta}{p(T_f \mid \theta)}\right) \frac{1}{c_\xi}\, p_{\log\text{-}N}(E_\xi, \sigma_\xi)\, p(T_f \mid \theta)\, d\xi\, dT_f \qquad (52)$$

$$R^*_E(\theta) = \int_{\Omega_f} \int_{-\infty}^{\infty} R_E\!\left(\frac{\zeta}{p(T_f \mid \theta)}\right) \frac{1}{c_\xi}\, p_{\log\text{-}N}(E_\xi, \sigma_\xi)\, p(T_f \mid \theta)\, d\xi\, dT_f \qquad (53)$$

3.3 Application 3: multilook classification performance

If more than one frame of range profile is available, the classification information can be accumulated. Denoting $T_F(\theta_k)$ and $T_F(\theta_{k-1})$ as the measured features for frames k and k−1, respectively, the accumulated probability of classification is obtained according to Bayes rule:

$$p(D = m \mid T_F(\theta_k), T_F(\theta_{k-1})) = \frac{p(T_F(\theta_k) \mid D = m, T_F(\theta_{k-1}))\, p(D = m \mid T_F(\theta_{k-1}))}{p(T_F(\theta_k) \mid D = m, T_F(\theta_{k-1}))\, p(D = m \mid T_F(\theta_{k-1})) + \left(1 - p(D = m \mid T_F(\theta_{k-1}))\right) \sum_{j \neq m} p(T_F(\theta_k) \mid D = j, T_F(\theta_{k-1}))} \qquad (54)$$

Thus, $p(D = m \mid T_F(\theta_k), T_F(\theta_{k-1}))$ is obtained recursively from $p(D = m \mid T_F(\theta_{k-1}))$. When imperfect information is used in classification due to association uncertainty, the multilook performance has to be revised to include the effect of tracking. Denoting $\phi_{ji}$ as the event that a measurement of target j is assigned to target i, then

$$\begin{aligned}
p(D = m \mid T^i_F(\theta_k), T^i_F(\theta_{k-1})) &= \sum_{j=1}^{N} p(D = m, \phi_{ji} \mid T^i_F(\theta_k), T^i_F(\theta_{k-1})) \\
&= \sum_{j=1}^{N} p(D = m \mid \phi_{ji}, T^i_F(\theta_k), T^i_F(\theta_{k-1}))\, p(\phi_{ji} \mid T^i_F(\theta_k), T^i_F(\theta_{k-1})) \\
&\approx p(D = m \mid \phi_{ii}, T^i_F(\theta_k), T^i_F(\theta_{k-1}))\, p(\phi_{ii} \mid T^i_F(\theta_k), T^i_F(\theta_{k-1})) \\
&= p(D = m \mid \phi_{ii}, T^i_F(\theta_k), T^i_F(\theta_{k-1}))\, P_{ca} \\
&= p(D = m \mid T^i_F(\theta_k), T^i_F(\theta_{k-1}))\, P_{ca}
\end{aligned} \qquad (55)$$

In the above, we assume that target i is of type m. We also assume that the probability of misclassifying the target as type m when a measurement from a different target is used is very low.
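A compact sketch of the multilook recursion of Eq. (54), with the association discount of Eq. (55) applied per look to the declared-class confidence; the per-look likelihood rows are made-up numbers for illustration.

```python
import numpy as np

def multilook_posterior(lik_per_look, prior=None):
    """Bayes accumulation over looks, Eq. (54): row k of lik_per_look holds
    p(T_F(theta_k) | D = j, history) for each of the N_t classes."""
    n_t = lik_per_look.shape[1]
    p = np.full(n_t, 1.0 / n_t) if prior is None else np.asarray(prior, float)
    for lik in lik_per_look:
        p = lik * p
        p /= p.sum()                        # normalized Bayes update
    return p

looks = np.array([[0.6, 0.2, 0.2],          # illustrative likelihoods, 3 classes
                  [0.7, 0.2, 0.1]])
p = multilook_posterior(looks)
p_ca = 0.9                                  # e.g. from Eq. (45)
confidence = p.max() * p_ca**len(looks)     # Eq. (55) discount, applied per look
```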

4. EXAMPLES

In this section, we present some results of performance prediction for HRR based tracking and classification. Throughout the calculations, the feature statistics data from the previous section are used in the performance model.

In Figures 2 and 3, the probability of correct classification ($p_c$) is plotted as a function of the number of looks, without considering the association performance.


In Figure 2, six cases with different numbers of features are presented. In each subfigure, from bottom to top, are seven performance curves of increasing SNR from 10 dB to 22 dB. Clearly there is a significant change in performance over the range of SNR. At low SNR, classification with HRR is almost impossible; at high SNR, excellent classification performance can be reached. In particular, when more than 4 features are available, the probability of classification can exceed 0.9 with three looks; when 5 features are available, the probability of classification can exceed 0.9 in just two looks.

Figure 2: Probability of classification vs. number of features and number of looks.

In Figure 3, six cases with different levels of SNR are presented, illustrating the change of performance for SNR from 10 dB to 20 dB. In each subfigure, 7 curves are presented for the performance with different numbers of features; a thick red curve represents the case with 7 features. A very interesting phenomenon occurs across these cases. When SNR is low, the model with a higher number of features generally performs worse than the model with a lower number of features. However, when SNR is high, the trend reverses: the model with a higher number of features generally performs better than the model with a lower number of features. This is because the more features used in a model, the higher the SNR required in order to fully detect the features. Thus, the effect of SNR is significant for the classification of objects.

Figure 4 presents the probability of false classification as a function of the number of looks, illustrating the change of performance for SNR from 10 dB to 20 dB. Again, in each subfigure, 7 curves are presented for the false classification performance with different numbers of features, among which a thick red curve represents the model with 7 features. As in the case of $p_c$, $p_{cf}$ is low for models with a higher number of features when SNR is low, and high for models with a higher number of features when SNR is high. Meanwhile, $p_{cf}$ is almost constant for the model with a single feature. The reason for this interesting phenomenon lies in the way a false classification is generated.

In order to fully understand $p_{cf}$, we need to explain how a false classification is formed. Because a model has $N_f$ features, there are $\sum_{n=0}^{N_f - 1} C^{N_f}_n$ ways to form a false classification. As the probability of false alarm is generally much lower than the probability of detection, the most important ways to cause false classification are when the detected features carry large errors, or when at most one feature is not detected while a false detection occurs. For these cases, with the improvement of SNR, the probability of false classification also increases.

Figure 5 illustrates the performance of feature-aided tracking. In this example, we assume that an average of 3 targets are present in the scene, with an average separation of 20 meters. The current track covariance (2D) is diag([300 300]), and the radar resolution is 1 foot. Figures 5.a and 5.b show performance as a function of the number of features used in a classification model. Clearly, the tracking performance improves as the number of features in the model increases. However, when the number of features in a model is low, the use of features can cause missed associations and reduce tracking performance. When more features are used, the model is more descriptive and beneficial to tracking. This indicates that features are a very important dimension of information that can be used to help track association. However, the design of the model is also critical; a low quality model can actually decrease tracking performance.

Figures 5.c and 5.d illustrate the combined performance of target classification when tracking performance is considered. In this case, we assume a track update rate of 1 Hz, and the SNR is assumed to be 18 dB. Comparison of the two figures makes it clear that the association process can corrupt the classification processing in this relatively dense target scenario. This is because uncertainty in track association may introduce contradictory information into classification, such as when different targets are used to update a track; thus, target classification cannot be confirmed. To improve the situation, we need to reduce the target density in either geometric space or classification space.

5. CONCLUSION

This paper summarized the development of performance prediction for HRR based tracking and classification. For classification, an ROC type performance model is provided. For tracking, models for the probability of correct association and the track covariance are developed. The feature statistics data are found to support the assumptions used in the model. The results from the performance model also provide interesting insight into the trend of tracking and classification performance as a function of the number of features used, the SNR, and the number of looks at the target.

Figure 3: Probability of classification vs. SNR and number of looks.


Figure 4: Probability of false classification vs. SNR and number of looks.

Figure 5: Tracking performance. (a) Probability of correct association ($P_{ca}$) vs. number of features; (b) |det(P)| vs. number of features; (c) probability of classification with the effect of tracking (SNR = 18 dB); (d) probability of classification without the effect of tracking (SNR = 18 dB).


REFERENCES

[1] E. Avci, I. Turkoglu and M. Poyraz, “Intelligent target recognition based on wavelet packet neural network”, Expert Systems with Applications, vol. 29, pp. 175-182, 2005.

[2] E. Blasch and T. Connare, “Feature aided JBPDAF group tracking and classification using a IFFN sensor”, SPIE vol. 4728, pp. 208-217, 2002.

[3] B. Denne, R. de Figueiredo and R. Williams, “A new approach to parametric HRR signature modeling for improved 1-D ATR”, SPIE vol. 4053, pp. 372-383, 2000.

[4] K. Eom and R. Chellappa, “Noncooperative target classification using hierarchical modeling of HRR radar signatures”, IEEE Transactions on Signal Processing, vol. 45, No. 9, pp. 2318-2327, 1997.

[5] K. Eom, “Time-varying autoregressive modeling of HRR radar signatures”, IEEE Transactions on Aerospace and Electronic Systems, vol. 35, No. 3, pp. 974-988, 1999.

[6] A. Friedlander and L. Greenstein, “A generalized clutter computation procedure for airborne pulse Doppler radars”, IEEE Transactions on Aerospace and Electronic Systems, vol. 6, No. 1, pp. 51-61, 1970.

[7] J. Greenwald and S. Musick, “Nonlinear filtering for tracking large objects in radar imagery”, SPIE vol. 5809, pp. 12-22, 2005.

[8] D. Gross, M. Oppenheimer, J. Schmitz and K. Sturtz, “Feature Aided Tracking Using Invariant Features of HRR Signatures”, SPIE vol. 4382, pp. 143-152.

[9] L. Hong, N. Cui, M. Pronobis and S. Scott, “Local Motion Feature Aided Ground Moving Target Tracking with GMTI and HRR Measurements”, IEEE Transactions on Automatic Control, Vol. 50, No. 1, pp. 127-133, 2005.

[10] L. Hong, S. Wu and J. Layne, “Invariant-Based Probabilistic Target Tracking and Identification With GMTI/HRR Measurements”, IEE Proceedings: Part F, Radar, Sonar and Navigation, Vol. 151, No. 5, pp. 280-290, 2004.

[11] L. Hong, S. Cong, M. Pronobis and S. Scott, “Wavelets Feature Aided Tracking (WFAT) Using GMTI/HRR Data”, Signal Processing, Vol. 86, No. 12, pp. 2683-2690, 2003.

[12] M. Hussain, “HRR, length and velocity decision region for rapid target identification”, SPIE vol. 3810, pp. 40-52, 1999.

[13] D. Iny and M. Morici, “Quantitative analysis of HRR NCTR performance drivers”, SPIE vol. 2747, pp. 144-152, 1996.

[14] S. Jacobs and J. O'Sullivan, “Automatic Target Recognition Using Sequences of High Resolution Radar Range Profiles”, IEEE Transactions on Aerospace and Electronic Systems, Vol. 36, No. 2, 2000.

[15] N. Jiang and J. Li, “Multiple moving target feature extraction for airborne HRR radar”, IEEE Transactions on Aerospace and Electronic Systems, vol. 37, No. 4, pp. 1254-1266, 2001.

[16] B. Kahler, E. Blasch, D. Pikas and T. Ross, “EO/IR ATR Performance modelling to support fusion experimentation”.

[17] B. Kahler, E. Blasch and L. Goodwon, “Operating Condition Modeling for ATR Fusion Assessment”, SPIE, 2007.

[18] M. Koudelka, J. Richards and M. Koch, “Multinomial pattern matching for HRR radar profiles”, SPIE vol. 6568, 2007.

[19] J. Lancaster and S. Blackman, “Joint IMM/MHT tracking and ID with confusers and track stitching”, SPIE vol. 6236, 2007.

[20] J. Layne and D. Simon, “A multiple model estimator for a tightly coupled HRR automatic target recognition and MTI tracking system”, SPIE vol. 3721, pp. 362-373, 1999.

[21] J. Layne, “Automatic target recognition and tracking filter”, IRIS National Symposium on Sensor and Data Fusion, pp. 211-225, 1998.

[22] R. Levin and J. Kay, “Optimum tracking and target identification using GMTI and HRR profiles”, SPIE vol. 4050, pp. 380-390, 2000.

[23] H. Li, Y. Zhao, J. Ma and S. Ahalt, “Kernel-based feature extraction and its application on HRR signatures”, SPIE vol. 4726, pp. 222-229, 2002.

[24] J. Malas, K. Pasala and J. Westerkamp, “Wideband radar signal modeling of ground moving targets in clutter”, SPIE vol. 4718, pp. 324-335, 2002.

[25] R. Mitchell and J. Westerkamp, “A statistical feature based classifier for robust HRR target identification”, Vol. 35, No. 3, pp. 857-865, 1999.

[26] D. Morgan and T. Ross, “A Bayesian Framework for ATR Decision-Level Fusion Experiments”, SPIE, 2007.

[27] H. Morris and M. De Pass, “Wavelet Feature Extraction of HRR Radar Profiles Using Generalized Gaussian Distributions for Automatic Target Recognition”, SPIE vol. 5809, pp. 166-175.

[28] M. Ressler, R. Williams, D. Gross and A. Palomino, “Bayesian multiple-look updating applied to the SHARP ATR system”, SPIE vol. 4053, pp. 418-427, 2000.

[29] Y. Ruan, L. Hong and D. Wicker, “Performance Study of a Feature-Aided Global Nearest Neighbor Algorithm in Applications with Typical Tracking Geometries”, submitted to IEE Proceedings: Control Theory and Applications.

[30] Y. Ruan and L. Hong, “Feature Extraction by Gaussian Mixture with Rigidity Constraint for Feature-Aided Target Tracking”, submitted to IEEE Transactions on Automatic Control.

[31] K. Sullivan, C. Agate and D. Beckman, “Feature-aided tracking of ground targets using a class-independent approach”, SPIE vol. 5429, pp. 54-65, 2004.

[32] K. Sullivan, M. Ressler and R. Williams, “Signature-aided tracking using HRR profiles”, SPIE vol. 4382, pp. 132-142, 2001.

[33] D. Waagen, M. Cassabaum, H. Schmitt and B. Pollock, “Support vector machine optimization via margin distribution analysis”, SPIE vol. 5094, pp. 348-357, 2003.

[34] D. A. Shnidman, “Generalized radar clutter model”, IEEE Transactions on Aerospace and Electronic Systems, Volume 35, No. 3, pp. 857-865, 1999.
